--- abstract: 'We consider reaction-diffusion equations either posed on Riemannian manifolds or in the Euclidean weighted setting, with power-type nonlinearity and slow diffusion of porous medium type. We consider the particularly delicate case $p<m$ in problem , a case largely left open in [@GMPv] even when the initial datum is smooth and compactly supported. We prove global existence for L$^m$ data, and a smoothing effect for the evolution, i.e. that solutions corresponding to such data are bounded at all positive times with a quantitative bound on their L$^\infty$ norm. As a consequence of this fact and of a result of [@GMPv], it follows that on Cartan-Hadamard manifolds with curvature pinched between two strictly negative constants, sufficiently large L$^m$ data give rise to solutions that blow up pointwise everywhere in infinite time, a fact that has no Euclidean analogue. The methods of proof of the smoothing effect are functional analytic in character, as they depend solely on the validity of the Sobolev inequality and on the fact that the L$^2$ spectrum of $\Delta$ on $M$ is bounded away from zero (namely on the validity of a Poincaré inequality on $M$). As such, they are applicable to different situations, among which we single out the case of the (mass) weighted reaction-diffusion equation in the Euclidean setting. In this latter setting, a modification of the methods of [@Sacks] allows us to deal also, with stronger results for large times, with the case of globally integrable weights.' address: - - - author: - Gabriele Grillo - Giulia Meglioli - Fabio Punzo title: | Smoothing effects and infinite time blowup\ for reaction-diffusion equations:\ an approach via Sobolev and Poincaré inequalities --- Introduction ============ Let $M$ be a complete noncompact Riemannian manifold of infinite volume. 
Let us consider the following Cauchy problem, for any $T>0$ $$\label{problema} \begin{cases} \, u_t= \Delta u^m +\, u^p & \text{in}\,\, M\times (0,T) \\ \,\; u =u_0 &\text{in}\,\, M\times \{0\} \end{cases}$$ where $\Delta$ is the Laplace-Beltrami operator. We shall assume throughout this paper that $1<p\,<\,m$ and that the initial datum $u_0$ is nonnegative. We let L$^q(M)$ be as usual the space of those measurable functions $f$ such that $|f|^q$ is integrable w.r.t. the Riemannian measure $\mu$ and make the following basic assumptions on $M$, which amount to assuming the validity of both the Poincaré and the Sobolev inequalities on $M$: $$\label{P} (\textrm{Poincar\'e\ inequality})\ \ \ \ \ \|v\|_{L^2(M)} \le \frac{1}{C_p} \|\nabla v\|_{L^2(M)} \quad \text{for any}\,\,\, v\in C_c^{\infty}(M);$$ $$\label{S} (\textrm{Sobolev\ inequality})\ \ \ \ \ \ \|v\|_{L^{2^*}(M)} \le \frac{1}{C_s} \|\nabla v\|_{L^2(M)}\quad \text{for any}\,\,\, v\in C_c^{\infty}(M),$$ where $C_p$ and $C_s$ are numerical constants, $N\ge3$ is the dimension of $M$ and $2^*:=\frac{2N}{N-2}$. The validity of , puts constraints on $M$, and we comment that it is e.g. well known that, on *Cartan-Hadamard manifolds*, namely complete and simply connected manifolds that have everywhere non-positive sectional curvature, always holds. Furthermore, when $M$ is Cartan-Hadamard and, besides, $\operatorname{sec}\le -c<0$ everywhere, $\operatorname{sec}$ indicating sectional curvature, it is known that holds as well, see e.g. [@Grig; @Grig3]. Thus, both , hold when $M$ is Cartan-Hadamard and $\operatorname{sec}\le -c<0$ everywhere, a case that strongly departs from the Euclidean situation but covers a wide class of manifolds, including e.g. the fundamental example of the hyperbolic space $\mathbb{H}^n$, namely that Cartan-Hadamard manifold whose sectional curvatures equal $-1$ everywhere (or the similar case in which $\operatorname{sec}=-k$ everywhere, for a given $k>0$). The behaviour of solutions to is influenced by competing phenomena. 
First of all there is a diffusive pattern associated with the so-called *porous medium equation*, namely the equation $$\label{pme} u_t \,=\, \Delta u^m \quad \textrm{in}\;\; M\times (0,T)\,,$$ where the fact that we keep on assuming $m>1$ puts us in the *slow diffusion* case. It is known that when $M={\mathbb R}^n$ and, more generally, e.g. when $M$ is a Cartan-Hadamard manifold, solutions corresponding to compactly supported data have compact support for all time, in contrast with the properties valid for solutions to the heat equation, see [@V]. But it is also well-known that, qualitatively speaking, *negative curvature accelerates diffusions*, a fact that is apparent first of all from the behaviour of solutions of the classical heat equation. In fact, it can be shown that the standard deviation of a Brownian particle on the hyperbolic space $\mathbb{H}^n$ grows *linearly* in time, whereas in the Euclidean situation it is proportional to $\sqrt t$. Similarly, the heat kernel decays exponentially as $t\to+\infty$, whereas one has a power-type decay in the Euclidean situation. In the Riemannian setting the study of has started recently, see e.g. [@GIM], [@GMhyp], [@GM2], [@GMPbd], [@GMPrm], [@GMV], [@Pu1], [@VazH], noting that in some of those papers also the case $m<1$ in , usually referred to as the *fast diffusion* case, is studied. Nonlinear diffusion gives rise to speedup phenomena as well. In fact, considering again the particularly important example of the hyperbolic space $\mathbb{H}^n$ (cf. [@VazH], [@GM2]), the $L^\infty$ norm of a solution to satisfies $\|u(t)\|_\infty\asymp \left(\frac{\log t}t\right)^{1/(m-1)}$ as $t\to+\infty$, a time decay which is *faster* than the corresponding Euclidean bound. 
Besides, if the initial datum is compactly supported, the volume $\mathsf{V}(t)$ of the support of the solution $u(t)$ satisfies $\mathsf{V}(t)\asymp t^{1/(m-1)}$ as $t\to+\infty$, while in the Euclidean situation one has $\mathsf{V}(t)\asymp t^{\beta(N,m)}$ with $\beta(N,m)<1/(m-1)$. The second driving factor influencing the behaviour of solutions to is the *reaction term* $u^p$, which is positive and, thus, might drive solutions towards blow-up. This kind of problem has been widely studied in the Euclidean case $M= {\mathbb R}^N$, especially in the case $m=1$ (linear diffusion). The literature for this problem is huge and there is no hope of giving a comprehensive review here, hence we just mention that blow-up occurs for all nontrivial nonnegative data when $p\le1+2/N$, while global existence prevails for $p>1+ 2/N$ (for specific results see e.g. [@CFG], [@DL], [@F], [@FI], [@H], [@L], [@Q], [@S], [@W], [@Y]). On the other hand, it is known that when $M=\mathbb{H}^N$ and $m=1$, for all $p>1$ and sufficiently small nonnegative data there exists a global in time solution, see [@BPT], [@WY], [@WY2], [@Pu3]. As concerns the slow diffusion case $m>1$, in the Euclidean setting it is shown in [@SGKM] that, when the initial datum is nonnegative, nontrivial and compactly supported, for any $p>1$, all sufficiently large data give rise to solutions blowing up in finite time. Besides, if $p\in\left(1,m+\frac2N\right)$, *all* such solutions blow up in finite time. Finally, if $p>m+\frac2N$, all sufficiently small data give rise to global solutions. For subsequent, very detailed results e.g. 
about the type of possible blow-up and, in some cases, on continuation after blow-up, see [@GV], [@MQV], [@Vaz1] and references quoted therein. In the Riemannian setting, existence of global solutions and blow-up in finite time for problem have been first studied in [@Z], under the assumption that the volume of geodesic balls of radius $R$ grows as $R^{\alpha}$ with $\alpha\geq 2$; this kind of assumption is typically associated with *nonnegative* curvature, thus the opposite situation w.r.t. the one we are studying here, in which the volume of geodesic balls grows at least exponentially as a function of the radius $R$. The results in the setting studied in [@Z] are qualitatively similar to the Euclidean ones. The situation on negatively curved manifolds is significantly different, and the first results in this connection have been shown in [@GMPv], where only the case of nonnegative, compactly supported data is considered. Among the results of that paper, we mention the fact that a *dichotomy* phenomenon holds when $p>m$, in the sense that under appropriate curvature conditions, compatible with the assumptions made in the present paper, all sufficiently small data give rise to solutions existing globally in time, whereas sufficiently large data give rise to solutions blowing up in finite time. Results were only partial when $p<m$: it has been shown that when $p\in\left(1,\frac{1+m}{2}\right]$ and again under suitable curvature conditions, all solutions corresponding to compactly supported initial data exist globally in time, and blow up everywhere pointwise in infinite time. When $p\in\left(\frac{1+m}{2},m\right)$, precise information on the asymptotic behaviour is not known, since blowup is shown to occur at worst in infinite time, but could in principle occur before. We extend here the results of [@GMPv] in two substantial aspects. In fact, we summarize our main results as follows. 
- The methods of [@GMPv] rely heavily on explicit *barrier* arguments, which by their very nature are applicable to compactly supported data only and, in addition, require explicit curvature bounds in order to be applicable. We prove here global existence for L$^m$ data and prove *smoothing effects* for solutions to , where by smoothing effect we mean the fact that L$^m$ data give rise to global solutions $u(t)$ such that $u(t)\in \text{L}^\infty$ for all $t>0$, with quantitative bounds on their L$^\infty$ norm. This will be a consequence *only* of the validity of the Sobolev and Poincaré inequalities , , see Theorem \[teoesistenza\]. - As a consequence, combining this fact with some results proved in [@GMPv], we can prove that, on manifolds satisfying e.g. $-c_1\le \operatorname{sec}\le -c_2$ with $c_1\ge c_2>0$, thus encompassing the particularly important case of the hyperbolic space $\mathbb{H}^n$ (somewhat weaker lower curvature bounds can be assumed), any solution $u(t)$ to corresponding to an initial datum $u_0\in\text{L}^m$ exists globally and, provided $u_0$ is sufficiently large, it satisfies the property $$\lim_{t\to+\infty} u(x, t)=+\infty\ \ \ \forall x\in M,$$ namely *complete blowup in infinite time* occurs for such solutions to in the whole range $p\in(1,m)$, see Theorem \[blowup\]. Our results can also be seen as an extension of some of the results proved in [@Sacks]. However, the proof of the smoothing estimate given in [@Sacks Theorem 1.3] is crucially based on the assumption that the measure of the domain where the problem is posed is finite. This is not true in our setting. So, even if we use some general ideas introduced in [@Sacks], our proofs and results are in general quite different from those in [@Sacks]. 
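To make the quantitative form of the smoothing effect concrete, the exponents that appear in the L$^\infty$ bound of Theorem \[teoesistenza\] can be evaluated for sample values of $N$, $m$ and $p$. The following Python sketch (a hypothetical helper for the reader, not part of the paper's arguments) does just that:

```python
def smoothing_exponents(N, m, p):
    """Exponents in the smoothing bound of Theorem [teoesistenza]:
    ||u(t)||_inf <= c1 e^{c2 t} ( ||u0||_{L^m}^{a1} + ||u0||_{L^m}^{a2} / t^{a3} ).
    Assumes dimension N >= 3 and slow diffusion with reaction, 1 < p < m."""
    assert N >= 3 and 1 < p < m
    a1 = 2 * m / (N * p + (m - p) * (N + 2))
    a2 = 2 * m / (N + (m - 1) * (N + 2))
    a3 = 2 / (N + (m - 1) * (N + 2))
    return a1, a2, a3

# sample values: N = 3, porous medium exponent m = 2, reaction exponent p = 3/2
a1, a2, a3 = smoothing_exponents(3, 2.0, 1.5)
# all three exponents are positive throughout the admissible range 1 < p < m
assert a1 > 0 and a2 > 0 and a3 > 0
```

Observe that the two exponents on $\|u_0\|_{L^m}$ coincide in the formal limit $p\to1$, while for $p>1$ the first is slightly larger, the two denominators differing by $2(p-1)$.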
For detailed references on smoothing effects for linear evolution equations see [@D], whereas we refer to [@Vsmooth] for a general treatment of smoothing effects for nonlinear diffusions, and to [@BG; @GMPo; @GM2] for connections with functional inequalities in the nonlinear setting. The main result given in Theorem \[teoesistenza\] depends essentially only on the validity of inequalities and , and as such is almost immediately generalizable to different contexts. As a particularly significant situation, we single out the case of Euclidean, mass-weighted reaction-diffusion equations. In fact we consider the problem $$\label{problema2} \begin{cases} \rho\, u_t= \Delta u^m +\rho\, u^p & \text{in}\,\, {\ensuremath{\mathbb{R}}}^N\times (0,T) \\ u\,\, =u_0 &\text{in}\,\, {\ensuremath{\mathbb{R}}}^N\times \{0\}, \end{cases}$$ in the Euclidean setting, where $\rho:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}$ is strictly positive, continuous and bounded, and represents a *mass density*. The problem is naturally posed in the weighted spaces $$L^q_{\rho}({\ensuremath{\mathbb{R}}}^N)=\left\{v:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}\,\, \text{measurable}\,\, , \,\, \|v\|_{L^q_{\rho}}:=\left(\int_{{\ensuremath{\mathbb{R}}}^N} \,|v|^q\rho(x)\,dx\right)^{1/q}<+\infty\right\}.$$ This kind of model originates in a physical model provided in [@KR]. There are choices of $\rho$ ensuring that the following analogues of and hold: $$\label{P-pesi} \|v\|_{L^2_{\rho}({\ensuremath{\mathbb{R}}}^N)} \le \frac{1}{C_p} \|\nabla v\|_{L^2({\ensuremath{\mathbb{R}}}^N)} \quad \text{for any}\,\,\, v\in C_c^{\infty}({\ensuremath{\mathbb{R}}}^N)$$ and $$\label{S-pesi} \|v\|_{L^{2^*}_{\rho}({\ensuremath{\mathbb{R}}}^N)} \le \frac{1}{C_s} \|\nabla v\|_{L^2({\ensuremath{\mathbb{R}}}^N)}\quad \text{for any}\,\,\, v\in C_c^{\infty}({\ensuremath{\mathbb{R}}}^N)$$ for suitable positive constants. 
In fact, in order to make a relevant example, if $\rho(x)\asymp |x|^{-a}$ for a suitable $a>0$, it can be shown that holds if $a\ge2$ (see e.g. [@GMPo] and references therein), whereas also is obviously true for any $a>0$ because of the validity of the usual, unweighted Sobolev inequality and of the assumptions on $\rho$. Of course, more general cases having a similar nature, but where the analogue of is not a priori trivial, could be considered; we focus on that example since it is widely studied in the literature and because of its physical significance. In [@MT; @MTS] a large class of nonlinear reaction-diffusion equations, including in particular problem under certain conditions on $\rho$, is investigated. It is proved that a global solution exists (see [@MT Theorem 1]) provided that $\rho(x)=|x|^{-a}$ with $a\in (0,2)$, $$p>m+\frac{2-a}{N-a},$$ and $u_0\geq 0$ is small enough. In addition, a smoothing estimate holds. On the other hand, if $\rho(x)=|x|^{-a}$ or $\rho(x)=(1+|x|)^{-a}$ with $a\in [0,2)$, $u_0\not\equiv 0$ and $$1<p<m+\frac{2-a}{N-a},$$ then any nonnegative solution blows up in a suitable sense. Such results have also been generalized to more general initial data, decaying at infinity with a certain rate (see [@MTS]). Finally, in [@MT Theorem 2], it is shown that if $p>m$, $\rho(x)=(1+|x|)^{-a}$ with $a>2$, and $u_0$ is small enough, a global solution exists. Problem has also been studied in [@MP1], [@MP2], by constructing and using suitable barriers, initial data being continuous and compactly supported. In particular, in [@MP1] the case that $\rho(x)\asymp |x|^{-a}$ for $|x|\to+\infty$ with $a\in (0,2)$ is addressed. It is proved that for any $p>1$, if $u_0$ is large enough, then blowup occurs. On the other hand, if $p>\bar p$, for a certain $\bar p>m$ depending on $m, p$ and $\rho$, and $u_0$ is small enough, then global existence of bounded solutions prevails. Moreover, in [@MP2] the case that $a\geq 2$ is investigated. 
For $a=2$, blowup is shown to occur when $u_0$ is big enough, whereas global existence holds when $u_0$ is small enough. For $a>2$ it is proved that if $p>m$, $u_0\in L^{\infty}_{\rm{loc}}(\mathbb R^N)$ and $u_0$ tends to $0$ at infinity with a suitable rate, then there exists a global bounded solution. Furthermore, for the same initial datum $u_0$, if $1<p<m$, then there exists a global solution, which could blow up as $t\to +\infty$. Our main results in this setting can be summarized as follows. - We prove in Theorem \[teoesistenza2\] global existence and smoothing effects for solutions to , assuming that the weight $\rho:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}$ is strictly positive, smooth and bounded, so that necessarily holds, and assuming the validity of . In particular, L$^m$ data give rise to global solutions $u(t)$ such that $u(t)\in $L$^\infty$ for all $t>0$, with quantitative bounds on their L$^\infty$ norm. By constructing a specific, delicate example, we show in Proposition \[teosubsolutioncritical\] that the bound on the L$^\infty$ norm (which involves a quantity diverging as $t\to+\infty$) is qualitatively sharp, in the sense that there are examples of weights for which our running assumption holds and for which blow-up of solutions in infinite time holds pointwise everywhere (we refer to this property by saying that *complete blowup in infinite time* occurs). We also prove, by similar methods which follow the lines of [@Sacks], different smoothing effects, which are stronger for large times, when $\rho$ is in addition assumed to be integrable, see Theorem \[teoesistenza3\]. Let us mention that the results in [@MP2] for $1<p<m$ are improved here in various directions. In fact, we now consider a larger class of initial data $u_0$, since we do not require that they be locally bounded; moreover, in [@MP2] no smoothing estimates are addressed. 
Furthermore, the fact that for integrable weights $\rho$ we have global existence of bounded solutions does not have a counterpart in [@MP2], nor do the blowup results in infinite time. The paper is organized as follows. In Section \[statements\] we collect the relevant definitions and state our main results, both in the setting of Riemannian manifolds and in the Euclidean, weighted case. In Section \[elliptic\] we prove some crucial results for an auxiliary elliptic problem, which will then be used in Section \[Lp\] to show bounds on the $\text{L}^p$ norms of solutions to certain evolution problems posed on geodesic balls. In Section \[proofs\] we conclude the proof of our main results for the case of reaction-diffusion problems on manifolds. In Section \[weights\] we briefly comment on the adaptations needed to deal with the weighted Euclidean case, and prove the additional results valid in the case of an integrable weight. We also discuss there a delicate example showing that complete blowup in infinite time may occur under the running assumptions. Preliminaries and statement of main results {#statements} =========================================== We first define the concept of solution to that we shall use hereafter. It will be meant in the very weak, or distributional, sense. \[1\] Let $M$ be a complete noncompact Riemannian manifold of infinite volume. Let $1<p<m$ and $u_0\in{\textrm L}^m(M)$, $u_0\ge0$. We say that the function $u$ is a solution to problem in the time interval $[0,T)$ if $$u\in L^m(M\times(0,T))\,,$$ and for any $\varphi \in C_c^{\infty}(M\times[0,T])$ such that $\varphi(x,T)=0$ for any $x\in M$, $u$ satisfies the equality: $$\begin{aligned} -\int_0^T\int_{M} \,u\,\varphi_t\,d\mu\,dt =&\int_0^T\int_{M} u^m\,\Delta\varphi\,d\mu\,dt\,+ \int_0^T\int_{M} \,u^p\,\varphi\,d\mu\,dt \\ & +\int_{M} \,u_0(x)\,\varphi(x,0)\,d\mu. 
\end{aligned}$$ \[teoesistenza\] Let $M$ be a complete, noncompact manifold of infinite volume such that the Poincaré and Sobolev inequalities and hold on $M$. Let $1<p<m$ and $u_0\in{\textrm L}^m(M)$, $u_0\ge0$. Then problem admits a solution for any $T>0$, in the sense of Definition \[1\]. Moreover for any $T>\tau>0$ one has $u\in L^{\infty}(M\times(\tau,T))$ and there exist numerical constants $c_1, c_2>0$, independent of $T$, such that, for all $t>0$ one has $$\begin{aligned}\label{smoothing} \|u(t)\|_{L^{\infty}(M)}&\le c_1e^{c_2t}\left\{ \|u_{0}\|_{L^m(M)}^{\frac{2m}{Np+(m-p)(N+2)}}+\frac{\|u_{0}\|_{L^m(M)}^{\frac{2m}{N+(m-1)(N+2)}}}{t^{\frac{2}{N+(m-1)(N+2)}}}\right\}. \end{aligned}$$ Besides, if $q>1$ and $u_0\in L^q(M)\cap L^m(M)$, then there exists $C(q)>0$ such that $$\label{Lq} \|u(t)\|_{L^q(M)}\le e^{C(q)t}\|u_{0}\|_{L^q(M)}\quad \textrm{for all }\,\, t>0\,.$$ One may wonder whether the upper bound in is qualitatively sharp, since its r.h.s. involves a function of time that tends to $+\infty$ as $t\to+\infty$. This is indeed the case, since there is a wide class of situations covered by Theorem \[teoesistenza\] in which classes of solutions do indeed satisfy $\|u(t)\|_\infty\to+\infty$ as $t\to+\infty$ and show even the much stronger property of *blowing up pointwise everywhere in infinite time*. In fact, as a direct consequence of Theorem \[teoesistenza\], of known geometrical conditions for the validity of and , and of some results given in [@GMPv], we can prove the following result. We stress that this property has no Euclidean analogue for the corresponding reaction-diffusion problem. \[blowup\] Let $M$ be a Cartan-Hadamard manifold and let $\operatorname{sec}$ denote sectional curvature, $\operatorname{Ric_o}$ denote the Ricci tensor in the radial direction with respect to a given pole $o\in M$. 
Assume that the following curvature bounds hold everywhere on $M$, for suitable $k_1\ge k_2>0$: $$\operatorname{Ric_o}(x)\geq - k_1;\ \ \ \operatorname{sec}\le -k_2\,.$$ Then the results of Theorem \[teoesistenza\] hold. Besides, consider any nonnegative solution $u$ to corresponding to an initial datum $u_0\in{\textrm L}^m(M)$ which is sufficiently large in the sense that $u_0\ge v_0$ for a suitable function $v_0\in C_c^{0}(M)$, $v_0>0$ in a geodesic ball $B_R$ with $R>0$ sufficiently large and, finally, $\inf_{B_R}v_0$ sufficiently large. Then $u$ satisfies $$\lim_{t\to+\infty}u(x,t)=+\infty\ \ \ \forall x\in M.$$ Weighted reaction-diffusion equations in the Euclidean space {#weight} ------------------------------------------------------------ As mentioned in the Introduction, the methods used in proving Theorem \[teoesistenza\] are general enough, being based on functional inequalities only, to be easily generalized to different contexts. We single out here the one in which reaction-diffusion equations are considered in the Euclidean setting, but in which diffusion takes place in a medium having a nonhomogeneous density, see e.g. [@KR], [@MT], [@MTS], [@MTS2] and references quoted therein. We consider a *weight* $\rho:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}$ such that $$\label{rho2} \rho \in C({\ensuremath{\mathbb{R}}}^N)\cap L^{\infty}({\ensuremath{\mathbb{R}}}^N), \ \ \rho(x)>0 \,\, \text{for any}\,\, x\in {\ensuremath{\mathbb{R}}}^N,$$ and the associated weighted Lebesgue spaces $$L^q_{\rho}({\ensuremath{\mathbb{R}}}^N)=\{v:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}\,\, \text{measurable}\,\, | \,\, \|v\|_{L^q_{\rho}}<+\infty\},$$ where $ \|v\|_{L^q_{\rho}}:=\left(\int_{{\ensuremath{\mathbb{R}}}^N} \rho(x)\,|v(x)|^q\,dx\right)^{1/q}.$ Moreover, we assume that $\rho$ is such that the weighted Poincaré inequality holds. 
By construction and by the assumptions in it follows that the weighted Sobolev inequality also holds, as a consequence of the usual Sobolev inequality in ${\ensuremath{\mathbb{R}}}^N$ and of . Moreover, we let $u_0:{\ensuremath{\mathbb{R}}}^N\to{\ensuremath{\mathbb{R}}}$ be such that $$\,\,\, u_0\in L^m_{\rho}({\ensuremath{\mathbb{R}}}^N),\,\,\,\,\, u_0(x)\ge 0 \,\, \text{for a.e.}\,\, x\in {\ensuremath{\mathbb{R}}}^N$$ and consider, for any $T>0$ and for any $1\,<\,p\,<\,m,$ problem . The definition of solution we use will be again the very weak one, adapted to the present case. \[2\] Let $1<p<m$ and $u_0\in{\textrm L}_{\rho}^m(\mathbb R^N)$, $u_0\ge0$. Let the weight $\rho$ satisfy . We say that the function $u$ is a solution to problem in the interval $[0, T)$ if $$u\in L^m_{\rho}(\mathbb R^N\times(0,T))\,\,\,$$ and for any $\varphi \in C_c^{\infty}(\mathbb R^N\times[0,T])$ such that $\varphi(x,T)=0$ for any $x\in \mathbb R^N$, $u$ satisfies the equality: $$\label{3a} \begin{aligned} -\int_0^T\int_{\mathbb{R}^N} \,u\,\varphi_t\,\rho(x)\,dx\,dt =&\int_0^T\int_{\mathbb R^N} u^m\,\Delta \varphi\,dx\,dt\,+ \int_0^T\int_{\mathbb R^N} \,u^p\,\varphi\,\rho(x)\,dx\,dt \\ & +\int_{\mathbb R^N} \,u_0(x)\,\varphi(x,0)\,\rho(x)\,dx. \end{aligned}$$ \[teoesistenza2\] Let $\rho$ satisfy and assume that the weighted Poincaré inequality holds. Let $1<p<m$ and $u_0\in{\textrm L}_{\rho}^m(\mathbb R^N),$ $u_0\ge0$. Then problem admits a solution for any $T>0$, in the sense of Definition \[2\]. Moreover for any $T>\tau>0$ one has $u\in L^{\infty}(\mathbb R^N\times(\tau,T))$ and there exist numerical constants $c_1, c_2>0$, independent of $T$, such that, for all $t>0$ one has $$\begin{aligned}\label{smoothingweight} \|u(t)\|_{L^{\infty}(\mathbb R^N)}&\le c_1e^{c_2t}\left\{ \|u_{0}\|_{L^m_{\rho}(\mathbb R^N)}^{\frac{2m}{Np+(m-p)(N+2)}}+\frac{\|u_{0}\|_{L^m_{\rho}(\mathbb R^N)}^{\frac{2m}{N+(m-1)(N+2)}}}{t^{\frac{2}{N+(m-1)(N+2)}}}\right\}. 
\end{aligned}$$ Besides, if $q>1$ and $u_0\in L^q_{\rho}(\mathbb R^N)\cap L^m_{\rho}(\mathbb R^N)$, then there exists $C(q)>0$ such that $$\|u(t)\|_{L^q_{\rho}(\mathbb R^N)}\le e^{C(q)t}\|u_{0}\|_{L^q_{\rho}(\mathbb R^N)}\quad \textrm{ for all }\,\, t>0\,.$$ Finally, there are examples of weights satisfying the assumptions of the present Theorem and such that sufficiently large initial data $u_0$ give rise to solutions $u(x,t)$ blowing up pointwise everywhere in infinite time, i.e. such that $\lim_{t\to+\infty}u(x,t)=+\infty$ for all $x\in \mathbb R^N$, so that in particular $\|u(t)\|_\infty\to+\infty$ as $t\to+\infty$ and hence the upper bound in is qualitatively sharp. One can take e.g. $\rho\asymp |x|^{-2}$ as $|x|\to+\infty$ for this to hold. In the case of *integrable* weights one can adapt the methods of [@Sacks] to prove a stronger result. \[teoesistenza3\] Let $\rho$ satisfy and $\rho\in L^1(\mathbb R^N)$. Let $1<p<m$ and $u_0\in{\textrm L}_{\rho}^1(\mathbb R^N)$, $u_0\ge0$. Then problem admits a solution for any $T>0$, in the sense of Definition \[2\]. Moreover for any $T>\tau>0$ one has $u\in L^{\infty}(\mathbb R^N\times(\tau,T))$ and there exists $C=C(m,p,N,\|\rho\|_{L^1(\mathbb R^N)})>0$, independent of the initial datum $u_0$, such that, for all $t>0$, one has $$\label{absolute} \|u(t)\|_{L^{\infty}(\mathbb R^N)}\le C \left\{1+\left[\frac{1}{(m-1)t}\right]^{\frac{1}{m-1}} \right\}.$$ - The bound cannot be replaced by a similar one in which the r.h.s. is replaced by $\frac{C}{(m-1)t}$, that would entail $\|u(t)\|_\infty\to0$ as $t\to+\infty$, as customary e.g. in the case of solutions to the Porous Medium Equation posed in bounded, Euclidean domains (see [@V]). In fact, it is possible that *stationary*, bounded solutions to exist, provided a positive bounded solution $U$ to the equation $$\label{nonlin} -\Delta U=\rho U^a$$ exists, where $a=p/m<1$. 
If this fact holds, $V:=U^\frac1m$ is a stationary, bounded, positive solution to the differential equation in , whose $L^\infty$ norm is of course constant in time. In turn, a celebrated result of [@BK] entails that positive, bounded solutions to exist if e.g. $\rho\asymp |x|^{-2-\epsilon}$ for some $\epsilon>0$ as $|x|\to+\infty$ (in fact, a full characterization of the weights for which this holds is given in [@BK]), a condition which is of course compatible with the assumptions of Theorem \[teoesistenza3\]. - Of course, the bound , which gives stronger information when $t\to0$, continues to hold under the assumptions of Theorem \[teoesistenza3\]. Auxiliary results for elliptic problems {#elliptic} ======================================= Let $x_0,x \in M$ be given. We denote by $r(x)=\textrm{dist}\,(x_0,x)$ the Riemannian distance between $x_0$ and $x$. Moreover, we let $$B_R(x_0):=\{x\in M, \textrm{dist}\,(x_0,x)<R\}$$ be the geodesic ball with center $x_0 \in M$ and radius $R > 0$. Let $x_0\in M$ be any fixed reference point. We set $B_R\equiv B_R(x_0)\,.$ As mentioned above, we denote by $\mu$ the Riemannian measure on $M$. For any given function $v$, we define for any $k\in{\ensuremath{\mathbb{R}}}^+$ $$T_k(v):=\begin{cases} &k\quad \text{if}\,\,\, v\ge k \\ &v \quad \text{if}\,\,\, |v|< k \\ &-k\quad \text{if}\,\,\, v\le -k\end{cases}\,\,.$$ For every $R>0, k>0,$ consider the problem $$\label{problemapalla} \begin{cases} \, u_t= \Delta u^m +\, T_k(u^p) & \text{in}\,\, B_R\times (0,+\infty) \\ u=0 &\text{on}\,\, \partial B_R\times (0,+\infty)\\ u=u_0 &\text{in}\,\, B_R\times \{0\}, \\ \end{cases}$$ where $u_0\in L^\infty(B_R), u_0\geq 0$. Solutions to problem are meant in the weak sense as follows. \[4\] Let $p<m$. Let $u_0\in L^\infty(B_R), u_0\geq 0$. 
We say that a nonnegative function $u$ is a solution to problem if $$u\in L^{\infty}(B_R\times(0,+\infty)), \quad u^m\in L^2\big((0, T); H^1_0(B_R)\big) \quad \textrm{ for any }\, T>0,$$ and for any $T>0, \varphi \in C_c^{\infty}(B_R\times[0,T])$ such that $\varphi(x,T)=0$ for every $x\in B_R$, $u$ satisfies the equality: $$\begin{aligned} -\int_0^T\int_{B_R} \,u\,\varphi_t\,d\mu\,dt =&- \int_0^T\int_{B_R} \langle \nabla u^m, \nabla \varphi \rangle \,d\mu\,dt\,+ \int_0^T\int_{B_R} \,T_k(u^p)\,\varphi\,d\mu\,dt \\ & +\int_{B_R} \,u_0(x)\,\varphi(x,0)\,d\mu. \end{aligned}$$ We also consider elliptic problems of the type $$\label{pbella} \begin{cases} -\Delta u &= f \quad \textrm{ in }\,\, B_R\\ \;\quad u & = 0 \quad \textrm{ on }\,\, \partial B_R\,, \end{cases}$$ with $f\in L^q(B_R)$ for some $q>1$. \[defpbell\] We say that $u\in H^1_0(B_R), u\geq 0$ is a weak subsolution to problem if $$\int_{B_R}\langle \nabla u, \nabla \varphi \rangle\, d\mu \leq \int_{B_R} f\varphi\, d\mu,$$ for any $\varphi\in H^1_0(B_R), \varphi\geq 0$. The following proposition contains an estimate in the spirit of the celebrated $L^\infty$ estimate of Stampacchia (see, e.g., [@KS], [@BC] and references therein). However, the bound obtained here and its proof are different. This is due to the fact that we need an estimate independent of the measure of $B_R$, in order to let $R\to +\infty$ when we apply such an estimate in the proof of global existence for problem (see Remark \[remark2\] below). Indeed recall that, obviously, since $M$ has infinite measure, $\mu(B_R)\to +\infty$ as $R\to +\infty$. \[prop1\] Let $f_1\in L^{m_1}(B_R)$ and $f_2\in L^{m_2}(B_R)$ where $m_1>\frac N 2$, $m_2 >\frac{N}{2}\,.$ Assume that $v\in H_0^1(B_R)$, $v\ge 0$ is a subsolution to problem $$\label{25} \begin{cases} -\Delta v = (f_1+f_2) & \text{in}\,\, B_R\\ v=0 &\text{on}\,\, \partial B_R \end{cases}$$ in the sense of Definition \[defpbell\]. Let $\bar k>0$. 
Then $$\label{eqa3} \|v\|_{L^{\infty}(B_R)}\le \left\{C_1\|f_1\|_{L^{m_1}(B_R)}+ C_2\|f_2\|_{L^{m_2}(B_R)} \right\}^{\frac{1}{s}} \|v\|_{L^{1}(B_R)}^{\frac{s-1}{s}}+\bar k,$$ where $$\label{37b} s=1+\frac{2}{N}-\frac{1}{l}\,,$$ $$\label{l} \frac{N}{2}<l<\min\{m_1\,,m_2\},$$ $$\label{barC} \overline C_1=\left(\frac{s}{s-1}\right)^{s} \frac{1}{C_s^2}\left(\frac{2}{\bar k}\right)^{\frac{1}{l}-\frac{1}{m_1}}\,, \quad \overline C_2=\left(\frac{s}{s-1}\right)^{s} \frac{1}{C_s^2}\left(\frac{2}{\bar k}\right)^{\frac{1}{l}-\frac{1}{m_2}}\,,$$ and $$\label{38} \begin{aligned} C_1=\overline C_1\,\|v\|_{L^1(B_R)}^{\frac{1}{l}-\frac{1}{m_1}}, \ \ \ C_2=\overline C_2\,\|v\|_{L^1(B_R)}^{\frac{1}{l}-\frac{1}{m_2}}\,. \end{aligned}$$ \[remark2\] If in Proposition \[prop1\] we further assume that there exists a constant $k_0>0$ such that $$\max(\|v \|_{L^1(B_R)}, \|f_1 \|_{L^{m_1}(B_R)},\|f_2 \|_{L^{m_2}(B_R)}) \leq k_0 \quad \textrm{ for all }\,\, R>0,$$ then from , we infer that the bound from above on $\|v\|_{L^{\infty}(B_R)}$ is independent of $R$. This fact will have a key role in the proof of global existence for problem . Proof of Proposition \[prop1\] ------------------------------ Let us first define $$\label{21} G_k(v):=v-T_k(v)\,,$$ $$g(k):= \int_{B_R}|G_k(v)|\,d\mu.$$ For any $R>0$, for $v\in L^1(B_R)$, we set $$\label{23ab} A_k:= \{x\in B_R:\,|v(x)|>k\}.$$ We first state two technical lemmas. \[lemma0\] Let $v\in L^1(B_R)$. Then $g(k)$ is differentiable almost everywhere in $(0,+\infty)$ and $$g'(k)=-\mu(A_{k}).$$ We omit the proof since it is identical to the one given in [@BC]. \[lemma1\] Let $v\in L^1(B_R)$. Let $\overline k>0$. 
Suppose that there exist $C>0$ and $s>1$ such that $$\label{23} g(k)\le C\mu(A_k)^{s} \quad \text{for any}\,\,k\ge \bar k.$$ Then $v\in L^{\infty}(B_R)$ and $$\label{eqa1} \|v\|_{L^{\infty}(B_R)}\le C^{\frac{1}{s}}\frac{s}{s-1}\|v\|_{L^{1}(B_R)}^{1-\frac{1}{s}}+\bar k.$$ Observe that if $C$ in does not depend on $R$ and, for some $k_0>0$, $$\| v\|_{L^1(B_R)}\leq k_0 \quad \textrm{ for all }\,\, R>0,$$ then, in view of the estimate , the bound on $\|v\|_{L^{\infty}(B_R)}$ is independent of $R$. Thanks to Lemma \[lemma0\] together with hypothesis we have that $$g'(k)=-\mu(A_{k})\le -\left [C^{-1}\,g(k)\right]^{\frac 1{s}},$$ hence $$g(k) \le C\,[-g'(k)]^{s}.$$ Integrating between $\bar k$ and $k$ we get $$\label{24b} \int_{\bar k}^k\left(-\frac 1{C^{\frac 1 s}}\right )\, d\tau \ge \int_{\bar k}^k g'(\tau)\,g(\tau)^{-\frac{1}{s}} \, d\tau,$$ that is: $$-C^{-\frac 1 s}( k-\bar k) \ge \frac{s}{s-1} \left [g(k)^{1-\frac 1 s} - g(\bar k)^{1-\frac 1 s} \right].$$ Using the definition of $g$, this can be rewritten as $$\begin{aligned} g(k)^{1-\frac 1 s} &\le g\left(\bar k\right)^{1-\frac 1 s} - \frac{s-1}{s}\,C^{-\frac 1 s} (k-\bar k)\,\\ &\le \|v\|_{L^1(B_R)}^{1-\frac 1 s} - \frac{s-1}{s}\,C^{-\frac 1 s} (k-\bar k) \quad \text{for any}\,\, k>\bar k. \end{aligned}$$ Choose $$k=k_0=C^{\frac 1 s}\|v\|_{L^1(B_R)}^{1-\frac 1 s}\frac{s}{s-1}+\bar k,$$ and substitute it in the last inequality. Then $g(k_0)\le0.$ Due to the definition of $g$ this is equivalent to $$\int_{B_R}|G_{k_0}(v)|\,d\mu =0 \,\,\,\iff \,\,\, |G_{k_0}(v)|=0 \,\,\,\iff \,\,\, |v|\le k_0.$$ As a consequence we have $$\|v\|_{L^{\infty}(B_R)}\le k_0 = \frac{s}{s-1}C^{\frac 1 s}\|v\|_{L^1(B_R)}^{1-\frac 1 s}+\bar k.$$ Take $G_k(v)$ as in and $A_k$ as in . From now on we write, with a slight abuse of notation, $$\|f\|_{L^q(B_R)}=\|f\|_{L^q}\,.$$ Since $G_k(v)\in H^1_0(B_R)$ and $G_k(v)\geq 0$, we can take $G_k(v)$ as test function in problem . 
Then, by means of , we get $$\label{32} \begin{aligned} \int_{B_R}\nabla v\cdot \nabla G_k(v)\, d\mu &\ge \int_{A_k}|\nabla v|^2\,d\mu \\ &\ge \int_{B_R}|\nabla G_k(v)|^2\,d\mu \\ &\ge C_s^2\left(\int_{B_R}| G_k(v)|^{2^*}\,d\mu\right )^{\frac{2}{2^*}}\,. \end{aligned}$$ If we now estimate the right hand side of , thanks to the Hölder inequality, we get $$\label{33} \begin{aligned} \int_{B_R}(f_1+f_2)\,G_k(v)\,d\mu &= \int_{A_k}f_1\,G_k(v)\,d\mu + \int_{A_k}f_2\,G_k(v)\,d\mu \\ &\le\left(\int_{A_k}|G_k(v)|^{2^*}\,d\mu\right)^{\frac{1}{2^*}}\left [\left(\int_{A_k}|f_1|^{\frac{2N}{N+2}}\,d\mu\right)^{\frac{N+2}{2N}} + \left(\int_{A_k}|f_2|^{\frac{2N}{N+2}}\,d\mu\right)^{\frac{N+2}{2N}} \right]\\ &\le\left(\int_{B_R}|G_k(v)|^{2^*}\,d\mu\right)^{\frac{1}{2^*}}\left [\|f_1\|_{L^{m_1}}\mu(A_k)^{\frac{N+2}{2N}\left(1-\frac{2N}{m_1(N+2)}\right)}\right.\\ &\left.+ \|f_2\|_{L^{m_2}}\mu(A_k)^{\frac{N+2}{2N}\left(1-\frac{2N}{m_2(N+2)}\right)} \right]\,. \end{aligned}$$ Combining and we have $$\label{34} \begin{aligned} C_s^2\left(\int_{B_R}|G_k(v)|^{2^*}\,d\mu\right)^{\frac{1}{2^*}} &\le\left [\|f_1\|_{L^{m_1}}\mu(A_k)^{\frac{N+2}{2N}\left(1-\frac{2N}{m_1(N+2)}\right)}\right.\\&\left.+ \|f_2\|_{L^{m_2}}\mu(A_k)^{\frac{N+2}{2N}\left(1-\frac{2N}{m_2(N+2)}\right)} \right]\,.
\end{aligned}$$ Observe that $$\label{35} \int_{B_R}|G_k(v)|\,d\mu\le \left(\int_{B_R}|G_k(v)|^{2^*}\,d\mu\right)^{\frac{1}{2^*}} \mu(A_k)^{\frac{N+2}{2N}}\,.$$ We substitute in and we obtain $$\int_{B_R}|G_k(v)|\,d\mu\le\frac{1}{C_s^2}\left[\|f_1\|_{L^{m_1}}\mu(A_k)^{1+\frac{2}{N}-\frac{1}{m_1}}+\|f_2\|_{L^{m_2}}\mu(A_k)^{1+\frac{2}{N}-\frac{1}{m_2}}\right].$$ Using the definition of $l$ in , for any $k\ge \overline k$, we can write $$\label{37} \begin{aligned} \int_{B_R}|G_k(v)|\,d\mu&\le\frac{1}{C_s^2}\,\mu(A_k)^{1+\frac{2}{N}-\frac{1}{l}}\left[\|f_1\|_{L^{m_1}}\mu(A_{\overline k})^{\frac{1}{l}-\frac{1}{m_1}}+\|f_2\|_{L^{m_2}}\mu(A_{\overline k})^{\frac{1}{l}-\frac{1}{m_2}}\right]\,. \end{aligned}$$ Set $$C=\frac{1}{C_s^2}\left[\|f_1\|_{L^{m_1}}\left(\frac{2}{\bar k}\|v\|_{L^1(B_R)}\right)^{\frac{1}{l}-\frac{1}{m_1}}+\|f_2\|_{L^{m_2}}\left(\frac{2}{\bar k}\|v\|_{L^1(B_R)}\right)^{\frac{1}{l}-\frac{1}{m_2}}\right]\,.$$ Hence, by means of Chebyshev's inequality, reads, for any $k\ge \bar k$, $$\label{37a} \int_{B_R}|G_k(v)|\,d\mu \le C\,\mu(A_k)^{s}\,,$$ where $s$ has been defined in . Now, corresponds to the hypothesis of Lemma \[lemma1\], hence the thesis of such lemma follows and we have $$\|v\|_{L^{\infty}}\le C^{\frac{1}{s}} \frac{s}{s-1}\,\|v\|_{L^1}^{1-\frac{1}{s}}+\bar k\,.$$ Then the thesis follows thanks to . $L^q$ and smoothing estimates {#Lp} ============================= \[lemma3\] Let $1<p< m$. Let $M$ be such that inequality holds. Suppose that $u_0\in L^{\infty}(B_R)$, $u_0\ge0$. Let $u$ be the solution of problem . Then, for any $1<q<+\infty$, for some constant $C=C(q)>0$, one has $$\label{47} \|u(t)\|_{L^q(B_R)} \le e^{C(q) t}\|u_0\|_{L^q(B_R)}\quad \textrm{ for all }\,\, t>0\,.$$ Let $x\in{\ensuremath{\mathbb{R}}}$, $x\ge0$, $1<p< m$, $\varepsilon >0$.
Then, for any $1<q<+\infty$, due to Young’s inequality, it follows that $$\label{48} \begin{aligned} x^{p+q-1}&=x^{(m+q-1)(\frac{p-1}{m-1})}x^{q(\frac{m-p}{m-1})}\\ &\le\varepsilon x^{(m+q-1)(\frac{p-1}{m-1})(\frac{m-1}{p-1})} + \left(\frac{1}{\varepsilon}\frac{p-1}{m-1}\right)^{\frac{p-1}{m-p}}x^{q(\frac{m-p}{m-1})(\frac{m-1}{m-p})}\\ &=\varepsilon x^{m+q-1}+\left(\frac{1}{\varepsilon}\frac{p-1}{m-1}\right)^{\frac{p-1}{m-p}}x^{q}. \end{aligned}$$ Since $u_0$ is bounded and $T_k(u^p)$ is a bounded and Lipschitz function, by standard results, there exists a unique solution of problem in the sense of Definition \[4\]; moreover, $u\in C\big([0, T]; L^q(B_R)\big)$. We now multiply both sides of the differential equation in problem by $u^{q-1}$ and integrate by parts. This can be justified by standard tools, by an approximation procedure. Using the fact that $$T_k(u^p)\leq u^p,$$ thanks to the Poincaré inequality, we obtain for all $t>0$ $$\frac{1}{q}\frac{d}{dt} \|u(t)\|_{L^q(B_R)}^q\le -\frac{4m(q-1)}{(m+q-1)^2}C_p^2 \|u(t)\|_{L^{m+q-1}(B_R)}^{m+q-1}+ \|u(t)\|_{L^{p+q-1}(B_R)}^{p+q-1}.$$ Now, using inequality , we obtain $$\frac{1}{q}\frac{d}{dt} \|u(t)\|_{L^q(B_R)}^q\le -\frac{4m(q-1)}{(m+q-1)^2}C_p^2 \|u(t)\|_{L^{m+q-1}(B_R)}^{m+q-1}+ \varepsilon \|u(t)\|_{L^{m+q-1}(B_R)}^{m+q-1} + C(\varepsilon)\|u(t)\|_{L^q(B_R)}^q,$$ where $C(\varepsilon)=\left(\frac{1}{\varepsilon}\frac{p-1}{m-1}\right)^{\frac{p-1}{m-p}}.$ Thus, for every $\varepsilon>0$ so small that $$0<\varepsilon<\frac{4m(q-1)}{(m+q-1)^2}C_p^2,$$ we have $$\frac{1}{q}\frac{d}{dt} \|u(t)\|_{L^q(B_R)}^q\le C(\varepsilon)\|u(t)\|_{L^q(B_R)}^q\,.$$ Hence, we can find $C=C(q)>0$ such that $$\frac{d}{dt} \|u(t)\|_{L^q(B_R)}^q\le C(q)\|u(t)\|_{L^q(B_R)}^q \quad \textrm{ for all }\,\, t>0\,.$$ If we set $y(t):=\|u(t)\|_{L^q(B_R)}^q$, the previous inequality reads $$y'(t)\le C(q)y(t) \quad \textrm{ for all }\,\, t\in (0, T)\,.$$ Thus the thesis follows. 
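The Young-type splitting used at the beginning of the proof admits a quick numerical sanity check (not part of the proof). The values $m=3$, $p=2$, $q=2$, $\varepsilon=0.1$ below are illustrative assumptions:

```python
# Sanity check of the splitting x^(p+q-1) <= eps*x^(m+q-1) + C(eps)*x^q,
# with C(eps) = (eps^{-1}(p-1)/(m-1))^{(p-1)/(m-p)} as in the proof.
# The parameter values are illustrative assumptions, not from the paper.
m, p, q, eps = 3.0, 2.0, 2.0, 0.1
C_eps = (eps ** (-1) * (p - 1.0) / (m - 1.0)) ** ((p - 1.0) / (m - p))
xs = [k / 100.0 for k in range(0, 2001)]  # grid on [0, 20]
gap = min(eps * x ** (m + q - 1) + C_eps * x ** q - x ** (p + q - 1) for x in xs)
print(gap >= 0)  # prints True: the splitting holds on the whole grid
```

Note that the constant above is not sharp: the optimal Young constant is smaller by the factor $1-\frac{p-1}{m-1}<1$, so the displayed inequality holds a fortiori.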
Note that the constant $C(q)$ in Lemma \[lemma3\] does not depend on $R$ or on $k>0$; moreover, we have that $$C(q)\to +\infty \quad \textrm{ as }\,\, q\to +\infty\,.$$ We shall use the following Aronson-Benilan type estimate (see [@AB]; see also [@Sacks Proposition 2.3]). \[prop2\] Let $1<p< m$, $u_0\in H_0^1(B_R) \cap L^{\infty}(B_R)$, $u_0\ge 0$. Let $u$ be the solution to problem . Then, for a.e. $t\in(0,T)$, $$-\Delta u^m(\cdot,t) \le u^p(\cdot, t)+\frac{1}{(m-1)t} u(\cdot,t) \quad \text{in}\,\,\,\mathfrak{D}'(B_R).$$ By arguing as in [@AB], [@Sacks Proposition 2.3] we get $$-\Delta u^m(\cdot,t) \le T_k[u^p(\cdot, t)]+\frac{1}{(m-1)t} u(\cdot,t) \leq u^p(\cdot, t)+\frac{1}{(m-1)t} u(\cdot,t) \quad \text{in}\,\,\,\mathfrak{D}'(B_R),$$ since $T_k(u^p)\leq u^p\,.$ \[teo2\] Let $1<p<m$, $R>0, u_0\in L^{\infty}(B_R)$, $u_0\ge 0$. Let $u$ be the solution to problem . Let $M$ be such that inequality holds. Then there exists $\Gamma=\Gamma(p, m, N, C_s)>0$ such that, for all $t>0$, $$\label{eqa7} \begin{aligned} \|u(t)\|_{L^{\infty}(B_R)}&\le \Gamma \left\{ \left [e^{Ct}\|u_{0}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN +2(m-p)}}\right.\\&\left.+\left [e^{Ct}\|u_{0}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN+2(m-1)}} \left [\frac{1}{(m-1)t}\right]^{\frac{2}{mN+2(m-1)}}\right\}\,; \end{aligned}$$ here the constant $C=C(m)>0$ is the one given in Lemma \[lemma3\]. \[remark3\] If in Proposition \[teo2\], in addition, we assume that for some $k_0>0$ $$\|u_0\|_{L^m(B_R)}\leq k_0\quad \textrm{ for every }\,\, R>0\,,$$ then the bound from above for $\|u(t)\|_{L^{\infty}(B_R)}$ in is independent of $R$. Let us set $w=u(\cdot,t)$. Observe that $w^m\in H_0^1(B_R)$ and $w\ge0$. Due to Proposition \[prop2\] we know that $$\label{50a} -\Delta(w^m) \le \left [w^p+\frac{w}{(m-1)t} \right].$$ Observe that, since $u_0\in L^{\infty}(B_R)$, also $w\in L^{\infty}(B_R)$.
Let $q\ge1$ and $$r_1>\max\left\{\frac{q}{p}, \frac N2\right\}, \quad r_2>\max\left\{q, \frac N2\right\}\,.$$ We can apply to Proposition \[prop1\] with $$r_1=m_1, \quad r_2=m_2, \quad \frac{N}{2}<l<\min\{m_1\,,m_2\}\,.$$ So, we have that $$\label{50} \|w\|_{L^{\infty}(B_R)}^m\le \left\{C_1(r_1)\|w^p\|_{L^{r_1}(B_R)}+\gamma C_2(r_2)\|w\|_{L^{r_2}(B_R)} \right\}^{\frac{s-1}{s}} \|w\|_{L^{m}(B_R)}^{m\frac{s-1}{s}}+\bar k\,,$$ where $ s=1+\frac 2 N\,. $ Thanks to Hölder inequality and Young’s inequality with exponents $$\alpha_1=\frac{sm}{\left(p-\frac{q}{r_1}\right)(s-1)}\,>1,\quad\quad \beta_1=\frac{sm}{sm-(s-1)\left(p-\frac{q}{r_1}\right)}>1.$$ we obtain, for any $\varepsilon_1>0$ $$\label{eq2} \begin{aligned} \|w^p\|_{L^{r_1}(B_R)} &= \left\|w^{p-q/r_1+q/r_1}\right\|_{L^{r_1}(B_R)} =\left[ \int_{B_R}w^{r_1(p-q/r_1)} w^{q} d\mu\right]^{\frac{1}{r_1}}\\ &\le\left[ \|w^{p-q/r_1}\|_{L^{\infty}(B_R)} \|w^{q}\|_{L^1(B_R)} \right]^{\frac{1}{r_1}}\\ &=\|w\|_{L^{\infty}(B_R)}^{p-q/r_1}\left(\int_{B_R}w^{q}\,d\mu\right)^{\frac{1}{r_1}}=\left\|w\right\|_{L^{\infty}(B_R)}^{p-q/r_1}\left \|w\right\|_{L^{q}(B_R)}^{q/r_1} \\ &\le\frac{\varepsilon_1^{\alpha_1}}{\alpha_1}\left\|w\right\|_{L^{\infty}(B_R)}^{\frac{m}{p-q/r_1}(p-q/r_1) \frac{s}{s-1}} + \frac{\alpha_1-1}{\alpha_1}\varepsilon_1^{-\frac{\alpha_1}{\alpha_1-1}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{\beta_1q}{r_1}} . 
\end{aligned}$$ We set $$\delta_1:=\frac{\varepsilon_1^{\alpha_1}}{\alpha_1},\ \ \ \eta(x)=\dfrac{x-1}{x^{\frac{x}{x-1}}}.$$ Thus from we obtain $$\label{49} \left\|w^p\right\|_{L^{r_1}(B_R)} \le\delta_1\left\|w\right\|_{L^{\infty}(B_R)}^{m \frac{s}{s-1}} + \frac{\eta(\alpha_1)}{\delta_1^{\frac{1}{\alpha_1-1}}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_1}\frac{1}{s(m-p+q/r_1)+(p-q/r_1)}}$$ Similarly, again thanks to Hölder inequality and Young’s inequality with exponents $$\alpha_2=\frac{sm}{\left(1-\frac{q}{r_2}\right)(s-1)}\,>1,\quad\quad \beta_2=\frac{sm}{sm-(s-1)\left(1-\frac{q}{r_2}\right)}>1.$$ we obtain, for any $\varepsilon_2>0$ $$\begin{aligned} \left\|w\right\|_{L^{r_2}(B_R)} &\le \left\|w^{1-q/r_2+q/r_2}\right\|_{L^{r_2}(B_R)}\le\left\|w\right\|_{L^{\infty}(B_R)}^{1-q/r_2}\left\|w\right\|_{L^{q}(B_R)}^{q/r_2} \\ &\le\frac{\varepsilon_2^{\alpha_2}}{\alpha_2}\left\|w\right\|_{L^{\infty}(B_R)}^{\frac{m}{1-q/r_2}\left(1-\frac{q}{r_2}\right) \frac{s}{s-1}} + \frac{\alpha_2-1}{\alpha_2}\varepsilon_2^{-\frac{\alpha_2}{\alpha_2-1}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{\beta_2q}{r_2}} . 
\end{aligned}$$ We set $\delta_2:=\frac{\varepsilon_2^{\alpha_2}}{\alpha_2}$ and thus we obtain $$\label{49a} \left\|w\right\|_{L^{r_2}(B_R)} \le\delta_2\left\|w\right\|_{L^{\infty}(B_R)}^{m \frac{s}{s-1}} + \frac{\eta(\alpha_2)}{\delta_2^{\frac{1}{\alpha_2-1}}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_2}\frac{1}{s(m-1+q/r_2)+(1-q/r_2)}}$$ Plugging and into we obtain $$\begin{aligned} \|w\|_{L^{\infty}(B_R)}^{m\frac{s}{s-1}} &\le 2^{\frac{1}{s-1}}\left\{\left[C_1\|w^p\|_{L^{r_1}(B_R)}+\gamma C_2\|w\|_{L^{r_2}(B_R)} \right] \|w\|_{L^{m}(B_R)}^m +\bar k^{\frac{s}{s-1}}\right\}\\ &\le2^{\frac{1}{s-1}}\left\{C_1\left[\delta_1\left\|w\right\|_{L^{\infty}(B_R)}^{m \frac{s}{s-1}} + \frac{\eta(\alpha_1)}{\delta_1^{\frac{1}{\alpha_1-1}}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_1}\frac{1}{s(m-p+q/r_1)+(p-q/r_1)}} \right] \right .\\ &\left.+ \gamma C_2 \left[ \delta_2\left\|w\right\|_{L^{\infty}(B_R)}^{m \frac{s}{s-1}} + \frac{\eta(\alpha_2)}{\delta_2^{\frac{1}{\alpha_2-1}}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_2}\frac{1}{s(m-1+q/r_2)+(1-q/r_2)}} \right] \right\} \|w\|_{L^{m}(B_R)}^m+2^{\frac{1}{s-1}}\bar k^{\frac{s}{s-1}}. \end{aligned}$$ Without loss of generality we can assume that $\|w\|_{L^{m}(B_R)}^m\neq 0$. Choosing $\varepsilon_1, \varepsilon_2$ such that $$\delta_1=\dfrac{1}{4C_1\|w\|_{L^{m}(B_R)}^m } \ \ \ \delta_2=\dfrac{1}{4\gamma\,C_2\|w\|_{L^{m}(B_R)}^m}$$ we thus have $$\begin{aligned} \frac 1 2 \|w\|_{L^{\infty}(B_R)}^{m\frac{s}{s-1}} &\le 2^{\frac{1}{s-1}}\eta(\alpha_1)\left(4C_1^{\alpha_1}\|w\|_{L^{m}(B_R)}^{m\alpha_1}\right)^{\frac{1}{\alpha_1-1}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_1}\frac{1}{s(m-p+q/r_1)+(p-q/r_1)}} \\ & + 2^{\frac{1}{s-1}}\eta(\alpha_2)\left(4\gamma^{\alpha_2}C_2^{\alpha_2} \|w\|_{L^{m}(B_R)}^{m\alpha_2}\right)^{\frac{1}{\alpha_2-1}}\left\|w\right\|_{L^q(B_R)}^{\frac{smq}{r_2}\frac{1}{s(m-1+q/r_2)+(1-q/r_2)}}\\ &+2^{\frac{1}{s-1}}\bar k^{\frac{s}{s-1}}. 
\end{aligned}$$ This reduces to $$\label{eq3} \begin{aligned} \|w\|_{L^{\infty}(B_R)} &\le \left[2^{\frac{s}{s-1}}\eta(\alpha_1)(4C_1^{\alpha_1})^{\frac{1}{\alpha_1-1}} \right]^{\frac 1 m \frac{s-1}{s}}\|w\|_{L^{m}(B_R)}^{\frac{\alpha_1}{\alpha_1-1}\frac{s-1}{s}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_1}\frac{1}{s(m-p+q/r_1)+(p-q/r_1)}} \\ &+ \left[2^{\frac{s}{s-1}}\eta(\alpha_2)(4\gamma^{\alpha_2}C_2^{\alpha_2})^{\frac{1}{\alpha_2-1}} \right]^{\frac 1 m \frac{s-1}{s}}\|w\|_{L^{m}(B_R)}^{\frac{\alpha_2}{\alpha_2-1}\frac{s-1}{s}}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_2}\frac{1}{s(m-1+q/r_2)+(1-q/r_2)}}\\ &+\left(2\bar k\right)^{\frac{1}{m}}. \end{aligned}$$ Now we use the definitions of $C_1$, $C_2, \overline C_1, \overline C_2$ introduced in and , obtaining $$\begin{aligned} \|w\|_{L^{\infty}(B_R)} &\le \left[2^{\frac{s}{s-1}}\eta(\alpha_1)(4\overline C_1^{\alpha_1})^{\frac{1}{\alpha_1-1}} \right]^{\frac 1 m \frac{s-1}{s}}\|w\|_{L^{m}(B_R)}^{\frac{\alpha_1}{\alpha_1-1}\frac{s-1}{s}\left(1+\frac{1}{l}-\frac{1}{r_1}\right)}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_1}\frac{1}{s(m-p+q/r_1)+(p-q/r_1)}} \\ &+ \left[2^{\frac{s}{s-1}}\eta(\alpha_2)(4\gamma^{\alpha_2}\overline C_2^{\alpha_2})^{\frac{1}{\alpha_2-1}} \right]^{\frac 1 m \frac{s-1}{s}}\|w\|_{L^{m}(B_R)}^{\frac{\alpha_2}{\alpha_2-1}\frac{s-1}{s}\left(1+\frac{1}{l}-\frac{1}{r_2}\right)}\left\|w\right\|_{L^{q}(B_R)}^{\frac{smq}{r_2}\frac{1}{s(m-1+q/r_2)+(1-q/r_2)}}\\ &+\left(2\bar k\right)^{\frac{1}{m}}. \end{aligned}$$ By taking limits as $r_1\longrightarrow +\infty$ and $r_2\longrightarrow +\infty$ we have $$\begin{aligned} &\frac{\alpha_1}{\alpha_1-1}\longrightarrow \frac{m}{m-p+\frac{p}{s}};\\ &\frac{\alpha_2}{\alpha_2-1}\longrightarrow \frac{m}{m-1+\frac{1}{s}};\\ &\eta(\alpha_1)\longrightarrow\left[\frac{p(s-1)}{ms}\right]^{\frac{p(s-1)}{ms-p(s-1)}}\left\{1-\frac{p(s-1)}{ms}\right\};\\ &\eta(\alpha_2)\longrightarrow\left[\frac{s-1}{ms}\right]^{\frac{s-1}{ms-(s-1)}}\left\{1-\frac{s-1}{ms}\right\}. 
\end{aligned}$$ Moreover we define $$\begin{aligned} &\tilde \Gamma_1:=\left\{2^{\frac{s}{s-1}}\left[\frac{p(s-1)}{ms}\right]^{\frac{p(s-1)}{ms-p(s-1)}} \left[4\overline C_1^{\frac{ms}{p(s-1)}}\right]^{\frac{p(s-1)}{ms-p(s-1)}}\right\}^{\frac{s-1}{ms}}, \\ &\tilde \Gamma_2:=\left\{2^{\frac{s}{s-1}}\left[\frac{s-1}{ms}\right]^{\frac{s-1}{ms-(s-1)}} \left[4\overline C_2^{\frac{ms}{s-1}}\right]^{\frac{s-1}{ms-(s-1)}}\right\}^{\frac{s-1}{ms}}, \\ &\tilde \Gamma:=\max\{\tilde\Gamma_1\,\,,\tilde\Gamma_2\}. \end{aligned}$$ Hence by we get $$\label{51} \|w\|_{L^{\infty}(B_R)} \le \tilde \Gamma \left [ \|w\|_{L^{m}(B_R)}^{\frac{m}{m-p+p/s}\frac{s-1}{s}\left(1+\frac{1}{l}\right)} + \|w\|_{L^{m}(B_R)}^{\frac{m}{m-1+1/s}\frac{s-1}{s}\left(1+\frac{1}{l}\right)} \gamma^{\frac{s-1}{ms-(s-1)}}\right ]+\left(2\bar k\right)^{\frac{1}{m}}.$$ Letting $l\to +\infty$ in , we can infer that $$\label{eqa5} \|w\|_{L^{\infty}(B_R)} \le \Gamma \left [ \|w\|_{L^{m}(B_R)}^{\frac{2m}{mN +2(m-p)}} + \|w\|_{L^{m}(B_R)}^{\frac{2m}{mN+2(m-1)}} \gamma^{\frac{2}{mN+2(m-1)}}\right ]+\left(2\bar k\right)^{\frac{1}{m}},$$ where $$\Gamma_1=\left\{2^{\frac{N+2}2}\left[\frac{2p}{m(N+2)} \right]^{\frac{2p}{mN + 2 (m-p)}}\left[4 \left(\frac{N+2}{N} \right)^{\frac{N+2}N}\frac 1{C_s^2} \right]^{\frac{m(N+2)}{mN+2(m-p)}} \right\}^{\frac{2}{m(N+2)}}\,,$$ $$\Gamma_2=\left\{2^{\frac{N+2}{2}}\left[\frac 2{m(N+2)} \right]^{\frac 2{mN+2(m-1)}}\left[4 \left(\frac{N+2}{N} \right)^{\frac{N+2}N}\frac 1{C_s^2} \right]^{\frac{m(N+2)}{mN+2(m-1)}} \right\}^{\frac{2}{m(N+2)}}\,.$$ $$\Gamma:=\max\{\Gamma_1, \Gamma_2\}\,.$$ Letting $\bar k\to 0$ in we obtain $$\label{eqa6} \|w\|_{L^{\infty}(B_R)} \le \Gamma \left [ \|w\|_{L^{m}(B_R)}^{\frac{2m}{mN +2(m-p)}} + \|w\|_{L^{m}(B_R)}^{\frac{2m}{mN+2(m-1)}} \gamma^{\frac{2}{mN+2(m-1)}}\right]\,.$$ Finally, since $u_0\in L^{\infty}(B_R)$, we can apply Lemma \[lemma3\] to $w$ with $q=m$. Thus from with $q=m$ and , the thesis follows. 
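The limiting values of the exponents computed above as $r_1\to+\infty$ can be checked numerically. The following is a sanity check (not part of the proof), with the illustrative assumed values $N=3$, $m=3$, $p=2$, $q=m$:

```python
# Check the claimed limits of alpha_1/(alpha_1 - 1) and eta(alpha_1)
# as r_1 -> +infinity, where s = 1 + 2/N and
# alpha_1 = s*m / ((p - q/r_1)(s - 1)).
# The parameter values are illustrative assumptions, not from the paper.
N, m, p = 3.0, 3.0, 2.0
q = m
s = 1.0 + 2.0 / N
r1 = 1e9  # a large r_1 approximates the limit
alpha1 = s * m / ((p - q / r1) * (s - 1.0))
eta = lambda x: (x - 1.0) / x ** (x / (x - 1.0))
limit_ratio = m / (m - p + p / s)  # claimed limit of alpha_1/(alpha_1-1)
c = p * (s - 1.0) / (m * s)
limit_eta = c ** (p * (s - 1.0) / (m * s - p * (s - 1.0))) * (1.0 - c)
err_ratio = abs(alpha1 / (alpha1 - 1.0) - limit_ratio)  # ~ 0
err_eta = abs(eta(alpha1) - limit_eta)                  # ~ 0
print(err_ratio, err_eta)
```

The analogous limits for $\alpha_2$, $\eta(\alpha_2)$ follow in the same way upon replacing $p$ by $1$.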
Proof of Theorems \[teoesistenza\], \[blowup\] {#proofs} ============================================== Let $\{u_{0,h}\}_{h\ge 0}$ be a sequence of functions such that $$\begin{aligned} &u_{0,h}\in L^{\infty}(M)\cap C_c^{\infty}(M) \,\,\,\text{for all} \,\,h\ge 0, \\ &u_{0,h}\ge 0 \,\,\,\text{for all} \,\,h\ge 0, \\ &u_{0, h_1}\leq u_{0, h_2}\,\,\,\text{for any } h_1<h_2, \\ &u_{0,h}\longrightarrow u_0 \,\,\, \text{in}\,\, L^m(M)\quad \textrm{ as }\, h\to +\infty\,. \end{aligned}$$ For any $R>0, k>0, h>0,$ consider the problem $$\label{5} \begin{cases} u_t= \Delta u^m +T_k(u^p) &\text{in}\,\, B_R\times (0,+\infty)\\ u=0& \text{in}\,\, \partial B_R\times (0,\infty)\\ u=u_{0,h} &\text{in}\,\, B_R\times \{0\}\,. \\ \end{cases}$$ From standard results it follows that problem has a solution $u_{h,k}^R$ in the sense of Definition \[4\]; moreover, $u^R_{h,k}\in C\big([0, T]; L^q(B_R)\big)$ for any $q>1$. Hence, it satisfies the inequalities in Lemma \[lemma3\] and in Proposition \[teo2\], i.e., for any $t\in(0,+\infty)$, $$\label{6} \|u_{h,k}^R(t)\|_{L^m(B_R)}\,\le\, e^{ C t}\|u_{0,h}\|_{L^m(B_R)};$$ $$\label{7} \begin{aligned} \|u_{h, k}^R(t)\|_{L^{\infty}(B_R)}&\le \Gamma \left\{ \left [e^{Ct}\|u_{0, h}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN +2(m-p)}}\right.\\&\left.+\left [e^{Ct}\|u_{0, h}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN+2(m-1)}} \left [\frac{1}{(m-1)t}\right]^{\frac{2}{mN+2(m-1)}}\right\}\,. 
\end{aligned}$$ In addition, for any $\tau\in (0, T), \zeta\in C^1_c((\tau, T)), \zeta\geq 0$, $\max_{[\tau, T]}\zeta'>0$, $$\label{eqcont1} \begin{aligned} \int_{\tau}^T \zeta(t)\int_{B_R} \left[\big((u^R_{h, k})^{\frac{m+1}2}\big)_t\right]^2 d\mu\, dt &\leq \max_{[\tau, T]}\zeta'\, \bar C \int_{B_R}(u_{h, k}^R)^{m+1}(x, \tau)d\mu\\ &+ \bar C \max_{[\tau, T]}\zeta \int_{B_R} F\big(u^{R}_{h, k}(x,T)\big)d\mu\\ &\leq \max_{[\tau, T]}\zeta'\,\bar C \|u^R_{h, k}(\tau)\|_{L^\infty(B_R)}\|u^R_{h, k}(\tau)\|_{L^m(B_R)}^m \\ &+\frac {\bar C}{m+p}\|u^R_{h, k}(T)\|^p_{L^\infty(B_R)}\|u^R_{h, k}(T)\|_{L^m(B_R)}^m \end{aligned}$$ where $$F(u)=\int_0^u s^{m-1+p} \, ds\,,$$ and $\bar C>0$ is a constant only depending on $m$. Inequality is formally obtained by multiplying the differential inequality in problem by $\zeta(t)[(u^m)_t]$, and integrating by parts; indeed, a standard approximation procedure is needed (see [@GMPo Lemma 3.3] and [@ACP Theorem 13]). Moreover, as a consequence of Definition \[4\], for any $\varphi \in C_c^{\infty}(B_R\times[0,T])$ such that $\varphi(x,T)=0$ for any $x\in B_R$, $u_{h,k}^R$ satisfies $$\label{8} \begin{aligned} -\int_0^T\int_{B_R}u_{h,k}^R\,\varphi_t\,d\mu\,dt =&\int_0^T\int_{B_R} (u_{h,k}^R)^m\,\Delta\varphi\,d\mu\,dt\,+ \int_0^T\int_{B_R} T_k[(u_{h,k}^R)^p]\,\varphi\,d\mu\,dt \\ & +\int_{B_R} u_{0,h}(x)\,\varphi(x,0)\,d\mu. \end{aligned}$$ Observe that all the integrals in are finite. Indeed, due to , $u_{h,k}^R \in L^m(B_R\times(0,T))$ hence, since $p<m$, $u_{h,k}^R \in L^p(B_R\times(0,T))$ and $u_{h,k}^R \in L^1(B_R\times(0,T))$. Moreover, observe that, for any $h>0$ and $R>0$, the sequence of solutions $\{u_{h,k}^R\}_{k\ge0}$ is monotone increasing in $k$, hence it has a pointwise limit as $k\to \infty$. Let $u_h^R$ be such limit, so that we have $$u_{h,k}^R\longrightarrow u_{h}^R \quad \text{as} \,\,\, k\to\infty \,\,\text{pointwise}.$$ In view of , , the right hand side of is independent of $k$.
So, $(u^R_h)^{\frac{m+1}2}\in H^1((\tau, T); L^2(B_R))$. Therefore, $(u^R_h)^{\frac{m+1}2}\in C\big([\tau, T]; L^2(B_R)\big)$. We can now pass to the limit as $k\to +\infty$ in inequalities and arguing as follows. From inequality , thanks to Fatou’s Lemma, one has for all $t>0$ $$\label{10} \begin{aligned} \|u_{h}^R(t)\|_{L^m(B_R)}\leq e^{ C t}\|u_{0,h}\|_{L^m(B_R)}. \end{aligned}$$ On the other hand, from , since $u_{h,k}^R\longrightarrow u_{h}^R$ as $k\to \infty$ pointwise and the right hand side of is independent of $k$, one has for all $t>0$ $$\label{11} \begin{aligned} \|u_{h}^R(t)\|_{L^{\infty}(B_R)}&\le \Gamma \left\{ \left [e^{Ct}\|u_{0, h}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN +2(m-p)}}\right.\\&\left.+\left [e^{Ct}\|u_{0, h}\|_{L^m(B_R)}\right ]^{\frac{2m}{mN+2(m-1)}} \left [\frac{1}{(m-1)t}\right]^{\frac{2}{mN+2(m-1)}}\right\}\,. \end{aligned}$$ Note that both and hold [*for all*]{} $t>0$, in view of the continuity property of $u$ deduced above. Moreover, thanks to Beppo Levi’s monotone convergence Theorem, it is possible to compute the limit as $k\to +\infty$ in the integrals of equality and hence obtain that, for any $\varphi \in C_c^{\infty}(B_R\times(0,T))$ such that $\varphi(x,T)=0$ for any $x\in B_R$, the function $u_h^R$ satisfies $$\label{9} \begin{aligned} -\int_0^T\int_{B_R} u_{h}^R\,\varphi_t\,d\mu\,dt =&\int_0^T\int_{B_R} (u_{h}^R)^m\,\Delta\varphi\,d\mu\,dt+ \int_0^T\int_{B_R} (u_{h}^R)^p\,\varphi\,d\mu\,dt \\ & +\int_{B_R} u_{0,h}(x)\,\varphi(x,0)\,d\mu. \end{aligned}$$ Observe that, due to inequality , all the integrals in are finite, hence $u_h^R$ is a solution to problem , where we replace $T_k(u^p)$ with $u^p$ itself, in the sense of Definition \[4\]. Let us now observe that, for any $h>0$, the sequence of solutions $\{u_h^R\}_{R>0}$ is monotone increasing in $R$, hence it has a pointwise limit as $R\to+\infty$.
We call its limit function $u_h$ so that $$u_{h}^R\longrightarrow u_{h} \quad \text{as} \,\,\, R\to+\infty \,\,\text{pointwise}.$$ In view of , , , , the right hand side of is independent of $k$ and $R$. So, $(u_h)^{\frac{m+1}2}\in H^1((\tau, T); L^2(M))$. Therefore, $(u_h)^{\frac{m+1}2}\in C\big([\tau, T]; L^2(M)\big)$. Since $u_0\in L^m(M)$, there exists $k_0>0$ such that $$\label{eqag1} \|u_{0h}\|_{L^m(B_R)}\leq k_0 \quad \forall\,\, h>0, R>0\,.$$ Note that, in view of , the norms in and do not depend on $R$ (see Proposition \[teo2\], Lemma \[lemma3\] and Remark \[remark3\]). Therefore, we pass to the limit as $R\to+\infty$ in and . By Fatou’s Lemma, $$\label{12} \begin{aligned} \|u_{h}(t)\|_{L^m(M)}\leq e^{Ct}\|u_{0,h}\|_{L^m(M)}; \end{aligned}$$ furthermore, since $u_{h}^R\longrightarrow u_{h} $ as $R\to +\infty$ pointwise, $$\label{13} \begin{aligned} \|u_{h}(t)\|_{L^{\infty}(M)}&\le \Gamma \left\{ \left [e^{Ct}\|u_{0, h}\|_{L^m(M)}\right ]^{\frac{2m}{mN +2(m-p)}}\right.\\&\left.+\left [e^{Ct}\|u_{0, h}\|_{L^m(M)}\right ]^{\frac{2m}{mN+2(m-1)}} \left [\frac{1}{(m-1)t}\right]^{\frac{2}{mN+2(m-1)}}\right\}\,. \end{aligned}$$ Note that both and hold [*for all*]{} $t>0$, in view of the continuity property of $u^R_h$ deduced above. Moreover, again by monotone convergence, it is possible to compute the limit as $R\to +\infty$ in the integrals of equality and hence obtain that, for any $\varphi \in C_c^{\infty}(M\times(0,T))$ such that $\varphi(x,T)=0$ for any $x\in M$, the function $u_h$ satisfies, $$\label{14} \begin{aligned} -\int_0^T\int_{M} u_{h}\,\varphi_t\,d\mu\,dt =&\int_0^T\int_{M} (u_{h})^m\,\Delta\varphi\,d\mu\,dt+ \int_0^T\int_{M} (u_{h})^p\,\varphi\,d\mu\,dt \\ & +\int_{M} u_{0,h}(x)\,\varphi(x,0)\,d\mu. \end{aligned}$$ Observe that, due to inequality , all the integrals in are well posed hence $u_h$ is a solution to problem , where we replace $u_0$ with $u_{0,h}$, in the sense of Definition \[1\]. 
Finally, let us observe that $\{u_{0,h}\}_{h\ge0}$ has been chosen in such a way that $$u_{0,h}\longrightarrow u_0 \,\,\, \text{in}\,\, L^m(M)\,.$$ Observe also that $\{u_{h}\}_{h\ge0}$ is monotone increasing in $h$, hence it has a pointwise limit as $h\to+\infty$. We call $u$ the limit function. In view of , , , , , , the right hand side of is independent of $k, R$ and $h$. So, $u^{\frac{m+1}2}\in H^1((\tau, T); L^2(M))$. Therefore, $u^{\frac{m+1}2}\in C\big([\tau, T]; L^2(M)\big)$. Hence, we can pass to the limit as $h\to +\infty$ in and and similarly to what we have seen above, we get $$\label{15} \|u(t)\|_{L^m(M)}\le e^{Ct}\|u_{0}\|_{L^m(M)},$$ and $$\label{16} \begin{aligned} \|u(t)\|_{L^{\infty}(M)}\, &\le\, \Gamma \left\{ \left [e^{Ct}\|u_{0}\|_{L^m(M)}\right ]^{\frac{2m}{mN +2(m-p)}}\right. \\& \left.+\left [e^{Ct}\|u_{0}\|_{L^m(M)}\right ]^{\frac{2m}{mN+2(m-1)}} \left [\frac{1}{(m-1)t}\right]^{\frac{2}{mN+2(m-1)}}\right\}. \end{aligned}$$ Note that both and hold [*for all*]{} $t>0$, in view of the continuity property of $u$ deduced above. Moreover, again by monotone convergence, it is possible to compute the limit as $h\to+\infty$ in the integrals of equality and hence obtain that, for any $\varphi \in C_c^{\infty}(M\times(0,T))$ such that $\varphi(x,T)=0$ for any $x\in M$, the function $u$ satisfies, $$\label{17} \begin{aligned} -\int_0^T\int_{M} u\,\varphi_t\,d\mu\,dt =&\int_0^T\int_{M} u^m\,\Delta\varphi\,d\mu\,dt+ \int_0^T\int_{M} u^p\,\varphi\,d\mu\,dt \\ & +\int_{M} u_{0}(x)\,\varphi(x,0)\,d\mu. \end{aligned}$$ Observe that, due to inequality , all the integrals in are finite, hence $u$ is a solution to problem in the sense of Definition \[1\]. Finally, let us discuss . Let $q>1$.
If $u_0\in L^q(M)\cap L^m(M)$, we choose the sequence $u_{0h}$ so that it further satisfies $$u_{0h}\to u_0 \quad \textrm{ in }\,\, L^q(M)\,\quad \textrm{ as }\, h\to +\infty\,.$$ We have that $$\label{6a} \|u_{h,k}^R(t)\|_{L^q(B_R)}\,\le\, e^{ C t}\|u_{0,h}\|_{L^q(B_R)}.$$ Hence, due to , letting $k\to +\infty, R\to +\infty, h\to +\infty$, by Fatou’s Lemma we deduce . We first note that the geometrical assumptions on $M$, in particular the upper curvature bound sec$\,\le -k_2<0$, ensure that inequalities and both hold on $M$, see e.g. [@Grig; @Grig3]. Hence, all the results of Theorem \[teoesistenza\] hold; in particular, solutions corresponding to data $u_0\in{\textrm L}^m(M)$ exist globally in time. Besides, it has been shown in [@GMPv] that if $u_0$ is a continuous, nonnegative, nontrivial datum, which is sufficiently large in the sense given in the statement, under the lower curvature bound being assumed here the corresponding solution $v$ satisfies the bound $$v(x,t)\ge C \zeta(t) \left[ 1- \frac r a \eta(t) \right]_+^{\frac 1{m-1}}\,\qquad \forall t\in(0,S),\ \forall x\in M,$$ possibly up to a finite explosion time $S$, which has however been proved in the present paper not to exist. Here, the functions $\eta, \zeta$ are given by: $$\zeta(t):=(\tau+t)^{\alpha}\,, \quad \eta(t):=(\tau +t)^{-\beta} \quad \text{for every } t\in [0, \infty) \, ,$$ where $C, \tau, R_0,\inf_{B_{R_0}}u_0$ must be large enough and one can take $0<\alpha<\frac 1{m-1} \, , \beta=\frac{\alpha(m-1)+1}2$. Clearly, $v$ then satisfies $\lim_{t\to+\infty}v(x,t)=+\infty$ for all $x\in M$, and hence $u$ enjoys the same property by comparison.
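The pointwise divergence of the subsolution lower bound $\zeta(t)\left[1-\frac ra\,\eta(t)\right]_+^{\frac1{m-1}}$ can be illustrated numerically. The following sanity check (not part of the proof) uses the illustrative assumed values $m=2$, $\alpha=1/2$, $\tau=1$, $r/a=1/2$, which are consistent with $0<\alpha<\frac1{m-1}$:

```python
# Illustration that zeta(t)*[1 - (r/a)*eta(t)]_+^{1/(m-1)} -> +infinity
# pointwise as t -> +infinity, with beta = (alpha*(m-1)+1)/2 as above.
# All parameter values here are illustrative assumptions.
m, alpha, tau, ra = 2.0, 0.5, 1.0, 0.5
beta = (alpha * (m - 1.0) + 1.0) / 2.0

def lower_bound(t):
    zeta = (tau + t) ** alpha
    eta = (tau + t) ** (-beta)
    return zeta * max(1.0 - ra * eta, 0.0) ** (1.0 / (m - 1.0))

vals = [lower_bound(t) for t in (1e2, 1e4, 1e6)]
print(vals)  # strictly increasing along this sequence of times
```

Since $\beta>0$, the bracket tends to $1$, so the bound grows like $(\tau+t)^{\alpha}$ for every fixed $x$.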
Proof of Theorems \[teoesistenza2\], \[teoesistenza3\] {#weights} ====================================================== For any $R>0$ we consider the following approximate problem $$\label{problemapalla2} \begin{cases} \, \rho(x) u_t= \Delta u^m +\, \rho(x) u^p & \text{in}\,\, B_R\times (0,T) \\ u=0 &\text{in}\,\, \partial B_R\times (0,T)\\ u =u_0 &\text{in}\,\, B_R\times \{0\}\,, \end{cases}$$ where $B_R$ denotes the Euclidean ball of radius $R$ centred at $O$. We shall use the following Aronson-Benilan type estimate (see [@AB]; see also [@Sacks Proposition 2.3]). \[prop2a\] Let $1<p< m$, $u_0\in H_0^1(B_R) \cap L^{\infty}(B_R)$, $u_0\ge 0$. Let $u$ be the solution to problem . Then, for a.e. $t\in(0,T)$, $$-\Delta u^m(\cdot,t) \le \rho u^p(\cdot, t)+ \frac{\rho}{(m-1)t} u(\cdot,t) \quad \text{in}\,\,\,\mathfrak{D}'(B_R).$$ The conclusion follows using step by step the same arguments given in the proof of Theorem \[teoesistenza\], since the necessary functional inequalities are being assumed. We use Proposition \[prop2a\] instead of \[prop2\]. The last statement of the Theorem will be proved later on in Section \[sec4\]. In order to prove Theorem \[teoesistenza3\] we adapt the strategy of [@Sacks] to the present case, so we shall be concise and limit ourselves to identifying the main steps and differences. Define $$d\mu:=\rho(x) dx\,.$$ For any $R>0$, for $v\in L_{\rho}^1(B_R)$, we set $$A_k:= \{x\in B_R:\,|v(x)|>k\}.$$ \[lemma1pesi\] Let $v\in L_{\rho}^1(B_R)$.
Suppose that there exist $C>0$ and $s>1$ such that $$g(k)\le C\mu(A_k)^{s} \quad \text{for any}\,\,k\in {\ensuremath{\mathbb{R}}}^+.$$ Then $v\in L^{\infty}(B_R)$ and $$\|v\|_{L^{\infty}(B_R)}\le C\left(\frac{s}{s-1}\right)^{s}\|\rho\|_{L^1(\mathbb R^N)}^{s-1}.$$ Arguing as in the proof of Lemma \[lemma1\], we integrate inequality between $0$ and $k$ and using the definition of $g$, we obtain $$g(k)^{1-\frac 1 s} \le \|v\|_{L^1_{\rho}(B_R)}^{1-\frac 1 s} - \frac{s-1}{s}\,C^{-\frac 1 s} k \quad \text{for any}\,\, k\in{\ensuremath{\mathbb{R}}}^+\,.$$ Choose $$k=k_0=C^{\frac 1 s}\|v\|_{L^1_{\rho}(B_R)}^{1-\frac 1 s}\frac{s}{s-1},$$ and substitute it in the last inequality. Then we have $$\begin{aligned} g(k_0)\le0 &\iff \int_{B_R}|G_{k_0}(v)|\,d\mu =0 \iff |G_{k_0}(v)|=0\\& \iff |v|\le k_0\iff|v|\le C^{\frac 1 s}\|v\|_{L_{\rho}^1(B_R)}^{1-\frac 1 s}\frac{s}{s-1}. \end{aligned}$$ Thanks to the assumption that $\rho\in L^1(\mathbb R^N)$, we can apply the weighted Hölder inequality to get $$\|v\|_{L^{\infty}(B_R)}\le \frac{s}{s-1}C^{\frac 1 s}\|v\|_{L^{\infty}(B_R)}^{1-\frac 1 s}\|\rho\|^{1-\frac 1 s}.$$ Rearranging the terms in the previous inequality we obtain the thesis. \[prop1-pesi\] Let $\rho$ satisfy and $\rho\in L^1(\mathbb R^N)$. Let $f_1\in L^{m_1}_{\rho}(B_R)$ and $f_2\in L^{m_2}_{\rho}(B_R)$ where $$m_1>\frac N 2,\quad \,m_2 >\frac{N}{2}\,.$$ Assume that $v\in H_0^1(B_R)$, $v\ge 0$ is a subsolution to problem $$\begin{cases} -\Delta v = \rho (f_1+f_2) & \text{in}\,\, B_R\\ v=0 &\text{on}\,\, \partial B_R \end{cases}.$$ Then $$\label{eqa10pesi} \|v\|_{L^{\infty}(B_R)}\le C_1\|f_1\|_{L_{\rho}^{m_1}(B_R)}+C_2\|f_2\|_{L_{\rho}^{m_2}(B_R)},$$ where $$\label{38b} \begin{aligned} &C_1= \frac{1}{C_s^2}\left(\frac{s}{s-1} \right)^s\,\|\rho\|_{L^1(\mathbb R^N)}^{\frac 2 N-\frac{1}{m_1}}\,, \\ &C_2=\frac{1}{C_s^2}\left(\frac{s}{s-1} \right)^s\,\|\rho \|_{L^1(\mathbb R^N)}^{\frac 2 N-\frac{1}{m_2}}\,, \end{aligned}$$ with $s$ given by . 
If in Lemma \[prop1-pesi\] we further assume that there exists a constant $k_0>0$ such that $$\|f_1 \|_{L_{\rho}^{m_1}(B_R)}\leq k_0, \quad \|f_2 \|_{L_{\rho}^{m_2}(B_R)}\leq k_0 \quad \textrm{ for all }\,\, R>0,$$ then from , we infer that the bound from above on $\|v\|_{L^{\infty}(B_R)}$ is independent of $R$. This fact will play a key role in the proof of global existence for problem . By arguing as in the proof of Proposition \[prop1\], we get $$\int_{B_R}|G_k(v)|\, d\mu \le\frac{1}{C_s^2}\left[\|f_1\|_{L_{\rho}^{m_1}}\mu(A_k)^{1+\frac{2}{N}-\frac{1}{m_1}}+\|f_2\|_{L_{\rho}^{m_2}}\mu(A_k)^{1+\frac{2}{N}-\frac{1}{m_2}}\right].$$ Thus $$\int_{B_R}|G_k(v)|\, d\mu \le\frac{1}{C_s^2}\,\mu(A_k)^{1+\frac{2}{N}-\frac{1}{l}}\left[\|f_1\|_{L_{\rho}^{m_1}}\|\rho\|_{L^1(\mathbb R^N)}^{\frac{1}{l}-\frac{1}{m_1}}+\|f_2\|_{L_{\rho}^{m_2}}\|\rho\|_{L^1(\mathbb R^N)}^{\frac{1}{l}-\frac{1}{m_2}}\right].$$ Now, defining $$\bar C=\frac{1}{C_s^2}\left[\|f_1\|_{L_{\rho}^{m_1}(B_R)}\|\rho\|_{L^1(\mathbb R^N)}^{\frac{1}{l}-\frac{1}{m_1}}+\|f_2\|_{L_{\rho}^{m_2}(B_R)}\|\rho\|_{L^1(\mathbb R^N)}^{\frac{1}{l}-\frac{1}{m_2}}\right]\,,$$ the last inequality is equivalent to $$\int_{B_R}|G_k(v)|\,d\mu \le \bar C\,\mu(A_k)^{s}\,, \quad \text{for any}\,\,k\in{\ensuremath{\mathbb{R}}}^+\,,$$ where $s$ has been defined in . Hence, it is possible to apply Lemma \[lemma1pesi\]. By using the definitions of $C_1$ and $C_2$ in , we thus have $$\|v\|_{L^{\infty}(B_R)}\le C_1\,\|f_1\|_{L_{\rho}^{m_1}(B_R)}+C_2\,\|f_2\|_{L_{\rho}^{m_2}(B_R)}\,.$$ \[teo2pesi\] Let $1<p<m$, $R>0, u_0\in L^{\infty}(B_R)$, $u_0\ge 0$. Let $u$ be the solution to problem . Let inequality hold. Then there exists $C=C(p, m, N, C_s, \|\rho\|_{L^1(\mathbb R^N)})>0$ such that, for all $t>0$, $$\|u(t)\|_{L^{\infty}(B_R)}\le C \left[1+\left(\frac{1}{(m-1)t}\right)^{\frac{1}{m-1}} \right].$$ We proceed as in the proof of Proposition \[teo2\], up to inequality .
Thanks to the fact that $\rho\in L^1(\mathbb R^N)$, we can apply to the thesis of Lemma \[prop1-pesi\]. Thus we obtain $$\label{eq4pesi} \|w\|_{L^{\infty}(B_R)}^m\le C_1\|w^p\|_{L_{\rho}^{r_1}(B_R)}+\gamma C_2\|w\|_{L_{\rho}^{r_2}(B_R)}.$$ Now the constants are $$\begin{aligned} &\alpha_1=\frac{m}{p-\frac{q}{r_1}}; \\ &\alpha_2=\frac{m}{1-\frac{q}{r_2}}; \\ &\varepsilon_1 \text{ such that}\,\,\delta_1=\frac{1}{4C_1}; \\ &\varepsilon_2 \text{ such that}\,\,\delta_2=\frac{1}{4\gamma C_2}. \end{aligned}$$ Plugging and into we obtain $$\label{eq5pesi} \begin{aligned} \|w\|_{L^{\infty}(B_R)}^{m} &\le C_1\|w^p\|_{L_{\rho}^{r_1}(B_R)}+\gamma C_2\|w\|_{L_{\rho}^{r_2}(B_R)} \\ &\le C_1\left[\delta_1\left\|w\right\|_{L^{\infty}(B_R)}^{m} + \frac{\eta(\alpha_1)}{\delta_1^{\frac{1}{\alpha_1-1}}}\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{mq}{r_1}\frac{1}{m-p+q/r_1}} \right] \\ &+ \gamma C_2 \left[ \delta_2\left\|w\right\|_{L^{\infty}(B_R)}^{m} + \frac{\eta(\alpha_2)}{\delta_2^{\frac{1}{\alpha_2-1}}}\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{mq}{r_2}\frac{1}{m-1+q/r_2}} \right] . \end{aligned}$$ Inequality can be rewritten as $$\begin{aligned} \|w\|_{L^{\infty}(B_R)} &\le \left[2\eta(\alpha_1)\left(4C_1^{\alpha_1}\right)^{\frac{1}{\alpha_1-1}}\right]^{\frac{1}{m}}\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{q}{r_1}\frac{1}{m-p+q/r_1}}\\&+\left[2 \eta(\alpha_2)\left(4\gamma^{\alpha_2}C_2^{\alpha_2}\right)^{\frac{1}{\alpha_2-1}}\right]^{\frac{1}{m}}\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{q}{r_2}\frac{1}{m-1+q/r_2}}. 
\end{aligned}$$ Computing the limits as $r_1\longrightarrow \infty$ and $r_2\longrightarrow \infty$, we have $$\begin{aligned} &\eta(\alpha_1)\longrightarrow\left[\frac{p}{m}\right]^{\frac{p}{m-p}}\left\{1-\frac{p}{m}\right\};\\ &\eta(\alpha_2)\longrightarrow\left[\frac{1}{m}\right]^{\frac{1}{m-1}}\left\{1-\frac{1}{m}\right\};\\ &\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{q}{r_1}\frac{1}{(m-p+q/r_1)}} \longrightarrow 1;\\ &\left\|w\right\|_{L_{\rho}^{q}(B_R)}^{\frac{q}{r_2}\frac{1}{(m-1+q/r_2)}}\longrightarrow 1. \end{aligned}$$ Moreover, we define $$\begin{aligned} &\Gamma_1:=\left[2\left(\frac{p}{m}\right)^{\frac{p}{m-p}} \left(1-\frac{p}{m}\right)4^{\frac{p}{m-p}}C_1^{\frac{m}{m-p}}\right]^{\frac 1 m};\\ &\Gamma_2:=\left[2\left(\frac{1}{m}\right)^{\frac{1}{m-1}} \left(1-\frac{1}{m}\right)4^{\frac{1}{m-1}}C_2^{\frac{m}{m-1}}\right]^{\frac 1 m};\\ &C:=\max\{\Gamma_1\,\,,\Gamma_2\} \end{aligned}$$ and notice that, by the above construction, the thesis follows with this choice of $C$. The conclusion follows by the same arguments as in the proof of Theorem \[teoesistenza\]. However, some minor modifications are in order. We replace Proposition \[teo2\] by Proposition \[teo2pesi\]. Moreover, since $u_0\in L^1_{\rho}(\mathbb R^N)$, the family of functions $\{u_{0,h}\}$ is as follows: $$\begin{aligned} &u_{0,h}\in L^{\infty}(\mathbb R^N)\cap C_c^{\infty}(\mathbb R^N) \,\,\,\text{for all} \,\,h\ge 0, \\ &u_{0,h}\ge 0 \,\,\,\text{for all} \,\,h\ge 0, \\ &u_{0, h_1}\leq u_{0, h_2}\,\,\,\text{for any } h_1<h_2, \\ &u_{0,h}\longrightarrow u_0 \,\,\, \text{in}\,\, L^1_{\rho}(\mathbb R^N)\quad \textrm{ as }\, h\to +\infty\,. \end{aligned}$$ Furthermore, instead of , , , , we use the following. By standard arguments (see, e.g.
proof of [@Sacks Proposition 2.5-(i)]) we have that $$\|u^R_{h,k}(t)\|_{L^1_{\rho}(B_R)}\leq C \|u_{0h}\|_{L^1_{\rho}(B_R)}\quad \textrm{ for all}\,\, t>0\,,$$ for some positive constant $C=C(p, m, N, \|\rho\|_{L^1(\mathbb R^N)}),$ and, for any $\varepsilon\in (0, m-p)$, $$\int_0^1\int_{B_R} (u^R_{h, k})^{p+\varepsilon}\rho(x) dx dt \leq \tilde C\,,$$ for some positive constant $\tilde C=\tilde C(p, m, N, \|\rho\|_{L^1(\mathbb R^N)}, \|u_0\|_{L^1_{\rho}(\mathbb R^N)}).$ Hence, after having passed to the limit as $k\to +\infty, R\to +\infty, h\to +\infty$, for any $T>0, \varphi \in C_c^{\infty}(\mathbb R^N\times(0,T))$ such that $\varphi(x,T)=0$ for every $x\in \mathbb R^N$, we have that $$\int_0^T\int_{\mathbb R^N} u^{p+\varepsilon}\rho(x)\varphi\, dx dt\leq C\,.$$ Therefore, holds. End of proof of Theorem \[teoesistenza2\]: an example of complete blowup in infinite time {#sec4} ----------------------------------------------------------------------------------------- We recall that we are assuming $m>1$ and $1<p<m$. Let us set $r:=|x|$. We now construct a subsolution to equation $$\label{91b} \rho\, u_t= \Delta u^m +\rho\, u^p\quad \text{in}\,\, \mathbb R^N\times (0,T)\,,$$ under the hypothesis that there exist $k_1$ and $k_2$ with $k_2\geq k_1>0$ such that $$\label{95b} k_1r^2\le \frac{1}{\rho(x)}\le k_2r^2\quad \text{for any}\,\,\,x\in{\ensuremath{\mathbb{R}}}^N\setminus B_e.$$ Moreover, due to the running assumptions on the weight there exist positive constants $\rho_1,\rho_2$ such that $$\label{96b} \rho_1\le \frac{1}{\rho(x)}\le \rho_2\quad \text{for any}\,\,\,x\in B_e\,.$$ Let $$\mathfrak{s}(x):=\begin{cases} \log(|x|) &\quad \text{if}\quad x\in {\ensuremath{\mathbb{R}}}^N\setminus B_e, \\ & \\ \dfrac{|x|^2+e^2}{2e^2} &\quad\text{if}\quad x\in B_e\,. \end{cases}$$ The requested statements will follow from the following result. 
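As an aside on the definition of $\mathfrak{s}$ above: the two branches are glued in a $C^1$ fashion across $\partial B_e$, since at $r=e$ both branches take the value $1$ and both radial derivatives equal $1/e$. A quick numerical confirmation of the radial profiles:

```python
import math

e = math.e

outer = lambda r: math.log(r)                   # branch of s for |x| >= e
inner = lambda r: (r**2 + e**2) / (2.0 * e**2)  # branch of s for |x| <= e
douter = lambda r: 1.0 / r                      # radial derivative, outer branch
dinner = lambda r: r / e**2                     # radial derivative, inner branch

# Both branches have value 1 and slope 1/e at r = e, so s is C^1 there.
assert abs(outer(e) - inner(e)) < 1e-12
assert abs(douter(e) - dinner(e)) < 1e-12
```

This matching is what later makes $w^m$ a $C^1$ function across $\partial B_e$, so that the two subsolutions can be patched together.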
\[teosubsolutioncritical\] Let assumptions , and be satisfied, and let $1<p<m$. If the initial datum $u_0$ is smooth, compactly supported and large enough, then problem has a solution $u(t)\in L^\infty(\mathbb R^N)$ for any $t\in (0,\infty)$ that blows up in infinite time, in the sense that $$\label{eq22} \lim_{t\to+\infty}u(x, t)= +\infty \ \ \ \forall x\in {\mathbb R}^N.$$ More precisely, if $C>0$, $a>0$, $\alpha>0$, $\beta>0$, $T>0$ satisfy $$\label{98b} 0<T^{-\beta}<\frac{a}{2}\,,$$ $$\label{alphabeta} 0<\alpha<\frac{1}{m-1}\,,\quad\quad \beta=\frac{\alpha(m-1)+1}{2}\,,$$ and $$u_0(x)\ge CT^{\alpha}\left[1-\frac{\mathfrak{s}(x)}{a}\,T^{-\beta}\right]^{\frac{1}{m-1}}_{+}\,, \quad \text{for any}\,\, x\in {\ensuremath{\mathbb{R}}}^N\,,$$ then the solution $u$ of problem satisfies and the bound from below $$u(x,t) \ge C (T+t)^{\alpha}\left [1- \frac{\mathfrak{s}(x)}{a}\, (T+t)^{-\beta} \right ]_{+}^{\frac{1}{m-1}}, \,\, \text{for any}\,\, (x,t) \in {\ensuremath{\mathbb{R}}}^N\times(0,+\infty)\,.$$ We construct a suitable subsolution of . Define, for all $(x,t)\in {\ensuremath{\mathbb{R}}}^N\times(0,T)$, $$w(x,t)\equiv w(r(x),t) := \begin{cases} u(x,t) \quad \text{in } [{\ensuremath{\mathbb{R}}}^N \setminus B_{e}] \times (0,T), \\ v(x,t) \quad \text{in } B_{e} \times (0,T), \end{cases}$$ where $${{u}}(x,t)\equiv {u}(r(x),t):=C(T+t)^{\alpha}\left [1-\frac{\log(r)}{a}(T+t)^{-\beta}\right]_{+}^{\frac{1}{m-1}}, $$ and $$v(x,t) \equiv v(r(x),t):= C(T+t)^{\alpha} \left [ 1-\frac{r^2+e^2}{2e^2} \frac{(T+t)^{-\beta}}{a} \right ]^{\frac{1}{m-1}}_{+}\,. $$ Moreover, let $$F(r,t):= 1-\frac{\log(r)}{a}(T+t)^{-\beta}\,,$$ and $$G(r,t):= 1-\frac{r^2+e^2}{2e^2}\frac{(T+t)^{-\beta}}{a}\,.$$ For any $(x,t) \in ({\ensuremath{\mathbb{R}}}^N\setminus B_e) \times (0,T)$, we have: $$\begin{aligned} {u}_t =C\alpha(T+t)^{\alpha-1} F^{\frac{1}{m-1}} - C\beta(T+t)^{\alpha-1} \frac{1}{m-1}F^{\frac{1}{m-1}} + C\beta(T+t)^{\alpha-1} \frac{1}{m-1} F^{\frac{1}{m-1}-1}.
\\ \end{aligned} $$ $$({u}^m)_r=-\frac{C^m}{a} (T+t)^{m\alpha} \frac{m}{m-1} F^{\frac{1}{m-1}} \frac{1}{r}(T+t)^{-\beta}. $$ $$\begin{aligned} ({u}^m)_{rr}&=\frac{C^m}{a} (T+t)^{m\alpha} \frac{m}{m-1} F^{\frac{1}{m-1}} \frac{(T+t)^{-\beta}}{r^2}\\&+ \frac{C^m}{a^2} (T+t)^{m\alpha} \frac{m}{(m-1)^2} F^{\frac{1}{m-1}-1} \frac{(T+t)^{-2\beta}}{r^2}. \end{aligned} $$ For any $(x,t) \in B_e \times (0,T)$, we have: $$\begin{aligned} v_t =C\alpha(T+t)^{\alpha-1}G^{\frac{1}{m-1}} - C\beta(T+t)^{\alpha-1} \frac{1}{m-1} G^{\frac{1}{m-1}} + C\beta(T+t)^{\alpha-1} \frac{1}{m-1}G^{\frac{1}{m-1}-1}\,. \end{aligned} $$ $$(v^m)_r=-\frac{C^m}{a} (T+t)^{m\alpha} \frac{m}{m-1} G^{\frac{1}{m-1}} \frac{r}{e^2}(T+t)^{-\beta}\,. $$ $$(v^m)_{rr}=-\frac{C^m}{a} (T+t)^{m\alpha} \frac{m}{m-1} G^{\frac{1}{m-1}} \frac {(T+t)^{-\beta}}{e^2}+ \frac{C^m}{a^2} (T+t)^{m\alpha} \frac{m}{(m-1)^2} G^{\frac{1}{m-1}-1} (T+t)^{-2\beta}\frac{r^2}{e^4}\,. $$ Set $$\textit{D}_1:=\left \{ (x,t) \in ({\ensuremath{\mathbb{R}}}^N \setminus B_e) \times (0,T)\,\, |\,\, 0<F(r,t)<1 \right \}.$$ For every $(x,t)\in D_1$, by the previous computations we have $$\label{104} \begin{aligned} u_t-&\frac{1}{\rho}\Delta u^m-u^p\\ &=C\alpha(T+t)^{\alpha-1} F^{\frac{1}{m-1}} - C\beta(T+t)^{\alpha-1} \frac{1}{m-1}F^{\frac{1}{m-1}}+ C\beta(T+t)^{\alpha-1} \frac{1}{m-1} F^{\frac{1}{m-1}-1}\\ &+\frac{1}{\rho}\left\{-\frac{C^m}{a} (T+t)^{m\alpha-\beta} \frac{m}{m-1} F^{\frac{1}{m-1}}\frac{1}{r^2}- \frac{C^m}{a^2} (T+t)^{m\alpha-2\beta} \frac{m}{(m-1)^2} F^{\frac{1}{m-1}-1} \frac{1}{r^2}\right.\\ &\left. +(N-1)\frac{C^m}{a} (T+t)^{m\alpha-\beta} \frac{m}{m-1} F^{\frac{1}{m-1}} \frac{1}{r^2}\right\} -C^p (T+t)^{p\alpha}. 
\end{aligned}$$ Thanks to , becomes $$\begin{aligned} u_t-&\frac{1}{\rho}\Delta u^m-u^p\\ &\le CF^{\frac{1}{m-1}-1}\left\{F\left[\alpha(T+t)^{\alpha-1}-\frac{\beta}{m-1}(T+t)^{\alpha-1}+(N-2)k_2\frac{C^{m-1}}{a}\frac{m}{m-1}(T+t)^{m\alpha-\beta}\right]\right.\\ &\left.+\frac{\beta}{m-1}(T+t)^{\alpha-1}-\frac{C^{m-1}}{a^2}\frac{m}{(m-1)^2}k_1(T+t)^{m\alpha-2\beta}-C^{p-1}(T+t)^{p\alpha}F^{\frac{p+m-2}{m-1}}\right\}\\ & \leq CF^{\frac{1}{m-1}-1}\left\{\sigma(t)F-\delta(t)-\gamma(t)F^{\frac{p+m-2}{m-1}}\right\} \end{aligned}$$ where $$\varphi(F):=\sigma(t)F-\delta(t)-\gamma(t)F^{\frac{p+m-2}{m-1}},$$ with $$\begin{aligned} & \sigma(t) = \left [ \alpha-\frac{\beta}{m-1}\right](T+t)^{\alpha-1}+\frac{C^{m-1}}{a} \frac{m}{m-1}k_2\left(N-2\right)(T+t)^{m\alpha-\beta}\,,\\ & \delta(t) = -\frac{\beta}{m-1} (T+t)^{\alpha-1}+ \frac{C^{m-1}}{a^2} \frac{m}{(m-1)^2}k_1(T+t)^{m\alpha-2\beta}\,, \\ & \gamma(t)=C^{p-1} (T+t)^{p\alpha }\,, \\ \end{aligned}$$ Our goal is to find suitable $C>0$, $a>0$, such that $$\varphi(F) \le 0\,, \quad \text{for all}\,\, F \in (0,1)\,.$$ To this aim, we impose that $$\sup_{F\in (0,1)}\varphi(F)=\max_{F\in (0,1)}\varphi(F)= \varphi (F_0)\leq 0\,,$$ for some $F_0\in(0,1)$. We have $$\begin{aligned} \frac{d \varphi}{dF}=0 &\iff \sigma(t) - \frac{p+m-2}{m-1} \gamma(t) F^{\frac{p-1}{m-1}} =0 \\ & \iff F_0= \left [\frac{m-1}{p+m-2} \frac{\sigma(t)}{\gamma(t)} \right ]^{\frac{m-1}{p-1}}\,. \end{aligned}$$ Then $$\varphi(F_0)= K \dfrac{\sigma(t)^{\frac{p+m-2}{p-1}}}{\gamma(t)^{\frac{m-1}{p-1}}} - \delta(t)$$ where $K=\left(\frac{m-1}{p+m-2}\right)^{\frac{m-1}{p-1}}-\left(\frac{m-1}{p+m-2}\right)^{\frac{p+m-2}{p-1}}>0 $. The two conditions we must verify are $$\label{107} \begin{aligned} K[\sigma(t)]^{\frac{p+m-2}{p-1}} \le \delta(t) \gamma(t)^{\frac{m-1}{p-1}}\,,\ \ (m-1) \sigma(t) \le (p+m-2) \gamma(t) \,. 
\end{aligned}$$ Observe that, thanks to the choice in and by choosing $$\frac{C^{m-1}}{a} \ge 2\beta\,\frac{(m-1)}{m}\frac{1}{k_1},$$ we have $$\begin{aligned} & \sigma(t)\le \frac{C^{m-1}}{a} \frac{m}{m-1}k_2\left(N-2\right)(T+t)^{m\alpha-\beta}\,,\\ & \delta(t)\ge \frac{C^{m-1}}{2a^2} \frac{m}{(m-1)^2}k_1(T+t)^{m\alpha-2\beta}\, \end{aligned}$$ and conditions in follow. So far, we have proved that $${u}_t-\frac{1}{\rho(x)}\Delta({u}^m)-{u}^p \le 0 \quad \text{ in } D_1\,.$$ Furthermore, since ${u}^m\in C^1([{\ensuremath{\mathbb{R}}}^N\setminus B_e]\times[0,T))$, it follows that $u$ is a subsolution to equation in $[{\ensuremath{\mathbb{R}}}^N \setminus B_e]\times (0,T)$. Now, we consider equation in $B_e\times(0,T)$. We observe that, due to condition , $$\label{99} \frac{1}{2}<G<1\,\,\,\text{for all}\,\,(x,t)\in B_e\times(0,T).$$ Similarly to the previous computation, we obtain $$v_t-\frac{1}{\rho}\Delta v^m-v^p\le CG^{\frac{1}{m-1}-1}\psi(G)\,,$$ where $$\psi(G):=\sigma_0G-\delta_0-\gamma G^{\frac{p+m-2}{m-1}}\,,$$ with $$\begin{aligned} & \sigma_0(t) = \left [ \alpha-\frac{\beta}{m-1}\right](T+t)^{\alpha-1}+\rho_2\frac{N}{e^2} \frac{m}{m-1}\frac{C^{m-1}}{a} (T+t)^{m\alpha-\beta} \,,\\ & \delta_0(t) = -\frac{\beta}{m-1} (T+t)^{\alpha-1}\,,\\ & \gamma(t)=C^{p-1} (T+t)^{p\alpha }\,. \end{aligned}$$ Due to , $v$ is a subsolution of for every $(x,t)\in B_e\times(0,T)$, if $$2^{\frac{p+m-2}{m-1}}\left(\sigma_0-\delta_0\right)\le\gamma\,.$$ This last inequality is always verified thanks to . Hence we have proved that $$v_t-\frac{1}{\rho(x)}\Delta(v^m)-v^p \le 0 \quad \text{ in }\,\, B_e\times(0,T)\,.$$ Moreover, $w^m \in C^1({\ensuremath{\mathbb{R}}}^N \times [0,T))$; indeed, $$({u}^m)_r = (v^m)_r = -C^m \zeta(t)^m \frac{m}{m-1} \frac{1}{e}\frac{\eta(t)}{a} \left [ 1- \frac{\eta(t)}{a} \right ]_+^{\frac{1}{m-1}} \quad \text{in}\,\, \partial B_e\times (0,T)\,.$$ Hence, $w$ is a subsolution to equation in ${\ensuremath{\mathbb{R}}}^N\times(0,T)$. D. G. Aronson, P.
Bénilan, *Régularité des solutions de l'équation des milieux poreux dans $\mathbb R^N$*, C. R. Acad. Sci. Paris Sér. A-B **288** (1979), 103–105. D. Aronson, M.G. Crandall, L.A. Peletier, *Stabilization of solutions of a degenerate nonlinear diffusion problem*, Nonlinear Anal. **6** (1982), 1001–1022. L. Boccardo, G. Croce, "Elliptic partial differential equations. Existence and regularity of distributional solutions", De Gruyter, Studies in Mathematics, 55, 2013. C. Bandle, M.A. Pozio, A. Tesei, *The Fujita exponent for the Cauchy problem in the hyperbolic space*, J. Differential Equations **251** (2011), 2143–2163. M. Bonforte, G. Grillo, *Asymptotics of the porous media equations via Sobolev inequalities*, J. Funct. Anal. **225** (2005), 33–62. H. Brezis, S. Kamin, *Sublinear elliptic equations in $\mathbb{R}^n$*, Manuscripta Math. **74** (1992), 87–106. X. Chen, M. Fila, J.S. Guo, *Boundedness of global solutions of a supercritical parabolic equation*, Nonlinear Anal. **68** (2008), 621–628. E.B. Davies, "Heat kernels and spectral theory", Cambridge Tracts in Mathematics, 92, Cambridge University Press, Cambridge, 1990. K. Deng, H.A. Levine, *The role of critical exponents in blow-up theorems: the sequel*, J. Math. Anal. Appl. **243** (2000), 85–126. H. Fujita, *On the blowing up of solutions of the Cauchy problem for $u_t=\Delta u+u^{1+\alpha}$*, J. Fac. Sci. Univ. Tokyo Sect. I **13** (1966), 109–124. Y. Fujishima, K. Ishige, *Blow-up set for type I blowing up solutions for a semilinear heat equation*, Ann. Inst. H. Poincaré Anal. Non Linéaire **31** (2014), 231–247. V.A. Galaktionov, J.L. Vázquez, *Continuation of blowup solutions of nonlinear heat equations in several dimensions*, Comm. Pure Appl. Math. **50** (1997), 1–67. A. Grigor’yan, *Analytic and geometric background of recurrence and non-explosion of the Brownian motion on Riemannian manifolds*, Bull. Amer. Math. Soc. **36** (1999), 135–249. A.
Grigor’yan, "Heat Kernel and Analysis on Manifolds", AMS/IP Studies in Advanced Mathematics, 47, American Mathematical Society, Providence, RI; International Press, Boston, MA, 2009. G. Grillo, K. Ishige, M. Muratori, *Nonlinear characterizations of stochastic completeness*, J. Math. Pures Appl. **139** (2020), 63–82. G. Grillo, M. Muratori, *Radial fast diffusion on the hyperbolic space*, Proc. Lond. Math. Soc. **109** (2014), 283–317. G. Grillo, M. Muratori, *Smoothing effects for the porous medium equation on Cartan-Hadamard manifolds*, Nonlinear Anal. **131** (2016), 346–362. G. Grillo, M. Muratori, M.M. Porzio, *Porous media equations with two weights: smoothing and decay properties of energy solutions via Poincaré inequalities*, Discrete Contin. Dyn. Syst. **33** (2013), 3599–3640. G. Grillo, M. Muratori, F. Punzo, *The porous medium equation with large initial data on negatively curved Riemannian manifolds*, J. Math. Pures Appl. **113** (2018), 195–226. G. Grillo, M. Muratori, F. Punzo, *The porous medium equation with measure data on negatively curved Riemannian manifolds*, J. European Math. Soc. **20** (2018), 2769–2812. G. Grillo, M. Muratori, F. Punzo, *Blow-up and global existence for the porous medium equation with reaction on a class of Cartan-Hadamard manifolds*, J. Diff. Eq. **266** (2019), 4305–4336. G. Grillo, M. Muratori, J.L. Vázquez, *The porous medium equation on Riemannian manifolds with negative curvature. The large-time behaviour*, Adv. Math. **314** (2017), 328–377. K. Hayakawa, *On nonexistence of global solutions of some semilinear parabolic differential equations*, Proc. Japan Acad. **49** (1973), 503–505. S. Kamin, P. Rosenau, *Nonlinear thermal evolution in an inhomogeneous medium*, J. Math. Phys. **23** (1982), 1385–1390. D. Kinderlehrer, G. Stampacchia, "An Introduction to Variational Inequalities and Their Applications", Academic Press, New York, 1980. H.A.
Levine, *The role of critical exponents in blow-up theorems*, SIAM Rev. **32** (1990), 262–288. A.V. Martynenko, A.F. Tedeev, *On the behavior of solutions of the Cauchy problem for a degenerate parabolic equation with nonhomogeneous density and a source*, (Russian) Zh. Vychisl. Mat. Mat. Fiz. **48** (2008), no. 7, 1214–1229; transl. in Comput. Math. Math. Phys. **48** (2008), no. 7, 1145–1160. A.V. Martynenko, A.F. Tedeev, V.N. Shramenko, *The Cauchy problem for a degenerate parabolic equation with inhomogeneous density and a source in the class of slowly vanishing initial functions*, (Russian) Izv. Ross. Akad. Nauk Ser. Mat. **76** (2012), no. 3, 139–156; transl. in Izv. Math. **76** (2012), no. 3, 563–580. A.V. Martynenko, A.F. Tedeev, V.N. Shramenko, *On the behavior of solutions of the Cauchy problem for a degenerate parabolic equation with source in the case where the initial function slowly vanishes*, Ukrainian Math. J. **64** (2013), 1698–1715. G. Meglioli, F. Punzo, *Blow-up and global existence for solutions to the porous medium equation with reaction and slowly decaying density*, J. Diff. Eq., to appear. G. Meglioli, F. Punzo, *Blow-up and global existence for solutions to the porous medium equation with reaction and fast decaying density*, preprint (2019). N. Mizoguchi, F. Quirós, J.L. Vázquez, *Multiple blow-up for a porous medium equation with reaction*, Math. Ann. **350** (2011), 801–827. F. Punzo, *Support properties of solutions to nonlinear parabolic equations with variable density in the hyperbolic space*, Discrete Contin. Dyn. Syst. Ser. S **5** (2012), 657–670. F. Punzo, *Blow-up of solutions to semilinear parabolic equations on Riemannian manifolds with negative sectional curvature*, J. Math. Anal. Appl. **387** (2012), 815–827. P. Quittner, *The decay of global solutions of a semilinear heat equation*, Discrete Contin. Dyn. Syst. **21** (2008), 307–318. P.
Souplet, *Morrey spaces and classification of global solutions for a supercritical semilinear heat equation in $\mathbb R^n$*, J. Funct. Anal. **272** (2017), 2005–2037. P.E. Sacks, *Global behavior for a class of nonlinear evolution equations*, SIAM J. Math. Anal. **16** (1985), 233–250. A.A. Samarskii, V.A. Galaktionov, S.P. Kurdyumov, A.P. Mikhailov, "Blow-up in Quasilinear Parabolic Equations", De Gruyter Expositions in Mathematics, 19, Walter de Gruyter & Co., Berlin, 1995. J.L. Vázquez, *The problems of blow-up for nonlinear heat equations. Complete blow-up and avalanche formation*, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei Mat. Appl. **15** (2004), 281–300. J.L. Vázquez, "Smoothing and decay estimates for nonlinear diffusion equations. Equations of porous medium type", Oxford Lecture Series in Mathematics and its Applications, 33, Oxford University Press, Oxford, 2006. J.L. Vázquez, "The Porous Medium Equation. Mathematical Theory", Oxford Mathematical Monographs, The Clarendon Press, Oxford University Press, Oxford, 2007. J.L. Vázquez, *Fundamental solution and long time behavior of the porous medium equation in hyperbolic space*, J. Math. Pures Appl. **104** (2015), 454–484. Z. Wang, J. Yin, *A note on semilinear heat equation in hyperbolic space*, J. Differential Equations **256** (2014), 1151–1156. Z. Wang, J. Yin, *Asymptotic behaviour of the lifespan of solutions for a semilinear heat equation in hyperbolic space*, Proc. Roy. Soc. Edinburgh Sect. A **146** (2016), 1091–1114. F.B. Weissler, *$L^p$-energy and blow-up for a semilinear heat equation*, Proc. Sympos. Pure Math. **45** (1986), 545–551. E. Yanagida, *Behavior of global solutions of the Fujita equation*, Sugaku Expositions **26** (2013), 129–147. Q.S. Zhang, *Blow-up results for nonlinear parabolic equations on manifolds*, Duke Math. J. **97** (1999), 515–539.
--- abstract: 'We report static and dynamic properties of the antiferromagnetic compound Zn$_{2}$(VO)(PO$_{4}$)$_{2}$, and the consequences of non-magnetic Ti$^{4+}$ doping at the V$^{4+}$ site. $^{31}$P nuclear magnetic resonance (NMR) spectra and spin-lattice relaxation rate ($1/T_1$) consistently show the formation of the long-range antiferromagnetic order below $T_N= 3.8-3.9$K. The critical exponent $\beta=0.33 \pm 0.02$ estimated from the temperature dependence of the sublattice magnetization measured by $^{31}$P NMR at 9.4MHz is consistent with universality classes of three-dimensional spin models. The isotropic and axial hyperfine couplings between the $^{31}$P nuclei and V$^{4+}$ spins are $A_{\rm hf}^{\rm iso} = (9221 \pm 100)$ Oe/$\mu_{\rm B}$ and $A_{\rm hf}^{\rm ax} = (1010 \pm 50)$ Oe/$\mu_{\rm B}$, respectively. Magnetic susceptibility data above 6.5K and heat capacity data above 4.5K are well described by quantum Monte-Carlo simulations for the Heisenberg model on the square lattice with $J\simeq 7.7$K. This value of $J$ is consistent with the values obtained from the NMR shift, $1/T_1$ and electron spin resonance (ESR) intensity analysis. Doping Zn$_2$VO(PO$_4)_2$ with non-magnetic Ti$^{4+}$ leads to a marginal increase in the $J$ value and the overall dilution of the spin lattice. In contrast to the recent *ab initio* results, we find neither evidence for the monoclinic structural distortion nor signatures of the magnetic one-dimensionality for doped samples with up to 15% of Ti$^{4+}$. The Néel temperature $T_{\rm N}$ decreases linearly with increasing the amount of the non-magnetic dopant.' author: - 'A. Yogi' - 'N. Ahmed' - 'A. A. Tsirlin' - 'S. Kundu' - 'A. V. Mahajan' - 'J. Sichelschmidt' - 'B. Roy' - 'Y. Furukawa' - 'R. 
Nath' title: 'Antiferromagnetism of Zn$_2$VO(PO$_4)_2$ and the dilution with Ti$^{4+}$' --- Introduction ============ The square lattice of antiferromagnetically coupled Heisenberg spins is the simplest spin model in two dimensions (2D).[@chakravarty1989; @manousakis1992] Its properties are nowadays well established by extensive numerical studies.[@makivic1991; @sandvik1997; @kim1998] The case of spin-$\frac12$ entails strong quantum effects that reduce the sublattice magnetization[@sandvik1997] and have an impact on the correlation length[@cuccoli1996; @*elstner1995] and spin dynamics.[@carretta2000; @ronnow2001] The ideal 2D model lacks long-range order (LRO) above zero temperature, following the Mermin-Wagner theorem.[@mermin1966] However, any real material features a non-negligible interplane coupling that triggers the LRO at a non-zero temperature $T_N$.[@yasuda2005] When interplane couplings are frustrated and inactive, the LRO is driven by anisotropy terms in the spin Hamiltonian.[@yildirim1994] Suppression of the LRO in square-lattice-based magnets is possible via two mechanisms: frustration of in-plane couplings or dilution of the spin lattice. The former mechanism is revealed by the model of the $J_1-J_2$ frustrated square lattice, where the competition between nearest-neighbor couplings $J_1$ and second-neighbor couplings $J_2$ destroys the magnetic order in the vicinity of the quantum critical point at $J_2/J_1=0.5$. This spin-liquid ground state in 2D is well established theoretically but has hitherto never been observed experimentally. The majority of the $J_1-J_2$ systems, mostly layered V$^{4+}$ phosphates, feature columnar antiferromagnetic (AFM) order[@bombardi2004; @skoulatos2009; @nath2009] induced by $J_2>J_1$.
Materials with $J_2/J_1<0.5$ developing Néel AFM order remain few in number and are sometimes challenging to investigate.[@tsirlin2008; @*oka2008] The second mechanism is the dilution of the spin lattice with non-magnetic impurity atoms. Diluted systems are largely classical even for spin-$\frac12$.[@sandvik2002] The LRO vanishes at the classical percolation threshold of $x_c=0.41$,[@kato2000; @sandvik2002] where $x$ is the doping level. The doping leads to a gradual suppression of the Néel temperature $T_N$,[@vajk2002] but in many spin-$\frac12$ materials the $T_N$ drops much faster than expected, because non-magnetic impurity atoms introduce magnetic frustration that contributes to the suppression of the LRO.[@liu2009; @carretta2011] On the other hand, Li$_2$VOSiO$_4$, which is a frustrated square-lattice antiferromagnet even without dilution,[@melzi2000; @*melzi2001; @rosner2002; @*rosner2003; @bombardi2004] exhibits a weaker effect on the sublattice magnetization and $T_N$ when diluted with non-magnetic Ti$^{4+}$.[@papinutto2005] Apparently, the dilution of real materials never follows the idealized models and entails a modification of individual exchange couplings. Here, we address magnetic properties and dilution behavior of the spin-$\frac12$ antiferromagnet Zn$_2$VO(PO$_4)_2$, where magnetic V$^{4+}$ ions can be replaced by the non-magnetic Ti$^{4+}$. The crystal structure of Zn$_2$VO(PO$_4)_2$ features V$^{4+}$O$_5$ pyramids that are linked into layers via PO$_4$ tetrahedra in the $ab$ plane (Fig. \[fig:structure\]).[@lii1991] Given the small size of the interlayer Zn$^{2+}$ cations, the interplane V–V distance (4.52 Å) is shorter than the distance in the $ab$ plane (6.31 Å). This led earlier studies[@bayi1993] to conclude that Zn$_2$VO(PO$_4)_2$ is a quasi-one-dimensional (1D) magnet with $J_{\perp}\gg J$, where $J$ and $J_{\perp}$ stand for the in-plane and interplane couplings, respectively (Fig. \[fig:structure\]).
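The classical percolation threshold $x_c=0.41$ quoted above corresponds to an occupied-site fraction $p_c\simeq 0.59$ on the square lattice. This purely geometric statement is easy to probe with a small Monte-Carlo sketch (illustrative only and unrelated to the quantum model; the lattice size and trial counts are arbitrary):

```python
import random

def percolates(L, p_occ, rng):
    """Top-to-bottom percolation of occupied sites on an L x L square lattice."""
    occ = [[rng.random() < p_occ for _ in range(L)] for _ in range(L)]
    stack = [(0, j) for j in range(L) if occ[0][j]]  # flood fill from the top row
    seen = set(stack)
    while stack:
        i, j = stack.pop()
        if i == L - 1:
            return True
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < L and 0 <= nj < L and occ[ni][nj] and (ni, nj) not in seen:
                seen.add((ni, nj))
                stack.append((ni, nj))
    return False

rng = random.Random(0)
L, trials = 40, 200
# Occupied fractions well below / above p_c ~ 0.593,
# i.e. dilution levels above / below x_c ~ 0.41:
low = sum(percolates(L, 0.45, rng) for _ in range(trials)) / trials
high = sum(percolates(L, 0.70, rng) for _ in range(trials)) / trials
assert low < 0.2 < 0.8 < high
```

On a finite lattice the transition is smeared, but the sharp contrast between the two occupied fractions already mirrors the threshold behavior at which LRO must vanish.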
A careful evaluation of thermodynamic data put forward the opposite, quasi-2D scenario with $J\gg J_{\perp}$.[@kini2006] Magnetic order observed below $T_N\simeq 3.7$K is AFM in the $ab$ plane and ferromagnetic (FM) along the $c$ direction.[@yusuf2010] It is consistent with *ab initio* results by Kanungo *et al.*,[@kanungo2013] who also addressed the diluted, Ti-doped case and proposed that the 25% Ti doping should induce a monoclinic distortion reinstating the 1D physics, but this time in the $ab$ plane and not along the $c$ direction. In this study, we attempt to verify the prediction[@kanungo2013] concerning the Ti-doped Zn$_2$VO(PO$_4)_2$ experimentally, and show that within the feasible doping levels, neither the monoclinic distortion nor the 1D physics are observed. Instead, thermodynamics of Ti-doped Zn$_2$VO(PO$_4)_2$ above $T_N$ is largely consistent with expectations for the diluted square lattice of Heisenberg spins. The Néel temperature is systematically suppressed upon the dilution, and the rate of suppression is similar to that in Li$_2$VOSiO$_4$. We provide accurate estimates of the in-plane exchange coupling in order to assess this effect quantitatively, and discuss our results in the light of available experimental data on diluted AFM square lattices. We also report additional characterization for thermodynamic properties, ground state and spin dynamics of the parent Zn$_2$VO(PO$_4)_2$ compound. These data will serve as a starting point for detailed studies of the doped material. ![\[fig:structure\] Left panel: crystal structure of Zn$_2$VO(PO$_4)_2$. Right panel: magnetic layer in the $ab$ plane. Dotted lines show hyperfine couplings between the P atom and the neighboring V$^{4+}$ spins; each P atom is coupled to two magnetic ions with opposite spin directions. 
The spins are along the $c$ direction.[@yusuf2010]](fig1) Methods {#sec:methods} ======= Polycrystalline samples of Zn$_2$(V$_{1-x}$Ti$_x$)O(PO$_4$)$_2$ ($x$ = 0%, 5%, 10%, and 15%) were prepared by a conventional solid-state reaction route. In the first step, Zn$_2$P$_2$O$_7$ was synthesized using ZnO (Aldrich, 99.999%) and NH$_4$H$_2$PO$_4$ (Aldrich, 99.999%) as starting materials. The stoichiometric mixtures were fired at $600$$^{\circ}$C in air with one intermediate grinding. In the second step, Zn$_2$P$_2$O$_7$ was mixed with VO$_2$ (Aldrich, 99.999%) and TiO$_2$ (Aldrich, 99.999%) and then the stoichiometric mixtures were fired in flowing Ar-gas atmosphere with several intermediate grindings and pelletizations at $850$$^{\circ}$C. To check the sample purity, powder x-ray diffraction (XRD, PANalytical powder diffractometer and CuK$_{\alpha}$ radiation, $\lambda_{\rm ave}=1.54182$Å) was performed at room temperature. The samples with $x$ = 0%, 5%, and 10% were single-phase, but at higher doping concentrations several impurity phases including Ti$_4$O$_3$(PO$_4$)$_3$ emerged. Our repeated attempts to achieve higher doping levels by increasing or lowering the firing temperature were unsuccessful. Therefore, we focus on studying the samples with $x\leq 15$%, where a minor amount of non-magnetic Ti-containing impurities does not hinder the data analysis. A Le Bail fit of the powder XRD data was performed using the `FullProf` software package based on the tetragonal structure with space group $I4cm$ to determine the lattice parameters.[@fullprof] No indications of a symmetry lowering were observed. All the data sets could be fitted using structural data of the parent compound as the initial parameters. The refined lattice parameters and the goodness of fit ($\chi^2$) are listed in Table \[tab:latticeparameters\]. No significant change in lattice constants ($a$ and $c$) and unit cell volume ($V$) was observed with increasing $x$.
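Since the cell is tetragonal, $V=a^2c$, so the refined volumes in Table \[tab:latticeparameters\] can be cross-checked directly from the lattice constants; for instance, for the $x=0$ entry:

```python
# Tetragonal cell: V = a^2 * c. Cross-check of the x = 0 row of the table.
a, c = 8.9221, 9.0376          # refined lattice constants, in angstroms
V = a**2 * c
assert abs(V - 719.44) < 0.05  # tabulated 719.44(2) A^3, up to rounding
```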
Given the fact that Ti$^{4+}$ features nearly the same ionic radius (0.51 Å) as V$^{4+}$ (0.53 Å), we do not expect any substantial changes in the cell volume. Thus our experimental observation is consistent with expectations based on the ionic radii.

  $x$    $a$ (Å)     $c$ (Å)     $V$ (Å$^{3}$)   $\chi^2$ (%)
  ------ ----------- ----------- --------------- --------------
  0.00   8.9221(2)   9.0376(2)   719.44(2)       1.83
  0.05   8.9218(2)   9.0326(2)   718.99(2)       2.11
  0.10   8.9243(3)   9.0287(3)   719.07(4)       4.75
  0.15   8.9251(3)   9.0292(3)   719.24(4)       5.49

  : \[tab:latticeparameters\] Lattice parameters ($a$, $c$, and $V$) and the goodness of fit ($\chi^2$) obtained from the Le Bail fit of the powder XRD data for Zn$_2$V$_{1-x}$Ti$_x$O(PO$_4)_2$.

Temperature ($T$) dependent magnetic susceptibility $\chi(T)$ and heat capacity $C_p(T)$ measurements were performed using a commercial Physical Property Measurement System (PPMS, Quantum Design). For the $\chi(T)$ measurement, the vibrating sample magnetometer (VSM) attachment to the PPMS was used. $C_p (T)$ was measured by the relaxation technique on a pressed pellet using the heat capacity option of the PPMS. Electron spin resonance (ESR) experiments were carried out on a fine-powdered sample with a standard continuous-wave spectrometer between 5K and 300K. We measured the power $P$ absorbed by the sample from a transverse magnetic microwave field (X-band, $\nu\simeq 9.4$GHz) as a function of an external, static magnetic field $H$. A lock-in technique was used to improve the signal-to-noise ratio which yields the derivative of the resonance signal $dP/dB$. The NMR measurements were carried out using pulsed NMR techniques on $^{31}$P (nuclear spin $I=1/2$ and gyromagnetic ratio $\gamma_{N}/2\pi = 17.237$MHz/T) nuclei in the temperature range 1.5K $\leq T \leq $ 250K. We have carried out the NMR measurements at two different radio frequencies of $75.5$MHz and $9.4$MHz that correspond to applied fields of about $4.38$T and $0.545$T, respectively.
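As a quick consistency check on the quoted NMR parameters, the Larmor relation $\nu=(\gamma_N/2\pi)\,\mu_0H$ for $^{31}$P reproduces both working frequencies from the stated fields:

```python
# Larmor relation for ^31P: nu [MHz] = (gamma_N / 2 pi) [MHz/T] * field [T].
gamma = 17.237  # MHz/T, as quoted in the text
for H, nu in ((4.38, 75.5), (0.545, 9.4)):
    assert abs(gamma * H - nu) < 0.1  # both quoted frequencies are reproduced
```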
Spectra were obtained either by Fourier transform of the NMR echo signals or by sweeping the field at a fixed frequency. The NMR shift $K(T)=(H_{\rm ref}-H(T))/H(T)$ was determined by measuring the resonance field of the sample \[$H(T)$\] with respect to the nonmagnetic reference H$_{3}$PO$_{4}$ (resonance field $H_{\rm ref}$). The $^{31}$P spin-lattice relaxation rate $1/T_{1}$ was measured by the conventional single saturation pulse method. Magnetic susceptibility and specific heat of the pristine and diluted AFM square lattice of Heisenberg spins were obtained from quantum Monte-Carlo (QMC) simulations performed by the `loop` algorithm[@loop] of the `ALPS` simulation package.[@alps] Simulations were performed on $L\times L$ finite lattices with periodic boundary conditions and $L$ up to 80. For the three-dimensional (3D) model of coupled square planes, the $16\times 16\times 8$ finite lattice was used. Finite-size effects are negligible in the temperature range considered ($T/J\geq 0.6$). Pure Zn$_2$VO(PO$_4$)$_2$ ========================= Thermodynamic properties {#sec:thermo} ------------------------ ![\[fig:chi\] (Color online) Magnetic susceptibility ($\chi$) of Zn$_2$VO(PO$_4$)$_2$ measured in the applied fields $\mu_0H=0.5$T, 1T, and 2T. The dashed line is the QMC fit of the 1T data with the 2D model. The solid line is the QMC fit with the 3D model featuring the interplane coupling $J_{\perp}/J=-0.1$ (see text for details). The arrow marks the Néel temperature $T_N$, where the data measured at 0.5T and at higher fields diverge because of the spin-flop transition.](fig2) In order to analyze the effect of doping on the exchange couplings, we first consider thermodynamic properties of the parent compound. Our $\chi$ and $C_p$ data are similar to those reported by Kini *et al.*[@kini2006] Magnetic susceptibility (Fig. \[fig:chi\]) shows a broad maximum around 6.9K corresponding to the short-range order in 2D.
The long-range-order (LRO) transition manifests itself as a kink at $T_N\simeq 3.8$K in the susceptibility data measured at 1T and 2T. This effect is due to the spin-flop transition that increases the susceptibility below $T_N$. Specific heat reveals a $\lambda$-type anomaly at $T_N$ (Fig. \[fig:heat\]). The hump above $T_N$ is a signature of the broad maximum related to the 2D short-range order. $C_p(T)$ flattens out around 10K and increases at higher temperatures because of the growing phonon contribution. An applied magnetic field suppresses the hump and shifts the entropy to the transition anomaly at $T_N$. However, the value of $T_N=3.8-3.9$K remains unchanged.

![\[fig:heat\] (Color online) Specific heat ($C_p$) of Zn$_2$VO(PO$_4$)$_2$ measured in the applied fields $\mu_0H=0$T and 9T. The dashed line is the QMC fit of the zero-field data. Magnetic field shifts the entropy from the broad maximum above $T_N$ to the transition anomaly at $T_N$.](fig3)

Magnetic susceptibility of Zn$_2$VO(PO$_4)_2$ is well described by the AFM square-lattice model. The susceptibility simulated by QMC was fitted to the experimental curve using the expression: $$\chi=\chi^*\times\left(\frac{N_A\mu_B^2g^2}{k_BJ}\right),$$ where $\chi^*$ is the reduced susceptibility calculated by QMC, $N_A$ is Avogadro's number, $\mu_B$ is the Bohr magneton, $k_B$ is the Boltzmann constant, and $g$ is the $g$-factor. We fit the data with $J=7.7$K and $g=1.95$ down to 6.5K (Fig. \[fig:chi\]). At lower temperatures, the experimental susceptibility lies above the simulated curve. This deviation can be mitigated by decreasing the $J$ value to 7.4K. However, the description of the high-temperature part then deteriorates, and the $g$-value drops to 1.91, which is below our ESR estimate (Sec.
\[sec:esr\]) and below the typical range of powder-averaged $\bar g=1.94-1.98$ reported for V$^{4+}$ oxide compounds.[@tsirlin2011; @forster2013; @forster2014] A Curie-like impurity contribution also improves the fit in the low-temperature region, but introduces discrepancies at higher temperatures. Moreover, the low-field data measured at 0.5T do not show any signatures of a Curie-like upturn down to 2K (Fig. \[fig:chi\]). Specific heat above $T_N$ is also consistent with the predictions of the square-lattice model. For a proper comparison, the magnetic ($C_{{\rm mag}}$) and phonon ($C_{{\rm phon}}$) contributions to the specific heat should be separated. Unfortunately, a non-magnetic reference compound is not available in our case, because not more than 15% of Ti$^{4+}$ can be doped into Zn$_2$VO(PO$_4)_2$, and the hypothetical end member Zn$_2$TiO(PO$_4)_2$ does not exist. Kini *et al*.[@kini2006] approximated $C_{{\rm phon}}$ with a series of Debye functions and demonstrated that $C_{{\rm phon}}<C_{{\rm mag}}$ below 10K. Using the data from Ref. , we verified that in the temperature range of interest $C_{{\rm phon}}$ follows the $T^3$ behavior. Therefore, we fitted our data as: $$C_p^{\rm exp}=C_p^{\rm QMC}R+\beta T^3,$$ where $R$ is the gas constant, and $\beta$ is treated as an adjustable parameter, because in doped samples it may change following the change in the atomic masses and the formation of defects that affect the phonons. In this way, we compare the specific heat of Zn$_2$VO(PO$_4)_2$ to the QMC result and find the best agreement for $J=7.8$K (Fig. \[fig:heat\]), which is nearly equal to $J=7.7$K from the susceptibility fit. For the sake of completeness, let us discuss possible deviations from the idealized square-lattice model.
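The two-parameter specific-heat fit described above reduces to a linear least-squares problem in the amplitudes of the magnetic and phonon terms. A minimal sketch (the QMC curve and $\beta$ below are placeholders, not the actual data; in the real fit $J$ also enters through the temperature axis of the QMC curve):

```python
import numpy as np

# Sketch of the fit C_p^exp = C_p^QMC * R + beta * T^3.
# C_QMC is a placeholder for the dimensionless QMC specific-heat curve;
# the real curve comes from the loop/ALPS simulations.
R = 8.314                                     # gas constant, J mol^-1 K^-1
T = np.linspace(2.0, 10.0, 30)
C_QMC = np.exp(-((T - 4.5) / 2.5) ** 2)       # placeholder broad maximum
beta_true = 1.2e-3                            # placeholder phonon coefficient
C_exp = C_QMC * R + beta_true * T ** 3        # synthetic "experimental" data

# Linear least squares in the two unknown amplitudes (scale, beta)
A = np.column_stack([C_QMC, T ** 3])
(scale, beta), *_ = np.linalg.lstsq(A, C_exp, rcond=None)
assert abs(scale - R) < 1e-6 and abs(beta - beta_true) < 1e-9
```

With noise-free synthetic data the least-squares solution recovers both amplitudes exactly, illustrating why $\beta$ can be left free without destabilizing the fit.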
The ratio $T_N/J\simeq 0.51$ implies $|J_{\perp}|/J\simeq 0.1$.[@yasuda2005] Although the Néel temperature of Zn$_2$VO(PO$_4)_2$ is rather high for a quasi-2D magnet, strong signatures of the 2D physics have been observed experimentally at $T>T_N$. Apart from the excellent description of both magnetic susceptibility (Fig. \[fig:chi\]) and specific heat (Fig. \[fig:heat\]) with the purely 2D models, neutron studies revealed Warren-type diffuse scattering above $T_N$, which is indicative of 2D spin correlations.[@yusuf2010] Therefore, Zn$_2$VO(PO$_4)_2$ can be classified as an intermediate case between the quasi-2D and spatially anisotropic 3D magnets. However, even with the realistic interlayer coupling ($|J_{\perp}|/J\simeq 0.1$) included in the model, no improvement of the susceptibility fit could be achieved. The susceptibility of the 3D model deviates from that of the 2D model only below 5K when the magnetic ordering transition at $T_N$ is approached (Fig. \[fig:chi\]). The in-plane square lattice in Zn$_2$VO(PO$_4)_2$ can be weakly frustrated by the second-neighbor coupling $J_2$. Frustrated spin models are not amenable to QMC simulations because of the notorious sign problem. Therefore, we resort to the high-temperature series expansion (HTSE) of the frustrated square lattice model[@rosner2002; @*rosner2003] for the magnetic susceptibility that is generally valid at temperatures exceeding individual magnetic couplings $J_i$. The data above 10K yield $J\simeq 7.8$K, $J_2\simeq 0.3$K, and $g\simeq 1.96$ in excellent agreement with Ref. . Therefore, the frustration of the square lattice in Zn$_2$VO(PO$_4)_2$ is extremely weak, $J_2/J\simeq 0.04$ to be compared with the *ab initio* result $J_2/J=(t_2/t_1)^2\simeq 0.03$ from Ref. . We do not expect that this weak frustration affects thermodynamic properties. The remaining source of the marginal discrepancy between the square-lattice model and the experimental magnetic susceptibility is the magnetic anisotropy. 
However, we do not find any strong signatures of the anisotropy in the NMR data reported below.

ESR {#sec:esr}
---

![\[esr\] (Color online) Temperature dependent ESR intensity, $I_{\rm ESR}(T)$, obtained by double integration of the ESR spectra of the powdered Zn$_2$VO(PO$_4$)$_2$ sample; the solid line represents the fit described in the text. The upper right inset shows a typical spectrum (symbols) together with a Lorentzian shape (solid line) powder-averaged for a uniaxial $g$-factor anisotropy. The bottom left inset shows the relation between $I_{\rm ESR}(T)$ and $\chi$ measured at a field of 0.5 T and temperatures between 9 K and 295 K.](fig4){width="3.5in"}

Results of the ESR experiment are presented in Fig. \[esr\]. In the right inset of Fig. \[esr\], a typical ESR spectrum at room temperature is shown. The shape of the spectra can be well described by a powder-averaged Lorentzian line for the case of an easy-axis anisotropy of the $g$-tensor, as shown by the solid line, yielding the parallel $g_{\parallel}=1.94(6)$ and perpendicular $g_{\perp}=1.98(7)$ components at $T=295$K. The isotropic $g=\sqrt{(g^{2}_{\parallel}+2g^{2}_{\perp})/3}$ was calculated to be $\sim1.97$. With $g_{\parallel} < g_{\perp}$, as expected for an easy-axis anisotropy, these V$^{4+}$ $g$-factors are similar to those reported for Pb$_2$VO(PO$_4$)$_2$ (Ref. ) or SrZnVO(PO$_4$)$_2$ (Ref. ). The integrated ESR intensity \[$I_{\rm ESR}(T)$\] increases with decreasing temperature and then exhibits a broad maximum at about 7K, as observed in $\chi(T)$ (Fig. \[fig:chi\]) and $K(T)$ (Fig. \[K\]). Below $T_{\rm N}$, it decreases rapidly towards zero. $I_{\rm ESR}(T)$, as obtained by integrating the whole spectrum, depends linearly on the uniform static susceptibility $\chi(T)$ of the V$^{4+}$ spins probed by ESR. Hence, one can get an estimate of the exchange couplings by fitting the $I_{\rm ESR}(T)$ data to the HTSE of the square lattice model.
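Such a fit can be sketched in code. The series coefficients are those of the Rushbrooke-Wood expression, Eq. \[2D\]; constants are in CGS units. A useful sanity check is that at high temperature the series must reduce to the Curie law $N_Ag^2\mu_B^2/(4k_BT)$ of independent spins-$\frac12$:

```python
import math

N_A, MU_B, K_B = 6.02214e23, 9.27401e-21, 1.380649e-16  # CGS units

def chi_spin(T, J, g):
    """Rushbrooke-Wood HTSE for the S=1/2 HAF square lattice, in emu/mol.
    J is in Kelvin; the series is valid for x = k_B*T/J >~ 0.7."""
    x = T / J
    series = (4 * x + 4 + 2.00025 / x + 0.66656 / x**2 + 0.06286 / x**3
              - 0.060434 / x**4 + 0.000237 / x**5)
    return (N_A * MU_B**2 * g**2 / (K_B * J)) / series

# Sanity check: at T >> J the series approaches the Curie law C/T.
J, g, T = 8.7, 1.97, 1000.0
curie = N_A * MU_B**2 * g**2 / (4 * K_B * T)
assert abs(chi_spin(T, J, g) / curie - 1) < 0.01
```

Fitting $A+B\,\chi_{\rm spin}(T,J,g)$ to the integrated ESR intensity (e.g. with a standard least-squares routine) then yields the exchange coupling $J$.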
We fitted the data above 8K to $I_{\rm ESR}(T) = A+B\times \chi_{\rm spin}$, where $A$ and $B$ are arbitrary constants and $\chi_{\rm spin}$ is the HTSE expression (valid for $\frac{k_B T}{J} \gtrsim 0.7$) of $\chi(T)$ for the 2D $S = 1/2$ HAF square lattice given by Rushbrooke and Wood,[@rushbrooke1958] which can be written as $$\begin{aligned}
\chi_{\rm spin}(T) &=& \frac{N_A\mu_B^2g^2}{J} \nonumber\\
&\times& \left[4x+4+2.00025x^{-1}+0.66656x^{-2}+0.06286x^{-3}\right. \nonumber\\
&-& \left.0.060434x^{-4}+0.000237x^{-5}\right]^{-1},
\label{2D}\end{aligned}$$ where $x=\frac{k_{B}T}{J}$. By fitting the data in the high-$T$ regime ($T > 8$ K), the exchange coupling was estimated to be $J=(8.7 \pm 0.2)$K, which agrees with the values estimated from the $\chi$ and NMR shift (discussed later) analyses. In an attempt to see how $I_{\rm ESR}$ scales with $\chi$, we plotted $I_{\rm ESR}$ vs. $\chi$ with temperature as an implicit parameter (see bottom left inset of Fig. \[esr\]). A nearly linear behavior down to 9K reflects that $I_{\rm ESR}(T)$ tracks $\chi(T)$ of the V$^{4+}$ spins very well. The influence of critical spin fluctuations on the temperature dependencies of the linewidth and resonance field becomes noticeable below 30 K. However, we refrained from using these temperature dependencies to obtain information on the critical spin dynamics, for which an accurate determination of the parallel and perpendicular line components is needed. For this purpose, our powder spectra are too broad compared to the difference between $g_{\parallel}$ and $g_{\perp}$. Investigations of single crystals would certainly provide the required accuracy, as the ESR results on Pb$_2$VO(PO$_4$)$_2$ have shown in Ref. .

$^{31}$P NMR Shift
------------------

![\[spk\] (Color online) Field-sweep $^{31}$P NMR spectra at different temperatures $T$ ($T > T_{\rm N}$) for polycrystalline Zn$_{2}$(VO)(PO$_{4}$)$_{2}$ measured at 75.5MHz.
The vertical dashed line corresponds to the $^{31}$P resonance frequency of the reference sample H$_{3}$PO$_{4}$. The inset shows the $^{31}$P NMR spectrum at $12.5$K (open circles). The solid line is the fit. The NMR shift values obtained from the fitting are $K_{\rm iso} \simeq 2.47\%$ and $K_{\rm ax} \simeq 0.27\%$.](fig5){width="3in"}

![\[K\] (Color online) (a) Temperature-dependent NMR shift $K$ vs. $T$. The solid line is the fit of $K_{\rm iso}$ by Eq. . (b) $^{31}$P shift $K$ vs. $\chi$ measured at 2T is plotted with temperature as an implicit parameter for both $K_{\rm iso}$ and $K_{\rm ax}$. The solid lines are linear fits.](fig6){width="4.5in"}

According to Ref. , the structure of Zn$_{2}$VO(PO$_{4}$)$_{2}$ features one P site. We observed a narrow spectral line above $T_{\rm N}$, as expected for an $I=1/2$ nucleus.[@nath2005; @*nath2008; @*nath2008b] Figure \[spk\] shows the $^{31}$P NMR spectra measured at different temperatures. The line shape was found to be asymmetric because of the anisotropy in $\chi(T)$ and/or in the hyperfine coupling constant between the P nucleus and the V$^{4+}$ spins. The line position was found to shift with temperature. The temperature dependence of the NMR shift $K$, extracted by fitting the spectra (see inset of Fig. \[spk\]), is presented in Fig. \[K\](a), which shows a strong anisotropy along different directions. At high temperatures, both the isotropic ($K_{\rm iso}$) and axial ($K_{\rm ax}$) parts of the NMR shift vary in a Curie-Weiss manner and then pass through a broad maximum at around 9K, reflecting the 2D short-range order, similar to the $\chi(T)$ data (Fig. \[fig:chi\]). The NMR shift $K(T)$ is related to the spin susceptibility $\chi_{\rm spin}(T)$ by the relation $$K(T)=K_{0}+\frac{A_{\rm hf}}{N_{\rm A}} \chi_{\rm spin}(T), \label{shift}$$ where $K_{0}$ is the temperature-independent chemical shift, and $A_{\rm hf}$ is the hyperfine coupling constant between the P nuclei and the V$^{4+}$ electronic spins. The $K$ vs.
$\chi$ plot with $T$ as an implicit parameter is fitted very well by a straight line \[Fig. \[K\](b)\] over the whole temperature range ($T > T_{\rm N}$), yielding the isotropic and axial parts of the hyperfine coupling $A_{\rm hf}^{\rm iso} = (9221 \pm 100)$ and $A_{\rm hf}^{\rm ax} = (1010 \pm 50)$ Oe/$\mu_{\rm B}$, respectively. Since the NMR shift is a direct measure of $\chi _{\rm spin}$ and is free from extrinsic impurities, it serves as an independent test of the bulk susceptibility $\chi(T)$. We fitted the temperature dependence of $K_{\rm iso}$ above 6 K by Eq. , where the expression for $\chi_{\rm spin}$ is given in Eq. . During the fitting process, $g$ and $A_{\rm hf}^{\rm iso}$ were fixed to the values $g \simeq 1.97$ and $A_{\rm hf}^{\rm iso} \simeq 9221$ Oe/$\mu_{\rm B}$, obtained from the ESR experiments and the $K_{\rm iso}$ vs. $\chi$ analysis, respectively. In this way, we obtained $K_{0} = (0.025 \pm 0.001)$ % and $J/k_{\rm B} = (8.4 \pm 0.3)$ K. The fit is shown in Fig. \[K\](a) as a solid line. The resulting $J$ value is close to the values estimated from the $\chi(T)$ analysis[@kini2006] and neutron diffraction experiments.[@yusuf2010]

![\[belowTn\] (Color online) Temperature-dependent $^{31}$P NMR spectra measured at 9.4MHz. The solid lines are the fits to the spectra at different temperatures as in Ref. . Below $T_{N}$, the spectra broaden and take a rectangular shape due to the internal field $H_{\rm int}$.](fig7){width="3in"}

NMR spectra below $T_{N}$
-------------------------

Below $T_{\rm N}$, the $^{31}$P spectra measured at 75.5MHz were found to broaden abruptly. In order to precisely probe the intrinsic line shape, we remeasured the $^{31}$P spectra at a lower frequency of 9.4MHz. As shown in Fig.
\[belowTn\], the $^{31}$P line above $T_{N}$ remains narrow, and immediately below $T_{N}$ it starts broadening, indicating that the P site experiences a static internal field in the ordered state through the hyperfine coupling between the P nuclei and the ordered V$^{4+}$ moments. With decreasing temperature, the spectrum takes a nearly rectangular shape, but the central peak still persists down to the lowest measured temperature. The relative intensity of the central peak with respect to the broad rectangular spectrum decreases with decreasing temperature. As discussed later, this central peak is found to be intrinsic to the sample.[@vonlanthen2002]

![\[sublattice\] (Color online) Temperature dependence of the internal field $H_{\rm int}$ obtained from NMR spectra measured at 9.4MHz in the ordered state. $H_{\rm int}$ is proportional to the V$^{4+}$ sublattice magnetization. The solid line is the fit by Eq.  as described in the text. Inset: $H_{\rm int}$ vs. $\tau$; the solid line is the simulation of $0.046 \times \tau^{0.33}$ taking $T_{\rm N} \simeq 3.90$ K.](fig8){width="3in"}

The internal field $H_{\rm int}$, which is proportional to the V$^{4+}$ sublattice magnetization, was determined by taking the half width at half maximum from the fit of the experimental spectra, following the procedure adopted recently for BiMn$_2$PO$_6$ (Ref. ). The temperature dependence of $H_{\rm int}$ is plotted in Fig. \[sublattice\]. In order to extract the critical exponent ($\beta$) of the order parameter (sublattice magnetization), $H_{\rm int}(T)$ was fitted by the power law: $$H_{\rm int}(T)=H_{0}\left(1-\frac{T}{T_{\rm N}}\right)^{\beta}. \label{ms}$$ One can notice that $H_{\rm int}$ decreases sharply on approaching $T_{\rm N}$. For a precise estimation of $\beta$, one needs more data points close to $T_{\rm N}$. We have estimated $\beta$ by fitting the data points as close as possible to $T_{\rm N}$ (i.e., in the critical region), as shown in Fig. \[sublattice\].
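The power law of Eq. \[ms\] linearizes on log scales, $\log H_{\rm int} = \log H_0 + \beta\log\tau$ with $\tau=1-T/T_N$, so $\beta$ can be read off as a slope. A minimal sketch with synthetic data generated from the reported parameters ($H_0=0.046$ T, $\beta=0.33$, $T_N=3.90$ K), not the measured spectra:

```python
import numpy as np

# Synthetic sublattice-magnetization data from the reported fit parameters.
H0, beta, TN = 0.046, 0.33, 3.90           # T, critical exponent, K
T = np.linspace(3.70, 3.88, 10)            # points inside the critical region
tau = 1 - T / TN                           # reduced temperature
H_int = H0 * tau ** beta

# On log-log axes the power law is a straight line whose slope is beta.
slope, intercept = np.polyfit(np.log(tau), np.log(H_int), 1)
assert abs(slope - beta) < 1e-8
assert abs(np.exp(intercept) - H0) < 1e-9
```

The same log-log regression applied to the real $H_{\rm int}(T)$ points makes the sensitivity of $\beta$ to the chosen fitting window explicit.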
The maximum value of $\beta = 0.33 \pm 0.02$, with $H_0 \simeq 0.046(2)$ T and $T_{\rm N} \simeq 3.9(1)$K, was obtained by fitting the data points in the range 3.7K $\leq T \leq$ 3.95K close to $T_{\rm N}$. By increasing the number of fitting points toward lower temperatures, the $\beta$ value was found to decrease. In order to magnify the fit in the critical region, $H_{\rm int}$ is plotted against the reduced temperature $\tau = 1-\frac{T}{T_{\rm N}}$ in the inset of Fig. \[sublattice\]. The solid line is the fit by $0.046 \times \tau^{0.33}$, where $T_{\rm N}$ is taken to be 3.90 K. At low temperatures, $H_{\rm int}$ develops a tendency toward saturation, and it saturates faster than expected from mean-field theory \[see the deviation of the fits in Fig. \[sublattice\] at low temperatures\].

Nuclear spin-lattice relaxation rate $1/T_{1}$
----------------------------------------------

![\[t1\] (Color online) (a) Spin-lattice relaxation rate $1/T_{1}$ vs. temperature $T$ measured at 75.5 and 9.4MHz. Two data sets at 9.4MHz correspond to the measurements at both the central peak and RHS edge positions below $T_{\rm N}$ \[see Fig. \[belowTn\]\]. The solid and dashed lines represent $T^5$ and $T^3$ behaviors, respectively. (b) $1/(\chi T_{1}T)$ is plotted as a function of $T$.](fig9){width="4in"}

The $^{31}$P nuclear spin-lattice relaxation rate $1/T_{1}$ above $T_{\rm N}$ was measured at the field corresponding to the central peak position. For $T\leq T_{\rm N}$, the measurements were performed at both the central peak position and the right-hand side (RHS) edge position (see Fig. \[belowTn\]). For an $I=1/2$ nucleus, the recovery of the longitudinal magnetization is expected to follow a single-exponential behavior.
In Zn$_{2}$VO(PO$_{4}$)$_{2}$, the recovery of the longitudinal nuclear magnetization was indeed fitted well by the exponential function $1-\frac{M(t)}{M_{0}}=Ae^{-t/T_{1}}$, where $M(t)$ is the nuclear magnetization at a time $t$ after the saturation pulse and $M_{0}$ is the equilibrium magnetization. The temperature dependence of $1/T_{1}$ extracted from the fit is presented in Fig. \[t1\](a). The $1/T_{1}$ data measured at the two different frequencies (75.5MHz and 9.4MHz) nearly coincide at low temperatures. At high temperatures ($T \gtrsim 10$ K), $1/T_{1}$ is temperature-independent. In the high-temperature limit $T\gg J/k_{\rm B}$, a temperature-independent $1/T_{1}$ is typical, reflecting random fluctuations of paramagnetic moments.[@moriya1956] With decreasing temperature, $1/T_{1}$ decreases slowly for $T<10$ K and then shows a weak anomaly around $T_{\rm N}\simeq 3.8$ K. This decrease is very similar to that observed previously in the antiferromagnetic square lattices Pb$_{2}$VO(PO$_{4}$)$_{2}$ (Ref. ), SrZnVO(PO$_4$)$_2$ (Ref. ), VOMoO$_{4}$ (Ref. ), and \[Cu(HCO$_{2}$)$_{2}$.4D$_{2}$O\], where the decrease of $1/T_{1}$ above $T_{\rm N}$ is explained by cancellation of the antiferromagnetic spin fluctuations at the probed nuclei.[@carretta2000] Below the peak, $1/T_{1}$ again decreases smoothly towards zero. As shown in Fig. \[t1\](a), no difference in $1/T_{1}$ below $T_{\rm N}$ was observed between the data measured at the central peak and RHS edge positions at 9.4MHz.

Ti-doped Zn$_2$VO(PO$_4$)$_2$
=============================

![\[fig:chi-doped\] (Color online) Magnetic susceptibility of Ti-doped Zn$_2$VO(PO$_4$)$_2$ measured at $\mu_0H=1$T. The dashed lines are QMC fits with the diluted square-lattice model, as described in the text. The arrows mark Néel temperatures $T_N$ that systematically decrease upon doping (see also Fig. \[fig:heat-doped\]).](fig10)

As mentioned in Sec.
\[sec:methods\], all Ti-doped samples revealed tetragonal symmetry, similar to the parent compound. The sample with $x=0.15$ contained trace amounts of impurity phases, so its doping level may be slightly below 15%, but this minor deviation has no visible effect on the results. Magnetic susceptibility of the doped samples, normalized to one mole of V$^{4+}$ spins, is shown in Fig. \[fig:chi-doped\]. The susceptibility maximum is systematically shifted to higher values of $\chi$ and to lower temperatures. For better presentation, we use a different scaling for the specific heat and normalize the data to one mole of the compound. Fig. \[fig:heat-doped\] presents the systematic reduction in the specific-heat maximum around 4.5K, reflecting the reduced amount of the magnetic V$^{4+}$ ions. The position of the maximum is roughly unchanged up to $x=0.15$. Magnetic order persists in all Ti-doped samples. The magnetic transition is seen as a change in the slope of $\chi(T)$ (Fig. \[fig:chi-doped\]). The precise value of $T_N$ is better tracked by the $\lambda$-type anomaly in the specific heat (Fig. \[fig:heat-doped\]). The Néel temperature, determined with a 0.05K uncertainty from the maximum of the transition anomaly, displays a systematic reduction from 3.8K in the parent compound to 2.9K at $x=0.15$.
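From these two endpoints one can already estimate the normalized dilution slope, defined through $-dT_N(x)/dx = C\,T_N(0)$ (a back-of-the-envelope check, assuming a linear $T_N(x)$):

```python
# Normalized slope C of the Neel temperature, T_N(x) = (1 - C*x) * T_N(0)
TN0, TN15, x = 3.8, 2.9, 0.15    # K, K, doping level
C = (TN0 - TN15) / (x * TN0)
assert abs(C - 1.5) < 0.2        # consistent with the quoted C = 1.5(2)
print(f"C = {C:.2f}")            # prints "C = 1.58"
```

The endpoint estimate agrees with the value obtained from the full $T_N(x)$ fit within its error bar.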
This corresponds to the slope $-dT_N(x)/dx=CT_N(0)$ with $C=1.5(2)$, which is reminiscent of $C\simeq 2$ in Li$_2$VOSiO$_4$[@papinutto2005] and well below $C\simeq 2.7$ or $C\simeq 3.5$ for La$_2$CuO$_4$ doped with Mg and Zn, respectively.[@carretta2011]

  $x$    $g$    $\chi_0$   $J$ (susceptibility)   $J$ (specific heat)
  ------ ------ ---------- ---------------------- ---------------------
  0.00   1.95   $-5$       7.7                    7.8
  0.05   1.95   $-6$       7.7                    7.8
  0.10   1.97   $-4$       7.8                    7.9
  0.15   1.97   $-6$       8.1                    8.0

  : \[tab:fits\] Parameters obtained from fitting the susceptibility and specific heat data for Zn$_2$V$_{1-x}$Ti$_x$O(PO$_4)_2$ with QMC results for the ideal ($x=0$) and diluted ($x>0$) square-lattice models. $g$ stands for the $g$-factor, $\chi_0$ is the temperature-independent contribution to the susceptibility (in $10^{-5}$emu/mol), and $J$ is the exchange coupling (in K).

The Néel temperature of an antiferromagnet depends on its exchange couplings. Therefore, for a proper interpretation of $T_N(x)$ and its slope, one has to evaluate the change in $J$ upon doping. To this end, we fitted the magnetic susceptibility and specific heat of the Ti-doped samples in the same manner as in Sec. \[sec:thermo\] for the parent compound. Model curves were obtained by QMC simulations for the diluted square lattice of spins-$\frac12$. The fitted parameters are listed in Table \[tab:fits\] and show a good match between the susceptibility and specific heat data. The error bar for the values of $J$ is somewhat difficult to define, because statistical errors largely depend on the temperature range of the fitting. However, even with a very optimistic error bar of 0.1K for the susceptibility fits above 7K, the change in $J$ between $x=0$ and $x=0.15$ is only marginal. Moreover, $T_N$ depends on $\ln J$,[@yasuda2005] so the 4% change in the $J$ value should have a negligible effect on $T_N$.
Its reduction is, therefore, solely due to the dilution, and the slope of $T_N(x)$ reflects the dilution effect on the spin-$\frac12$ AFM square lattice in Zn$_2$VO(PO$_4)_2$.

![\[fig:heat-doped\] (Color online) Specific heat of Ti-doped Zn$_2$VO(PO$_4)_2$ measured in zero magnetic field. The dashed lines are QMC fits, and the arrows mark Néel temperatures $T_N$, depicted in the inset as a function of the doping level $x$. The solid line in the inset is the tentative linear fit $T_N=(1-Cx)T_N(0)$ with $C=1.5$.](fig11)

Discussion
==========

Static Properties
-----------------

The exchange couplings extracted from the $\chi(T)$, $I_{\rm ESR}(T)$, and $K_{\rm iso}(T)$ data for Zn$_{2}$VO(PO$_{4}$)$_{2}$ are consistent, within the error bars, with the values reported before from the $\chi(T)$ analysis[@kini2006] and neutron scattering experiments.[@yusuf2010] According to the $J_{2}/J\simeq 0.03$ ratio, Zn$_{2}$VO(PO$_{4}$)$_{2}$ features the Néel antiferromagnetic ground state with antiparallel spins on nearest neighbors in the $ab$ plane (Fig. \[fig:structure\], right).[@yusuf2010] In the crystal structure, squares are formed via V–O–P–O–V superexchange interaction paths. In contrast to Pb$_{2}$VO(PO$_{4}$)$_{2}$, where each P atom is coupled to four V$^{4+}$ spins,[@nath2009] in Zn$_{2}$VO(PO$_{4}$)$_{2}$ each P atom is coupled to only two V$^{4+}$ spins (Fig. \[fig:structure\], right). The total hyperfine coupling constant at the P site is the sum of the transferred hyperfine ($A_{\rm trans}$) and dipolar ($A_{\rm dip}$) couplings produced by the V$^{4+}$ spins, i.e., $A_{\rm hf}=z^{'}A_{\rm trans}+A_{\rm dip}$, where $z^{'}=2$ is the number of nearest-neighbor V$^{4+}$ spins of the P site. The anisotropic dipolar couplings were calculated using lattice sums to be $A_{\rm a} = 210~$Oe/$\mu_{\rm B}$, $A_{\rm b} = 210$ Oe/$\mu_{\rm B}$, and $A_{c} = -420$ Oe/$\mu_{\rm B}$ along the $a$-, $b$-, and $c$-directions, respectively.
Clearly, the dipolar coupling is almost negligible compared to the total hyperfine coupling \[$A_{\rm hf}^{\rm iso} = (9221 \pm 100)$ Oe/$\mu_{\rm B}$\], suggesting that the dominant contribution to the total hyperfine coupling at the P site is the transferred hyperfine coupling. The magnitude of this coupling depends on the relative orientation and the extent of overlap between the V($3d$), P($2p$), and O($2s$) orbitals. The internal field at the P site would cancel out if the P ion were located at a symmetric position with respect to the two nearest-neighbor V$^{4+}$ up and down spins. The observation of a small remnant internal field at the P sites in the AFM ordered state therefore indicates that the P sites are slightly displaced from the perfectly symmetric position. This is also consistent with the crystal structure, where P sits slightly above or below the line joining the neighboring up and down spins (see the right panel of Fig. \[fig:structure\]). The $^{31}$P line in the magnetically ordered state takes a typical rectangular shape, reflecting that the magnetic ordering is commensurate in nature. If the magnetic structure were incommensurate with the lattice, the internal field would be distributed and the spectrum would not exhibit the rectangular shape seen in Fig. \[belowTn\]. Our spectra are, therefore, consistent with the collinear magnetic order determined from the neutron diffraction experiments.[@yusuf2010] The central line does not disappear from the experimental spectra completely even at the lowest measured temperature. NMR experiments on many other compounds, especially on powder samples, are reported to show a similar coexistence of the high-$T$ and low-$T$ phases, e.g., in BaCuP$_2$O$_7$ (Ref. ), (Li,Na)VGe$_2$O$_6$ (Refs. ), (Ca$_4$Al$_2$O$_6$)Fe$_2$(As$_{1-x}$P$_x$)$_2$ (Ref. ), BiMn$_2$PO$_6$ (Ref. ), and LiGaCr$_4$O$_8$ (Ref. ).
The origin of this central line remains an open question. One could argue that the coexistence of two phases is due to a spread of transition temperatures within the polycrystalline sample, but in that case it would be quite unlikely to observe a distinct peak in the temperature dependence of $1/T_1$, as seen in Fig. \[t1\]. One possible origin of the central line is impurity phases. To check this, we measured $1/T_{1}$ below $T_{\rm N}$ at the positions corresponding to the central peak and the RHS edge of the spectra. Note that for any phosphorus-containing impurity phase, the corresponding $T_1$ is expected to differ from the intrinsic $T_1$ of the sample. Moreover, if the central peak were a superposition of intrinsic and extrinsic contributions, one would observe a double-exponential behavior of the longitudinal recovery curves. However, our recovery curves at both positions follow a single-exponential behavior, and the magnitude and temperature dependence of $1/T_{1}$ at both positions are the same, which clearly suggests that the central peak is an intrinsic feature of the sample and rules out a contribution of impurity phases. As discussed earlier, the P site in the ordered state experiences a finite internal field due to its slightly asymmetric position with respect to the neighboring up and down spins. On the contrary, a perfectly symmetric position of P should result in a single narrow spectral line at the zero-shift position. Hence, the central peak may originate from some P sites that are located close to the perfectly symmetric position. Another possible origin of the central line could be the presence of crystal defects or local dislocations in the polycrystalline sample. NMR on a high-quality single crystal can probably resolve this issue.
The temperature dependence of $H_{\rm int}$ in the critical region provides the critical exponent $\beta$ reflecting the universality class of the spin system. The $\beta$ values expected for different spin- and lattice-dimensionalities are listed in Table II of Ref. . The value of $\beta$ obtained from the experiment is $\approx 0.33$, which would be consistent with any of the 3D spin models (Heisenberg, Ising, or XY). Given the direction of spins along the $c$-axis in the magnetically ordered state,[@yusuf2010] the 3D Ising case looks plausible. On the other hand, the 3D behavior in the vicinity of $T_N$ should not be confused with the 2D-like behavior above $T_N$, where the data are well described by the 2D model and 2D spin correlations manifest themselves in neutron scattering.[@yusuf2010] However, the critical exponent for the 2D Ising model[@Collins1989; @*OzekiR149] $\beta=\frac18$ would not be consistent with the experiment. Given the fact that below $T_N$ spins are aligned with the $c$ direction,[@yusuf2010] we may expect a weak Ising anisotropy, but it is impossible to quantify this putative anisotropy using the data at hand. Interestingly, the critical behavior of Zn$_2$VO(PO$_4)_2$ deviates from that of other square-lattice V$^{4+}$ antiferromagnets, where $\beta\simeq 0.25$ (2D XY universality class) was systematically observed in Li$_2$VOSiO$_4$ and Li$_2$VOGeO$_4$ (Refs. ), Pb$_2$VO(PO$_4)_2$ (Refs. ), SrZnVO(PO$_4)_2$ (Refs. ), and other compounds.[@carretta2009] The origin of this difference should be addressed in future studies. Dynamic Properties ------------------ As shown in Fig. \[t1\](b), $1/(\chi T_{1}T)$ above $\sim 10$K is $T$-independent and increases slowly below 10K where the system begins to show antiferromagnetic short-range order. 
The general expression for $\frac{1}{T_{1}T}$ in terms of the dynamic susceptibility $\chi_{M}(\vec{q},\omega_{0})$ is[@moriya1963; @mahajan1998] $$\frac{1}{T_{1}T} = \frac{2\gamma_{N}^{2}k_{B}}{N_{\rm A}^{2}} \sum\limits_{\vec{q}}\mid A(\vec{q})\mid ^{2}\frac{\chi^{''}_{M}(\vec{q},\omega_{0})}{\omega_{0}}, \label{t1form}$$ where the sum is over wave vectors $\vec{q}$ within the first Brillouin zone, $A(\vec{q})$ is the form factor of the hyperfine interactions as a function of $\vec{q}$, and $\chi^{''}_{M}(\vec{q},\omega _{0})$ is the imaginary part of the dynamic susceptibility at the nuclear Larmor frequency $\omega _{0}$. For $q=0$ and $\omega_{0}=0$, the real component $\chi_{M}^{'}(\vec{q},\omega _{0})$ corresponds to the uniform static susceptibility $\chi$. Thus, the temperature-independent $1/(\chi T_{1}T)$ above 10K in Fig. \[t1\](b) demonstrates the dominant contribution of $\chi$ to $1/T_{1}T$. On the other hand, the slight increase in $1/(\chi T_{1}T)$ below 10K indicates the growth of antiferromagnetic correlations with decreasing $T$. The symmetric location of phosphorus between the two V$^{4+}$ spins implies that Néel-type AFM spin fluctuations \[$\vec{q}=(\pm \pi/a, \pm \pi/b)$\] from neighboring spins should be largely filtered out (${|A(\vec{q})|}^2=0$), because the P nuclei interact with V$^{4+}$ spins having opposite directions (Fig. \[fig:structure\], right). When the coupling to the two V$^{4+}$ spins is equivalent, the AFM fluctuations do not contribute to $1/(\chi T_{1}T)$. The residual enhancement of $1/(\chi T_{1}T)$ below 10K reflects the asymmetry of the hyperfine couplings. This asymmetry is consistent with the crystal structure of Zn$_2$VO(PO$_4)_2$, where the P atoms are located on mirror planes running perpendicular to the (${\mathbf a}+{\mathbf b}$) or (${\mathbf a}-{\mathbf b}$) crystallographic directions.
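The filtering argument can be made explicit with a two-site sketch: for a Néel-type (staggered) fluctuation the two V$^{4+}$ neighbors of a P site move out of phase, so the fluctuating hyperfine field is proportional to the difference of the two couplings and vanishes when they are equal. The coupling values below are hypothetical, chosen near $A_{\rm hf}/z' \simeq 4610$ Oe/$\mu_{\rm B}$ for illustration:

```python
# Hyperfine field at the P site from its two V^4+ neighbors: h = A1*S1 + A2*S2.
# For a staggered (Neel) fluctuation the neighbors move out of phase,
# dS2 = -dS1, so the fluctuating field is (A1 - A2)*dS1.
def afm_fluctuating_field(A1, A2, dS1=1.0):
    dS2 = -dS1
    return A1 * dS1 + A2 * dS2

assert afm_fluctuating_field(4610.0, 4610.0) == 0.0   # equal couplings: filtered
assert afm_fluctuating_field(4610.0, 4500.0) != 0.0   # asymmetry: residual relaxation
```

A small inequivalence of the two couplings thus leaves exactly the residual AFM contribution to $1/(\chi T_1T)$ discussed above.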
The tensor of hyperfine couplings may change its orientation upon reflection in the mirror plane, thus leading to non-equivalent interactions between P and the up- and down-spins on the neighboring V$^{4+}$ ions. In the AFM ordered state, $1/T_{1}$ is mainly driven by scattering of magnons, leading to a power-law temperature dependence.[@beeman1968; @belesi2006] For $T \gg \Delta/k_{\rm B}$, where $\Delta/k_{\rm B}$ is the gap in the spin-wave spectrum, $1/T_{1}$ either follows a $T^{3}$ behavior due to a two-magnon Raman process or a $T^{5}$ behavior due to a three-magnon process, while for $T \ll \Delta/k_{\rm B}$, it follows an activated behavior $1/T_{1} \propto T^{2}\exp(-\Delta/k_{\rm B}T)$. As seen from Fig. \[t1\](a), our $^{31}$P $1/T_{1}$ data in the lowest temperature region (1.5K $\leq T \leq$ 3.25K) follow the $T^{5}$ behavior rather than the $T^{3}$ behavior, suggesting that the relaxation is mainly governed by the three-magnon process. The lack of activated behavior down to 1.5K indicates that $\Delta/k_{\rm B}$ does not exceed about 1.5K. At sufficiently high temperatures, $1/T_{1}$ due to local moments is $T$-independent and can be expressed within the Gaussian approximation of the auto-correlation function of the electronic spin as:[@moriya1956] $$\left(\frac{1}{T_1}\right)_{T\rightarrow\infty} = \frac{(\gamma_{N} g\mu_{\rm B})^{2}\sqrt{2\pi}z^\prime S(S+1)}{3\,\omega_{ex}} {\left(\frac{A_{hf}}{z'}\right)^{2}}, \label{t1inf}$$ where $\omega_{ex}=\left(|J_{\rm max}|k_{\rm B}/\hbar\right)\sqrt{2zS(S+1)/3}$ is the Heisenberg exchange frequency, $z$ is the number of nearest-neighbor spins of each V$^{4+}$ ion, and $z^\prime$ is the number of nearest-neighbor V$^{4+}$ spins for a given P site. The $z^\prime$ in the numerator takes into account the number of nearest-neighbor V$^{4+}$ spins responsible for producing fluctuations at the P site.
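Eq. \[t1inf\] can be inverted for $J$; a numerical sketch of the estimate described in the text, with $\mu_B$ cancelling against the Oe/$\mu_B$ units of $A_{\rm hf}$:

```python
import math

# Invert Eq. [t1inf] for J using the parameters quoted in the text.
gamma_N = 1.08e8          # 31P gyromagnetic ratio (rad s^-1 T^-1)
g, S = 2.0, 0.5
z, zp = 4, 2              # V-V and P-V coordination numbers
A_hf = 9221e-4            # hyperfine coupling in T/mu_B (9221 Oe/mu_B)
T1_inv = 7270.6           # high-temperature (150 K) relaxation rate, s^-1
k_B, hbar = 1.380649e-23, 1.054572e-34

# (1/T1)_inf = (gamma_N*g*A_hf/zp)^2 * sqrt(2*pi) * zp * S(S+1) / (3*omega_ex)
num = (gamma_N * g * A_hf / zp) ** 2 * math.sqrt(2 * math.pi) * zp * S * (S + 1)
omega_ex = num / (3 * T1_inv)                              # exchange frequency
J = omega_ex * hbar / (k_B * math.sqrt(2 * z * S * (S + 1) / 3))  # in Kelvin
assert 8.5 < J < 9.7      # reproduces the quoted J ~ 9 K
```

Running this gives $J\approx 9.2$ K, consistent with the $J\simeq 9$K quoted from this estimate.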
Using the relevant parameters, $A_{\rm hf} \simeq 9221$Oe/$\mu_{\rm B}$, $\gamma_N = 1.08 \times 10^8\,{\rm rad}$ s$^{-1}$T$^{-1}$, $z=4$, $z^\prime=2$, $g=2$, $S=\frac12$, and the high-temperature (150K) relaxation rate of $\left(\frac{1}{T_1}\right)_{T\rightarrow\infty}\simeq 7270.6$ s$^{-1}$ for the P site in Eq. , the magnitude of the exchange coupling is calculated to be $J\simeq 9$K, in good agreement with $J\simeq 7.7$K determined from the thermodynamic data (Sec. \[sec:thermo\]). ![\[T1vsH\] (Color online) $1/T_1$ vs. $H$ (in log scale) measured at $T=15$ K for Zn$_2$V$_{1-x}$Ti$_x$O(PO$_4$)$_2$ ($x$ = 0 and 0.10) samples. The solid line is the fit by $1/T_1=a+b\log(1/H)$.](fig12){width="3in"} One can see in Fig. \[t1\](a) that for $T>10$K a slight increase in $1/T_1$ was observed at 9.4MHz compared to the data measured at 75.5MHz. In order to check whether this difference is due to the effect of spin diffusion, we measured $1/T_1$ at different applied fields at $T=15$K. Long-wavelength ($q\sim0$) spin fluctuations in a Heisenberg magnet show diffusive dynamics. In 1D compounds, such spin diffusion results in a $1/\sqrt{H}$ magnetic field dependence of $1/T_1$, which has been observed in (CH$_3$)$_4$NMnCl$_3$, CuCl$_2$.2NC$_5$H$_5$, and Sr$_2$CuO$_3$.[@hone1974; @ajiro1978; @takigawa1996] On the other hand, in 2D materials $1/T_1$ varies as $\log(1/H)$.[@furukawa1996; @ajiro1978] In Fig. \[T1vsH\], $1/T_1$ is plotted against $H$ (in log scale) measured at $T=15$ K for Zn$_2$V$_{1-x}$Ti$_x$O(PO$_4$)$_2$ ($x$ = 0 and 0.10) samples. The two data sets resemble each other and show the same field dependence. They can be fitted by the form $1/T_1=a+b\log(1/H)$, where $a$ and $b$ are constants. The linearity of the $1/T_1$ vs. $\log(H)$ dependence is indicative of the 2D nature of both the parent and 10% Ti$^{4+}$ doped samples above $T_N$.

Effect of doping
----------------

Zn$_2$VO(PO$_4)_2$ reveals a very clean case of a diluted antiferromagnet.
We have shown that the change in the nearest-neighbor coupling $J$ is marginal (Table \[tab:fits\]), and the frustration by second-neighbor couplings $J_2$ is negligible in the parent compound. However, the Néel temperature of Zn$_2$VO(PO$_4)_2$ drops much more slowly than expected for the diluted AFM square lattice of spins-$\frac12$. In a diluted system, $T_N$ can be written as follows:[@chen2000; @papinutto2005] $$k_BT_N(x)=J_{\perp}(1-x)^2\xi(x,T_N)^2\left(\frac{M(x)}{M(0)}\right)^2,$$ where $J_{\perp}(1-x)^2$ reflects the reduction in the interlayer coupling (the probability to find two coupled spins in the adjacent layers), $\xi(x,T_N)$ is the in-plane correlation length, and $M(x)$ is the staggered magnetization at a given value of $x$. All these factors taken together should yield the slope $C\simeq 3.2$ (Ref. ) for the linear dependence of $T_N(x)$ on the diluted square lattice of spins-$\frac12$. Experimentally, Carretta *et al.*[@carretta2011] report $C\simeq 2.7$ and 3.5 for Mg- and Zn-doped La$_2$CuO$_4$, respectively.[@delannoy2009] This finding can be rationalized by assuming that Zn atoms introduce additional frustration, whereas Mg atoms do not.[@liu2009; @carretta2011] In Ti-doped Li$_2$VOSiO$_4$, the slope of $T_N(x)$ is only $C\simeq 2$. Papinutto *et al.*[@papinutto2005] proposed that this slope is solely due to the first term $J_{\perp}(1-x)^2$, while $M(x)$ is only weakly influenced by doping because the effect of dilution is countered by the change in the frustration ratio. This explanation indeed looks plausible for Li$_2$VOSiO$_4$, because the physics of this compound is determined by the competing nearest-neighbor and second-neighbor couplings on the square lattice.[@rosner2002; @*rosner2003; @melzi2000; @*melzi2001] Ti-doped Zn$_2$VO(PO$_4)_2$ reveals an even lower $C\simeq 1.5$, and in this compound frustration is clearly inactive.
We have shown that the frustration is vanishingly small ($J_2/J\simeq 0.03$) in the pristine Zn$_2$VO(PO$_4)_2$, while its increase (if any) would have the opposite effect and increase $C$ above 3.2 instead of decreasing it to the experimental value $C\simeq 1.5$. The different doping behavior of Zn$_2$VO(PO$_4)_2$ and Li$_2$VOSiO$_4$ on the one hand and La$_2$CuO$_4$ on the other can be ascribed to a different magnitude of their interlayer couplings. While Zn$_2$VO(PO$_4)_2$ shows signatures of 2D physics above $T_N$, the Néel temperature of this compound is quite high, $T_N/J\simeq 0.5$, hence $|J_{\perp}|/J\simeq 10^{-1}$. In Li$_2$VOSiO$_4$, the lower Néel temperature of $T_N/J\simeq 0.32$ corresponds to an order-of-magnitude weaker interlayer coupling $|J_{\perp}|/J\simeq 10^{-2}$,[@melzi2000; @*melzi2001; @yasuda2005] which is still much stronger than in La$_2$CuO$_4$ with its $T_N/J\simeq 0.21$ and $J_{\perp}/J\ll 10^{-3}$. Magnetic anisotropy could be another reason for the different evolution of $T_N$ upon doping, but its effect is difficult to quantify. In La$_2$CuO$_4$, Dzyaloshinsky-Moriya (DM) terms, the leading component of the anisotropy in spin-$\frac12$ magnets, are about 1.5% of $J$.[@birgeneau1999] Crystallographic symmetries of both Li$_2$VOSiO$_4$ and Zn$_2$VO(PO$_4)_2$ allow for non-zero DM couplings as well, but their magnitude is presently unknown. Regarding Zn$_2$VO(PO$_4)_2$, our NMR data provide an upper threshold of about 1.5K for the anisotropy gap. This value is, however, nearly 20% of $J$ and exceeds typical DM anisotropies in V$^{4+}$ oxides.[@lumsden2001; @*ivanshin2003] The variable interlayer coupling is a plausible reason for the different doping evolution of $T_N$ in square-lattice antiferromagnets.
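For orientation, the quoted slopes translate into concrete Néel temperatures at the doping level studied here. The sketch below assumes the simplest linear dilution law $T_N(x) = T_N(0)(1 - Cx)$ with $T_N(0) = 3.75$ K; it is an illustration of the trend, not a fit to the data:

```python
# Illustrative only: assumes the simplest linear dilution law
# T_N(x) = T_N(0) * (1 - C x), valid for small doping x.
T_N0 = 3.75  # K, Neel temperature of the parent compound

def t_n(x, C):
    """Neel temperature at doping x for a linear slope C."""
    return T_N0 * (1.0 - C * x)

for C, system in [(3.2, "ideal diluted square lattice (theory)"),
                  (2.0, "Ti-doped Li2VOSiO4"),
                  (1.5, "Ti-doped Zn2VO(PO4)2")]:
    print(f"C = {C}: T_N(x=0.10) = {t_n(0.10, C):.2f} K  ({system})")
```

The smaller the slope $C$, the weaker the suppression of $T_N$ at a given dilution, which is what distinguishes Zn$_2$VO(PO$_4)_2$ from the ideal 2D expectation.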
In La$_2$CuO$_4$, the long-range order emerges only at low temperatures where the in-plane correlation length is about 100 lattice spacings,[@birgeneau1999; @greven1995] and the magnetic order is vulnerable to dilution and disorder. In Li$_2$VOSiO$_4$ and especially in Zn$_2$VO(PO$_4)_2$, the in-plane correlation length at $T_N$ is on the order of several lattice spacings, and interlayer couplings have a larger influence on the long-range ordering, thus reducing the slope of $T_N(x)$ compared to the ideal 2D case where $J_{\perp}\ll J$. Therefore, the doping scenario of Zn$_2$VO(PO$_4)_2$ may be of 3D type and will require one to view this compound as a spatially anisotropic 3D antiferromagnet, even though the physics above $T_N$ is 2D-like.[@kini2006; @yusuf2010] Finally, we note that our data do not support the *ab initio* predictions by Kanungo *et al.*[@kanungo2013] regarding the 1D physics of doped Zn$_2$VO(PO$_4)_2$. While probably correct for the ordered monoclinic structure at the 25% doping level, their results do not apply to our case, where magnetic V$^{4+}$ and non-magnetic Ti$^{4+}$ ions are randomly distributed in the structure, and the overall tetragonal symmetry is retained.

Summary and conclusions
=======================

Zn$_2$VO(PO$_4)_2$ is an antiferromagnetic compound with the in-plane coupling of $J\simeq 7.7$K, negligible in-plane frustration, and long-range magnetic order below $T_N\simeq 3.75$K. Thermodynamic properties above $T_N$ are well described by the Heisenberg model on the AFM square lattice. NMR results confirm the commensurate nature of the magnetic order. The spin-lattice relaxation rate $1/T_1$ below $T_{\rm N}$ follows the $T^{5}$ behavior, reflecting that the relaxation is governed by the three-magnon process. $1/T_1$ at 15K varies as $\log(1/H)$ and supports the presence of strong 2D spatial anisotropy in both the parent and 10% Ti$^{4+}$ doped compounds above $T_N$.
On the other hand, the critical exponent for the sublattice magnetization is consistent with any of the 3D universality classes and may reflect the sizeable interlayer exchange in Zn$_2$VO(PO$_4)_2$. Doping with up to 15% of Ti$^{4+}$ leads to a uniform dilution of the spin lattice and only a marginal change in the in-plane exchange coupling. $T_N$ goes down in a linear manner, but its slope is well below theoretical expectations for the diluted Heisenberg antiferromagnet on the square lattice of spins-$\frac12$ and may indicate the importance of the interlayer exchange. AY, NA, and RCN would like to acknowledge DST India for financial support. AT was funded by the Mobilitas program of the ESF (grant No. MTT77) and by the IUT23-3 grant of the Estonian Research Agency. Work at the Ames Laboratory was supported by the Department of Energy-Basic Energy Sciences under Contract No. DE-AC02-07CH11358. Fig. \[fig:structure\] was prepared using the `VESTA` software.[@vesta]
KEK-TH-2201

[**Types of gauge groups in six-dimensional F-theory on double covers of rational elliptic 3-folds**]{}

Yusuke Kimura$^1$

[*$^1$KEK Theory Center, Institute of Particle and Nuclear Studies, KEK,\ 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan*]{}

E-mail: kimurayu@post.kek.jp

Introduction
============

U(1) gauge symmetry plays an important role in realizing a grand unified theory (GUT), since the presence of a U(1) gauge group explains some characteristic properties of a GUT, such as the suppression of proton decay and the mass hierarchy of quarks and leptons. The presence of a U(1) gauge group is thus relevant to the realization of a GUT in F-theory model building. F-theory [@Vaf; @MV1; @MV2] provides a framework that extends the Type IIB superstrings to nonperturbative regimes. F-theory is compactified on spaces that have a torus fibration, wherein the modular parameter of the tori as fibers is identified with the axio-dilaton, allowing the axio-dilaton to possess an $SL(2,\Z)$ monodromy. A genus-one fibration is said to have a global section when one can choose a point in each fiber and the chosen point can be moved throughout the base. When an elliptic fibration has a global section, the set of global sections that the fibration possesses forms a group, which is referred to as the “Mordell–Weil group.” The number of U(1) factors formed in F-theory, compactified on an elliptic fibration with a section, is equal to the rank of the Mordell–Weil group of that fibration, as discussed in [@MV2]. Models of F-theory compactified on elliptic fibrations admitting a global section have been studied, e.g.
in [@MorrisonPark; @MPW; @BGK; @BMPWsection; @CKP; @BGK1306; @CGKP; @CKP1307; @CKPS; @Mizoguchi1403; @AL; @EKY1410; @LSW; @CKPT; @CGKPS; @MP2; @MPT1610; @BMW2017; @CL2017; @KimuraMizoguchi; @Kimura1802; @LRW2018; @TasilectWeigand; @MizTani2018; @TasilectCL; @Kimura1810; @CMPV1811; @TT2019; @Kimura1902; @Kimura1903; @EJ1905; @LW1905; @Kimura1910; @CKS1910; @Kimura1911; @FKKMT1912; @AFHRZ2001; @Kimura2003; @KMT2003]. For recent progress in F-theory compactifications where one or more U(1) factors are formed, see, for example, [@MorrisonPark; @BMPWsection; @CKP; @CGKP; @BMPW2013; @CKPS; @MTsection; @MT2014; @KMOPR; @BGKintfiber; @CKPT; @GKK; @MPT1610; @LS2017; @Kimura1802; @TT2018; @CLLO; @CMPV1811; @TT2019; @Kimura1908; @Kimura1910; @Kimura1911; @OS2019; @AFHRZ2001; @Kimura2003]. The aim of this study is to build six-dimensional (6D) F-theory models with U(1) factors on Calabi–Yau 3-folds, by applying the general scheme developed in [@Kimura1910; @Kimura2003] to construct a family of elliptic Calabi–Yau 3-folds of positive Mordell–Weil ranks. Calabi–Yau 3-folds with $A$-type and $D$-type singularities are obtained in this work, and one to three U(1) gauge group factors are formed in 6D $N=1$ F-theory on the obtained Calabi–Yau 3-folds. By constructing these 6D models, we demonstrate that the techniques introduced in [@Kimura1910; @Kimura2003] apply to the study of U(1) gauge symmetries in the F-theory formulation. The singularity types of the constructed Calabi–Yau 3-folds are determined, and they correspond to the non-Abelian gauge groups [@MV2; @BIKMSV] formed on the 7-branes in 6D F-theory on the constructed Calabi–Yau 3-folds. A certain class of rational elliptic 3-folds was introduced [@Kimura1910] to generate elliptic Calabi–Yau 3-folds of various Mordell–Weil ranks, referred to as “1/2 Calabi–Yau 3-folds.” Such rational elliptic 3-folds were introduced to build 6D $N=1$ F-theory models with varying numbers of U(1) factors.
Because taking double covers of the 1/2 Calabi–Yau 3-folds yields elliptic Calabi–Yau 3-folds [@Kimura1910], 1/2 Calabi–Yau 3-folds can be regarded as “building blocks” of Calabi–Yau 3-folds. Up to seven U(1) factors form in 6D F-theory on the Calabi–Yau 3-folds built as double covers of 1/2 Calabi–Yau 3-folds [@Kimura1910]; explicit examples of 6D F-theory models with U(1) have been constructed [@Kimura1910]. To extract information on the non-Abelian gauge groups and matter spectra formed in 6D F-theory, the singularity types and singular fibers [^1] of Calabi–Yau 3-folds also need to be analyzed. A method to classify the singularity types of the Calabi–Yau 3-folds as double covers of the 1/2 Calabi–Yau 3-folds was discussed in [@Kimura2003]. Because the singularity type of the 1/2 Calabi–Yau 3-fold and that of the Calabi–Yau 3-fold as the double cover are identical [@Kimura1910], it suffices to classify the singularity types of the 1/2 Calabi–Yau 3-folds. In addition to this general method of classifying the singularity types, the singularity types of the 1/2 Calabi–Yau 3-folds of rank seven [^2] have also been classified, and all types of the 1/2 Calabi–Yau 3-folds with rank-seven singularities were explicitly constructed [@Kimura2003]. Six-dimensional $N=1$ F-theory on the Calabi–Yau 3-folds with rank-seven singularity types, constructed as double covers, was also discussed [@Kimura2003]. In this work, we build Calabi–Yau 3-folds of singularity ranks six and lower as double covers of 1/2 Calabi–Yau 3-folds, discussing 6D F-theory on the resulting Calabi–Yau 3-folds. U(1) gauge groups form in the resulting 6D F-theory models. The singularity types of the constructed Calabi–Yau 3-folds are determined, and the number of U(1) factors formed in 6D F-theory is also deduced.
The 6D F-theory models on Calabi–Yau 3-folds built as double covers of 1/2 Calabi–Yau 3-folds constructed in this work are new; the 6D F-theory models explicitly constructed in [@Kimura1910; @Kimura2003] do not include the models we discuss. Our analysis in this study demonstrates that, to a degree, the methods discussed in [@Kimura2003] have applications in studying U(1) gauge groups and non-Abelian gauge groups formed in 6D F-theory models. We also discuss the structures of singular fibers of 1/2 Calabi–Yau 3-folds and Calabi–Yau 3-folds as their double covers. These can be used to investigate the non-Abelian gauge symmetries formed on 7-branes. Concretely, we construct the 1/2 Calabi–Yau 3-folds with the singularity types $D_4A_1^2$, $D_5A_1$, $A_3A_2A_1$, $A_3A_1^3$, $A_5A_1$, $A_1^5$, $A_2A_1^3$, $A_3A_1^2$, and $A_1^4$. The ranks of these singularity types range from 4 to 6. As the rank of the singularity type and the Mordell–Weil rank of any 1/2 Calabi–Yau 3-fold add to seven [@Kimura1910], the 1/2 Calabi–Yau 3-folds constructed in this work have Mordell–Weil ranks ranging from 1 to 3. One to three U(1) factors form in 6D F-theory compactifications on the Calabi–Yau 3-folds as their double covers. A possible relation of 6D $N=1$ F-theory on the Calabi–Yau 3-folds constructed as double covers of 1/2 Calabi–Yau 3-folds to the swampland conditions was mentioned in [@Kimura1910; @Kimura2003]. The authors of [@Vafa05; @AMNV06; @OV06] discussed the notion of the swampland, and reviews of recent studies on the swampland criteria are given in [@BCV1711; @Palti1903]. Possible combinations of distinct gauge symmetries and matter spectra that can form in 6D quantum gravity theories with $N=1$ supersymmetry were discussed in [@KT0910; @KMT1008; @PT1110; @Taylor1104]. The structures of elliptic fibrations of 3-folds were analyzed in [@Nak; @DG; @G]. An emphasis has been placed on local model building [@DWmodel; @BHV1; @BHV2; @DW] in recent F-theory model building.
Global aspects of the compactification geometry, however, need to be analyzed to discuss the issues of gravity and problems pertaining to the early universe, including inflation. In this study, the structures of the elliptic Calabi–Yau 3-folds are studied from a global viewpoint. This paper is structured as follows. In section \[sec2\], applying the discussion [@Kimura2003] of the equivalence of the singularity types of the 1/2 Calabi–Yau 3-folds and the plane quartic curves as a consequence of the method in [@Muk; @Mukai2008; @Mukai2019], we describe the construction of 1/2 Calabi–Yau 3-folds of various singularity types. Taking double covers of the 1/2 Calabi–Yau 3-folds, we construct elliptic Calabi–Yau 3-folds in section \[sec3\]. F-theory applications are also discussed. U(1) gauge groups are formed in 6D F-theory compactifications on the constructed Calabi–Yau 3-folds. The structures of the singular fibers are analyzed. We state our concluding remarks in section \[sec4\].

Construction of 1/2 Calabi–Yau 3-folds with various singularity types {#sec2}
=====================================================================

Method of construction {#sec2.1}
----------------------

We construct 1/2 Calabi–Yau 3-folds of positive Mordell–Weil ranks with various singularity types. F-theory on the Calabi–Yau 3-folds constructed as their double covers has U(1) factors, as we discuss in section \[sec3\]. The 1/2 Calabi–Yau 3-folds were constructed as a blow-up of $\P^3$ at the base points of three quadrics, as introduced in [@Kimura1910]. Taking the ratio $[Q_1: Q_2: Q_3]$ of the three quadrics $Q_1, Q_2, Q_3$ gives a projection onto the base surface $\P^2$, yielding an elliptic fibration. A method to classify the singularity types of the 1/2 Calabi–Yau 3-folds was introduced in [@Kimura2003].
Utilizing the method discussed in [@Muk; @Mukai2008; @Mukai2019] reveals that the singularity types of the 1/2 Calabi–Yau 3-folds are identical to those of the plane quartic curves [@Kimura2003]. The classification of the singularity types of the plane quartic curves is provided in [@DolgachevAlgGeom]. We construct 1/2 Calabi–Yau 3-folds with singularity types $D_4A_1^2$, $D_5A_1$, $A_3A_2A_1$, $A_3A_1^3$, $A_5A_1$, $A_1^5$, $A_2A_1^3$, $A_3A_1^2$, and $A_1^4$ by considering the duals of plane quartic curves with identical singularities. The curves with these singularities are shown in Figures \[imageD4sum2A1\], \[imageA3A2A1\], \[imageA5A1\], and \[imageA2sum3A1\]. The equations of the three quadrics yielding 1/2 Calabi–Yau 3-folds with such singularity types are deduced in sections \[sec2.2\] through \[sec2.10\]. ![\[imageD4sum2A1\] A quartic curve reducible into a conic and two lines intersecting at a common point has a $D_4A_1^2$ singularity (left) [@DolgachevAlgGeom]. The common intersection point yields a $D_4$ singularity, and the other two intersections each yield an $A_1$ singularity. A quartic curve reducible into a cuspidal cubic and a line through the cusp has a $D_5A_1$ singularity (right) [@DolgachevAlgGeom]. The cusp yields a $D_5$ singularity, with the other intersection of the line and the cuspidal cubic yielding an $A_1$ singularity.](imageD4sum2A1.jpg){height="10cm"} ![\[imageA3A2A1\] A cuspidal cubic and a tangent line have an $A_3A_2A_1$ singularity (left) [@DolgachevAlgGeom]. The tangent point yields an $A_3$ singularity and the cusp yields an $A_2$ singularity. The other intersection point of the tangent line and the cuspidal cubic yields an $A_1$ singularity. A conic, its tangent, and another line have an $A_3A_1^3$ singularity (right) [@DolgachevAlgGeom].
The tangent point yields an $A_3$ singularity, with the other three intersections each yielding an $A_1$ singularity.](imageA3A2A1.jpg){height="10cm"} ![\[imageA5A1\] A quartic curve reducible into a nodal cubic and a tangent line at a flex has an $A_5A_1$ singularity (left) [@DolgachevAlgGeom]. The node yields an $A_1$ singularity and the flex tangent yields an $A_5$ singularity. A conic and two lines in a general position have an $A_1^5$ singularity (right) [@DolgachevAlgGeom].](imageA5A1.jpg){height="10cm"} ![\[imageA2sum3A1\] A cuspidal cubic and a line have an $A_2A_1^3$ singularity (left) [@DolgachevAlgGeom]. The cusp yields an $A_2$ singularity, with the intersection points of the line and the cuspidal cubic yielding three $A_1$ singularities. Two conics tangent at a point have an $A_3A_1^2$ singularity (middle) [@DolgachevAlgGeom]. The tangent point yields an $A_3$ singularity, while the other two intersection points each yield an $A_1$ singularity. Two conics in a general position, intersecting in four distinct points, have an $A_1^4$ singularity (right) [@DolgachevAlgGeom]. The four intersections each yield an $A_1$ singularity.](imageA2sum3A1.jpg){height="10cm"} We use the 1/2 Calabi–Yau 3-folds, whose construction is described in sections \[sec2.2\] through \[sec2.10\], to build the elliptic Calabi–Yau 3-folds discussed in section \[sec3\], on which F-theory provides 6D $N=1$ theories.

$D_4A_1^2$ singularity type {#sec2.2}
---------------------------

A quartic curve in $\P^2$ with $D_4A_1^2$ singularity was realized in [@DolgachevAlgGeom] as a conic and two lines meeting at a common point. The curve is presented in Figure \[imageD4sum2A1\] (left). We construct the 1/2 Calabi–Yau 3-fold with $D_4A_1^2$ singularity type as the dual of the plane quartic curve. We use $[\lambda:\mu:\nu]$ to denote the homogeneous coordinates of the $\P^2$ into which the quartic curve is embedded.
Then the quartic curve with $D_4A_1^2$ singularity is given by the following equation: $$\label{quartic eqn in 2.2} (\mu\nu-\lambda^2)\, \lambda\, (\lambda-\mu)=0.$$ The common point, $[0:0:1]$, of the conic $\mu\nu-\lambda^2=0$ and the two lines, $\lambda=0$ and $\lambda-\mu=0$, yields the $D_4$ singularity. The other intersection point, $[0:1:0]$, of the conic and the line $\lambda=0$, and the other intersection, $[1:1:1]$, of the conic and the line $\lambda-\mu=0$ yield two $A_1$ singularities. By an argument similar to that given in [@Kimura2003], the equations of the three quadrics, the blow-up of $\P^3$ at the base points of which yields the 1/2 Calabi–Yau 3-fold with $D_4A_1^2$ singularity type, can be deduced from the determinantal representation of the quartic curve (\[quartic eqn in 2.2\]). The determinantal representation of the curve (\[quartic eqn in 2.2\]) is given as: $$\label{detrepn in 2.2} \begin{pmatrix} \mu & \lambda & 0 & 0 \\ \lambda & \nu & 0 & 0 \\ 0 & 0& \lambda & 0 \\ 0 & 0& 0 & \lambda-\mu \end{pmatrix},$$ and the equations of the three quadrics $Q_1, Q_2, Q_3$ are deduced from the determinantal representation (\[detrepn in 2.2\]) as: $$\begin{aligned} \label{quadrics D42A1 in 2.2} Q_1= & z^2+2xy+w^2 \\ \nonumber Q_2= & x^2-w^2 \\ \nonumber Q_3= & y^2.\end{aligned}$$ We use $[x:y:z:w]$ to denote the homogeneous coordinates of $\P^3$. We denote the coordinates of the base surface $\P^2$ of a 1/2 Calabi–Yau 3-fold as $[a:b:c]$. Then, because the curve $c=0$ in the base is dual to the $D_4$ singularity point $[0:0:1]$ of the quartic curve (\[quartic eqn in 2.2\]), type $I_0^*$ fibers lie over the discriminant curve $c=0$. We use $l_1$ to denote the curve $c=0$. We use $l_2, l_3$ to denote the curves in the base that are dual to the two $A_1$ singularities of the quartic curve (\[quartic eqn in 2.2\]). Type $I_2$ fibers lie over the curves $l_2$ and $l_3$.
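The claim that the matrix (\[detrepn in 2.2\]) is a determinantal representation of the quartic (\[quartic eqn in 2.2\]) is straightforward to verify symbolically; the following sympy sketch (an illustrative check, not part of the original argument) confirms that its determinant reproduces the quartic:

```python
import sympy as sp

lam, mu, nu = sp.symbols('lambda mu nu')

# Matrix of the determinantal representation (detrepn in 2.2)
M = sp.Matrix([
    [mu,  lam, 0,   0        ],
    [lam, nu,  0,   0        ],
    [0,   0,   lam, 0        ],
    [0,   0,   0,   lam - mu ],
])

# Quartic with D4 A1^2 singularity, Eq. (quartic eqn in 2.2)
quartic = (mu * nu - lam**2) * lam * (lam - mu)

# The determinant reproduces the quartic identically
assert sp.expand(M.det() - quartic) == 0
print("det M = (mu nu - lambda^2) lambda (lambda - mu)")
```

The same check applies verbatim to the other determinantal representations below, with the matrix and quartic replaced accordingly.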
We denote the conic $\mu\nu-\lambda^2=0$ by $C$ and its dual curve by $C^*$ in the base surface of the 1/2 Calabi–Yau 3-fold; then the discriminant of the 1/2 Calabi–Yau 3-fold is given as follows: $$\Delta \sim l_1^6\cdot l_2^2 \cdot l_3^2\cdot C^*.$$

$D_5A_1$ singularity type {#sec2.3}
-------------------------

A quartic curve in $\P^2$ with $D_5A_1$ singularity is reducible into a cubic with a cusp [^3] and a line through the cusp [@DolgachevAlgGeom]. We construct the 1/2 Calabi–Yau 3-fold with $D_5A_1$ singularity type as the dual of the plane quartic curve. The quartic curve with $D_5A_1$ singularity is given as follows: $$\label{quartic eqn in 2.3} (\lambda^3+\mu\nu^2)(\lambda-\nu)=0.$$ The cusp is at $[\lambda:\mu:\nu]=[0:1:0]$, and this yields the $D_5$ singularity. The other intersection point, $[1:-1:1]$, of the cuspidal cubic $\lambda^3+\mu\nu^2=0$ and the line $\lambda-\nu=0$ yields the $A_1$ singularity. The determinantal representation of the quartic curve (\[quartic eqn in 2.3\]) is given as: $$\label{detrepn in 2.3} \begin{pmatrix} -\mu & 0 & \lambda & 0 \\ 0 & -\lambda & \nu & 0 \\ \lambda & \nu& 0 & 0 \\ 0 & 0& 0 & \lambda-\nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.3\]) as: $$\begin{aligned} Q_1= & -y^2+2xz+w^2 \\ \nonumber Q_2= & -x^2 \\ \nonumber Q_3= & 2yz-w^2.\end{aligned}$$ The curve in the base surface $\P^2$ dual to the $D_5$ singularity point $[0:1:0]$ of the quartic curve (\[quartic eqn in 2.3\]) is given by $b=0$; we use $l_1$ to denote the curve $b=0$. We denote the curve dual to the $A_1$ singularity of the quartic curve at $[1:-1:1]$ by $l_2$; then type $I_2$ fibers lie over the curve $l_2$. We denote the cuspidal cubic by $B$, and we use $B^*$ to denote its dual in the base surface of the 1/2 Calabi–Yau 3-fold.
The discriminant of the 1/2 Calabi–Yau 3-fold with $D_5A_1$ singularity type is given as follows: $$\Delta \sim l_1^7\cdot l_2^2 \cdot B^*.$$

$A_3A_2A_1$ singularity type {#sec2.4}
----------------------------

A plane quartic curve with $A_3A_2A_1$ singularity is reducible into a cuspidal cubic and a tangent line [@DolgachevAlgGeom]. The curve is presented in Figure \[imageA3A2A1\] (left). We construct the 1/2 Calabi–Yau 3-fold with $A_3A_2A_1$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_3A_2A_1$ singularity is given as follows: $$\label{quartic eqn in 2.4} (\lambda^3+\mu\nu^2)(3\lambda+\mu+2\nu)=0.$$ The cusp at $[\lambda:\mu:\nu]=[0:1:0]$ yields the $A_2$ singularity. The point $[\lambda:\mu:\nu]=[1:-1:-1]$, where the line $3\lambda+\mu+2\nu=0$ is tangent to the cuspidal cubic, yields the $A_3$ singularity. The other intersection point, $[2:-8:1]$, of the tangent line and the cuspidal cubic yields the $A_1$ singularity. The determinantal representation of the quartic curve (\[quartic eqn in 2.4\]) is given as: $$\label{detrepn in 2.4} \begin{pmatrix} -\mu & 0 & \lambda & 0 \\ 0 & -\lambda & \nu & 0 \\ \lambda & \nu& 0 & 0 \\ 0 & 0& 0 & 3\lambda+\mu+2\nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.4\]) as: $$\begin{aligned} \label{quadrics in 2.4} Q_1= & -y^2+2xz+3w^2 \\ \nonumber Q_2= & -x^2+w^2 \\ \nonumber Q_3= & 2yz+2w^2.\end{aligned}$$ The curve dual to the tangent point is denoted as $l_1$; then the singular fibers over $l_1$ have type $I_4$. The curve in the base surface $\P^2$ dual to the cusp $[0:1:0]$ is given by $b=0$; type $I_3$ fibers lie over this curve. We denote this curve as $l_2$. The curve dual to the intersection point $[\lambda:\mu:\nu]=[2:-8:1]$ is denoted as $l_3$, and type $I_2$ fibers lie over the curve $l_3$.
The discriminant of the 1/2 Calabi–Yau 3-fold with $A_3A_2A_1$ singularity type is given as follows: $$\Delta \sim l_1^4\cdot l_2^3\cdot l_3^2 \cdot B^*.$$

$A_3A_1^3$ singularity type {#sec2.5}
---------------------------

A quartic curve in $\P^2$ with $A_3A_1^3$ singularity is reducible into a conic, a tangent line, and another line [@DolgachevAlgGeom]. The curve is presented in Figure \[imageA3A2A1\] (right). We construct the 1/2 Calabi–Yau 3-fold with $A_3A_1^3$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_3A_1^3$ singularity is given as follows: $$\label{quartic eqn in 2.5} (-\lambda^2+\mu\nu)\, \mu(\mu-\nu)=0.$$ The line $\mu=0$ is tangent to the conic $\mu\nu-\lambda^2=0$ at $[0:0:1]$, yielding the $A_3$ singularity. Another line, $\mu-\nu=0$, intersects the conic at two points, $[1:1:1]$ and $[1:-1:-1]$, and these yield two $A_1$ singularities. The two lines $\mu=0$ and $\mu-\nu=0$ intersect at the point $[1:0:0]$, yielding an $A_1$ singularity. The determinantal representation of the quartic curve (\[quartic eqn in 2.5\]) is given as: $$\label{detrepn in 2.5} \begin{pmatrix} \mu & \lambda & 0 & 0 \\ \lambda & \nu & 0 & 0 \\ 0& 0 & \mu & 0 \\ 0 & 0& 0 & \mu-\nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.5\]) as: $$\begin{aligned} Q_1= & 2xy \\ \nonumber Q_2= & x^2+z^2+w^2 \\ \nonumber Q_3= & y^2-w^2.\end{aligned}$$ The curve $c=0$ in the base $\P^2$ dual to the tangent point $[0:0:1]$ is denoted as $l_1$; then the singular fibers over the curve $l_1$ have type $I_4$. The curves in the base surface $\P^2$ dual to the three $A_1$ singularities are denoted as $l_2, l_3, l_4$, respectively. The conic $\mu\nu-\lambda^2=0$ is denoted as $C$.
The discriminant of the 1/2 Calabi–Yau 3-fold with $A_3A_1^3$ singularity type is given as follows: $$\Delta \sim l_1^4\cdot l_2^2\cdot l_3^2 \cdot l_4^2 \cdot C^*.$$

$A_5A_1$ singularity type {#sec2.6}
-------------------------

A plane quartic curve with $A_5A_1$ singularity is reducible into a nodal cubic and a tangent line at a flex [@DolgachevAlgGeom]. We construct the 1/2 Calabi–Yau 3-fold with $A_5A_1$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_5A_1$ singularity is given as follows: $$\label{quartic eqn in 2.6} (\lambda^3+\lambda^2\nu-\mu^2\nu)\nu =0.$$ The node at $[\lambda:\mu:\nu]=[0:0:1]$ yields the $A_1$ singularity. The flex $[\lambda:\mu:\nu]=[0:1:0]$, where the line $\nu=0$ is tangent to the nodal cubic, yields the $A_5$ singularity. The determinantal representation of the quartic curve (\[quartic eqn in 2.6\]) is given as: $$\label{detrepn in 2.6} \begin{pmatrix} \nu & 0 & \lambda & 0 \\ 0 & -\lambda & \mu & 0 \\ \lambda & \mu& -\lambda & 0 \\ 0 & 0& 0 & \nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.6\]) as: $$\begin{aligned} Q_1= & 2xz-z^2-y^2 \\ \nonumber Q_2= & 2yz \\ \nonumber Q_3= & x^2+w^2.\end{aligned}$$

$A_1^5$ singularity type {#sec2.7}
------------------------

A plane quartic curve with $A_1^5$ singularity was realized in [@DolgachevAlgGeom] as a conic and two lines in a general position. We construct the 1/2 Calabi–Yau 3-fold with $A_1^5$ singularity type as the dual of the plane quartic curve. Then the quartic curve with $A_1^5$ singularity is given by the following equation: $$\label{quartic eqn in 2.7} (\lambda^2-\mu\nu)\, \lambda(\mu-\nu)=0.$$ The line $\lambda=0$ intersects the conic $\lambda^2-\mu\nu=0$ at two points, $[0:1:0]$ and $[0:0:1]$. The line $\mu-\nu=0$ intersects the conic at two points, $[1:1:1]$ and $[1:-1:-1]$. The two lines intersect at the point $[0:1:1]$. The five intersection points yield five $A_1$ singularities.
The determinantal representation of the curve (\[quartic eqn in 2.7\]) is given as: $$\label{detrepn in 2.7} \begin{pmatrix} \mu & \lambda& 0 & 0 \\ \lambda & \nu & 0 & 0 \\ 0 & 0& \lambda & 0 \\ 0 & 0& 0 & \nu-\mu \end{pmatrix},$$ and the equations of the three quadrics $Q_1, Q_2, Q_3$ are deduced from the determinantal representation (\[detrepn in 2.7\]) as: $$\begin{aligned} Q_1= & z^2+2xy \\ \nonumber Q_2= & x^2-w^2 \\ \nonumber Q_3= & y^2+w^2.\end{aligned}$$ We denote the five curves in the base $\P^2$ dual to the five intersection points $[0:1:0], [0:0:1], [1:1:1], [1:-1:-1], [0:1:1]$ as $l_1, l_2, l_3, l_4, l_5$, respectively. The conic $\lambda^2-\mu\nu=0$ is denoted as $C$. The singular fibers over each of the five curves, $l_1, \ldots, l_5$, have type $I_2$. Then, the discriminant of the 1/2 Calabi–Yau 3-fold with $A_1^5$ singularity type is given as follows: $$\Delta \sim l_1^2\cdot l_2^2 \cdot l_3^2\cdot l_4^2 \cdot l_5^2 \cdot C^*.$$ $A_2A_1^3$ singularity type {#sec2.8} --------------------------- A plane quartic curve with $A_2A_1^3$ singularity is reducible into a cuspidal cubic and a line in general position [@DolgachevAlgGeom]. The curve is presented in Figure \[imageA2sum3A1\] (left). We construct the 1/2 Calabi–Yau 3-fold with $A_2A_1^3$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_2A_1^3$ singularity is given as follows: $$\label{quartic eqn in 2.8} (\lambda^3+\mu\nu^2)(\lambda+\mu)=0.$$ The cusp at $[\lambda:\mu:\nu]=[0:1:0]$ yields an $A_2$ singularity. The line $\lambda+\mu=0$ and the cuspidal cubic intersect in three points, $[0:0:1], [1:-1:-1], [1:-1:1]$. The three intersection points yield three $A_1$ singularities.
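The three intersection points can be recomputed directly (our check): substituting $\mu=-\lambda$ into the cubic factor of (\[quartic eqn in 2.8\]) gives $\lambda^3-\lambda\nu^2=\lambda(\lambda-\nu)(\lambda+\nu)$, whose three roots are exactly the listed points.

```python
# Components of the quartic (lambda^3 + mu*nu^2) * (lambda + mu) = 0
cubic = lambda l, m, n: l**3 + m * n**2
line = lambda l, m, n: l + m

# The three claimed intersection points of the cubic and the line
for p in [(0, 0, 1), (1, -1, -1), (1, -1, 1)]:
    assert cubic(*p) == 0 and line(*p) == 0

# On the line mu = -lambda the cubic restricts to lambda*(lambda-nu)*(lambda+nu)
for l in range(-5, 6):
    for n in range(-5, 6):
        assert cubic(l, -l, n) == l * (l - n) * (l + n)
```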
The determinantal representation of the quartic curve (\[quartic eqn in 2.8\]) is given as: $$\label{detrepn in 2.8} \begin{pmatrix} -\mu & 0 & \lambda & 0 \\ 0 & -\lambda & \nu & 0 \\ \lambda & \nu& 0 & 0 \\ 0 & 0& 0 & \lambda+\mu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.8\]) as: $$\begin{aligned} Q_1= & -y^2+2xz+w^2 \\ \nonumber Q_2= & -x^2+w^2 \\ \nonumber Q_3= & 2yz.\end{aligned}$$ Curves in the base $\P^2$ dual to the three intersection points $[0:0:1], [1:-1:-1], [1:-1:1]$ of the quartic (\[quartic eqn in 2.8\]) are denoted as $l_1, l_2, l_3$, respectively. The curve in the base surface $\P^2$ dual to the cusp $[0:1:0]$ is given by $b=0$, and this dual curve is denoted as $l_4$. Type $I_2$ fibers lie over the curves $l_1, l_2, l_3$, and type $I_3$ fibers lie over the curve $l_4$. The cuspidal cubic is denoted as $B$. The discriminant of the 1/2 Calabi–Yau 3-fold with $A_2A_1^3$ singularity type is given as follows: $$\Delta \sim l_1^2\cdot l_2^2 \cdot l_3^2\cdot l_4^3 \cdot B^*.$$ $A_3A_1^2$ singularity type {#sec2.9} --------------------------- A quartic curve in $\P^2$ with $A_3A_1^2$ singularity is reducible into two conics meeting in three points, one of which is a tangent point [@DolgachevAlgGeom]. The curve is presented in Figure \[imageA2sum3A1\] (middle). We construct the 1/2 Calabi–Yau 3-fold with $A_3A_1^2$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_3A_1^2$ singularity is given as follows: $$\label{quartic eqn in 2.9} \big(\nu(4\mu+\nu)-\lambda^2\big)\, \big(\lambda\nu-(\mu+\nu)^2 \big)=0.$$ The tangent point of the two conics, $\nu(4\mu+\nu)-\lambda^2=0$ and $\lambda\nu-(\mu+\nu)^2=0$, is at $[\lambda:\mu:\nu]=[1:0:1]$, and this yields an $A_3$ singularity. The other two intersection points yield $A_1$ singularities.
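The tangency at $[1:0:1]$ can be confirmed in a few lines (our check, with the partial derivatives of the two conic equations computed by hand): the point lies on both conics, and the gradients there are proportional, so the two tangent lines coincide.

```python
# The two conics of the quartic (quartic eqn in 2.9)
def C1(l, m, n):  # nu*(4*mu + nu) - lambda^2
    return n * (4 * m + n) - l * l

def C2(l, m, n):  # lambda*nu - (mu + nu)^2
    return l * n - (m + n) ** 2

# Hand-computed gradients (partials w.r.t. lambda, mu, nu)
grad_C1 = lambda l, m, n: (-2 * l, 4 * n, 4 * m + 2 * n)
grad_C2 = lambda l, m, n: (n, -2 * (m + n), l - 2 * (m + n))

def parallel(u, v):
    # proportional iff all 2x2 minors of the stacked 2x3 matrix vanish
    return (u[0] * v[1] - u[1] * v[0] == 0
            and u[0] * v[2] - u[2] * v[0] == 0
            and u[1] * v[2] - u[2] * v[1] == 0)

P = (1, 0, 1)  # the claimed tangent point [1:0:1]
assert C1(*P) == 0 and C2(*P) == 0          # P lies on both conics
assert parallel(grad_C1(*P), grad_C2(*P))   # the tangent lines coincide
```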
The determinantal representation of the quartic curve (\[quartic eqn in 2.9\]) is given as: $$\label{detrepn in 2.9} \begin{pmatrix} 4\mu+\nu & \lambda & 0 & 0 \\ \lambda & \nu & 0 & 0 \\ 0 & 0 & \lambda & \mu+\nu \\ 0 & 0 & \mu+\nu & \nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.9\]) as: $$\begin{aligned} Q_1= & 2xy+z^2 \\ \nonumber Q_2= & 4x^2+2zw \\ \nonumber Q_3= & x^2+y^2+w^2+2zw.\end{aligned}$$ The curve in the base surface $\P^2$ dual to the $A_3$ singularity point $[1:0:1]$ of the quartic curve (\[quartic eqn in 2.9\]) is denoted as $l_1$, and type $I_4$ fibers lie over this curve. We denote the curves dual to the two $A_1$ singularities of the quartic curve by $l_2$ and $l_3$; type $I_2$ fibers then lie over the curves $l_2, l_3$. We denote the two conics by $C_1$ and $C_2$, and we use $C^*_1$, $C^*_2$ to denote their duals in the base surface of the 1/2 Calabi–Yau 3-fold. The discriminant of the 1/2 Calabi–Yau 3-fold with $A_3A_1^2$ singularity type is given as follows: $$\Delta \sim l_1^4\cdot l_2^2 \cdot l_3^2 \cdot C^*_1 \cdot C^*_2.$$ $A_1^4$ singularity type {#sec2.10} ------------------------ A plane quartic curve with $A_1^4$ singularity is reducible into two conics in general position [@DolgachevAlgGeom]. We construct the 1/2 Calabi–Yau 3-fold with $A_1^4$ singularity type as the dual of the plane quartic curve. The quartic curve with $A_1^4$ singularity is given as follows: $$\label{quartic eqn in 2.10} (\mu\nu-\lambda^2)(\lambda\nu-\mu^2)=0.$$ The two conics $\mu\nu-\lambda^2=0$ and $\lambda\nu-\mu^2=0$ intersect in four points. The four intersection points yield four $A_1$ singularities.
The determinantal representation of the quartic curve (\[quartic eqn in 2.10\]) is given as: $$\label{detrepn in 2.10} \begin{pmatrix} \mu & \lambda & 0 & 0 \\ \lambda & \nu & 0 & 0 \\ 0 & 0 & \lambda & \mu \\ 0 & 0& \mu & \nu \end{pmatrix},$$ and the equations of the three quadrics are deduced from the determinantal representation (\[detrepn in 2.10\]) as: $$\begin{aligned} Q_1= & 2xy+z^2 \\ \nonumber Q_2= & x^2+2zw \\ \nonumber Q_3= & y^2+w^2.\end{aligned}$$ Curves in the base $\P^2$ dual to the four intersection points of the quartic (\[quartic eqn in 2.10\]) are denoted as $l_1, l_2, l_3, l_4$, respectively. Type $I_2$ fibers lie over the curves $l_1, l_2, l_3, l_4$. The two conics are denoted as $C_1$ and $C_2$. The discriminant of the 1/2 Calabi–Yau 3-fold with $A_1^4$ singularity type is given as follows: $$\Delta \sim l_1^2\cdot l_2^2 \cdot l_3^2\cdot l_4^2 \cdot C_1^* \cdot C_2^*.$$ 6D F-theory models and gauge groups {#sec3} =================================== We discuss applications to 6D F-theory compactifications. Taking double covers of 1/2 Calabi–Yau 3-folds built in sections \[sec2.2\] - \[sec2.10\] yields elliptic Calabi–Yau 3-folds [^4]. F-theory compactifications on the resulting Calabi–Yau 3-folds yield 6D $N=1$ theories. The base surface of the Calabi–Yau 3-folds as double covers of 1/2 Calabi–Yau 3-folds is isomorphic to a del Pezzo surface of degree two [@Kimura1910] [^5]; therefore, seven tensor fields arise in 6D $N=1$ F-theory on the Calabi–Yau 3-folds as double covers of 1/2 Calabi–Yau 3-folds [@Kimura1910]. The singularity types of 1/2 Calabi–Yau 3-folds constructed in sections \[sec2.2\] through \[sec2.10\] and those of the Calabi–Yau 3-folds as their double covers are identical [@Kimura1910], thus they determine the types of the non-Abelian gauge groups [@MV2; @BIKMSV] formed in 6D F-theory on the Calabi–Yau 3-folds obtained as the double covers.
The singularity types corresponding to the non-Abelian gauge groups formed on the 7-branes in 6D F-theory on the Calabi–Yau 3-folds as double covers of 1/2 Calabi–Yau 3-folds constructed in sections \[sec2.2\] through \[sec2.10\] are given as follows: $D_4A_1^2$, $D_5A_1$, $A_3A_2A_1$, $A_3A_1^3$, $A_5A_1$, $A_1^5$, $A_2A_1^3$, $A_3A_1^2$, $A_1^4$. SU(2) gauge group factors form in 6D F-theory on the Calabi–Yau 3-folds as double covers of 1/2 Calabi–Yau 3-folds whose singularity types include $A_1$, as constructed in sections \[sec2.2\] through \[sec2.10\]. For example, SU(2)$^3$ forms in F-theory on the Calabi–Yau 3-fold with $A_3A_1^3$ singularity type, as the double cover of the 1/2 Calabi–Yau 3-fold constructed in section \[sec2.5\]. The sum of the rank of the singularity type and the Mordell–Weil rank of any 1/2 Calabi–Yau 3-fold is seven [@Kimura1910]. The 1/2 Calabi–Yau 3-folds built in sections \[sec2.2\] through \[sec2.6\] have singularity types of rank 6; therefore, their Mordell–Weil rank is 1. The 1/2 Calabi–Yau 3-folds built in sections \[sec2.7\] through \[sec2.9\] have rank 5 singularity types; therefore, their Mordell–Weil rank is 2. The 1/2 Calabi–Yau 3-fold built in section \[sec2.10\] has a rank 4 singularity type; therefore, its Mordell–Weil rank is 3. The Mordell–Weil rank of a Calabi–Yau 3-fold as a double cover of a 1/2 Calabi–Yau 3-fold is greater than or equal to the Mordell–Weil rank of the original 1/2 Calabi–Yau 3-fold [@Kimura1910]. Calabi–Yau 3-folds constructed by taking double covers of 1/2 Calabi–Yau 3-folds built in sections \[sec2.2\] - \[sec2.6\] therefore have Mordell–Weil ranks greater than or equal to 1. 6D $N=1$ F-theory compactifications on these Calabi–Yau 3-folds have at least one U(1) factor. A similar reasoning applied to 6D F-theory on the Calabi–Yau 3-folds constructed by taking double covers of 1/2 Calabi–Yau 3-folds built in sections \[sec2.7\] - \[sec2.9\] shows that at least two U(1) factors form in the theories.
By similar reasoning, at least three U(1) factors form in 6D F-theory on the Calabi–Yau 3-folds constructed by taking double covers of the 1/2 Calabi–Yau 3-fold built in section \[sec2.10\]. The structure of type $I_3$ fibers corresponding to $A_2$ singularities can be seen after conducting blow-ups in our constructions in section \[sec2\]. The singular fiber corresponding to an $A_2$ singularity is given by two conics meeting in two points, where one and only one of the two meeting points is a base point of the three quadrics. When this base point is blown up, a $\P^1$ arising from the blown-up base point appears in the singular fiber. Then the singular fiber consists of three $\P^1$s, each pair of which meet in one point, and the structure of the type $I_3$ fiber becomes clear. We use the 1/2 Calabi–Yau 3-fold constructed in section \[sec2.4\] as an example to demonstrate this point. The singular fibers corresponding to the $A_2$ singularity are the fibers over the curve $l_2$, which is dual to the $A_2$ singularity point $[0:1:0]$ of the quartic curve (\[quartic eqn in 2.4\]). Thus, the equation of the singular fibers is given as follows: $$\begin{aligned} \label{singular fiber of A2 before blow-up in 3} -x^2+w^2= & 0 \\ \nonumber c(-y^2+2xz+3w^2)-a(2yz+2w^2)= & 0,\end{aligned}$$ where $[a:c]$ parameterizes the curve $l_2$. We will see that the structure of the type $I_3$ fiber emerges after blow-ups. Before the blow-ups, applying to equation (\[singular fiber of A2 before blow-up in 3\]) an argument similar to that given in [@Kimura2003], one finds two conics intersecting in two points, where the conics are contained in the hyperplanes $x-w=0$ and $x+w=0$. The conics intersect along the intersection of the hyperplanes $x-w=0$ and $x+w=0$.
Therefore, the intersection points lie along the locus $x=w=0$, and they are the solutions to the following equation: $$cy^2+2ayz=0.$$ Thus, $\{x=y=w=0\}$ and $\{x=w=0, \hspace{1mm} cy+2az=0\}$ yield the two intersection points of the two conics. The three quadrics (\[quadrics in 2.4\]) have five base points: $[0:0:1:0]$, $[1:1:-1:1]$, $[-1:-1:1:1]$, $[2:-4:1:2]$, $[-2:4:-1:2]$. Therefore, one of the two intersection points of the conics, $[0:0:1:0]$, is a base point. When the base points are blown up, the blow-up at the intersection point $[0:0:1:0]$ separates the two intersecting conics at that point. (The other intersection point remains unchanged.) A $\P^1$ arises from the blown-up intersection point, and the structure of the type $I_3$ fiber becomes evident after the blow-up. The situation is given in Figure \[imageI3fiber\]. ![\[imageI3fiber\]When one of the intersection points of two conics meeting in two points is blown up, the two conics are separated at the blown-up point. A $\P^1$ arises from the blown-up point, and the blue line in the right image represents this $\P^1$. This $\P^1$ intersects each of the two conics in one point, and the structure of the type $I_3$ fiber becomes clear as described in the right image.](typeI3fiber.jpg){height="10cm"} The singularity type of the 1/2 Calabi–Yau 3-fold constructed in section \[sec2.2\] includes $D_4$. By an argument similar to that given in [@Kimura2003], we can see the structure of the type $I_0^*$ fiber corresponding to the $D_4$ singularity after conducting blow-ups at base points of the three quadrics for the 1/2 Calabi–Yau 3-fold in section \[sec2.2\]. A double conic blown up at four base points yields this fiber type, and the appearance of a type $I_0^*$ fiber in the 1/2 Calabi–Yau 3-fold of section \[sec2.2\] is analogous to the situation of the 1/2 Calabi–Yau 3-fold with $D_4A_1^3$ singularity discussed in [@Kimura2003].
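Returning to the type $I_3$ fiber computation above: the factorization underlying the two intersection points can be checked directly (our sketch, using only the fiber equations (\[singular fiber of A2 before blow-up in 3\])).

```python
from random import seed, randint

def fiber_eqs(a, c, x, y, z, w):
    # Singular fiber over the point [a:c] of the curve l_2,
    # from equation (singular fiber of A2 before blow-up in 3)
    e1 = -x**2 + w**2
    e2 = c * (-y**2 + 2 * x * z + 3 * w**2) - a * (2 * y * z + 2 * w**2)
    return e1, e2

seed(0)
for _ in range(100):
    a, c, y, z = (randint(-9, 9) for _ in range(4))
    # On the locus x = w = 0 the second equation factors as -y*(c*y + 2*a*z),
    # whose zero set is y = 0 together with the line c*y + 2*a*z = 0
    _, e2 = fiber_eqs(a, c, 0, y, z, 0)
    assert e2 == -y * (c * y + 2 * a * z)
    # ...so the two intersection points are [0:0:1:0] and, for generic [a:c],
    # the point [0:-2a:c:0] on the line c*y + 2*a*z = 0
    assert fiber_eqs(a, c, 0, 0, 1, 0) == (0, 0)
    assert fiber_eqs(a, c, 0, -2 * a, c, 0) == (0, 0)
```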
The situations of the $A_3$ singularities of the 1/2 Calabi–Yau 3-folds constructed in sections \[sec2.4\], \[sec2.5\], and \[sec2.9\] are analogous to the analysis of the 1/2 Calabi–Yau 3-fold with $A_3^2A_1$ singularity studied in [@Kimura2003]. Analyses similar to those given in [@Kimura2003] also reveal that the structures of type $I_4$ fibers can be seen after the blow-ups of the base points of the three quadrics for the constructions in sections \[sec2.4\], \[sec2.5\], and \[sec2.9\], whose singularity types include $A_3$. As mentioned in [@Kimura2003], the structure of the type $I_6$ fiber is expected to be seen after multiple stages of blow-ups when a 1/2 Calabi–Yau 3-fold has an $A_5$ singularity, as in the construction of section \[sec2.6\]. The structures of the singular fibers of the 1/2 Calabi–Yau 3-folds are determined via the blow-ups, as we have seen. The fiber types of the Calabi–Yau 3-folds as their double covers remain invariant; therefore, the fiber types of the Calabi–Yau 3-folds as the double covers can also be deduced in our constructions. However, whether the types of the singular fibers are split/semi-split/non-split needs to be determined to deduce the precise non-Abelian gauge group [@BIKMSV]. As pointed out in [@Kimura1910], even when the equations of the three quadrics yielding a 1/2 Calabi–Yau 3-fold are determined, deducing the Weierstrass equation of the Calabi–Yau 3-fold as the double cover from the equations of the three quadrics is technically difficult. Owing to this situation, determining whether a singular fiber is split/non-split/semi-split is generally a hard problem for Calabi–Yau 3-folds constructed as double covers of 1/2 Calabi–Yau 3-folds. The discriminants of the 1/2 Calabi–Yau 3-folds are also deduced in sections \[sec2.2\], \[sec2.3\], \[sec2.4\], \[sec2.5\], \[sec2.7\], \[sec2.8\], \[sec2.9\], \[sec2.10\].
The discriminants of the Calabi–Yau 3-folds constructed as double covers of 1/2 Calabi–Yau 3-folds can be determined from those of the 1/2 Calabi–Yau 3-folds [@Kimura1910]. Thus, the locations of the matter fields in 6D F-theory, localized at the intersections of the 7-branes, can be obtained from the discriminants of the Calabi–Yau 3-folds in our constructions. Determining the matter spectra by studying the collisions of singular fibers at the intersections of the 7-branes is a likely target for future study. Concluding remarks {#sec4} ================== In this note, we built various elliptic Calabi–Yau 3-folds of positive Mordell–Weil rank as double covers of 1/2 Calabi–Yau 3-folds of various singularity types by applying a general method discussed in [@Kimura2003]. F-theory compactifications on the Calabi–Yau 3-folds yielded 6D $N=1$ models wherein one to three U(1) gauge group factors are formed [^6]. 6D F-theory compactifications on the elliptic Calabi–Yau 3-folds with the singularity types $D_4A_1^2$, $D_5A_1$, $A_3A_2A_1$, $A_3A_1^3$, $A_5A_1$, as double covers of 1/2 Calabi–Yau 3-folds constructed in sections \[sec2.2\] - \[sec2.6\], have at least one U(1) factor; 6D F-theory on Calabi–Yau 3-folds with the singularity types $A_1^5$, $A_2A_1^3$, $A_3A_1^2$, as double covers of 1/2 Calabi–Yau 3-folds that we constructed in sections \[sec2.7\] - \[sec2.9\], has at least two U(1) factors; and at least three U(1) factors form in 6D F-theory on the elliptic Calabi–Yau 3-folds with the singularity type $A_1^4$ as double covers of the 1/2 Calabi–Yau 3-fold constructed in section \[sec2.10\]. Our studies here applied the method in [@Kimura2003] to investigate the gauge groups formed in 6D F-theory on the Calabi–Yau 3-folds built as the double covers of 1/2 Calabi–Yau 3-folds. A similar analysis can be applied to 1/2 Calabi–Yau 3-folds possessing other singularity types and to Calabi–Yau 3-folds as double covers of such 1/2 Calabi–Yau 3-folds.
This applies in particular to constructing 6D $N=1$ F-theory models with four or more U(1) factors. To analyze such situations, one can construct 1/2 Calabi–Yau 3-folds with singularity types of rank three or less, and consider 6D F-theory on the elliptic Calabi–Yau 3-folds constructed as their double covers. The structures of the singular fibers are analyzed through the blow-ups as discussed in section \[sec3\]. The matter spectra could be deduced if the structures of the singular fibers at the intersections of the 7-branes can be analyzed, as mentioned in [@Kimura1910]; achieving this and determining the hypermultiplets at the intersections of the 7-branes would be an interesting result. This is a likely target for future studies. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Shigeru Mukai for discussions. [99]{} C. Vafa, “Evidence for F-theory”, [*Nucl. Phys.*]{} [**B 469**]{} (1996) 403 \[arXiv:hep-th/9602022\]. D. R. Morrison and C. Vafa, “Compactifications of F-theory on Calabi-Yau threefolds. 1”, [*Nucl. Phys.*]{} [**B 473**]{} (1996) 74 \[arXiv:hep-th/9602114\]. D. R. Morrison and C. Vafa, “Compactifications of F-theory on Calabi-Yau threefolds. 2”, [*Nucl. Phys.*]{} [**B 476**]{} (1996) 437 \[arXiv:hep-th/9603161\]. D. R. Morrison and D. S. Park, “F-Theory and the Mordell-Weil Group of Elliptically-Fibered Calabi-Yau Threefolds”, [*JHEP*]{} [**10**]{} (2012) 128 \[arXiv:1208.2695 \[hep-th\]\]. C. Mayrhofer, E. Palti and T. Weigand, “U(1) symmetries in F-theory GUTs with multiple sections”, [*JHEP*]{} [**03**]{} (2013) 098 \[arXiv:1211.6742 \[hep-th\]\]. V. Braun, T. W. Grimm and J. Keitel, “New Global F-theory GUTs with U(1) symmetries”, [*JHEP*]{} [**09**]{} (2013) 154 \[arXiv:1302.1854 \[hep-th\]\]. J. Borchmann, C. Mayrhofer, E. Palti and T. Weigand, “Elliptic fibrations for $SU(5)\times U(1)\times U(1)$ F-theory vacua”, [*Phys.
Rev.*]{} [**D88**]{} (2013) no.4 046005 \[arXiv:1303.5054 \[hep-th\]\]. M. Cvetič, D. Klevers and H. Piragua, “F-Theory Compactifications with Multiple U(1)-Factors: Constructing Elliptic Fibrations with Rational Sections”, [*JHEP*]{} [**06**]{} (2013) 067 \[arXiv:1303.6970 \[hep-th\]\]. V. Braun, T. W. Grimm and J. Keitel, “Geometric Engineering in Toric F-Theory and GUTs with U(1) Gauge Factors,” [*JHEP*]{} [**12**]{} (2013) 069 \[arXiv:1306.0577 \[hep-th\]\]. M. Cvetič, A. Grassi, D. Klevers and H. Piragua, “Chiral Four-Dimensional F-Theory Compactifications With SU(5) and Multiple U(1)-Factors”, [*JHEP*]{} [**04**]{} (2014) 010 \[arXiv:1306.3987 \[hep-th\]\]. M. Cvetič, D. Klevers and H. Piragua, “F-Theory Compactifications with Multiple U(1)-Factors: Addendum”, [*JHEP*]{} [**12**]{} (2013) 056 \[arXiv:1307.6425 \[hep-th\]\]. M. Cvetič, D. Klevers, H. Piragua and P. Song, “Elliptic fibrations with rank three Mordell-Weil group: F-theory with U(1) x U(1) x U(1) gauge symmetry,” [*JHEP*]{} [**1403**]{} (2014) 021 \[arXiv:1310.0463 \[hep-th\]\]. S. Mizoguchi, “F-theory Family Unification”, [*JHEP*]{} [**07**]{} (2014) 018 \[arXiv:1403.7066 \[hep-th\]\]. I. Antoniadis and G. K. Leontaris, “F-GUTs with Mordell-Weil U(1)’s,” [*Phys. Lett.*]{} [**B735**]{} (2014) 226–230 \[arXiv:1404.6720 \[hep-th\]\]. M. Esole, M. J. Kang and S.-T. Yau, “A New Model for Elliptic Fibrations with a Rank One Mordell-Weil Group: I. Singular Fibers and Semi-Stable Degenerations”, \[arXiv:1410.0003 \[hep-th\]\]. C. Lawrie, S. Schäfer-Nameki and J.-M. Wong, “F-theory and All Things Rational: Surveying U(1) Symmetries with Rational Sections”, [*JHEP*]{} [**09**]{} (2015) 144 \[arXiv:1504.05593 \[hep-th\]\]. M. Cvetič, D. Klevers, H. Piragua and W. Taylor, “General U(1)$\times$U(1) F-theory compactifications and beyond: geometry of unHiggsings and novel matter structure,” [*JHEP*]{} [**1511**]{} (2015) 204 \[arXiv:1507.05954 \[hep-th\]\]. M. Cvetič, A. Grassi, D. Klevers, M. 
Poretschkin and P. Song, “Origin of Abelian Gauge Symmetries in Heterotic/F-theory Duality,” [*JHEP*]{} [**1604**]{} (2016) 041 \[arXiv:1511.08208 \[hep-th\]\]. D. R. Morrison and D. S. Park, “Tall sections from non-minimal transformations”, [*JHEP*]{} [**10**]{} (2016) 033 \[arXiv:1606.07444 \[hep-th\]\]. D. R. Morrison, D. S. Park and W. Taylor, “Non-Higgsable abelian gauge symmetry and $\mathrm{F}$-theory on fiber products of rational elliptic surfaces”, [*Adv. Theor. Math. Phys.*]{} [**22**]{} (2018) 177–245 \[arXiv:1610.06929 \[hep-th\]\]. M. Bies, C. Mayrhofer and T. Weigand, “Gauge Backgrounds and Zero-Mode Counting in F-Theory”, [*JHEP*]{} [**11**]{} (2017) 081 \[arXiv:1706.04616 \[hep-th\]\]. M. Cvetič and L. Lin, “The Global Gauge Group Structure of F-theory Compactification with U(1)s”, [*JHEP*]{} [**01**]{} (2018) 157 \[arXiv:1706.08521 \[hep-th\]\]. Y. Kimura and S. Mizoguchi, “Enhancements in F-theory models on moduli spaces of K3 surfaces with $ADE$ rank 17”, [*PTEP*]{} [**2018**]{} no. 4 (2018) 043B05 \[arXiv:1712.08539 \[hep-th\]\]. Y. Kimura, “F-theory models on K3 surfaces with various Mordell-Weil ranks -constructions that use quadratic base change of rational elliptic surfaces”, [*JHEP*]{} [**05**]{} (2018) 048 \[arXiv:1802.05195 \[hep-th\]\]. S.-J. Lee, D. Regalado and T. Weigand, “6d SCFTs and U(1) Flavour Symmetries”, [*JHEP*]{} [**11**]{} (2018) 147 \[arXiv:1803.07998 \[hep-th\]\]. T. Weigand, “F-theory”, [*PoS*]{} [**TASI2017**]{} (2018) 016 \[arXiv:1806.01854 \[hep-th\]\]. S. Mizoguchi and T. Tani, “Non-Cartan Mordell-Weil lattices of rational elliptic surfaces and heterotic/F-theory compactifications”, [*JHEP*]{} [**03**]{} (2019) 121 \[arXiv:1808.08001 \[hep-th\]\]. M. Cvetič and L. Lin, “TASI Lectures on Abelian and Discrete Symmetries in F-theory”, [*PoS*]{} [**TASI2017**]{} (2018) 020 \[arXiv:1809.00012 \[hep-th\]\]. Y. 
Kimura, “Nongeometric heterotic strings and dual F-theory with enhanced gauge groups”, [*JHEP*]{} [**02**]{} (2019) 036 \[arXiv:1810.07657 \[hep-th\]\]. F. M. Cianci, D. K. Mayorga Pena and R. Valandro, “High U(1) charges in type IIB models and their F-theory lift”, [*JHEP*]{} [**04**]{} (2019) 012 \[arXiv:1811.11777 \[hep-th\]\]. W. Taylor and A. P. Turner, “Generic matter representations in 6D supergravity theories”, [*JHEP*]{} [**05**]{} (2019) 081 \[arXiv:1901.02012 \[hep-th\]\]. Y. Kimura, “Unbroken $E_7\times E_7$ nongeometric heterotic strings, stable degenerations and enhanced gauge groups in F-theory duals” \[arXiv:1902.00944 \[hep-th\]\]. Y. Kimura, “F-theory models with 3 to 8 U(1) factors on K3 surfaces” \[arXiv:1903.03608 \[hep-th\]\]. M. Esole and P. Jefferson, “The Geometry of SO(3), SO(5), and SO(6) models” \[arXiv:1905.12620 \[hep-th\]\]. S.-J. Lee and T. Weigand, “Swampland Bounds on the Abelian Gauge Sector”, [*Phys. Rev.*]{} [**D100**]{} (2019) no.2 026015 \[arXiv:1905.13213 \[hep-th\]\]. Y. Kimura, “$\frac{1}{2}$ Calabi-Yau 3-folds, Calabi-Yau 3-folds as double covers, and F-theory with U(1)s”, [*JHEP*]{} [**02**]{} (2020) 076 \[arXiv:1910.00008 \[hep-th\]\]. C. F. Cota, A. Klemm, and T. Schimannek, “Topological strings on genus one fibered Calabi-Yau 3-folds and string dualities”, [*JHEP*]{} [**11**]{} (2019) 170 \[arXiv:1910.01988 \[hep-th\]\]. Y. Kimura, “$\frac{1}{2}$Calabi-Yau 4-folds and four-dimensional F-theory on Calabi-Yau 4-folds with U(1) factors” \[arXiv:1911.03960 \[hep-th\]\]. S. Fukuchi, N. Kan, R. Kuramochi, S. Mizoguchi and H. Tashiro, “More on a dessin on the base: Kodaira exceptional fibers and mutually (non-)local branes”, [*Phys.Lett.*]{} [**B803**]{} (2020) 135333 \[arXiv:1912.02974 \[hep-th\]\]. F. Apruzzi, M. Fazzi, J. J. Heckman, T. Rudelius, and H. Y. Zhang, “General Prescription for Global $U$(1)’s in 6D SCFTs” \[arXiv:2001.10549 \[hep-th\]\]. Y. 
Kimura, “Extremal 1/2 Calabi–Yau 3-folds and six-dimensional F-theory applications” \[arXiv:2003.02209 \[hep-th\]\]. N. Kan, S. Mizoguchi and T. Tani, “Half-hypermultiplets and incomplete/complete resolutions in F theory” \[arXiv:2003.05563 \[hep-th\]\]. J. Borchmann, C. Mayrhofer, E. Palti and T. Weigand, “SU(5) Tops with Multiple U(1)s in F-theory”, [*Nucl. Phys.*]{} [**B882**]{} (2014) 1–69 \[arXiv:1307.2902 \[hep-th\]\]. D. R. Morrison and W. Taylor, “Sections, multisections, and $U(1)$ fields in F-theory”, [*J. Singularities*]{} [**15**]{} (2016) 126–149 \[arXiv:1404.1527 \[hep-th\]\]. G. Martini and W. Taylor, “6D F-theory models and elliptically fibered Calabi-Yau threefolds over semi-toric base surfaces”, [*JHEP*]{} [**06**]{} (2015) 061 \[arXiv:1404.6300 \[hep-th\]\]. D. Klevers, D. K. Mayorga Pena, P. K. Oehlmann, H. Piragua and J. Reuter, “F-Theory on all Toric Hypersurface Fibrations and its Higgs Branches”, [*JHEP*]{} [**01**]{} (2015) 142 \[arXiv:1408.4808 \[hep-th\]\]. V. Braun, T. W. Grimm and J. Keitel, “Complete Intersection Fibers in F-Theory”, [*JHEP*]{} [**03**]{} (2015) 125 \[arXiv:1411.2615 \[hep-th\]\]. T. W. Grimm, A. Kapfer and D. Klevers, “The Arithmetic of Elliptic Fibrations in Gauge Theories on a Circle”, [*JHEP*]{} [**06**]{} (2016) 112 \[arXiv:1510.04281 \[hep-th\]\]. G. K. Leontaris and Q. Shafi, “Phenomenology with F-theory SU(5)”, [*Phys. Rev.*]{} [**D96**]{} (2017) no.6 066023 \[arXiv:1706.08372 \[hep-ph\]\]. W. Taylor and A. P. Turner, “An infinite swampland of U(1) charge spectra in 6D supergravity theories”, [*JHEP*]{} [**06**]{} (2018) 010 \[arXiv:1803.04447 \[hep-th\]\]. M. Cvetič, L. Lin, M. Liu and P.-K. Oehlmann, “An F-theory Realization of the Chiral MSSM with $\mathbb{Z}_2$-Parity”, [*JHEP*]{} [**09**]{} (2018) 089 \[arXiv:1807.01320 \[hep-th\]\]. Y. 
Kimura, “F-theory models with $U(1)\times \mathbb{Z}_2,\, \mathbb{Z}_4$ and transitions in discrete gauge groups”, [*JHEP*]{} [**03**]{} (2020) 153 \[arXiv:1908.06621 \[hep-th\]\]. P.-K. Oehlmann and T. Schimannek, “GV-Spectroscopy for F-theory on genus-one fibrations” \[arXiv:1912.09493 \[hep-th\]\]. M. Bershadsky, K. A. Intriligator, S. Kachru, D. R. Morrison, V. Sadov and C. Vafa, “Geometric singularities and enhanced gauge symmetries”, [*Nucl. Phys.*]{} [**B 481**]{} (1996) 215 \[arXiv:hep-th/9605200\]. K. Kodaira, “On compact analytic surfaces II”, [*Ann. of Math.*]{} [**77**]{} (1963), 563–626. K. Kodaira, “On compact analytic surfaces III”, [*Ann. of Math.*]{} [**78**]{} (1963), 1–40. A. Néron, “Modèles minimaux des variétés abéliennes sur les corps locaux et globaux”, [*Publications mathématiques de l’IHÉS*]{} [**21**]{} (1964), 5–125. J. Tate, “Algorithm for determining the type of a singular fiber in an elliptic pencil”, in Modular Functions of One Variable IV, Springer, Berlin (1975), 33–52. C. Vafa, “The String landscape and the swampland”, \[arXiv:hep-th/0509212\]. N. Arkani-Hamed, L. Motl, A. Nicolis and C. Vafa, “The String landscape, black holes and gravity as the weakest force”, [*JHEP*]{} [**06**]{} (2007) 060 \[arXiv:hep-th/0601001\]. H. Ooguri and C. Vafa, “On the Geometry of the String Landscape and the Swampland”, [*Nucl. Phys.*]{} [**B766**]{} (2007) 21–33 \[arXiv:hep-th/0605264\]. T. D. Brennan, F. Carta and C. Vafa, “The String Landscape, the Swampland, and the Missing Corner”, [*PoS*]{} [**TASI 2017**]{} (2017) 015 \[arXiv:1711.00864 \[hep-th\]\]. E. Palti, “The Swampland: Introduction and Review”, [*Fortsch. Phys.*]{} [**67**]{} (2019) no.6 1900037 \[arXiv:1903.06239 \[hep-th\]\]. V. Kumar and W. Taylor, “A Bound on 6D N=1 supergravities”, [*JHEP*]{} [**12**]{} (2009) 050 \[arXiv:0910.1586 \[hep-th\]\]. V. Kumar, D. R. Morrison and W.
Taylor, “Global aspects of the space of 6D N = 1 supergravities”, [*JHEP*]{} [**11**]{} (2010) 118 \[arXiv:1008.1062 \[hep-th\]\]. D. S. Park and W. Taylor, “Constraints on 6D Supergravity Theories with Abelian Gauge Symmetry”, [*JHEP*]{} [**01**]{} (2012) 141 \[arXiv:1110.5916 \[hep-th\]\]. W. Taylor, “TASI Lectures on Supergravity and String Vacua in Various Dimensions” \[arXiv:1104.2051 \[hep-th\]\]. N. Nakayama, “On Weierstrass Models”, [*Algebraic Geometry and Commutative Algebra in Honor of Masayoshi Nagata*]{}, (1988), 405–431. I. Dolgachev and M. Gross, “Elliptic Three-folds I: Ogg-Shafarevich Theory”, [*Journal of Algebraic Geometry*]{} [**3**]{}, (1994), 39–80. M. Gross, “Elliptic Three-folds II: Multiple Fibres”, [*Trans. Amer. Math. Soc.*]{} [**349**]{}, (1997), 3409–3468. R. Donagi and M. Wijnholt, “Model Building with F-Theory”, [*Adv. Theor. Math. Phys.*]{} [**15**]{} (2011) no.5, 1237–1317 \[arXiv:0802.2969 \[hep-th\]\]. C. Beasley, J. J. Heckman and C. Vafa, “GUTs and Exceptional Branes in F-theory -I”, [*JHEP*]{} [**01**]{} (2009) 058 \[arXiv:0802.3391 \[hep-th\]\]. C. Beasley, J. J. Heckman and C. Vafa, “GUTs and Exceptional Branes in F-theory - II: Experimental Predictions”, [*JHEP*]{} [**01**]{} (2009) 059 \[arXiv:0806.0102 \[hep-th\]\]. R. Donagi and M. Wijnholt, “Breaking GUT Groups in F-Theory”, [*Adv. Theor. Math. Phys.*]{} [**15**]{} (2011) 1523–1603 \[arXiv:0808.2223 \[hep-th\]\]. S. Mukai, [*An introduction to invariants and moduli*]{}, Cambridge University Press (2003). S. Mukai, “Geometric realization of root systems and the Jacobians of del Pezzo surfaces”, in Complex geometry in Osaka : in honour of Professor Akira Fujiki on the occasion of his 60th birthday, [*Osaka Math. Publ.*]{}, Osaka University, 2008. S. Mukai, “Algebraic varieties governing root systems, and the Jacobians of del Pezzo surfaces”, Proceedings of algebraic geometry symposium, held in Waseda University, November 2019. I. V. 
Dolgachev, [*Classical Algebraic Geometry. A modern view.*]{}, Cambridge University Press, Cambridge (2012). J. Piontkowski, “Linear symmetric determinantal hypersurfaces”, [*Michigan Math. J.*]{} [**54**]{} (2006). [^1]: The types of the singular fibers of the elliptic surfaces were classified in [@Kod1; @Kod2], and methods to determine the types of the singular fibers were discussed in [@Ner; @Tate]. [^2]: Seven is the highest rank of all the singularity types of the 1/2 Calabi–Yau 3-folds [@Kimura1910]. [^3]: [@Piontkowski] discussed a determinantal representation of the cuspidal cubic. [^4]: The double cover must be ramified over a quartic polynomial in the variables of the three quadrics, $Q_1, Q_2, Q_3$, to satisfy the Calabi–Yau condition. Details can be found in [@Kimura1910]. [^5]: We adopt the convention of calling the surface obtained by blowing up $\P^2$ at seven points in general position a del Pezzo surface of degree two. [^6]: Examples of 6D F-theory models on Calabi–Yau 3-folds as double covers of 1/2 Calabi–Yau 3-folds with four to six U(1) factors are constructed in [@Kimura1910]. Seven U(1) factors form in 6D F-theory on Calabi–Yau 3-folds constructed as double covers of 1/2 Calabi–Yau 3-folds, when the three quadrics are generically chosen [@Kimura1910].
--- abstract: 'This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012–2014. Privacy seems to be the Achilles’ heel of today’s web. Web services make continuous efforts to obtain as much information as they can about the things we search, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, which are based on sessions, client storage, client cache, fingerprinting, or yet other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they employ a particularly rich variety of creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is being used and its possible implications for the users. For example, we describe recent cases of price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft. For each of the tracking methods, we present possible defenses. Some of them are specific to a particular tracking approach, while others are more universal (they block more than one threat) and are discussed separately. Apart from describing the methods and tools used to keep personal data from being tracked, we also present several tools that were used for research purposes – their main goal is to discover how and by which entity the users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way.
Finally, we present the currently proposed future approaches to track the user and show that they can potentially pose significant threats to the users’ privacy.' author: - 'Tomasz Bujlow,  Valentín Carela-Español, Josep Solé-Pareta, and Pere Barlet-Ros[^1][^2]' bibliography: - 'bibliography.bib' title: 'Web Tracking: Mechanisms, Implications, and Defenses' --- [Bujlow: Web Tracking: Mechanisms, Implications, and Defenses]{} web tracking, tracking mechanisms, tracking implications, defenses against tracking, user identification, tracking discovery, future of tracking. Acknowledgment {#acknowledgment .unnumbered} ============== This work was funded by Spanish Ministry of Economy and Competitiveness under contract EUIN2013-51199 (Arquitectura con Conocimiento del Entorno de la Futura Internet project), Spanish Ministry of Science and Innovation under contract TEC2011-27474 (NOMADS project), and by AGAUR (ref. 2014-SGR-1427). [Tomasz Bujlow]{} received the M.Sc. and Ph.D. degrees in Computer Science from the Silesian University of Technology in 2008 and Aalborg University in 2014, respectively. He is currently a Postdoctoral Researcher at the Broadband Communications Research Group (CBA) that belongs to the Computer Architecture Department (DAC) at the UPC BarcelonaTech. His research interests are in the field of traffic analysis and network measurements, focusing on network traffic classification. He has participated in numerous scientific projects related to traffic classification and has held the Cisco Certified Network Professional (CCNP) certification since 2010. See more: <http://tomasz.bujlow.com>. [Valentín Carela-Español]{} received a B.Sc. degree in Computer Science from the Universitat Politècnica de Catalunya (UPC) in 2007, an M.Sc. degree in Computer Architecture, Networks, and Systems from UPC in 2009, and a Ph.D. degree from UPC in 2014. 
He is currently a Postdoctoral Researcher at the Broadband Communications Research Group (CBA) that belongs to the Computer Architecture Department (DAC) at the UPC BarcelonaTech. His research interests are in the field of traffic analysis and network measurements, focusing on network traffic classification. His key research area is the study of the identification of applications in network traffic based on Machine Learning and Deep Packet Inspection techniques and the aspects related to the application of those techniques in backbone networks. See more: <http://people.ac.upc.es/vcarela>. [Josep Solé-Pareta]{} obtained his M.Sc. degree in Telecom Engineering in 1984, and his Ph.D. in Computer Science in 1991, both from the Universitat Politècnica de Catalunya (UPC). In 1984 he joined the Computer Architecture Department of the UPC. He is currently a Full Professor in this department. He did postdoctoral stays (summers of 1993 and 1994) at the Georgia Institute of Technology. He is co-founder of the UPC-CCABA (<http://www.ccaba.upc.edu>). His publications include several book chapters and more than 100 papers in relevant research journals (>25) and refereed international conferences. His current research interests are in Nanonetworking Communications, Traffic Monitoring and Analysis, and High Speed and Optical Networking, with emphasis on traffic engineering, traffic characterization, MAC protocols, and QoS provisioning. He has participated in many European projects dealing with Computer Networking topics. See more: <http://people.ac.upc.es/pareta>. [Pere Barlet-Ros]{} received the M.Sc. and Ph.D. degrees in Computer Science from the Universitat Politècnica de Catalunya (UPC) in 2003 and 2008, respectively. He is currently an Associate Professor with the Computer Architecture Department of UPC and co-founder of Talaia Networks, a University spin-off that develops innovative network monitoring products. 
His research interests are in the fields of network monitoring, traffic classification, and anomaly detection. See more: <http://people.ac.upc.es/pbarlet>. [^1]: The authors are with the Broadband Communications Research Group, Department of Computer Architecture, Universitat Politècnica de Catalunya, Barcelona, 08034, Spain. [^2]: E-mails: tomasz@bujlow.com (T. Bujlow), vcarela@ac.upc.edu (V. Carela-Español), pareta@ac.upc.edu (J. Solé-Pareta), pbarlet@ac.upc.edu (P. Barlet-Ros).
[DFPD/93/TH/72]{} [hep-th/9402081]{} [**NONPERTURBATIVE MODEL OF LIOUVILLE GRAVITY**]{} [^1] *Department of Physics “G. Galilei” - Istituto Nazionale di Fisica Nucleare* *University of Padova* *Via Marzolo, 8 - 35131 Padova, Italy* ABSTRACT We obtain nonperturbative results in the framework of continuous Liouville theory. In particular, we express the specific heat ${\cal Z}$ of pure gravity in terms of an expansion of integrals on moduli spaces of punctured Riemann spheres. The integrands are written in terms of the Liouville action. We show that ${\cal Z}$ satisfies the Painlevé I equation. [**1.**]{} In this paper we introduce models of Liouville theory in the continuum which are based on the Riemann sphere with punctures. The models include pure gravity. In particular we will show that $${\cal Z}(t)=t^{-12}\sum_{k=4}^\infty t^{5k}\int_{\overline{\cal M}_{0,k}} \left(i\overline \partial \partial S_{cl}^{(k)}\right)^{k-4}\wedge \omega^{F_0}-{t^3\over 2} \label{main1}$$ is the specific heat of pure gravity, namely ${\cal Z}$ satisfies the Painlevé I equation $${\cal Z}^2(t)-{1\over 3}{\cal Z}''(t)=t. \label{PI}$$ $S_{cl}^{(k)}$ in (\[main1\]) denotes the [*classical*]{} Liouville action on the $k$-punctured Riemann sphere. The class $[\omega^{F_0}]$ is the Poincaré dual of a divisor on the compactified moduli space $\overline{\cal M}_{0,k}$ which is given in terms of the $(2k-8)$-cycles defining the Deligne-Knudsen-Mumford boundary of $\overline{\cal M}_{0,k}$. The basic tools used to obtain (\[main1\]) are classical Liouville theory and intersection theory. This result reproduces in the continuum the well-known result obtained in the matrix model approach to pure gravity [@mm]. For reviews on matrix models and 2D gravity see [@mm2]. 
[**2.**]{} The problems arising in the continuum formulation of Liouville gravity [@aa; @a; @b] are essentially: - [**a**]{} [To evaluate Liouville correlators on Riemann surfaces of genus $h\ge 2$;]{} - [**b**]{} [To perform the integration on moduli spaces;]{} - [**c**]{} [To recover nonperturbative results from the topological expansion.]{} Results from matrix models and topological gravity show that these aspects are closely related to the structure of ${\cal M}_h\equiv {\cal M}_{h,0}$ (we denote by ${\cal M}_{h,n}$ the moduli spaces of Riemann surfaces of genus $h$ and $n$ punctures). In particular, it turns out that the Liouville action is the Kähler potential for the natural (Weil-Petersson) metric on the moduli space. CFT is also closely related to the geometry of moduli space. For example, the Mumford isomorphism $$\lambda_n\cong \lambda_1^{c_n},\qquad c_n=6n^2-6n+1,$$ where $\lambda_n=\det \,{\rm ind}\,\overline \partial_n$ are the determinant line bundles, connects geometrical properties of ${\cal M}_h$ with the central charge $d=-2c_n$ of a weight $n$, $b$-$c$ system (notice that $d\le 1$). Actually, the bosonization of $b$-$c$ systems can be used to reproduce the Coulomb gas formulation of $d\le 1$ conformal matter. For $d>1$ it is not possible to represent conformal matter by a $b$-$c$ system. In this case one can consider the $\beta$-$\gamma$ system of weight $n$ whose central charge is $2c_n$. However, the representation of the $\beta$-$\gamma$ system in terms of free fields is a long-standing problem which seems related to the $d=1$ barrier. These aspects indicate that there is a connection between the $d=1$ barrier and the Mumford isomorphism. This is related to a similar structure considered in [@mmb] in the framework of the geometrical formulation of 2D gravity [@LAT; @mmb], where representing elliptic and parabolic Liouville operators by means of a scalar field constrains the conformal matter to be in the sector $d\le 1$. 
The natural framework in which to investigate the aspects considered above is the theory of uniformization of Riemann surfaces, where Liouville theory plays a crucial role. Actually, in [@grava] it has been shown that the Liouville action appears in the correlators (intersection numbers) of topological gravity [@1]. The relationships between Liouville theory, matrix models and topological gravity suggest that it is possible to extend the above Liouville-topological gravity relationship by recovering the nonperturbative results of matrix models from continuum Liouville theory. In our model we will reduce all aspects concerning higher genus contributions to punctured spheres. The reduction to the punctured sphere has also been considered by V.G. Knizhnik, who expressed the sum of the genus expansion as a CFT on an arbitrary $N$-sheet covering of the Riemann sphere with branch points. To each branch point he associated a vertex operator, and he proposed to express the infinite sum over all genera ($h\ge 2$) as the limit for $N\to\infty$ of a ‘nonperturbative’ partition function [@Knizhniknonperturbative]. A natural way to get punctured spheres is by pinching all handles of a compact Riemann surface. Degenerate (singular) surfaces belong to the boundary of moduli spaces. These singularities play a fundamental role in the evaluation of relevant integrals (intersection theory). The fact that the [*classical*]{} Liouville action is the Kähler potential for the Weil-Petersson metric, together with the structure of the boundary of moduli space, suggests considering integrals on $\overline{\cal M}_h$ in the framework of the Duistermaat-Heckman integration formula [@DuisterHeckm]. The final result should be a sum of integrals $Z_n^{F}$ on the moduli space of punctured Riemann spheres $\overline{\cal M}_{0,n}= \left(\widehat{\bf C}\backslash\Delta_n\right)/Symm(n)\times PSL(2,{\bf C})$ with the integrands involving the Liouville action. 
These remarks indicate that a theory à la Friedan-Shenker [@FriedanShenker] can be concretely formulated to recover nonperturbative results in the continuum formulation. In this paper, we do not consider points [**a**]{}-[**c**]{} separately; rather, we state the final solution by finding the explicit form of the integrals $Z_n^{F}$ on $\overline{\cal M}_{0,n}$. The reduction to the punctured sphere is particularly evident in topological field theory coupled to 2D gravity, where higher genus contributions to the free energy $\langle 1\rangle_h$ can be written in terms of the sphere amplitudes of the puncture operator $P$ [@1; @hsl]. The physical observables of the theory are the primary fields ${\cal O}_\alpha$ ($\alpha=0,1,\ldots,N-1$, ${\cal O}_0$ is the identity operator) and their gravitational descendents $\sigma_n\left({\cal O}_\alpha\right)$, $n=1,2,\ldots$. In the coupled system ${\cal O}_0$ becomes non-trivial and is identified with $P$. Denoting by ${\cal L}_0$ the minimal Lagrangian, the most general one is ${\cal L}={\cal L}_0+\sum_{n,\alpha}t_{n,\alpha}\sigma_n\left({\cal O}_\alpha \right),\sigma_0\left({\cal O}_\alpha\right)\equiv {\cal O}_\alpha,$ where $t_{n,\alpha}$ are coupling constants. With this definition one can compute correlation functions with an insertion of $\sigma_k$ just by differentiating $\langle 1\rangle_h$ with respect to $t_k$. Thus in general $$\big <\sigma_{d_1}\left({\cal O}_{\alpha_1}\right)\cdot\cdot\cdot\sigma_{d_n} \left({\cal O}_{\alpha_n}\right)\big >_h={\partial\over \partial t_{d_1,\alpha_1}}\cdot\cdot\cdot{\partial\over \partial t_{d_n,\alpha_n}}\big <1\big >_h.$$ Therefore $\big <1\big >_h$ is the crucial quantity to compute. 
By means of the KdV recursion relations $$\big <\sigma_1(P)P\big >_h=2\big <P^4\big >_{h-1}+ {1\over 2}\sum_{h'=0}^h\big <P^2\big >_{h'}\big <P^2\big >_{h-h'},$$ it is possible [@hsl] to express $\big <1\big >_h$ as a sum of terms of the form $\displaystyle \big <P^{n_1}\big >_0\cdot\cdot\cdot \big <P^{n_j}\big >_0/\big <P^3\big >_0^{h+j-1}$ for $1\le j \le 3h-3$ with the constraint $\sum_{k=1}^j n_k=3(j+h-1)$. The reduction to the punctured sphere arises also in the evaluation of ${\rm Vol}_{WP}({\cal M}_{h,n})$. Indeed, at least in some cases, there is a relationship between $\overline{\cal M}_{h,n}$, $\overline{\cal M}_{0,n+3h}$ and their volumes[^2]. The first example is the geometric isomorphism [@wolpertis] $\overline{\cal M}_{1,1}\cong \overline{\cal M}_{0,4}$, and $${\rm Vol}_{WP}\left({\cal M}_{1,1}\right) =2{\rm Vol}_{WP}\left({\cal M}_{0,4} \right). \label{wolpertvolume}$$ To understand this result it is sufficient to recall that the $\wp$-function enters the expression of the uniformizing connection of the once punctured torus $\Sigma_{1,1}$ (note that $\wp$ is a solution of the KdV equation) $$T_{\Sigma_{1,1}}={1\over 2}\left(\wp(\tau,z)+c(\tau)\right),$$ where $c(\tau)$ is the accessory parameter for $\Sigma_{1,1}$. Eq.(\[wolpertvolume\]) follows from the fact that $T_{\Sigma_{1,1}}$ is closely related to the uniformizing connection $T_{\Sigma_{0,4}}$ of the Riemann sphere with four punctures, since $\wp$ maps $\Sigma_{1,1}$ two-to-one onto the four punctured Riemann sphere. Let us notice that another such isomorphism is [@igusa] $\overline{\cal M}_{2,0}\cong \overline{\cal M}_{0,6}$. There is another way to understand why punctured spheres play a crucial role in 2D gravity. The point is to notice that the equal-size triangulated Riemann surfaces considered in matrix models can be realized in terms of thrice punctured spheres [@LevinMorozov]. This aspect is related to the theory of arithmetic surfaces [@LevinMorozov; @SMIT]. 
In this context one should investigate whether these surfaces have a suitable symmetry with which to define an antiholomorphic involution. This question is important in order to investigate Osterwalder-Schrader positivity. This is connected with the problem of defining the adjoint in higher genus. On the sphere it can be done thanks to the natural anti-involution $z\to \bar z^{-1}$. In higher genus this problem has been solved only on a Schottky double, where there is a natural anti-involution [@jkl]. Recently Harvey and González Diéz [@GabinoHarvey] have considered loci of curves which are prime Galois coverings of the sphere. In particular they considered the important case of Riemann surfaces admitting non-trivial automorphisms and showed that there is a birational isomorphism between a subset of the moduli space ${\cal M}_h$ and $V^{(n)}$ (defined in (\[star\])). [**3.**]{} The relation between Liouville theory and the uniformization theory of Riemann surfaces arises in considering the Liouville equation $$\partial_{\bar z}\partial_z\varphi_{cl}={e^{\varphi_{cl}}\over 2}, \label{le}$$ which is uniquely satisfied by the Poincaré metric (i.e. the metric with Gaussian curvature $-1$). This metric can be written in terms of the inverse of the uniformizing map $J_H$, that is $e^{\varphi_{cl}}={|{J_H^{-1}}'|^2/ \left({\rm Im}\, J_H^{-1}\right)^2}$, $J_H:H\to \Sigma\cong H/\Gamma$ where $H$ is the upper half-plane and $\Gamma$ a Fuchsian group. Let us introduce the $n$-punctured sphere $\Sigma=\widehat {\bf C}\backslash\{z_1,\ldots,z_n\}, \widehat {\bf C}\equiv {\bf C}\cup\{\infty\}$. Its moduli space is the space of classes of isomorphic $\Sigma$’s, that is $${\cal M}_{0,n}= \{(z_1,\ldots,z_{n})\in \widehat{\bf C}^{n}|z_j\ne z_k\; {\rm for}\; j\ne k\}/Symm(n)\times PSL(2,{\bf C}), \label{modulisp}$$ where ${Symm}(n)$ acts by permuting $\{z_1,\ldots,z_n\}$ whereas $PSL(2,{\bf C})$ acts by linear fractional transformations. 
By $PSL(2,\bf C)$ we can recover the ‘standard normalization’: $z_{n-2}=0$, $z_{n-1}=1$ and $z_{n}=\infty $. Furthermore, without loss of generality, we assume that $w_{n-2}=0$, $w_{n-1}=1$ and $w_n=\infty$. For the classical Liouville tensor we have $$T^F(z) =\sum_{k=1}^{n-1}\left({1\over 2(z-z_k)^2}+ {c_k\over z-z_k}\right),\qquad \lim_{z\to \infty}T^F(z)={1\over 2z^2}+{c_n\over z^3}+ {\cal O}\left({1\over |z|^4}\right),$$ with the following constraints on the [*accessory parameters*]{} $$\sum_{k=1}^{n-1}c_k=0, \qquad \sum_{k=1}^{n-1}c_kz_k=1-n/2, \qquad \sum_{k=1}^{n-1}z_k(1+c_kz_k)=c_n.$$ The $c_k$’s are functions on $$V^{(n)}=\{(z_1,\ldots,z_{n-3})\in {\bf C}^{n-3}|z_j\ne 0,1; z_j\ne z_k,\; {\rm for}\; j\ne k\}. \label{star}$$ Note that $${\cal M}_{0,n}\cong V^{(n)}/{Symm}(n), \label{mdls}$$ where the action of $Symm(n)$ on $V^{(n)}$ is defined by comparing (\[modulisp\]) with (\[mdls\]). Let us now consider the compactification divisor (in the sense of Deligne-Knudsen-Mumford) $D=\overline V^{(n)}\backslash V^{(n)}$. This divisor decomposes into the sum of divisors $D_1$,…,$D_{[n/2]-1}$ which are subvarieties of real dimension $2n-8$. The locus $D_k$ consists of surfaces that split, on removal of the node, into two Riemann spheres with $k+2$ and $n-k$ punctures. In particular $D_k$ consists of $C(k)$ copies of the space $\overline V^{(k+2)}\times \overline V^{(n-k)}$, where $C(k)=\left(^{\;\; n}_{k+1}\right)$ for $k=1,\ldots,{(n-1)\over 2}-1$, with the exception that for $n$ even $C(n/2-1)={1\over 2}\left(^{\;\; n}_{n/2}\right)$. It turns out that the images of the $D_k$’s provide a basis of $H_{2n-8}(\overline{\cal M}_{0,n},{\bf R})$. 
In the case of the punctured Riemann sphere eq.(\[le\]) follows from the Liouville action $$S^{(n)}=\lim_{r\to 0}\left[\int_{\Sigma_r} \left(\partial_z\varphi\partial_{\bar z}{\varphi}+e^{\varphi}\right)+ 2\pi (n {\log} r+2(n-2){\log}|{\log}r|)\right],$$ where $\Sigma_r=\Sigma\backslash\left(\bigcup_{i=1}^{n-1} \{z||z-z_i|<r\}\cup\{z||z|>r^{-1}\}\right)$. This action, evaluated on the classical solution, is the Kähler potential for the Weil-Petersson two-form on $V^{(n)}$ [@0] $$\omega_{WP}^{(n)}= {i\over 2}{\overline\partial}{\partial}S^{(n)}_{cl}=-i\pi\sum_{j,k=1}^{n-3} {\partial c_k\over \partial {\bar z_j}}d\bar z_j\wedge d z_k. \label{36}$$ Let us consider the volume of moduli space of punctured Riemann spheres $${\rm Vol}_{WP}\left({\cal M}_{0,n}\right)={1\over (n-3)!} \int_{\overline{\cal M}_{0,n}}{\omega_{WP}^{(n)}}^{n-3}= {1\over (n-3)!} \left[\omega_{WP}^{(n)}\right]^{n-3}\cap \left[\overline{\cal M}_{0,n}\right].$$ Recently it has been shown that [@01] $${\rm Vol}_{WP}\left({\cal M}_{0,n}\right)={1\over n!} {\rm Vol}_{WP}\left(V^{(n)}\right)={\pi^{2(n-3)} V_n\over n!(n-3)!},\qquad n\ge 4,$$ where $V_n=\pi^{2(3-n)}\left[\omega_{WP}^{(n)}\right]^{n-3}\cap \left[\overline{V}^n\right]$ satisfies the recursion relations $$V_3=1,\qquad V_n={1\over 2}\sum_{k=1}^{n-3}{k(n-k-2) \over n-1}\left(^{\;\;n}_{k+1}\right)\left(^{n-4}_{k-1}\right) V_{k+2}V_{n-k},\qquad n \ge 4.\label{51}$$ Remarkably the basic tools in the computation of the volumes are classical Liouville theory and intersection theory. [**4.**]{} We now consider the differential equation associated with (\[51\]). First of all we define $$a_k= {V_k\over (k-1)((k-3)!)^2},\qquad k\ge 3, \label{rnm2}$$ so that (\[51\]) becomes $$a_3=1/2,\qquad a_n={1\over 2}{n(n-2)\over (n-1)(n-3)} \sum_{k=1}^{n-3}a_{k+2}a_{n-k},\qquad n\ge 4.\label{51al}$$ Eq.(\[51al\]) is equivalent to the differential equation $$g''={{g'}^2t-gg'+g't\over t(t-g)}, \label{51a}$$ where $g(t)=\sum_{k=3}^\infty a_k t^{k-1}$. 
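The recursion (\[51\]) can be iterated exactly with rational arithmetic. The following sketch is not part of the original paper; Python with a memoized recursion is used purely as an illustrative choice:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def V(n):
    # Recursion (51): V_3 = 1 and, for n >= 4,
    # V_n = (1/2) sum_{k=1}^{n-3} [k(n-k-2)/(n-1)] C(n,k+1) C(n-4,k-1) V_{k+2} V_{n-k}.
    if n == 3:
        return Fraction(1)
    s = sum(Fraction(k * (n - k - 2), n - 1)
            * comb(n, k + 1) * comb(n - 4, k - 1)
            * V(k + 2) * V(n - k)
            for k in range(1, n - 2))
    return s / 2

# First coefficients produced by the recursion:
assert V(3) == 1 and V(4) == 1 and V(5) == 5
# a_k = V_k / ((k-1)((k-3)!)^2), eq. (rnm2): a_3 = 1/2, the seed of (51al).
assert V(3) / 2 == Fraction(1, 2)
```

These values are presented only as the output of (\[51\]) as quoted above; comparison with tables in the literature depends on the chosen normalization of the Weil-Petersson form.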
Notice that by (\[36\]) $$g(t)=\sum_{k=3}^\infty {k(k-2)\over (k-3)!}t^{k-1} \int_{\overline{\cal M}_{0,k}} \left({i\overline\partial\partial S_{cl}^{(k)}\over 2\pi^2}\right)^{k-3}, \label{prtbtv}$$ where “${\int_{\overline{\cal M}_{0,3}}1}$”$\equiv {1\over 6}$. The function $g(t)$ resembles a topological expansion of string theory. Furthermore, the structure of eq.(\[51a\]) resembles the Painlevé I equation. These remarks indicate that it is possible to recover the specific heat of pure gravity in the continuum. Actually, we will recover the Painlevé I equation from classical Liouville theory. In particular we will get the recursion relations for the Painlevé I equation by performing a suitable modification of the Weil-Petersson volume form ${\omega_{WP}^{(n)}}^{n-3}$. Remarkably, as we will show, it is possible to perform the substitution $${\omega_{WP}^{(n)}}^{n-3}\longrightarrow {\omega_{WP}^{(n)}}^{n-4} \wedge \omega^F,$$ in (\[prtbtv\]) without changing the general structure of (\[51al\]); that is, we will get recursion relations of the following structure $$A_n=C(n)\sum_{k=1}^{n-3}A_{k+2}A_{n-k},\qquad n\ge 4. \label{main3}$$ The first problem is to find a suitable expansion for the Painlevé I field such that the structure of the associated recursion relation is the same as that of (\[main3\]). Remarkably this expansion exists, namely $$f(t)=t^{-12}\sum_{k=3}^\infty d_k t^{5k}.\label{painlfield}$$ It is interesting that in searching for the expansion reproducing the general structure of (\[51al\]), which is a result obtained from continuous Liouville theory, one obtains an expansion involving only [*positive*]{} powers of $t$. With this expansion the Painlevé I equation $$f^2(t)-{1\over 3}f''(t)=t,\label{painleve}$$ is equivalent to the recursion relations[^3] $$d_n={3\over (12-5n)(13-5n)}\sum_{k=1}^{n-3} d_{k+2}d_{n-k}, \qquad d_3=-1/2,\label{rr12}$$ which have the same structure as (\[51al\]). We now investigate the possible volume forms reproducing (\[rr12\]). 
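As a numerical consistency check that is not in the original paper, one can iterate (\[rr12\]) with exact rationals and verify order by order that $f(t)=t^{-12}\sum_{k\ge 3}d_k t^{5k}$ solves the Painlevé I equation (\[painleve\]); a minimal Python sketch:

```python
from fractions import Fraction

def painleve_coeffs(kmax):
    # Recursion (rr12): d_3 = -1/2 and
    # d_n = 3/((12-5n)(13-5n)) * sum_{k=1}^{n-3} d_{k+2} d_{n-k}.
    d = {3: Fraction(-1, 2)}
    for n in range(4, kmax + 1):
        s = sum(d[k + 2] * d[n - k] for k in range(1, n - 2))
        d[n] = Fraction(3, (12 - 5 * n) * (13 - 5 * n)) * s
    return d

def painleve_residual(d):
    # Coefficients of f^2 - (1/3) f'' - t, with f = sum_k d_k t^(5k-12),
    # kept only at orders fully determined by the available d_k.
    f = {5 * k - 12: c for k, c in d.items()}
    res = {}
    for p1, c1 in f.items():
        for p2, c2 in f.items():
            res[p1 + p2] = res.get(p1 + p2, Fraction(0)) + c1 * c2
    for p, c in f.items():
        res[p - 2] = res.get(p - 2, Fraction(0)) - Fraction(p * (p - 1), 3) * c
    res[1] = res.get(1, Fraction(0)) - 1
    cutoff = 5 * max(d) - 10  # higher orders are spoiled by truncation
    return {p: c for p, c in res.items() if p <= cutoff and c != 0}

d = painleve_coeffs(10)
assert d[4] == Fraction(3, 224)    # first coefficient past the seed d_3 = -1/2
assert painleve_residual(d) == {}  # Painlevé I holds at every reliable order
```

The coefficients alternate in sign, $(-1)^k d_k>0$, in agreement with footnote [^3].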
To understand which kind of modification to ${\omega_{WP}^{(n)}}^{n-3}$ can be performed without changing the basic structure of (\[51al\]) we recall basic steps in [@01] to obtain (\[51\]). Let $D_{WP}$ be the $(2n-8)$-cycle dual to the Weil-Petersson class $\left[\omega_{WP}^{(n)}\right]$. To compute the volumes it is useful to expand $D_{WP}$ in terms of the divisors $D_k$ in the boundary of moduli space. It turns out that [@01] $$D_{WP}={\pi^2\over n-1}\sum_{k=1}^{[n/2]-1}k(n-k-2)D_k. \label{wpdvsr}$$ Let us set $$\widetilde V_n=\pi^{2(n-3)}V_n= \left[\omega_{WP}^{(n)}\right]^{n-3}\cap\left[\overline V^{(n)}\right]= \left[\omega_{WP}^{(n)}\right]^{n-4}\cap \left(\left[\omega_{WP}^{(n)}\right]\cap\left[\overline V^{(n)}\right]\right).$$ On the other hand $\left[\omega_{WP}^{(n)}\right]\cap\left[\overline V^{(n)}\right]=D_{WP}\cdot \overline V^{(n)}=D_{WP}$, so that by (\[wpdvsr\]) $$\widetilde V_n=\left[\omega_{WP}^{(n)}\right]^{n-4}\cap\left[D_{WP}\right]= {\pi^2\over n-1}\sum_{k=1}^{[n/2]-1}k(n-k-2) \left[\omega_{WP}^{(n)}\right]^{n-4}\cap\left[D_k\right].$$ Since $D_k$ consists of $C(k)$ copies of the space $\overline V^{(k+2)}\times \overline V^{(n-k)}$, we have $$\widetilde V_n={\pi^2\over n-1}\sum_{k=1}^{[n/2]-1} k(n-k-2)C(k) \left[\omega_{WP}^{(n)}\right]^{n-4}\cap \left[\overline V^{(k+2)}\times \overline V^{(n-k)}\right].$$ Finally, since [@01] $$\left[\omega_{WP}^{(n)}\right]^{n-4}\cap \left[\overline V^{(k+2)}\times \overline V^{(n-k)}\right]= \left[\omega_{WP}^{(k+2)}+\omega_{WP}^{(n-k)}\right]^{n-4}\cap \left[\overline V^{(k+2)}\times \overline V^{(n-k)}\right], \label{ab2}$$ it follows that $$\widetilde V_{3}=1,\qquad \widetilde V_{n}={\pi^2\over n-1}\sum_{k=1}^{[n/2]-1} k(n-k-2)C(k)\left(^{n-4}_{k-1}\right) \widetilde V_{k+2} \widetilde V_{n-k},\qquad n\ge 4,$$ which coincides with (\[51\]). We now introduce the divisor $$D^F={\pi^2\over n-1}\sum_{k=1}^{[n/2]-1}k(n-k-2)F(n,k)D_k, \label{wpdvsrF}$$ where $F(n,k)$ is a function to be determined. 
Let $[\omega^F]$ be the Poincaré dual to $D^F$ and define $$Z_n^F= \int_{\overline{\cal M}_{0,n}}{\omega_{WP}^{(n)}}^{n-4} \wedge \omega^F= \int_{\overline{\cal M}_{0,n}} \left({i\overline\partial\partial S_{cl}^{(n)}\over 2}\right)^{n-4} \wedge \omega^F,\qquad n\ge 4. \label{48d}$$ An important aspect of (\[48d\]) is that we can use the recursion relations (\[51al\]) to obtain nonperturbative results. This possibility is based on the obvious but important fact that $\left[\omega_{WP}^{(n)}\right]^{n-3}\cap\left[\overline V^{(n)}\right]= \left[\omega_{WP}^{(n)}\right]^{n-4}\cap\left[D_{WP}\right]$, implying that the general structure of (\[51al\]) (the same as that of (\[rr12\])) is unchanged under the substitution ${\omega_{WP}^{(n)}}^{n-3} \longrightarrow {\omega_{WP}^{(n)}}^{n-4}\wedge \omega^F$. To see this note that $$Z^F_n={1\over n!}\left[\omega_{WP}^{(n)}\right]^{n-4}\cap \left[D^F\right] ={\pi^2\over (n-1)n!}\sum_{k=1}^{[n/2]-1}F(n,k)k(n-k-2) \left[\omega_{WP}^{(n)}\right]^{n-4}\cap\left[D_k\right]. \label{aa}$$ On the other hand by (\[ab2\]) $$\sum_{k=1}^{[n/2]-1}F(n,k)k(n-k-2) \left[\omega_{WP}^{(n)}\right]^{n-4}\cap\left[D_k\right]= {1\over 2}\sum_{k=1}^{n-3}F(n,k){k(n-k-2)\over n-1} \left(^{\;\; n}_{k+1}\right)\left(^{n-4}_{k-1}\right)V_{k+2}V_{n-k}, \label{adsf}$$ and by (\[rnm2\]) $$Z_n^F={\pi^2\over 2}{(n-4)!\over n-1}\sum_{k=1}^{n-3}F(n,k) a_{k+2}a_{n-k},\qquad n\ge 4. \label{basg}$$ Let us define the ‘Liouville $F$-models’ $${\cal Z}^{F,\alpha}(x)=x^{-\alpha}\sum_{k=3}^{\infty}x^kZ_k^F, \label{Fmodels}$$ where $x$ is the coupling constant. These models are classified by $\alpha$, $F(n,k)$ and $Z_3^F$. We now show that ${\cal Z}^{F,\alpha}(x)$ includes pure gravity. 
In fact, putting $${\cal Z}(t)={\cal Z}^{F_0,\alpha_0}(t^5), \qquad Z_3^{F_0}=-1/2, \quad \alpha_0=12/5, \label{PIa}$$ where $$F_0(n,k)= {6\over \pi^2}{(n-1)\over (12-5n)(13-5n)(n-4)!} {Z_{k+2}^{F_0}Z_{n-k}^{F_0}\over a_{k+2}a_{n-k}}, \label{setnumber}$$ we have, by (\[basg\]) and (\[setnumber\]), $$Z_3^{F_0}=-1/2,\qquad Z_n^{F_0}={3\over (12-5n)(13-5n)}\sum_{k=1}^{n-3}Z_{k+2}^{F_0}Z_{n-k}^{F_0},\qquad n\ge 4, \label{star2}$$ so that by (\[painlfield\])-(\[rr12\]) $${\cal Z}(t)=t^{-12}\sum_{k=4}^\infty t^{5k} \int_{\overline{\cal M}_{0,k}} \left({i\overline\partial\partial S_{cl}^{(k)}\over 2}\right)^{k-4} \wedge \omega^F-{t^3\over 2}, \label{ppII}$$ satisfies the Painlevé I equation $${\cal Z}^2(t)-{1\over 3}{\cal Z}''(t)=t. \label{PPPP}$$ [**5.**]{} In conclusion, we have introduced a class of Liouville models by defining a suitable divisor $D^F$. These Liouville $F$-models (LFM) include pure gravity. In this context we recall that the Liouville action arises also in the correlators of topological gravity [@grava]. Punctures correspond to real points on the boundary of the upper half-plane. Correspondingly one can define hyperelliptic Riemann surfaces. In the case of infinite genus one gets the McKean-Trubowitz model [@MT], which is related to matrix models. This suggests a nonperturbative formulation on $H$ with the image of the punctures related to the eigenvalues of the Hermitian matrix models. In the discrete version of this approach one should be able to connect this formulation with the ideas underlying [@BX]. [**Acknowledgements**]{} I would like to thank G. Bonelli for stimulating discussions. [99]{} E. Brézin and V. Kazakov, Phys. Lett. [**236B**]{} (1990) 144. M. Douglas and S. Shenker, Nucl. Phys. [**B335**]{} (1990) 635. D. Gross and A. Migdal, Phys. Rev. Lett. [**64**]{} (1990) 127. L. Alvarez-Gaumé, [*Random Surfaces, Statistical Mechanics And String Theory*]{}, Lausanne lectures, winter 1990. P. 
Ginsparg, [*Matrix Models Of 2D Gravity*]{}, Trieste Lectures, LA-UR-91-4101, hep-th/9112013. E. Martinec, [*An Introduction To 2D Gravity And Solvable String Models*]{}, Trieste Lectures, RU-91-51, hep-th/9112019. A. Morozov, [*Integrability And Matrix Models*]{}, ITEP-M2/93, hep-th/9303139. P. Di Francesco, P. Ginsparg and J. Zinn-Justin, [*2D Gravity And Random Matrices*]{}, LA-UR-93-1722, SPhT/93-061, hep-th/9306153. A.M. Polyakov, Phys. Lett. [**103B**]{} (1981) 207. F. David, Mod. Phys. Lett. [**A3**]{} (1988) 509. J. Distler and H. Kawai, Nucl. Phys. [**B321**]{} (1989) 509. J.-L. Gervais, Comm. Math. Phys. [**130**]{} (1990) 257; [**138**]{} (1991) 301. M. Matone, [*Quantum Riemann Surfaces, 2D Gravity And The Geometrical Origin Of Minimal Models*]{}, preprint DFPD/93/TH/62, hep-th/9309096. L.A. Takhtajan, [*Liouville Theory: Quantum Geometry Of Riemann Surfaces*]{}, hep-th/9308125. M. Matone, [*Uniformization Theory and 2D Gravity. I. Liouville Action And Intersection Numbers*]{}, IC-MATH/8-92, DFPD-TH/92/41, hep-th/9306150. E. Witten, Nucl. Phys. [**B340**]{} (1990) 281; Surv. Diff. Geom. [**1**]{} (1991) 243. R. Dijkgraaf and E. Witten, Nucl. Phys. [**B342**]{} (1990) 486. V.G. Knizhnik, Sov. Phys. Usp. [**32**]{} (11) (1989) 945. J.J. Duistermaat and G.J. Heckman, Invent. Math. [**69**]{} (1982) 259; [**72**]{} (1983) 153. N. Berline and M. Vergne, Duke Math. J. [**50**]{} (1983) 539. R.F. Picken, J. Math. Phys. [**31**]{} (1990) 616. D. Friedan and S. Shenker, Nucl. Phys. [**B281**]{} (1987) 509. J.H. Horne and S.P. Martin, Phys. Lett. [**258B**]{} (1991) 322. A. Sen, Int. J. Mod. Phys. [**A7**]{} (1992) 2559. K. Li, Nucl. Phys. [**B354**]{} (1991) 725. S.A. Wolpert, Ann. of Math. [**118**]{} (1983) 491. J. Igusa, Ann. of Math. [**72**]{} (1960) 612. A. Levin and A. Morozov, Phys. Lett. [**243B**]{} (1990) 207. D.-J. Smit, Comm. Math. Phys. [**143**]{} (1992) 253. A. Jaffe, S. Klimek and A. Lesniewski, Comm. Math. Phys. [**126**]{} (1989) 421. G. González-Diéz and W.J. 
Harvey, [*Moduli Of Riemann Surfaces With Symmetry*]{}, in Discrete Groups and Geometry, ed. W.J. Harvey and C. Maclachlan, Cambridge 1992. G. González-Diéz, Proc. London Math. Soc. [**62**]{} (1991) 469; [*On Prime Galois Covering Of The Riemann Sphere*]{}, preprint. P.G. Zograf and L.A. Takhtajan, Math. USSR Sbornik [**60**]{} (1988) 143; [**60**]{} (1988) 297.\ L.A. Takhtajan, Proc. Symp. Pure Math. (AMS) [**49**]{}, part 1 (1989) 581. P.G. Zograf, [*The Weil-Petersson Volume Of The Moduli Space Of Punctured Spheres*]{}, to appear in Contemp. Math. H.P. McKean and E. Trubowitz, Comm. Pure Appl. Math. [**29**]{} (1976) 143. L. Bonora and C.S. Xiong, [*Correlation Functions Of Two-Matrix Models*]{}, SISSA 172/93/EP, BONN-HE-45/93, hep-th/9311089. [^1]: e-mail: matone@padova.infn.it, vaxfpd::matone [^2]: The space ${\cal M}_{h,n}$ is not affine for $h>2$. Conversely, the space ${\cal M}_{0,k}$ is finitely covered by the affine space $V^{(k)}$ defined in (\[star\]). Thus for $h>2$ there are no geometric isomorphisms between $\overline{\cal M}_{h,n}$ and $\overline{\cal M}_{0,n+3h}$. However, in principle, nothing excludes the possibility of expressing ${\rm Vol}_{WP}\left({\cal M}_{h,n}\right)$ in terms of ${\rm Vol}_{WP}\left({\cal M}_{0,n+3h}\right)$. [^3]: Notice that $(-1)^kd_k$ is positive.
--- author: - 'M. Latour, S. K. Randall, P. Chayer, G. Fontaine, A. Calamida, J. Ely, T. M. Brown, and W. Landsman' bibliography: - 'refOmCen.bib' date: 'Received 25 November 2015; accepted 23 February 2017' title: 'Just how hot are the $\omega$ Cen extreme horizontal branch pulsators?[^1]' --- [ Past studies based on optical spectroscopy suggest that the five $\omega$ Cen pulsators form a rather homogeneous group of hydrogen-rich subdwarf O stars with effective temperatures of around 50 000 K. This places the stars below the red edge of the theoretical instability strip in the log $g$ $-$ [$T_{\rm eff}$]{} diagram, where no pulsation modes are predicted to be excited.]{} [Our goal is to determine whether this temperature discrepancy is real, or whether the stars’ effective temperatures were simply underestimated.]{} [We present a spectral analysis of two rapidly pulsating extreme horizontal branch (EHB) stars found in $\omega$ Cen. We obtained *Hubble Space Telescope*/COS UV spectra of two $\omega$ Cen pulsators, V1 and V5, and used the ionisation equilibrium of UV metallic lines to better constrain their effective temperatures. As a by-product we also obtained FUV lightcurves of the two pulsators. ]{} [Using the relative strength of the and lines as a temperature indicator yields [$T_{\rm eff}$]{} values close to 60 000 K, significantly hotter than the temperatures previously derived. From the FUV light curves we were able to confirm the main pulsation periods known from optical data. ]{} [With the UV spectra indicating higher effective temperatures than previously assumed, the sdO stars would now be found within the predicted instability strip. Such higher temperatures also provide consistent spectroscopic masses for both the cool and hot EHB stars of our previously studied sample. ]{} Introduction ============ Hot subdwarf stars populate the blue (thus hot) part of the horizontal branch (HB), which is often called the extreme horizontal branch (EHB, @heb08). 
Both the HB and EHB are associated with the helium-core burning phase of stellar evolution. The peculiarity of the EHB stars is that their hydrogen envelope is not massive enough (M &lt;0.02 [$M_{\rm \odot}$]{}) to sustain significant hydrogen-shell burning. Indeed, hot subdwarf stars have lost most of their hydrogen envelope prior to the start of helium-core burning. These stars are found in the Galactic field population as well as in several globular clusters. An extensive review of these peculiar stars’ many properties and the current knowledge about them can be found in @heb16. Rapidly pulsating hot subdwarf stars (also known as V361 Hya stars) have been known among the field population for almost two decades now, since the serendipitous discovery of the first pulsating subdwarf B-type (sdB) stars [@kil97; @koen97]. Since then, the number of known V361 Hya stars has increased to more than 50 [@ost10]. These H-rich pulsating sdBs show multi-periodic luminosity variations, with periods of the order of 100$-$200 s, and they are found in a well-defined instability strip between $\approx$29 000 and 36 000 K. Their variability is due to pressure ($p$) modes excited by the $\kappa$-mechanism, which was found to be driven by an increased opacity of iron, and iron-like elements, in the sub-photospheric layers of the star [@char97]. Radiative levitation is a key ingredient in maintaining a sufficient amount of iron in the driving region. However, some additional diffusion mechanisms, such as mass loss and turbulence [@hu11], can interact with radiative levitation, effectively killing the necessary conditions for driving $p$-modes. Indeed, the instability strip is far from being pure, with a fraction of pulsators of less than about 10% [@bil02; @ost10]. Besides these H-rich sdBs, rapid oscillations were also found in SDSS J160043.6+074802.9, a hotter subdwarf O-type (sdO) star [@wou06].
Despite having a high effective temperature ($\approx$68 000 K) as well as a slightly enriched helium content (log [$N$([He]{})/$N$([H]{})]{} = $-$0.65), the variability of the star is thought to arise through the same $\kappa$-mechanism that drives pulsations in sdBs [@font08; @lat11]. For over a decade, pulsating hot subdwarfs were known only among the field population. When rapid oscillations (P $\approx$84-124 s) were first discovered in EHB stars in the globular cluster $\omega$ Cen [@ran09], it was assumed that these constituted the globular cluster counterparts to the rapid sdB pulsators in the field. However, an optical spectroscopic survey at the VLT revealed that the five known $\omega$ Cen pulsators are in fact He-poor sdO stars with effective temperatures estimated between 48 000-54 000 K [@ran11; @ran16]. This was, and still is, highly intriguing, since the only sdO pulsator currently known among the field population is significantly hotter. Field counterparts to the $\omega$ Cen pulsators have yet to be found, although systematic searches have been carried out [@rod07; @john14]. Apart from $\omega$ Cen, NGC 2808 is the only other cluster known to host rapid EHB pulsators [@bro13]. The six known pulsators were found by means of far-UV photometry with the *Hubble Space Telescope* (*HST*). Low resolution STIS spectra were obtained for only half of them, allowing their temperatures and atmospheric helium abundances to be roughly constrained. So far, the NGC 2808 pulsators do not appear similar to the ones in $\omega$ Cen, neither in terms of atmospheric parameters nor pulsational properties. That being said, to our current knowledge the five $\omega$ Cen pulsators form a unique instability strip. This strip has no equivalent, neither in the field nor in NGC 2808. A detailed description of the EHB instability strip in $\omega$ Cen has recently been published by @ran16.
By comparing the position of the stars in the log $g$-[$T_{\rm eff}$]{} diagram with predictions from seismic models, they found the pulsators to lie in a region where no pulsation modes are predicted to be excited (see their Fig. 11). This is in marked contrast with the pulsating sdB (and the one known sdO) stars in the field, for which the driving of pulsations is well predicted by the same seismic models. One explanation discussed in @ran16 is that the temperatures derived from optical spectroscopy might underestimate the true effective temperatures of the sdO pulsators. This phenomenon has been reported in a few sdO stars for which both optical and UV spectroscopy are available [@fontm08; @rauch10; @lat15; @dix16]. It is related to the so-called Balmer line problem: for these stars it is not possible to simultaneously reproduce all Balmer lines using the same atmospheric model, the higher lines in the series needing a higher [$T_{\rm eff}$]{} to be accurately reproduced than the lower ones [@nap93]. As a consequence, the temperatures derived from optical spectra can be misleading and are usually underestimated. This problem can be solved to some degree by including metals in the model atmospheres used to fit the optical spectra [@gia10; @rauch14; @lat15]. Such models yield better fits and higher temperatures. However, this fine-tuning of the models to the observed spectra requires good quality optical spectra, since the Balmer line problem can be much more subtle in lower quality data. An alternative method to estimate the temperatures of hot stars is to use the ionization equilibrium of metallic species. By fitting metal lines originating from different ionization stages of the same element, one can estimate the effective temperature [@rauch07; @fontm08]. This is usually best done for hot stars in the UV range, where the strongest metal lines are found.
Given the faintness of the $\omega$ Cen pulsators (B $\approx$18 mag) and their position in a rather crowded field, the quality of optical spectra that can be obtained is limited. This is why we turned towards the UV range to provide us with an independent temperature determination. We obtained $HST$ spectra for two of the sdO pulsators, V1 and V5, with the Cosmic Origins Spectrograph (COS). The two stars were chosen as they lie at the cool and hot ends of the observational instability strip. This paper presents the result of our efforts to provide a better estimate of the effective temperature of the $\omega$ Cen pulsators and to determine whether or not the instability strip discrepancy is real. Mass distribution of the spectroscopic sample ============================================= ![Mass distribution of the 32 coolest objects of the sample (solid) and equivalent for the subsample of the six hottest stars (dotted). The mean mass of each distribution (0.331 and 0.452 [$M_{\rm \odot}$]{}) is indicated with a dashed line.[]{data-label="mass"}](gmass3-eps-converted-to.pdf){height="8cm"} We used the atmospheric parameters determined for 38 EHB stars in $\omega$ Cen [@lat14] to derive their mass distribution. These 38 stars are the “clean” subset described in @ran16 that do not show signs of pollution by a cooler star. In view of the expected large uncertainty associated with each individual determination, this is mainly done with a statistical point of view in mind. At the outset, we have access to $HST$ Advanced Camera for Surveys (ACS) or 2.2 m MPG/ESO telescope Wide Field Imager (WFI) photometry giving apparent $B$ magnitudes for all of the 38 stars. Using the distance modulus of $\omega$ Cen derived in @delp06 and @bra16, $B - M_B$ = 13.70, a reddening index of $E(B-V)$ = 0.11 from @cala05, and a standard Seaton relation, $A_V = 3.20E(B-V)$ [@sea79], we first computed the absolute magnitude $M_B$ of each target object.
In a second step, we calculated the theoretical value of $M_B$ from a synthetic spectrum characterizing each star ([$T_{\rm eff}$]{}, log $g$, and helium abundance derived), assuming different given masses. Parabolic interpolation was then used to infer the mass of the model with a theoretical absolute $B$ magnitude that would match the observed value. The resulting masses, as well as the atmospheric parameters of the sample, ordered by increasing [$T_{\rm eff}$]{}, are reported in Table \[param\]. ![image](specV1V5.png){width="19cm"} It is immediately apparent that several mass estimates are far too low to be reconciled with the idea that most of the hot subdwarfs are helium-core or post-helium-core burning stars. While the six hottest stars in the sample, the hot H-rich sdOs (including four pulsators), show a reasonable mean mass of 0.452 [$M_{\rm \odot}$]{}, the rest of the sample, taken as a whole, shows an unacceptably low mean mass value of 0.331 [$M_{\rm \odot}$]{}, below the minimum mass for helium burning. Interestingly, the spectroscopically inferred low mass problem for HB and EHB stars in $\omega$ Cen has been encountered by @moe11 and discussed in detail by @moni2011 ([-@moni2011; -@moni2012])[^2]. The problem seems to affect only that cluster in particular. The mass distributions of the hot and cool subsamples are shown in Fig. \[mass\]. The distributions were obtained following the procedure described in @fon12: individual Gaussians defined by each individual value of the mass and its uncertainty are added together. Each Gaussian has been normalised such that its surface area is the same for each star, thus ensuring the same weight in the addition procedure. We do not believe the increase of mass with temperature to be real, especially considering that the hot sdO stars are likely to be post-EHB objects. The mass difference would be naturally explained if the temperature of the sdO stars had been underestimated.
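The two-step mass determination (dereddening the apparent $B$ magnitude with the cluster distance modulus, then inverting a parabolic fit of model magnitudes versus mass) can be sketched in a few lines. This is our own illustration, not the authors' code; the constants come from the text, while the approximation $A_B \approx A_V$ and all function names are assumptions of the sketch:

```python
import numpy as np

DIST_MOD = 13.70        # B - M_B for omega Cen (from the text)
E_BV = 0.11             # reddening index (from the text)
A_V = 3.20 * E_BV       # Seaton relation A_V = 3.20 E(B-V)

def absolute_B(B_apparent, A_B=A_V):
    # Dereddened absolute magnitude; taking A_B ~ A_V is our simplification.
    return B_apparent - DIST_MOD - A_B

def mass_from_models(masses, model_MB, observed_MB):
    """Parabolic interpolation: fit M_B(mass) through three model points,
    then solve for the mass whose theoretical M_B matches the observed one."""
    a, b, c = np.polyfit(masses, model_MB, 2)
    roots = np.roots([a, b, c - observed_MB])
    real = roots[np.isreal(roots)].real
    # keep the root inside the sampled mass range
    inside = real[(real >= min(masses)) & (real <= max(masses))]
    return float(inside[0])
```

In practice one would evaluate the synthetic-spectrum magnitude at three trial masses per star and call `mass_from_models` once per object.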
Recomputing masses with temperatures increased by 10 000 K for the hottest stars leads to values compatible with the rest of the sample. We thus suggest that an underestimation of the effective temperature for the sdOs in the sample could explain the apparent mass discrepancy between the hot and cool samples. The UV analysis =============== Observations ------------ Ultraviolet COS spectra of the two pulsators V1 and V5 were obtained during cycle 22 (proposal GO-13707). Each star was observed for 5337 s with the low resolution G140L (R $\approx$3000) grating in time-tag mode. The data were reduced following the standard CALCOS procedure. The resulting spectrograms of both stars are shown in Fig. \[specv1v5\], where the flux of V5 was multiplied by a factor 1.138 in order to match the flux of V1, thus emphasising the similarity between the spectra of both stars. This is somewhat expected given that the pulsators form a rather homogeneous group according to the optical analysis. We can also note that the shape of the continuum is essentially the same for both stars. Since this is largely due to interstellar reddening, it is not surprising that it is very similar for the two targets. Most of the strong spectral features are due to the interstellar medium and are labelled below the spectra in Fig. \[specv1v5\], among them C <span style="font-variant:small-caps;">i-ii</span>; also conspicuous are the Ly$\alpha$ and O <span style="font-variant:small-caps;">i</span> ($\lambda$1304) geocoronal emission lines. The Si <span style="font-variant:small-caps;">iv</span> resonance doublet ($\lambda\lambda$1394,1403) is also visible in both stars at a radial velocity (RV) consistent with zero, thus indicating its interstellar origin. No stellar component can be resolved for this doublet. As for the C <span style="font-variant:small-caps;">iv</span> resonance lines ($\lambda\lambda$1548,1551), both the interstellar (RV $\approx -$20 km s$^{-1}$) and photospheric components (RV $\approx$230 km s$^{-1}$) can be resolved.
Additional photospheric lines are the N <span style="font-variant:small-caps;">v</span> resonance doublet ($\lambda\lambda$1239,1243), N <span style="font-variant:small-caps;">iv</span> $\lambda$1718 and He <span style="font-variant:small-caps;">ii</span> $\lambda$1640, which are found at a radial velocity consistent with that of the cluster (232 km s$^{-1}$, @har96). The COS light curves -------------------- ![image](V1lc.png) ![image](V5lc.png) We took advantage of the time-tag mode to construct the UV light curves of both stars using the LightCurve tool[^3]; a discussion of the method is presented in @sand16. The time-tag counts were binned into data chunks of 5 s. This is small enough to fully resolve the expected pulsations in the $\approx$80-120 s range while still giving adequate S/N in each data point. The top panels of Figure \[lc\] show the light curves obtained for the two stars, normalised to the average flux of the star in question. They are each divided into four chunks of continuous data, corresponding to observations at the four COS FP-POS positions. The large gap in each curve represents the re-acquisition period. The lower panels of Figure \[lc\] show the Fourier amplitude spectrum based on the COS light curves. Periodicities with amplitudes above the 3.5 $\sigma$ threshold were extracted at 114.8 s (2.7% amplitude) and 113.4 s (1.2% amplitude) for V1, and at 100.5 s (1.1% amplitude) for V5. These periods correspond well to the dominant modes known for these stars from ground-based optical time-series photometry [@ran16; @ran11]. For V5, an additional period known from the optical data is also recovered just below 3.5 $\sigma$ at 107.8 s (0.6% amplitude). While it is interesting to compare the observed periodicities and amplitudes derived from the optical and the COS data at the qualitative level, a quantitative comparison is not particularly instructive due to the very different time baselines of the datasets.
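The period extraction from the binned light curves can be illustrated with a minimal discrete-Fourier amplitude estimator. This is our own sketch, not the LightCurve tool or the actual analysis code; it returns amplitudes as a fraction of the mean flux (the percentages quoted above) and tolerates the gapped sampling produced by the re-acquisition period:

```python
import numpy as np

def amplitude_spectrum(t, flux, periods):
    """Fourier amplitude, as a fraction of the mean flux, at each trial
    period (same time units as t). Valid for gapped/uneven sampling."""
    f = flux / flux.mean() - 1.0              # normalise to the average flux
    amps = np.empty(len(periods))
    for k, p in enumerate(periods):
        w = 2.0 * np.pi / p
        c = np.dot(f, np.cos(w * t))
        s = np.dot(f, np.sin(w * t))
        amps[k] = 2.0 * np.hypot(c, s) / len(t)
    return amps
```

With 5 s bins the Nyquist period is 10 s, comfortably below the $\approx$80-120 s pulsations of interest.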
Indeed, for V1 the optical $u'$-band light curve obtained over a 6-day period in 2009 yields a rather messy Fourier spectrum with several closely split components clustered around 115.0 s, 114.7 s and 114.4 s [see Table 4, @ran16], whereas an earlier $B$ band dataset taken over two nights in 2009 shows dominant periods at 114.7 s and 113.7 s [see Table 1, @ran11], very similar to those uncovered with COS. It is not clear which of the split components constitute independent harmonic oscillations and which are induced in the Fourier spectrum, for example by intrinsic amplitude variations or the beating of closely spaced modes. The longer the time baseline and the better the quality of the data, the more complicated the Fourier spectrum appears to become. For V5 the situation seems simpler, the previously extracted 100.6 s, 99.3 s and 107.5 s periodicities [@ran16] not being split in any obvious way, but this is likely due to the much lower pulsational amplitudes and the limited S/N of the data. Given the clear indications for pulsational amplitude variability there is no point in trying to use the relative amplitudes observed in the different frequency bands for mode identification. If simultaneous time-series photometry were available it would in principle be possible to exploit the colour dependence of each mode’s amplitude to derive the degree index $\ell$, as has been done with some success for rapidly pulsating sdB stars in the field [@ran05]. Since in our case the different datasets are taken several years apart we limit ourselves to a very qualitative comparison of the apparent amplitudes in the different bands. 
For the low-degree modes expected to be observed in these stars we would expect a general trend of amplitude decrease with increasing wavelength, that is, the apparent COS far-UV amplitudes should be significantly higher than those seen in the optical data (assuming a similar intrinsic amplitude of the mode at the time of observation) due to the frequency-dependence of the limb darkening [see @ran05 for details]. This is assuming these stars behave similarly to the much cooler pulsating sdB stars, since detailed calculations of the pulsational perturbation of the stellar atmosphere have not yet been carried out for sdO stars. In our very rudimentary colour-amplitude analysis we simply add up the amplitudes of all frequency components around the 115 s complex for V1. This gives a far-UV amplitude of 3.9%, a $u'$ amplitude of 5.1% and a $B$ amplitude of 2.2%. For the 100.6 s pulsation in V5 we find 1.1% in the far-UV vs 0.54 % in the $u'$, and for the 107.5 s pulsation we have 0.6% in the far-UV and 0.42 % in the $u'$ (no $B$-band data are available for this star). With the exception of the very large $u'$ amplitude derived for the 115 s complex in V1 (that value is particularly unreliable due to the many split components), the general trend does seem to be for the UV amplitudes to be larger than the corresponding peaks in the optical. However, given the small number statistics this is not a significant result, and it is not clear how indicative the amplitudes derived from the $u'$ and $B$ bands are. We would expect the amplitude from the ground-based optical data to underestimate the true apparent amplitudes due to the flux contribution from nearby (presumably not pulsating) stars in the very crowded $\omega$ Cen field. This is true in particular for V5, which has a very close relatively bright companion. 
We tentatively conclude that the far-UV amplitudes observed in the $\omega$ Cen variables appear qualitatively comparable to or up to twice as large as those from ground-based optical data. This finding is of interest in the context of comparing the far-UV space-based pulsational properties of hot subdwarfs (such as those obtained for the pulsators in NGC 2808 by @bro13) to those derived from ground-based optical data. ![image](Fig4_rev_teff.png) ![image](Fig4_rev_abund_sm6.png) Analysis of the COS spectra --------------------------- Our goal was to use strong metal lines in the UV range to simultaneously determine the metal abundance and the temperature, using lines originating from different ionization levels. In the observed wavelength range ($\approx$1150$-$2000 Å), lines originating from C <span style="font-variant:small-caps;">iii</span>-<span style="font-variant:small-caps;">iv</span>, N <span style="font-variant:small-caps;">iv</span>-<span style="font-variant:small-caps;">v</span> and O <span style="font-variant:small-caps;">iv</span>-<span style="font-variant:small-caps;">v</span> are predicted. As already mentioned, we detect the nitrogen lines and the C <span style="font-variant:small-caps;">iv</span> doublet in our spectra. Unfortunately, the C <span style="font-variant:small-caps;">iii</span> multiplet ($\approx$1176 Å) and the lines at 1169 Å cannot be used for the analysis. A wide absorption feature between $\approx$1170-1180 Å is seen in both spectra (and indicated in Fig. \[specv1v5\]), hiding any hints of the multiplet. We could not find the origin of this feature. No strong interstellar lines are expected in this region and we verified that no artefact was produced when combining the four subexposures. Iron and nickel lines can be abundant at these wavelengths but are not expected to produce such a strong feature. To examine this possibility nevertheless, we compared an IUE spectrum (degraded to the COS resolution) of Feige 34 with our COS spectra, but did not find any similar feature.
Feige 34 is an sdO star with parameters, to our best knowledge, similar to those of V5 but enriched in iron and nickel [@lat16]. If those lines were at the origin of the feature seen in the COS spectra, we would expect it to feature even more prominently in the IUE spectrum of Feige 34. Although the COS handbook mentions calibration issues at these wavelengths, the same phenomenon was seen by @bro12 (see their figure 12) with STIS/G140L spectra of EHB stars, suggesting that there might be an unknown astrophysical explanation. Finally, a firm identification of the oxygen lines (1338 Å, 1342 Å, and 1371 Å) was not possible, suggesting a sub-solar abundance. Thus we had to rely on the nitrogen lines for our analysis. For each star we built a two-dimensional grid of NLTE model atmospheres, varying [$T_{\rm eff}$]{} and the nitrogen abundance, while keeping the other parameters fixed. For both stars the grid covered the following ranges: [$T_{\rm eff}$]{} from 45 000 K to 65 000 K in steps of 2 000 K and log $N$(N)/$N$(H) from $-$3.0 to $-$5.8 in 0.4 dex intervals. The surface gravity and helium abundance were fixed for each star to the values derived by the optical analysis (see Table \[param\]). The model atmospheres and synthetic spectra were computed with the public codes TLUSTY and SYNSPEC [@lanz03; @lanz07], and include C, N, and O as metallic elements in addition to H and He. We adopted in these models a carbon abundance of log $N$(C)/$N$(H)=$-$4.6, according to the estimates from the optical spectra [@lat14], and an oxygen abundance of one tenth solar, given the weakness of the observed oxygen features. Because the line spread function of COS departs from the usual Gaussian function, we used the theoretical profiles listed on the COS website [^4] (G140L/1105 at lifetime position 3) for the convolution of our synthetic spectra. We then used our synthetic spectra grids to simultaneously fit the nitrogen lines in the observed spectra. The result can be seen in Fig. 
\[fitn\], where the thin grey line represents the observation, while the thick black line is the best-fitting model. In the case of V1 we obtained values of log $N$(N)/$N$(H) = $-$3.8$\pm$0.5 dex and [$T_{\rm eff}$]{} = 60 000 $\pm$ 5 000 K. Since the nitrogen lines in V5 are weaker, the fit for this star resulted in a lower N abundance of log $N$(N)/$N$(H) = $-$4.2$\pm$0.5 dex combined with [$T_{\rm eff}$]{} = 63 000 $\pm$ 6 000 K. The line profiles are not very well defined at low resolution, and the S/N is relatively low in the 1718 Å region, which explains the rather high uncertainties. Nevertheless, the nitrogen lines indicate effective temperatures higher than the optical estimates (by $\approx$10 000 K for V1 and $\approx$5 000 K for V5). The nitrogen abundances are consistent with the solar value ($-$4.2), which is rather typical for hot subdwarf stars [@bla08; @geier13]. For comparison, in Fig. \[fitn\] we overplot the N line profiles for temperatures 10 kK above and below those obtained with the fit on the left panels. As seen from the figure, the N <span style="font-variant:small-caps;">iv</span> $\lambda$1718 line is not sensitive to temperature below $\approx$63 kK, while the N <span style="font-variant:small-caps;">v</span> doublet appears too weak at lower [$T_{\rm eff}$]{}. At the lower temperatures, the higher N abundance required to match the doublet would produce a $\lambda$1718 line stronger than observed. The effect of changing the nitrogen abundance by $\pm$0.5 dex is shown on the right panels of Fig. \[fitn\]. Similar temperature $-$ abundance grids were made for oxygen and carbon in order to estimate their abundances. We examined the 1330-1380 Å region where the strongest oxygen lines are predicted. The only line that could be identified in both stars is the O <span style="font-variant:small-caps;">iv</span> line at 1342 Å. The other lines are not adequately defined to claim a real detection. Nevertheless, we can set an upper limit on log $N$(O)/$N$(H) $\approx$ $-$4.6. We recall that the solar abundance is $-$3.3 dex, so the stars would have abundances below 1/10 solar, which is also a normal value for hot subdwarfs.
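The grid-based fit described above (a two-dimensional grid of synthetic spectra in effective temperature and nitrogen abundance, compared simultaneously to the observed lines) reduces to a chi-square minimisation over precomputed models. A bare-bones version follows; this is our own sketch with hypothetical array shapes, and the real analysis would additionally interpolate within the grid and derive uncertainties from the chi-square surface:

```python
import numpy as np

def best_grid_point(obs, grid, teff_axis, abund_axis, sigma=1.0):
    """grid has shape (n_teff, n_abund, n_pixels): synthetic spectra,
    convolved with the instrumental line spread function and resampled
    on the observed wavelength grid. Returns the chi^2-minimising pair."""
    chi2 = (((grid - obs) / sigma) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return teff_axis[i], abund_axis[j]
```

With the grid steps quoted in the text (2 000 K in [$T_{\rm eff}$]{}, 0.4 dex in abundance), the returned pair is only as fine as the grid itself, which is why interpolation is needed for the final quoted values.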
Considering carbon, although the IS and photospheric components of the C <span style="font-variant:small-caps;">iv</span> doublet can be distinguished, the line profiles are blended. However, the line strength remains in agreement with the upper limits placed from the optical spectra, log $N$(C)/$N$(H)$\lesssim$ $-$4.6 (no carbon lines are distinguishable in the optical). The upper panel of Fig. \[V1V5\_C\] shows the observed and modelled carbon doublet (interstellar and stellar components) for V1 and V5. In the lower panel, we show the observed spectra in the $\approx$1175 Å region as well as a theoretical spectrum representative of both stars. The issue of the multiplet has been discussed previously, and the plot clearly illustrates that the predicted carbon line cannot explain the observed feature around 1175 Å. The final photospheric feature visible in the spectra is the He <span style="font-variant:small-caps;">ii</span> line at 1640 Å. Figure \[heII\] shows the region surrounding this line in the COS spectra of V1 and V5, overplotted (red) with synthetic models with a [$T_{\rm eff}$]{} of 60 and 63 kK respectively. For V5, we have a good match between the synthetic and observed spectra, while for V1 the observed line is too strong to be reproduced by the synthetic spectrum. This is somewhat unexpected since the helium abundance derived from the optical spectra is very similar for both stars (log [$N$([He]{})/$N$([H]{})]{} = $-$1.76 and $-$1.67 for V1 and V5 respectively). The effective temperature affects the strength of the He lines, but in the [$T_{\rm eff}$]{} range of our stars the effect is not very strong for $\lambda$1640. This is illustrated with the blue curve in Fig. \[heII\], which shows the He line at [$T_{\rm eff}$]{} = 49 000 K (the value derived from the optical spectra, see Table \[param\]). The line is indeed stronger at lower [$T_{\rm eff}$]{}, but still not strong enough to match the observed profile, which is wider and deeper still. We also fit the He <span style="font-variant:small-caps;">ii</span> line of V1 in the temperature$-$helium abundance parameter space.
The resulting best fit is illustrated by the green curve in the plot, corresponding to the following parameters: [$T_{\rm eff}$]{} = 51 000 $\pm$ 4000 K and log [$N$([He]{})/$N$([H]{})]{} = $-$0.76 $\pm$ 0.3. This helium abundance is ten times higher than the one indicated by the optical spectrum and thus in serious contradiction with it; such an abundance would produce optical helium lines that are much stronger than observed. Moreover, the theoretical line profile does not fit the UV spectrum very well, the wings being too broad and the core not deep enough. To conclude, we do not understand the He <span style="font-variant:small-caps;">ii</span> line in the COS spectrum of V1. As mentioned previously, the silicon resonance lines have RVs indicating an interstellar origin with no resolved photospheric components. Individual lines from heavy elements like iron and nickel also cannot be resolved at the low resolution and relatively poor S/N of our spectra. However, by comparing the COS spectra with model spectra including a solar amount of silicon and iron, we found such abundances to be compatible with our observations. Thus, a solar abundance is a reasonable upper limit to place on these elements. Figure \[V5\_1330\] displays our observed spectra over a spectral range especially rich in lines. Overplotted in red are model spectra for both stars including silicon and iron at solar abundances (C, N, O are also included at their estimated values). Oxygen and silicon lines are indicated, while the other features are due to iron; for comparison, model spectra without Si and Fe are plotted in blue. We note that the $\lambda$1370 line is blended with iron lines present on its blue side, which makes it appear stronger in a model including iron. Conclusion ========== We addressed the issue of the temperature discrepancy of the $\omega$ Cen instability strip discussed in @ran16. As shown in their Figure 11, the pulsating sdOs are found at temperatures cooler than the predicted instability region favourable to pulsations.
The discrepancy could be explained either by shortcomings in the seismic models (e.g. nickel opacity might boost pulsations at lower effective temperatures) or by an underestimation of the surface temperature of the stars derived by means of optical spectroscopy. In this paper, we focussed on the second possibility by analysing low resolution COS spectra of two pulsators (V1 and V5). We also determined a mass distribution for the sample of EHB stars presented in @lat14. From the mass distribution, we noticed a significant difference between the mean mass of the “cool” stars of the sample (with [$T_{\rm eff}$]{} $\lesssim$ 45 kK) and the hotter sdOs. The latter group includes only six stars (including four of the pulsators), but the mean mass difference of 0.127 [$M_{\rm \odot}$]{} is nevertheless suspicious. This mean mass discrepancy could indicate that the effective temperatures are underestimated from the optical spectra, since an increase in the temperature of the stars would lead to a lower radius (and mass) being needed to match the observed magnitude of the stars, given a fixed distance to the cluster. To investigate the issue in more detail, we analysed low resolution UV spectra for two of the pulsators, namely V1 and V5. The goal was to use the ionization equilibrium of strong metal lines (C, N, O) originating from different ionic states to assess more precisely the effective temperature of the stars. However, only the nitrogen lines could be used for this purpose. From the N <span style="font-variant:small-caps;">v</span> doublet and N <span style="font-variant:small-caps;">iv</span> $\lambda$1718 lines we could estimate the nitrogen abundances to be close to solar for V5 and slightly higher in V1. Given these nitrogen abundances, the observed strength of the N <span style="font-variant:small-caps;">v</span> doublet indicates temperatures significantly higher than those estimated from optical spectroscopy. For V1 we obtained [$T_{\rm eff}$]{} = 60$\pm$5 kK and a slightly higher temperature of [$T_{\rm eff}$]{} = 63$\pm$6 kK for V5.
The uncertainties are rather large given that we are forced to rely on three spectral lines in low quality spectra, but it is nevertheless quite clear that the nitrogen lines require the effective temperature to be closer to 60 kK than 50 kK for both stars. As for other elements, the strength of the C <span style="font-variant:small-caps;">iv</span> doublet is consistent with the upper limit derived from the optical spectra of about 1/10 solar, and the oxygen abundance is also depleted by at least a factor of 10 with respect to solar. Upper limits for silicon and iron are about solar. In summary, based on the mass distribution of the $\omega$ Cen sdOs, and the UV nitrogen lines of the two pulsating stars V1 and V5, it is likely that the effective temperatures of the sdOs in the $\omega$ Cen sample are systematically higher than those derived from the optical spectra, thus moving the stars into the instability region predicted by seismic models. Our analysis was however limited by the quality of the data available. The analysis and parameter determination of hot stars (sdOs, white dwarfs) is hampered by inherent difficulties, such as the behaviour of the Balmer lines and the weak temperature dependence of the UV and optical flux distribution. Combining these issues with the observational difficulties faced for faint stars in a very crowded field makes studies such as that attempted here very challenging indeed. We believe the most promising way forward is to conduct spectroscopic studies of brighter stars with effective temperatures similar to the $\omega$ Cen sdOs. This could go a long way towards an understanding of the fundamental parameters of these stars and how to reliably derive them, as well as their chemical patterns and evolutionary status. This work was supported by a fellowship for postdoctoral researchers from the Alexander von Humboldt Foundation awarded to M.L., who also acknowledges funding by the Deutsches Zentrum für Luft- und Raumfahrt (grant 50 OR 1315).
This research makes use of the SAO/NASA Astrophysics Data System Bibliographic Service. [^1]: Based on observations (proposal GO-13707) with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26666. [^2]: We note that @moni2011 and @moe11 derived masses for their star in a different way: they used an empirical bolometric correction to determine the luminosity. [^3]: http://justincely.github.io/lightcurve/ [^4]: http://www.stsci.edu/hst/cos/performance/spectral\_resolution
--- abstract: 'We review a coarse-graining strategy (multiblob approach) for polymer solutions in which groups of monomers are mapped onto a single atom (a blob) and effective blob-blob interactions are obtained by requiring the coarse-grained model to reproduce some coarse-grained features of the zero-density isolated-chain structure. By tuning the level of coarse graining, i.e. the number of monomers to be mapped onto a single blob, the model should be adequate to explore the semidilute regime above the collapse transition, since in this case the monomer density is very small if chains are long enough. The implementation of these ideas has been previously based on a transferability hypothesis, which was not completely tested against full-monomer results (Pierleoni [*et al.*]{}, [*J. Chem. Phys.*]{}, 2007, [**127**]{}, 171102). We study different models proposed in the past and we compare their predictions to full-monomer results for the chain structure and the thermodynamics in the range of polymer volume fractions $\Phi$ between 0 and 8. We find that the transferability assumption has a limited predictive power if a thermodynamically consistent model is required. We introduce a new tetramer model parametrized in such a way to reproduce not only zero-density intramolecular and intermolecular two-body probabilities, but also some intramolecular three-body and four-body distributions. We find that such a model correctly predicts three-chain effects, the structure and the thermodynamics up to $\Phi \simeq 2$, a range considerably larger than that obtained with previous simpler models using zero-density potentials. Our results show the correctness of the ideas behind the multiblob approach but also that more work is needed to understand how to develop models with more effective monomers which would allow us to explore the semidilute regime at larger chain volume fractions.' address: - '$^1$ Dipartimento di Fisica, Università dell’Aquila, V. Vetoio 10, Loc. 
Coppito, I-67100 L’Aquila, Italy' - '$^2$ Dipartimento di Fisica, Sapienza Università di Roma and INFN, Sezione di Roma I, P.le Aldo Moro 2, I-00185 Roma, Italy' - '$^3$ Dipartimento di Fisica, Università dell’Aquila and CNISM, UdR dell’Aquila, V. Vetoio 10, Loc. Coppito, I-67100 L’Aquila, Italy' author: - 'Giuseppe D’Adamo,$^1$ Andrea Pelissetto$^{2}$ and Carlo Pierleoni$^3$' title: 'Coarse-graining strategies in polymer solutions' --- Introduction ============ Polymer solutions show a wide variety of behaviors, depending on chain length, density, and temperature.[@deGennes; @Doi; @desCloizeauxJannink; @Schaefer] In the dilute regime the isolated chain radius of gyration $R_g$ is the relevant length scale and the properties of the solution can be described in terms of single-chain properties and of the solvent-quality parameter. The radius of gyration scales as $R_g=bL^{\nu}$, where $b$ is the monomer characteristic length (Kuhn segment), which depends on chemical details and temperature, $L$ is the number of monomers per chain, and $\nu$ is a universal exponent. In the good-solvent regime[@Clisby:2010p2249] $\nu = 0.587597(7)$, while $\nu = 1/2$ (with logarithmic corrections [@desCloizeauxJannink; @Schaefer]) for $\theta$-solvents. The semidilute regime is entered when chains start to overlap, but still the monomer density is small. If $c = N/V$ is the polymer concentration — $N$ is the number of chains and $V$ the volume of the system under consideration — and $c_m = c L$ is the monomer concentration, the semidilute regime is characterized by $c > c^*$ (or equivalently by $\Phi > 1$, where $\Phi = c/c^*$ is the polymer volume fraction) and $c_m \ll 1$, where $c^*=3/(4\pi R_g^3)$ is the overlap concentration. Note that $c_m=(3/4\pi b^3) \Phi L^{1-3\nu}$. Hence, when increasing $\Phi$, increasingly longer polymer chains are needed to ensure the semidilute condition $c_m \ll 1$.
If $c_m$ is not small, one enters the concentrated or melt regime. From this discussion it appears that very long polymers are necessary to obtain a genuine semidilute regime over several orders of magnitude in chain density. For this reason, simulations of semidilute solutions of linear chains, even at the level of generic lattice or bead-spring models with implicit solvent, are quite expensive and have been limited to not too long chains and not too high densities. [@Muller:2000p2000; @Cavallo:2006p784; @Bolhuis:2001p268; @Louis:2002Physica; @Yan:2000p2257; @Pelissetto:2005p296] Moreover, in many complex situations, polymers only constitute one species in the solution, making full-monomer simulations even more difficult. In these cases a modelling at length scales of the order of the polymer size is often sufficient to provide the relevant thermodynamic and structural information on the solution. For instance, to determine the phase behavior of polymer-colloid mixtures in the colloid regime, [@PoonWCK:2002p1993; @Fuchs:2002p2258; @Tuinier:2003p2259; @Mutch:2006p2260] a detailed microscopic model of the polymers is not necessary. It is enough to use coarse-grained models which retain the essential thermodynamic (long-wavelength) behavior of the polymer solution.[@Bolhuis:2002p267] Other examples are block copolymers, for which the self-assembly of the chains into supramolecular aggregates of various shapes and sizes is ubiquitous. The description of the physical behavior of the self-assembled phases only requires a modelling at the mesoscopic level rather than at the microscopic (monomer) level. [@Pierleoni:2006p159; @HansenJP:2006p2248; @SG-07; @GP-10] This is the realm of self-consistent field-theoretical methods, which have proved to be very effective in describing the physics of concentrated solutions and melts of homopolymers and block-copolymer blends.
[@Bates:1990p759; @Cavallo:2006p784] Coarse-grained models for soft condensed matter systems have received much attention in the last two decades.[@Likos:2001p277] In the simplest approach one maps polymer chains onto point particles interacting by means of the pairwise potential of mean force between the centers of mass of two isolated polymers. [@Dijkstra:1999p2142; @Likos:2001p277] This potential is of the order of $2k_BT$ at full overlap,[@Grosberg:1982p2265; @Dautenhahn:1994p2250] has a limited range of the order of $3 R_g$ and is very well represented by a linear combination of a few Gaussian functions. [@Bolhuis:2001p268; @Pelissetto:2005p296] Such a model is however limited to the dilute regime $\Phi\lesssim 1$, in which many-body interactions[@Bolhuis:2001p288] can be neglected. This limitation was overcome ten years ago in a seminal work,[@Louis:2000p269; @Bolhuis:2001p268] which eliminated the complexity of the many-body interactions by introducing a density-dependent pair potential, which is unique according to Henderson's theorem [@Henderson:1974p2091] and reduces to the potential of mean force in the limit of zero density. This work has paved the way to the use of soft effective particles to represent polymer coils in complex situations such as in modelling colloid-polymer mixtures.[@Bolhuis:2002p267] However, density-dependent potentials are difficult to handle. Care is needed to derive the correct thermodynamics [@HansenMcDonald; @Stillinger:2002p2125; @Louis:2002p2193] and to compute free energies and phase diagrams.[@Likos:2001p277] Moreover, their use in non-homogeneous situations is cumbersome, since the interaction depends on the local density, which is not known beforehand, so that some kind of self-consistent procedure must be developed. Furthermore, representing polymers as soft spherically symmetric particles is not always appropriate.
For instance, in studying polymers adsorbed on surfaces, like polymer brushes or polymer-coated colloids, it is clear that the anchorage to the surface breaks the rotational symmetry of the chains, an effect that must be taken into account in any accurate coarse-grained model. A further example is in modelling solutions of A-B block copolymers which cannot be represented as soft particles interacting by a spherically symmetric pair potential. [@Pierleoni:2006p159; @HansenJP:2006p2248; @SG-07; @GP-10] In principle those limitations can be overcome by switching to a model at a lower level of coarse graining, i.e. by mapping a long linear polymer to a short linear chain of soft effective monomers (called “blobs” in the following). Such a model retains some internal degrees of freedom, which allow more flexibility in the chain geometry, as is necessary, for instance, in anisotropic systems. Moreover, in the semidilute regime, this model is expected to allow the use of density-independent blob-blob interactions, since the local density of the blobs can always be kept small by increasing the number of blobs per chain. Indeed, if chains of $L$ monomers are partitioned into $n$ effective blobs of $m = L/n$ monomers each, the local concentration of the blobs is $c_b = c n$. The blob overlap concentration is given by $c^*_b = 3/(4 \pi r_g^3)$, where $r_g$ is the radius of gyration of the blob. If we assume that $r_g = b m^\nu$, where $b$ is the Kuhn length that appears in the scaling of the radius of gyration (this relation, though not exact, is a very good approximation), we obtain $${c_b\over c_b^*} = {4\pi b^3\over 3}\, n c\, m^{3\nu} = \Phi n^{1-3\nu} \approx \Phi n^{-0.763}.$$ Hence, for any polymer volume fraction $\Phi$, since $\nu > 1/3$ above the collapsed phase, one can choose $n$ so that $c_b/c_b^* < 1$, i.e., so that blobs do not overlap.
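The bound $c_b/c_b^* = \Phi n^{1-3\nu} < 1$ translates into an explicit rule for choosing the number of blobs, $n > \Phi^{1/(3\nu-1)}$. A minimal numerical sketch (the function names are ours, not part of any published code):

```python
import math

NU = 0.587597  # good-solvent exponent


def blob_overlap(phi, n):
    """Blob overlap ratio c_b/c_b^* = Phi * n**(1 - 3*nu)."""
    return phi * n ** (1.0 - 3.0 * NU)


def min_blobs(phi):
    """Smallest number of blobs n for which blobs do not overlap, c_b/c_b^* < 1."""
    if phi <= 1.0:
        return 1  # dilute regime: a single blob already satisfies the bound
    return math.ceil(phi ** (1.0 / (3.0 * NU - 1.0)))
```

For instance, `min_blobs(8)` returns 16, illustrating how the density range in which a fixed-$n$ model is usable widens only slowly with $n$.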
In this regime, the size of each blob is approximately density independent, hence each blob can be replaced by an effective single atom of fixed size in a coarse-grained representation. Moreover, one expects the many-body interactions among blobs of different chains to be negligible. Hence, the parametrization of the intermolecular interactions among the chains in terms of zero-density pair potentials should be reasonably accurate. Conversely, one can use the $n$ blob model with zero-density intermolecular pair potentials up to $\Phi\lesssim n^{3\nu - 1}$. For larger concentrations many-chain intermolecular interactions come into play. The main problem in this approach is how to obtain the intramolecular interactions, i.e., the potentials among the blobs of the same chain. The problem is trivial for a dumbbell model ($n=2$), where the blob-blob interaction is just the potential of mean force, but becomes increasingly difficult when increasing the number $n$ of blobs per chain. Indeed, the intramolecular effective interaction, which is simply minus the logarithm of the joint distribution of the blob positions, is inherently a many-body interaction, which cannot be represented as a hierarchy of two-body, three-body, etc. terms, at variance with what happens for the intermolecular potentials. In that case the small-density expansion gives the pair potential at lowest order, the three-body potential at next-to-leading order, and so on. Therefore, approximations must be introduced in trying to reproduce some features of the underlying full-monomer system. As a first attempt in this direction, Pierleoni [*et al.*]{} [@Pierleoni:2007p193] introduced a multiblob model, referred to as model M1 in the following, for homopolymers in good-solvent conditions. 
They started from the intramolecular and intermolecular potentials appropriate for dumbbells (a two-blob molecule), a problem that can be solved as explained in Addison [*et al.*]{}[@Addison:2005p225] The potentials for chains with more blobs were then obtained from the dumbbell potentials using a transferability hypothesis. This model has the correct scaling behavior for good-solvent polymers in dilute and semidilute solutions, [@Pierleoni:2007p193] including the screening of excluded-volume interactions at large length scales as the density increases. However, its prediction for the equation of state (EOS) of the solution is incorrect,[@Pelissetto:2009p287] casting serious doubts on the correctness of the transferability assumption for the potentials. A modified model, referred to as model M2 in the following, was also introduced.[@Pelissetto:2009p287] In this model, for each number of blobs, the parameters of the potential are tuned to match full-monomer results for the EOS. Although successful in reproducing the thermodynamics, model M2 is inherently different from M1 in that full-monomer results at finite density are needed to tune the model parameters, which is an evident limitation of this approach in more complex situations. In this paper, besides comparing in detail results for models M1 and M2 to full-monomer predictions in a wide range of polymer concentrations in the semidilute regime, we introduce a new coarse-grained model for semidilute solutions. In this model we map a single chain onto a tetramer, i.e., a chain of four blobs, in such a way as to reproduce all two-body, and some three-body and four-body, distributions of an isolated good-solvent polymer at the full-monomer level. In order to accomplish this program we use bonding, bending and torsional angle potentials, plus additional 1-3 and 1-4 central potentials which we obtain with the Iterative Boltzmann Inversion (IBI) procedure.
[@Schommers:1983p2118; @MullerPlathe:2002p2127; @Reith:2003p2128] Furthermore, a single intermolecular pair potential between blobs of different tetramers is obtained in such a way to reproduce the center-of-mass radial distribution function between two isolated chains at the full-monomer level. This model provides correct results for the EOS up to reduced densities $\Phi\simeq 2$, a considerably larger range of densities with respect to simpler models. The tetramer model presented here is a first successful attempt to realize the multiblob program: building a multiblob coarse-grained model based on zero-density potentials which is able to provide the correct single-chain structure and EOS at finite density. Our approach is very close to the coarse-graining procedure applied by Fritz [*et al.*]{}[@Fritz:2009p1721] to polystyrene. Similar methods were also used for coarse-grained simulations of typical benchmark chains like polycarbonates and polystyrene in a melt, [@Milano:2005p2251; @Carbone:2008p2254] although in these works the potentials were fixed by requiring the coarse-grained model to reproduce structural properties at fixed pressure and temperature — hence potentials also depend on thermodynamic variables, as is the case for the density-dependent potentials.[@Louis:2000p269; @Bolhuis:2001p268] In some coarse-graining procedures also thermodynamic information was taken into account to fix the potentials, see, e.g., Rossi [*et al.*]{}[@Rossi:2010p2256] and references therein. We should also mention the approach of Vettorel [*et al.*]{} [@Vettorel:2010p1733], which extends previous work on a single-blob model.[@Murat:1998p1980] In this multiblob model each blob carries internal degrees of freedom, to account for the density profile of the underlying full-monomer subchains. However, the interactions are not derived [*ab initio*]{} in the coarse-graining procedure, but are obtained by using phenomenological arguments. 
Finally, we mention the work of Clark [*et al.*]{} [@CG-10] which applies integral-equation methods to a coarse-grained model appropriate for polymer melts. The paper is organized as follows. In section \[sec.2\] we present the general formalism behind any coarse-graining procedure and we report our specific methodology to derive the tetramer model. In section \[sec.3\] we compare the structure and the thermodynamic behavior predicted by the tetramer model with those (referred to as full-monomer results in the following) obtained by using lattice polymer models with a large number of monomers ($L\gtrsim 1000$) — hence appropriate to obtain the universal, scaling behavior — both at zero and at finite density in the semidilute regime. In section \[sec.4\] we report results for the coarse-grained models M1 and M2 and compare them with the tetramer and the full-monomer data. Finally, we collect our conclusions and perspectives in the last section. In the appendix we give universal predictions for the blob radius of gyration, an important quantity to obtain a meaningful comparison between any coarse-grained model and the underlying full-monomer model. The blob model {#sec.2} ============== In order to obtain the coarse-grained blob model (CGBM), one works in the zero-density limit and determines in successive steps the intramolecular potentials, the two-body intermolecular potentials, then, at least in principle, the three-body, four-body, etc. intermolecular potentials. In an exact mapping all $k$-body intermolecular interactions should be considered. However, as discussed in the introduction, higher-order intermolecular interactions can be neglected if one only considers small densities $\Phi \lesssim \Phi_{\rm max}$, where $\Phi_{\rm max} \sim n^{3\nu-1}$ increases (for a given level of approximation) with the number $n$ of blobs. 
The blob representation of the polymer {#sec2.1} -------------------------------------- In the multiblob approach, the basic object is the “blob”, which is a subchain of the polymer. Suppose we wish to partition a polymeric chain of $L$ monomers into $n$ blobs of $m=L/n$ monomers each. If the monomer positions are given by $\{ {\bf r}_1,\ldots, {\bf r}_L\}$, one first defines the blob positions ${\bf s}_1,\ldots, {\bf s}_n$ as the centers of mass of the subchains of $m$ monomers, i.e. $${\bf s}_i = {1\over m} \sum_{k=m(i-1)+1}^{mi} {\bf r}_k.$$ For the new coarse-grained chain $\{{\bf s}_1,\ldots, {\bf s}_n\}$ one defines several standard quantities. First, one defines its radius of gyration $${R}_{g,b}^2 = {1\over 2 n^2} \sum_{i,j} ({\bf s}_i - {\bf s}_j)^2 . \label{Rgb-def}$$ Such a quantity is always smaller than ${R}_g$, since $${R}_g^2 = {R}_{g,b}^2 + {1\over n} \sum_i {r}_{g,i}^2, \label{Rg-Rgb}$$ where ${r}_{g,i}$ is the radius of gyration of the $i$-th blob: $${r}_{g,i}^2 = {1\over 2 m^2} \sum_{k,l= m(i-1)+1}^{mi} ({\bf r}_k - {\bf r}_l)^2.$$ The ratios $R_{g,b}^2/R_g^2$ and $r_{g,i}^2/R_g^2$ of their averages [^1] over the polymer configurations are universal, hence independent of the nature of the underlying polymer model as long as $L$ is large enough. The average of the blob squared radius of gyration ${r}_{g}$ defined by ${r}_{g}^2 = {(1/n)} \sum_i {r}_{g,i}^2$ scales quite simply with $n$ in the zero-density limit. As discussed in \[App-rgblob\], for all $n\ge 4$ we have quite precisely $${\hat{r}_{g}^2\over \hat{R}_g^2} = 1.06 n^{-2\nu}, \label{scaling-rg2}$$ an expression we will use extensively in the present work (here and in the following we will use a hat to indicate zero-density quantities).
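The blob construction and the decomposition (\[Rg-Rgb\]) are straightforward to verify numerically. The sketch below (ours; a plain random walk is used only to generate coordinates, so no good-solvent statistics are implied) checks the identity for a single configuration:

```python
import numpy as np


def blob_centers(r, n):
    """Centers of mass s_i of n blobs of m = L/n monomers each."""
    m = len(r) // n
    return r[: n * m].reshape(n, m, 3).mean(axis=1)


def rg2(x):
    """Squared radius of gyration via the pairwise formula (1/2k^2) sum_{i,j} (x_i - x_j)^2."""
    d = x[:, None, :] - x[None, :, :]
    return (d ** 2).sum() / (2 * len(x) ** 2)


rng = np.random.default_rng(0)
r = np.cumsum(rng.standard_normal((400, 3)), axis=0)  # one random-walk configuration
n = 4
s = blob_centers(r, n)
m = len(r) // n
blob_rg2 = np.array([rg2(r[i * m:(i + 1) * m]) for i in range(n)])

total = rg2(r)                          # R_g^2 of the whole chain
decomposed = rg2(s) + blob_rg2.mean()   # R_{g,b}^2 + (1/n) sum_i r_{g,i}^2
```

The two numbers agree to machine precision, as they must for any configuration; the identity also makes explicit why $R_{g,b}$ is always smaller than $R_g$.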
Besides the radius of gyration, we can consider the bond-length distributions (all blob distributions depend on the number $n$ of blobs, which is implicit in the notation) $$P_{ij}(r) = \langle \delta(|{\bf s}_i - {\bf s}_j| - r) \rangle,$$ where $\langle\cdot \rangle$ is the statistical average over all chain conformations; these distributions satisfy the normalization conditions $$\int_0^\infty dr\ P_{ij}(r) = 1.$$ They depend on the chosen length scale. As is standard in renormalization-group analyses of polymer behavior, the relevant quantities are the adimensional combinations $\hat{R}_g P_{ij}(r)$. For $L\to \infty$ they converge to universal, hence model-independent, functions $f_{ij}(\rho)$ with $\rho = r/\hat{R}_g$, which are normalized as $$\int_0^\infty d\rho\ f_{ij}(\rho) = 1 .$$ Note that, as usual, scaling functions depend only on the adimensional combination $\rho = r/\hat{R}_g$. In this paper we will also consider the adimensional intramolecular distribution function $$g_{\rm intra}({r}) = {2 \hat{R}^3_g \over n (n-1)} \sum_{i<j} \langle \delta^{(3)}({\bf s}_i - {\bf s}_j - {\bf r})\rangle. \label{gintra-def}$$ For large $L$, $g_{\rm intra}({r})$ converges to a universal function $G_{\rm intra}(\rho)$, $\rho=r/\hat{R}_g$, which is related to the bond-distribution functions defined above by $$G_{\rm intra}(\rho) = {1 \over 2 \pi n (n-1)} \sum_{i<j} {f_{ij}(\rho)\over \rho^2}. \label{Gintra-def}$$ Note that the ratio $R_{g,b}^2/\hat{R}_g^2$ is simply related to the second moment of $g_{\rm intra}(r)$ \[in the scaling limit to that of $G_{\rm intra}(\rho)$\]. For $L\to \infty$ we have $${R_{g,b}^2\over \hat{R}_g^2} = {(n-1)\over 2 n} \int \rho^2 G_{\rm intra}(\rho) d^3\brho. 
\label{Rg-Gintra}$$ Besides two-site distributions, one can define three-site correlation functions $$P_{i,jk}({\bf r}_1,{\bf r}_2) = \langle \delta^{(3)}({\bf s}_i - {\bf s}_j - {\bf r}_1) \delta^{(3)}({\bf s}_i - {\bf s}_k - {\bf r}_2) \rangle$$ — the corresponding adimensional combinations $\hat{R}_g^6 P_{i,jk}({\bf r}_1,{\bf r}_2)$ converge to universal functions of ${\bf r}_1/\hat{R}_g$ and ${\bf r}_2/\hat{R}_g$ — and, analogously, four-site, five-site, etc. correlations. As a check of the quality of our results we shall often consider the distribution of $R_{g,b}$. More precisely, for each polymer configuration we consider the corresponding radius $R_{g,b}$ and the adimensional ratio $R_{g,b}/{\langle\hat{R}_g^2\rangle}^{1/2}$, where $\langle\hat{R}_g^2\rangle$ is the average of the squared radius of gyration over the polymer configurations. The corresponding distribution $$P_{R,b}(q_b) = \left\langle \delta\left({R_{g,b}\over \sqrt{\langle\hat{R}_g^2\rangle}} - q_b\right) \right\rangle \label{distPRb}$$ is universal in the large-$L$ limit. Note that this distribution function cannot be written in terms of the bond-length distributions, but is instead a particular $n$-blob correlation since $R_{g,b}$ depends on the positions of all blobs. The coarse-grained model {#sec2.2} ------------------------ In the CGBM the basic object is a polyatomic molecule with $n$ atoms located at ${\bf t}_1,\ldots,{\bf t}_n$. All length scales are expressed in terms of $\hat{R}_g$, hence potentials and distribution functions depend on the adimensional combination $\brho = {\bf t}/\hat{R}_g$. The intramolecular potentials are determined by requiring all [*adimensional*]{} distributions to be identical in the polymer model and in the CGBM at zero density.
For instance, we require $$\langle \delta(|{\bf t}_i - {\bf t}_j|/\hat{R}_g - \rho) \rangle_{CGBM} = f_{ij}(\rho),$$ where $\langle \cdot \rangle_{CGBM}$ is the average over all single-chain CGBM configurations and $f_{ij}(\rho)$ are the universal functions defined above, which are computed in the polymer model. In principle, the determination of the intramolecular potential is straightforward. First, one determines the $n$-body blob distribution in the polymer model at zero density: $$P_n({\bf r}_{12},\dots,{\bf r}_{1n})= \left\langle \prod_{k=2}^{n}\delta^{(3)} \left({\bf s}_k - {\bf s}_1 - {\bf r}_{1k}\right)\right\rangle,$$ where the average is over all single-polymer conformations. The adimensional combination $\hat{R}_g^{3(n-1)}P_n$ converges for $L\to \infty$ to a universal distribution: $$\hat{R}_g^{3(n-1)} P_n({\bf r}_{12},\dots,{\bf r}_{1n}) = f_n\left(\brho_{12}, \ldots, \brho_{1n} \right),$$ where $\brho_{ij} = {\bf r}_{ij}/\hat{R}_g$. The CGBM intramolecular potential is then $$\beta V(\brho_1,\ldots, \brho_n) = -\log f_n\left(\brho_2 - \brho_1, \ldots, \brho_n - \brho_1\right), \label{pot-nbody}$$ where $\brho_i = {\bf t}_i/\hat{R}_g$. By definition, this choice ensures that the distribution of the $n$ atoms in the CGBM is identical to the distribution of the $n$ blobs in the polymer model. Hence the intramolecular structure is exactly reproduced. Note that potential (\[pot-nbody\]) is an intrinsically $n$-body interaction and thus there is no natural method to represent it as a sum of two-body, three-body, etc., terms. Because of the universality of the function $f_n$, the potential is independent of the polymer model and is valid for any polymeric system under good-solvent conditions. The radius of gyration ${R}_{g,CGBM}$ of the CGBM molecule differs from the polymer radius of gyration ${R}_g$ but agrees instead with ${R}_{g,b}$. It is important to take this difference into account when comparing finite-density results.
For polymers, the behavior is universal once densities are expressed in terms of the polymer volume fraction $$\Phi = {4\pi\over 3} \hat{R}_g^3 {N\over V}, \label{Phip-def}$$ where $N$ is the number of polymers contained in the volume $V$. Full-monomer results should be compared with results obtained in the CGBM at volume fractions $$\Phi_b = {4\pi\over 3} \hat{R}_g^3 {N_b\over V}, \label{Phib-def}$$ where $N_b$ is the number of CGBM molecules. Note that $\hat{R}_g$ and not $\hat{R}_{g,b}$ appears in the definition of $\Phi_b$. Since $\hat{R}_{g,b}/\hat{R}_g$ converges to 1 as $n$ increases, for $n$ large, say $n\gtrsim 30$, this conceptual difference is not relevant in practice. In our case, instead, since we consider $n=4$, it is crucial to use the correct definition, that is, the quantity $\Phi_b$. Once the intramolecular potentials are determined, one must determine the intermolecular potentials, which must be such as to reproduce the potentials of mean force in the polymer model. Note that, in order to have an exact mapping of the polymer model onto the CGBM, not only should pair potentials be considered, but also three-body, four-body, etc. interactions should be included.[@Bolhuis:2001p288; @Pelissetto:2005p296] However, as we already discussed in the introduction, as $n$ increases, these many-body interaction potentials are expected to become smaller, so that the CGBM with only pair potentials should be accurate in a density interval which widens with increasing $n$. Determination of the four-blob CGBM intramolecular potentials {#sec.2.2} ------------------------------------------------------------- In order to have an exact mapping of the polymeric system onto the $n$-blob CGBM, one should consider an $n$-body intramolecular potential, which, for $n>2$, can be expressed in terms of $3(n-2)$ scalar combinations of the positions of the blobs because of rotational and translational invariance.
The complexity increases rapidly with $n$ and for this reason we decided to consider the case $n=4$, which allows us to limit the number of approximations needed and, at the same time, allows us to go beyond the dilute regime up to $\Phi\approx 2$-3. However, even for $n$ as small as 4, an exact determination of the intramolecular potential requires considering a function of 6 independent variables, which is far too complex in practice. Thus we have used a limited set of interactions. The intramolecular interactions have been modelled by introducing six different potentials, each of them depending on a single scalar variable. This choice is arbitrary, but, as we will show in the following, it is particularly convenient and works quite well. First, we consider a set of bonding pair potentials: atoms $i$ and $j$ of the tetramer interact with a pair potential $V_{ij}(\rho)$ with $\rho = |{\bf t}_i - {\bf t}_j|/\hat{R}_g$. Because of symmetry we have $V_{13}(\rho) = V_{24}(\rho)$ and $V_{12}(\rho) = V_{34}(\rho)$, so that there are only four independent potentials to be determined. Then, we consider a bending-angle potential $V_b(\cos \beta)$ and a torsion-angle potential $V_t(\theta)$, where $\beta$ and $\theta$ are defined as $$\begin{aligned} && \cos\beta_i = {\Delta {\bf t}_i \cdot \Delta {\bf t}_{i+1} \over |\Delta {\bf t}_i| |\Delta {\bf t}_{i+1}|}, \label{bending} \\ && \cos\theta_i = {(\Delta {\bf t}_i \times \Delta {\bf t}_{i+1}) \cdot (\Delta {\bf t}_{i+1} \times \Delta {\bf t}_{i+2}) \over |\Delta {\bf t}_i \times \Delta {\bf t}_{i+1}| |\Delta {\bf t}_{i+1} \times \Delta {\bf t}_{i+2}| }, \label{torsion}\end{aligned}$$ with $\Delta {\bf t}_i = {\bf t}_{i+1} - {\bf t}_i$. Note that in the tetramer there are two bending angles, which are equivalent by symmetry, and a single torsion angle. This particular form of the potential set is inspired by the usual modelling of bonded interactions in macromolecules. 
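For concreteness, the angle definitions (\[bending\]) and (\[torsion\]) can be evaluated from the atom coordinates as follows (a minimal sketch, with function names of our choosing):

```python
import numpy as np


def chain_angles(t):
    """cos(beta_i) and cos(theta_i) of Eqs. (bending)-(torsion) for bead positions t."""
    d = np.diff(t, axis=0)                      # bond vectors Delta t_i
    u = d / np.linalg.norm(d, axis=1)[:, None]  # unit bond vectors
    cos_beta = (u[:-1] * u[1:]).sum(axis=1)     # bending angles
    c = np.cross(d[:-1], d[1:])                 # normals to consecutive bond planes
    c /= np.linalg.norm(c, axis=1)[:, None]     # (degenerate if two bonds are collinear)
    cos_theta = (c[:-1] * c[1:]).sum(axis=1)    # torsion angle(s)
    return cos_beta, cos_theta


# a planar "cis" square tetramer: both bending angles are 90 degrees, the torsion angle is 0
t = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
cos_beta, cos_theta = chain_angles(t)
```

For a tetramer ($n=4$) the function returns the two (symmetry-equivalent) bending angles and the single torsion angle mentioned in the text.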
However, in that context one only considers a bonding potential between atoms which are first neighbors along the chain, a bending and a torsional term. Instead, our parametrization includes interactions between atoms that are not neighbors along the chain, thereby taking into account to some extent cross-correlations among different degrees of freedom. We note that the bending potential and the torsion potential involve three and four atoms, respectively, and thus allow us to introduce some of the three-body and four-body interactions present in the exact parametrization. Since we are using a limited set of interactions, not all distributions of the internal degrees of freedom can be exactly reproduced by the CGBM. We must therefore choose the distributions which we wish to be identical in the polymer and in the CGBM case. Given our choice of interaction potentials, it is natural to use the adimensional bond-length distributions $\hat{R}_g P_{ij}(r)$ and the distributions of the bending and torsion angle \[in the blob representation of the polymer model, these angles are defined by replacing ${\bf t}$ with ${\bf s}$ in Eqs. (\[bending\]) and (\[torsion\])\], which are particular three-body and four-body correlation functions. If we indicate collectively the six potentials to be determined with $V_{i}(x_i)$, the (adimensional) distributions of the $x_i$ variables with $P_i(x_i)$ in the CGBM and with $P_{i,FM}(x_i)$ in the full-monomer case, the potentials should be such that $P_{i}(x_i) = P_{i,FM}(x_i)$. The universal (i.e., model-independent) distributions $P_{i,FM}(x_i)$ in the polymer case have been determined by performing simulations of self-avoiding walks on a cubic lattice. To detect scaling corrections, we consider chains of length $L=2100$, 4100 (the corresponding blobs have $L/4 = 525, 1025$ monomers, respectively). The (adimensional) distributions obtained in the two cases agree within errors, indicating the absence of finite-length effects. 
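The matching condition $P_{i}(x_i) = P_{i,FM}(x_i)$ is enforced iteratively: the standard IBI update adds $k_BT\,\ln[P_i(x)/P_{i,FM}(x)]$ to the current potential and then re-measures $P_i$ by simulating the coarse-grained model. A minimal sketch of the update on a grid (in this toy check the simulated histogram is replaced by exact one-dimensional Boltzmann statistics, for which a single update already recovers the target potential; the real calculation needs several iterations because the degrees of freedom are coupled):

```python
import numpy as np

kT = 1.0


def ibi_step(V, P_model, P_target):
    """One iterative-Boltzmann-inversion update of a tabulated potential."""
    out = V.copy()
    mask = (P_model > 0) & (P_target > 0)
    out[mask] += kT * np.log(P_model[mask] / P_target[mask])
    return out


# toy target: a skewed double-well potential on a grid
x = np.linspace(0.1, 3.0, 300)
V_true = (x - 1.0) ** 2 - 0.5 * np.exp(-8.0 * (x - 2.0) ** 2)
P_target = np.exp(-V_true / kT)
P_target /= P_target.sum()

V = np.zeros_like(x)                # initial guess: flat potential
P_model = np.exp(-V / kT)
P_model /= P_model.sum()            # exact Boltzmann histogram of the current model
V = ibi_step(V, P_model, P_target)  # V now equals V_true up to an additive constant
```

In practice the initial guess is the potential of mean force $-\ln P_{i,FM}$, which makes the first iterate much closer to the fixed point.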
We determine the potentials of the CGBM by using the Iterative Boltzmann Inversion (IBI) scheme. [@Schommers:1983p2118; @MullerPlathe:2002p2127; @Reith:2003p2128] In this approach the effective interactions which reproduce the target structural quantities are determined iteratively. The potentials of mean force of the corresponding full-monomer distributions, $-\ln P_{i,FM}(x_i)$, have been chosen as initial guesses for all interactions except for $V_{14}(r)$. For $V_{14}(r)$ we have assumed a simple Gaussian potential: we use a Gaussian approximation of the potential of mean force between two polymer centers of mass, rescaling the width of the Gaussian with the ratio of the radii of gyration of the blob and of the entire chain. At the end of the optimization procedure, the bond and angle distributions are reproduced quite precisely, see figures \[dist-radial.fig\] and \[angoli.fig\]. The potentials obtained are reported in figure \[potenziali.fig\]. The pair potentials $V_{12}(\rho)$ and $V_{23}(\rho)$ are very similar, indicating that end effects are not very important. They have a minimum for $\rho\approx 0.5$ and are very soft at the origin: $V(0) - V(0.5) \approx 0.9$-$1.0\ k_BT$. For $\rho\to\infty$ they increase quite rapidly and for $2\lesssim \rho \lesssim 3$ they approximately behave as $\rho^{2.2}$ ($V_{12}$) and $\rho^{2.4}$ ($V_{23}$). The potential $V_{13}(\rho)$ between next-nearest neighbors is monotonically decreasing and hence it penalizes configurations in which the two atoms are close. Finally, $V_{14}(\rho)$ appears to be irrelevant for $\rho\gtrsim 1$. As for the bending-angle potential, it penalizes configurations with $\beta < 90^\circ$, while it is essentially flat for $\beta > 90^\circ$: the potential has a minimum for $\beta\approx 100^\circ$ and $V_b(180^\circ) - V_b(100^\circ) \approx 0.14 k_BT$.
Finally, the torsion potential turns out to be quite flat: it increases with $\theta$ and changes only by $0.3 k_BT$, going from $0^\circ$ to $360^\circ$. The results for the potentials are quite interesting since they indicate the presence of an effective hierarchy among the different contributions. The pair potentials between neighbors are the most important. For instance, for typical configurations, say, for $0.4 \lesssim \rho_{12},\rho_{23} \lesssim 1.5$, see figure \[dist-radial.fig\], the potentials $V_{12}(\rho_{12})$ and $V_{23}(\rho_{23})$ vary significantly, by 4$k_BT$-5$k_BT$. Instead, for typical distances $0.7\lesssim \rho_{13} \lesssim 2.2$, the potential $V_{13}(\rho_{13})$ varies much less, approximately by 2$k_B T$, while in the typical interval $\rho_{14}\gtrsim 1$, $V_{14}(\rho_{14})$ varies only by 0.03$k_BT$. Clearly, the relevance of the interactions decreases as the chemical distance between the atoms increases, even though interactions between distant atoms can never be neglected, otherwise one would model a random-walk chain and not a polymer under good-solvent conditions. The bending-angle potential varies at most by $k_BT$ and is thus less relevant than the bonding potentials, while the torsion potential only provides a small correction. This seems to indicate that the relevance of the interactions decreases with the number of atoms involved, so that, when the number $n$ of atoms increases, one can safely neglect higher-body interactions. It is important to stress that the pair potentials $V_{12}(\rho)$ and $V_{23}(\rho)$ are somewhat different from the potentials one would obtain by using the transferability hypothesis as suggested by Pierleoni [*et al.*]{}[@Pierleoni:2007p193] If we use Eq. 
(\[scaling-rg2\]) to relate $\hat{r}_g$ to $\hat{R}_{g}$, we would obtain a transferability potential (see the expression reported in the caption of figure 1 of Pierleoni [*et al.*]{}[@Pierleoni:2007p193]) $$V_{tr}(\rho) = 1.92 \exp(-3.85 \rho^2) + 0.534 (2.19 \rho - 0.73)^2,$$ where $\rho = r/\hat{R}_g$. The minimum of this potential occurs for $\rho\approx 0.67$, to be compared with $\rho\approx 0.5$ of $V_{12}$ and $V_{23}$. Overlaps are more suppressed since $V_{tr}(0) - V_{tr}(0.67) \approx 1.55k_BT$ (for our potentials we find $0.9k_BT$-$1.0k_BT$). Moreover, the potential $V_{tr}(\rho)$ increases much less than $V_{12}(\rho)$ or $V_{23}(\rho)$ as $\rho$ increases. For instance, for $\rho \approx 1.5$, a value which occurs quite frequently, see figure \[dist-radial.fig\], we have $V_{tr}(\rho) \approx 3.5k_BT$, while $V_{12}(\rho) \approx 4.9k_BT$, $V_{23}(\rho) \approx 6.1k_BT$. Determination of the CGBM intermolecular potentials {#sec2.3} --------------------------------------------------- As for the intermolecular potentials, we have made some drastic simplifications. First, we do not consider $n$-body interaction terms, which, as we already mentioned, are important only for densities $\Phi\gtrsim n^{3\nu - 1}$. Then, we consider a single intermolecular pair potential $W(\rho)$: the atoms interact with the same potential, irrespective of their positions along the chains. Such a potential has been obtained by requiring the CGBM to reproduce the center-of-mass intermolecular distribution function. Indeed, define in the polymer model $$g_{CM}(r) = \langle e^{-\beta U_{12}} \rangle_{0,\bf r},$$ where $\langle\cdot \rangle_{0,\bf r}$ indicates the average over two polymers, the centers of mass of which are in the origin and in $\bf r$, respectively, and $U_{12}$ is the intermolecular energy. In the scaling limit $L\to\infty$, $g_{CM}(r)$ converges to a universal function $f_{CM}(\rho)$, $\rho = r/\hat{R}_g$.
The pair potential has been determined so that $$g_{CM,CGBM}(\rho) = f_{CM}(\rho), \label{eq-gCM}$$ where $g_{CM,CGBM}(\rho)$ is the corresponding distribution function in the CGBM. Note that the second virial coefficient $B_2$ is related to $g_{CM}(r)$ by $$B_2 = {1\over 2} \int d^3 {\bf r} [1 - g_{CM}(r)] = 2 \pi \int r^2 dr [1 - g_{CM}(r)]. \label{defB2}$$ Hence, equality (\[eq-gCM\]) guarantees that the adimensional combination $A_2 \equiv B_2/\hat{R}_g^3$, hence the thermodynamics in the small-density limit, agrees in the CGBM and in the polymer model. The potential $\beta W(\rho)$ has been parametrized as $$\beta W(\rho) =c_1 \exp (-c_2 \rho^2), \label{Wdef}$$ in terms of two unknown parameters $c_1$ and $c_2$. They have been determined following the approach of Akkermans [*et al.*]{}[@Akkermans:2001p1716; @Akkermans:2001p6210] Requiring the model to reproduce the polymer scaling function $\rho^2 f_{CM}(\rho)$ as accurately as possible, we obtain the optimal values $c_1 = 1.66$ and $c_2 = 3.9$. For these parameter values the model with potential (\[Wdef\]) has an intermolecular pair distribution function which agrees quite precisely with the corresponding polymer quantity, see figure \[grcm0.fig\]. The result depends on the parametrization and we cannot exclude that a different parametrization with the same number of parameters gives results of better quality. Potential (\[Wdef\]) differs only slightly from the intramolecular potential $V_{14}(\rho)=1.86\exp(-4.08425\rho^2)$. Interactions between the atoms at the ends of the chain and interactions between atoms that belong to different chains are therefore quite similar. It is interesting to compare our result with the one obtained by using the transferability hypothesis as suggested by Pierleoni [*et al.*]{}:[@Pierleoni:2007p193] $\beta W_{tr}(\rho) = 1.92 \exp(-3.85 \rho^2)$. The range of the potential is the same, but the potential we obtain is somewhat softer.
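To make the closing comparison concrete, the short sketch below (our own illustration, using the coefficients quoted above) tabulates the two Gaussians: since $c_1 = 1.66 < 1.92$ and $c_2 = 3.9 > 3.85$, the fitted potential lies below the transferability one at every distance.

```python
import math

def w(rho):
    # Fitted intermolecular blob-blob potential, eq. (Wdef), in units of k_B T.
    return 1.66 * math.exp(-3.9 * rho**2)

def w_tr(rho):
    # Transferability-hypothesis potential of Pierleoni et al.
    return 1.92 * math.exp(-3.85 * rho**2)

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho}: W = {w(rho):.3f}, W_tr = {w_tr(rho):.3f}")
```

At overlap, for instance, $W(0)=1.66\,k_BT$ versus $W_{tr}(0)=1.92\,k_BT$, which is what "somewhat softer" amounts to quantitatively.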
[c]{}\ Comparison of CGBM and polymer results {#sec.3} ====================================== In order to understand how well the tetramer model reproduces the polymer behavior we have performed extensive simulations of the tetramer and of a polymer model at zero density and at volume fractions $\Phi = 1.09$, 4.36, 8.72. Since CGBM and polymer results should be compared taking $\Phi = \Phi_b$, see (\[Phip-def\]) and (\[Phib-def\]), we drop the suffix and thus $\Phi$ also refers to $\Phi_b$. At finite density polymers have been modelled by means of the Domb-Joyce (DJ) model with $w =0.505838$, a value which is close to the optimal one for which no leading scaling corrections are present (see Caracciolo [*et al.*]{}[@Caracciolo:2006p587] for details on the model). It allows us to determine precisely the universal, model-independent scaling functions by using chains of moderate length. We consider walks of length $L=600$ and $L=2400$, verifying the practical absence of scaling corrections. As in previous work[@Pelissetto:2008p1683], we considered different box sizes, finding negligible size effects when the number of chains in the box exceeds 100. Simulations have been performed using the algorithm described in Pelissetto.[@Pelissetto:2008p1683] Zero-density ------------ By construction, the tetramer CGBM reproduces the bond-length distributions. As we have already remarked in Sec. \[sec2.1\], see (\[Gintra-def\]) and (\[Rg-Gintra\]), the ratio $\hat{R}_{g,b}/\hat{R}_g$ can be expressed in terms of these distributions, hence this ratio should assume the same value in the tetramer and in the polymer case. Numerically, we find $\hat{R}_{g,b}/\hat{R}_g = 0.89093(7)$ for the tetramer and $\hat{R}_{g,b}/\hat{R}_g = 0.89210(11)$ for the polymer case. The difference is approximately 0.1%, which shows how accurate the intramolecular potentials we determined are. 
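The quoted figure is simple arithmetic on the two ratios just given; for completeness (a trivial check, ours):

```python
# Relative difference between the tetramer and polymer values of
# hat{R}_{g,b} / hat{R}_g quoted above.
tet, poly = 0.89093, 0.89210
diff_pct = 100 * abs(tet - poly) / poly
print(f"{diff_pct:.2f}%")   # ~0.13%, i.e. of order 0.1%
```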
Moreover, not only does $\hat{R}_{g,b}/\hat{R}_g$ agree, but its distribution (\[distPRb\]) is also the same for polymers and tetramers, see figure \[dist-Rgb-ZD\]a). This is a nontrivial check, since this distribution is not directly related to the bond-length distributions, nor to those of the bending and torsion angles. Clearly, the tetramer correctly models the shape and size of a polymer at zero density. [c]{}\ Since we have matched the center-of-mass distribution function to determine the intermolecular potential, the tetramer CGBM should give the correct second virial coefficient. If we expand the compressibility factor as $$Z = {\Pi\over k_B T c} = 1 + B_2 c + B_3 c^2 + O(c^3),$$ the quantity $A_2 = B_2/R_g^3$ is universal. An accurate estimate is [@Caracciolo:2006p587] $A_2 = 5.500(3)$. For the tetramer we obtain $A_{2,t} = 5.597(1)$. The difference is approximately 1.8% and is representative of the level of precision with which the tetramer model reproduces the center-of-mass distribution function. Much more interesting is the comparison of the third virial coefficient, since it provides an indication of the accuracy with which the tetramer model reproduces the polymer thermodynamics in the dilute regime and also of the importance of the neglected three-body forces. The universal combination $A_3 = B_3/R_g^6$ was computed by Caracciolo [*et al.*]{}[@Caracciolo:2006p587] finding $A_3 = 9.80(2)$. In order to determine $A_3$, two contributions had to be computed. One contribution is the standard one, the only one present in monoatomic fluids and in fluids of rigid molecules, $A'_3 \approx 10.64$, while the second one is a flexibility contribution $A_{3,fl} \approx -0.84$ (it corresponds to $-T_1 \hat{R}_g^{-6}$ in the notations of Caracciolo [*et al.*]{}[@Caracciolo:2006p587]). The combination $A_3$ as well as the two contributions $A'_3$ and $A_{3,fl}$ are universal, hence it makes sense to compare them with the corresponding tetramer results.
We obtain $$A_{3,t} = 9.99(2), \qquad A'_{3,t} = 10.57(2), \qquad A_{3,fl,t} = -0.581(5).$$ The tetramer reproduces quite reasonably the third virial coefficient, the difference being approximately 2%. Note that much of the discrepancy is due to $A_{3,fl}$: the tetramer is more rigid than the polymer. It is interesting to compare these results with those obtained by using the single-blob model in which polymers are represented by monoatomic molecules interacting by means of density-independent potentials. [^2] If we use the accurate pair potential of Pelissetto [*et al.*]{}[@Pelissetto:2005p296] we obtain $A_3 = 7.844(6)$ (of course here $A_{3,fl} = 0$) which deviates by 20% from the polymer result. Hence, the tetramer model represents a significant improvement for the thermodynamics. [c]{}\ As a further check we compute the effective three-body potential of mean force defined by [@Bolhuis:2001p288; @Pelissetto:2005p296] $$\beta V_3({\bf r}_{12},{\bf r}_{13},{\bf r}_{23}) = - \ln {\langle e^{-\beta U_{12} - \beta U_{13} - \beta U_{23}} \rangle_{{\bf r}_{12},{\bf r}_{13},{\bf r}_{23}} \over \langle e^{-\beta U_{12}}\rangle_{{\bf r}_{12}} \langle e^{-\beta U_{13}}\rangle_{{\bf r}_{13}} \langle e^{-\beta U_{23}}\rangle_{{\bf r}_{23}} };$$ here $U_{ij}$ is the intermolecular potential energy between tetramers $i$ and $j$ and the average $\langle \cdot \rangle_{{\bf r}_{12},{\bf r}_{13},{\bf r}_{23}}$ is over triplets of tetramers such that ${\bf r}_{ij} = {\bf r}_{i} - {\bf r}_{j}$, where ${\bf r}_{i}$ is the position of the center of mass of tetramer $i$. We computed $\beta V_3({\bf r}_{12},{\bf r}_{13},{\bf r}_{23})$ for equilateral triangular configurations such that ${r}_{12}={r}_{13} = {r}_{23} = r$. The result is reported in figure \[threebody\] and compared with the analogous quantity computed in full-monomer simulations. 
At variance with the single-blob model for which $\beta V_3 = 0$, the tetramer model reproduces the polymer $\beta V_3$ quite reasonably: differences — the tetramer potential is slightly more attractive — are observed for $\rho = r/\hat{R}_g\lesssim 1$, but they are only significant for $\rho \lesssim 0.5$, i.e., when the tetramers are very close. This is consistent with the analysis of the third virial coefficient: in the dilute limit three-body interactions are correctly reproduced by the tetramer model, without the need of introducing a three-body potential among tetramer atoms. Semidilute regime ----------------- [c]{}\ As we have discussed in the introduction, the tetramer model is expected to reproduce the full-monomer results up to $\Phi\approx 2$, representing a significant improvement with respect to the single-blob model which shows large deviations already for $\Phi = 1$. To check this expected behavior we compare tetramer and full-monomer simulations at $\Phi = 1.09$, 4.36, and 8.72. Let us begin with the structural properties. In figure \[gintra\] we report the adimensional intramolecular distribution function $g_{\rm intra}(r)$. For $\Phi = 1.09$ the agreement between the tetramer and the full monomer results is excellent. However, as $\Phi$ increases, deviations are observed for $\rho = r/\hat{R}_g\lesssim 1$. For $\Phi\gtrsim 4$ the tetramer is more swollen than the polymer: the probability for two blobs to be at a given distance $\rho\lesssim 1$ is significantly smaller in the tetramer than in the full-monomer chain. These results are further confirmed by the results for the radius of gyration. 
For the tetramer we have $${R_{g,b}(\Phi)\over \hat{R}_g} = \cases{ 0.85636(4) & $\qquad \Phi = 1.09,$ \cr 0.8181(2) & $\qquad \Phi = 4.36,$ \cr 0.8047(1) & $\qquad \Phi = 8.72,$ }$$ to be compared with the full-monomer results $${R_{g,b}(\Phi)\over \hat{R}_g} = \cases{ 0.8523(2) & $\qquad \Phi = 1.09,$ \cr 0.7823(2) & $\qquad \Phi = 4.36,$ \cr 0.7346(6) & $\qquad \Phi = 8.72.$ }$$ For $\Phi = 1.09$ the agreement is very good, consistently with the results reported in figure \[gintra\]. As $\Phi$ increases, however, the tetramer is more rigid than the polymer and $R_{g,b}(\Phi)/\hat{R}_g$ is larger in the tetramer than in the polymer case. The same conclusions are reached by looking at the distribution of $R_{g,b}$, see figure \[dist-Rgb-ZD\]. For $\Phi = 1.09$ the agreement is excellent, while for $\Phi = 8.72$ the tetramer distribution is slightly shifted towards larger values of $R_{g,b}$. It is also interesting to compare the results for the bending and torsion angles reported in figure \[angoli.fig\]. The distributions appear to have a tiny dependence on $\Phi$ and to be reasonably reproduced by the tetramer for all values of $\Phi$. For instance, for the largest value of $\Phi$, $\Phi = 8.72$, we have for polymers $P_b(\cos\beta = -1) \approx 0.93$, $P_t(\theta = 0) \approx 0.346$, to be compared with 0.88 and 0.354, respectively, in the tetramer case. $\Phi$ $Z_t(\Phi)$ $Z_p(\Phi)$ -------- ------------- ------------- 0.054 1.07363(3) 1.0725 0.135 1.18993(4) 1.1871 0.27 1.39852(6) 1.3929 0.54 1.8499(1) 1.8536 1.09 2.9090(1) 2.9589 2.18 5.2660(2) 5.6342 4.35 10.2056(4) 12.229 6.53 15.2279(1) 20.019 8.72 20.2811(1) 28.716 : Compressibility factor for the tetramer model, $Z_t(\Phi)$, and for polymers, $Z_p(\Phi)$. Polymer results are taken from Pelissetto [@Pelissetto:2008p1683]. []{data-label="Z-tetramer"} [c]{}\ Let us now consider the thermodynamics. 
For this purpose we computed the compressibility factor $Z = \beta\Pi/c$ for the tetramer using the molecular virial method [@Ciccotti:1986p2263; @Akkermans:2004p2261] ($c=N/V$ is the number concentration). As $Z$ is dimensionless, polymer and tetramer results at the same value of $\Phi$ can be directly compared. Estimates are reported in Table \[Z-tetramer\]. For $\Phi \lesssim 1$ the tetramer $Z$ is very close to the polymer prediction: for $\Phi = 1.09$ it differs by 2% from the correct result. As $\Phi$ increases, however, differences increase and the tetramer model underestimates the correct pressure. In figure \[fig:eostetra\] we compare $Z(\Phi)$ for the tetramer with the corresponding expression for polymers.[@Pelissetto:2008p1683] At the scale of the figure, good agreement is observed up to $\Phi \approx 2$. For larger densities, $Z(\Phi)$ in the tetramer increases more slowly than in the polymer case. Indeed, while in polymers we expect $Z\sim \Phi^{1/(3\nu-1)}\sim \Phi^{1.31}$ for large $\Phi$, for the tetramer $Z$ is expected to increase only linearly with $\Phi$ (since the potential is soft, for $\Phi \to \infty$ the random-phase approximation should become exact [@Louis:2000p289]). As can be seen from figure \[fig:eostetra\], the tetramer model is significantly better than the single-blob model,[^3] in which each polymer is represented by a single atom. At $\Phi = 1.09$ such a model gives $Z = 2.70(1)$, which underestimates $Z$ by 8%, much more than the tetramer model (at this density the error on $Z$ is 2%). The single-blob model gives a value for $Z$ which differs from the polymer one by less than 2% only up to $\Phi \approx 0.38$, i.e. up to densities which are a factor-of-three smaller than the corresponding ones for the tetramer. This improvement confirms the scaling argument we presented in the introduction.
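The different large-$\Phi$ growth laws can be read off directly from the entries of Table \[Z-tetramer\]: an effective exponent $d\ln Z/d\ln\Phi$ computed from the two largest tabulated densities (our own post-processing of the quoted values) is already close to 1 for the tetramer, while for polymers it approaches the asymptotic 1.31.

```python
import math

# (Phi, Z) pairs at the two largest densities of Table [Z-tetramer].
tetramer = [(4.35, 10.2056), (8.72, 20.2811)]
polymer  = [(4.35, 12.229),  (8.72, 28.716)]

def eff_exponent(data):
    # Effective exponent d ln Z / d ln Phi between two data points.
    (p1, z1), (p2, z2) = data
    return math.log(z2 / z1) / math.log(p2 / p1)

print(f"tetramer: {eff_exponent(tetramer):.2f}")  # ~0.99: Z grows ~linearly
print(f"polymer:  {eff_exponent(polymer):.2f}")   # ~1.23, heading towards 1.31
```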
As we explained in the introduction, the multiblob model should give the correct thermodynamics up to a polymer volume fraction $\Phi_{\rm max}$ which scales as $n^{3\nu-1}$. If we compare the tetramer model with the single-blob one, we thus expect the density range in which the model is predictive to increase by $4^{3\nu-1}\approx 2.9$, which is indeed what we find. [c]{}\ Let us finally compare the center-of-mass distribution function $g_{CM}(\rho)$. It is reported in figure \[fig:gr\] for $\Phi = 1.09$ and $\Phi = 4.36$. In the first case, the tetramer result is on top of the polymer results. For $\Phi = 4.36$ small discrepancies at short distance ($\rho\lesssim 0.5$) are present. For instance, for the tetramer we have $g_{CM}(0) = 0.591(6)$, $g_{CM}(0.5) = 0.688(1)$ at $\Phi = 4.36$, respectively, to be compared with the polymer results $g_{CM}(0) = 0.550(5)$, $g_{CM}(0.5) = 0.660(2)$. These small differences are responsible for the differences in the estimates of $Z$ since $g_{CM}$ is related to $Z$ by the compressibility rule[^4] $$\left({\partial cZ\over \partial c}\right)^{-1} = 1 + c \int (g_{CM}(r)-1) d^3{\bf r}.$$ Note that, even though the thermodynamics is poorly reproduced, also the single-blob model gives an estimate of $g_{CM}(r)$ which is only slightly different from the polymer one. The largest differences are observed for $\rho\to 0$. At overlap we obtain $g_{CM}(0) = 0.344(1)$ and $g_{CM}(0) = 0.613(1)$ for $\Phi = 1.09$ and $\Phi = 4.36$, to be compared with the polymer results $0.307(9)$ and $0.550(5)$. Comparison with other models {#sec.4} ============================ In the previous section, we discussed the finite-density behavior of the tetramer model and we showed that it is quite accurate, for both structural and thermodynamic properties, up to $\Phi \approx 2$, in agreement with the multiblob argument of Pierleoni [*et al.*]{} [@Pierleoni:2007p193] presented in the introduction. 
It represents a significant improvement with respect to the single-blob model, which is unable to reproduce structural properties and reproduces the thermodynamics only deep in the dilute regime (the compressibility factor $Z$ for the single-blob model differs from the polymer one by less than 5% only for $\Phi\lesssim 0.75$). Here we wish to investigate the structural and thermodynamic behavior of two other models discussed in the literature. Definition of the models ------------------------ First, we consider the model introduced by Pierleoni [*et al.*]{} [@Pierleoni:2007p193] — we name it model M1. A CGBM with $n$ blobs is a chain in which neighboring atoms belonging to the same chain interact with an intramolecular potential $$V_{\rm bond}(r) = A e^{-\alpha r^2/\hat{r}_g^2} + k (r/\hat{r}_g - r_0)^2; \label{Vbond-M}$$ atoms that belong to the same chain but are not neighbors, or belong to different chains interact with a potential given by $$V_{\rm non-bond}(r) = A e^{-\alpha r^2/\hat{r}_g^2}, \label{Vnonbond-M}$$ where $A$ and $\alpha$ are the same as in (\[Vbond-M\]). In these expressions $\hat{r}_g$ is the average zero-density radius of gyration of the blobs and sets the length scale. The model depends on several constants, which can be easily determined in the dimer case, i.e., for $n=2$ (see caption of figure 1 in Pierleoni [*et al.*]{}[@Pierleoni:2007p193]): $A = 1.92$, $\alpha = 0.80$, $k = 0.534$, and $r_0 = 0.730$. To extend the model to values $n > 2$, Pierleoni [*et al.*]{} [@Pierleoni:2007p193] made the transferability hypothesis: equations (\[Vbond-M\]) and (\[Vnonbond-M\]) hold for any $n$, the $n$-dependence being completely taken into account by the radius of gyration of the blob. As discussed in Sec. \[sec2.2\], when comparing the CGBM results to the polymer ones, one should use the radius of gyration $\hat{R}_g$ of the reference polymer model and not the radius of gyration $\hat{R}_{g,b}$ of the CGBM. 
The radius $\hat{R}_g$ (or rather the ratio $\hat{R}_g/\hat{r}_g$, since $\hat{r}_g$ is the basic length scale in this approach) can be determined by using two different routes. As suggested by Pierleoni [*et al.*]{},[@Pierleoni:2007p193] one can determine $\hat{R}_{g,b}/\hat{r}_g$ for the CGBM and then use (\[Rg-Rgb\]). Alternatively, one can use (\[scaling-rg2\]). If the model were a good CGBM, these two routes would give the same result, and indeed in the tetramer case they do. Instead, for model M1 we observe quite large differences. For instance, for $n=30$, we find $\hat{R}_g^2 \approx 41.8 \hat{r}_g^2$ if we use $\hat{R}_g^2 = \hat{R}_{g,b}^2 + \hat{r}_g^2$ and $\hat{R}_g^2 \approx 51.4 \hat{r}_g^2$ if we use $\hat{R}_g^2/ \hat{r}_g^2 = n^{2\nu}/1.06$. These differences do not disappear as $n\to \infty$. An analysis of M1 results with $n\le 600$ gives for $n\to \infty$ the scaling behavior $${\hat{R}_{g,b}^2\over \hat{r}_g^2} = A n^{2\nu}, \qquad A = 0.78(3), \label{scalingRgb-M1}$$ which is not compatible with (\[scaling-rg2\]). In this paper we compare the results obtained by using three different “polymer" radii of gyration: $$\begin{aligned} \hat{R}_{g,1}/\hat{r}_g &=& \hat{R}_{g,b}/\hat{r}_g , \label{Rg1}\\ \hat{R}_{g,2}/\hat{r}_g &=& \sqrt{\hat{R}^2_{g,b}/\hat{r}^2_g + 1} , \label{Rg2} \\ \hat{R}_{g,3}/\hat{r}_g &=& n^\nu/\sqrt{1.06} . \label{Rg3}\end{aligned}$$ Note that, for large $n$, definitions $\hat{R}_{g,1}$ and $\hat{R}_{g,2}$ are equivalent. On the other hand, as we discussed, definition $\hat{R}_{g,3}$ differs significantly from the others for any $n$, including the limit $n\to\infty$. Recently, Coluzza [*et al.*]{} [@Coluzza:2011p1723] suggested that model M1 should not be considered as a CGBM, but rather as a generic polymer model in good-solvent conditions, so that $\hat{R}_{g,b}$ should be used as reference scale.
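The $n=30$ discrepancy between the two routes is easy to reproduce numerically. In the sketch below (ours), we assume the standard good-solvent exponent $\nu\approx 0.5876$ and take the measured value $\hat{R}_{g,b}^2/\hat{r}_g^2 = 40.83$ for model M1 with $n=30$ quoted in Table \[table\_A2A3\_M1\]; all lengths are in units of $\hat{r}_g$.

```python
# Two routes to the zero-density polymer radius of gyration for model M1
# with n = 30 blobs (nu ~ 0.5876 assumed; rgb2 is the measured
# R_{g,b}^2 / r_g^2 quoted in the paper).
nu = 0.5876
n = 30
rgb2 = 40.83

rg2_route1 = rgb2 + 1.0              # route 1: R_g^2 = R_{g,b}^2 + r_g^2
rg2_route2 = n ** (2 * nu) / 1.06    # route 2: eq. (scaling-rg2)

print(rg2_route1, rg2_route2)        # ~41.8 vs ~51.4: the two routes disagree
```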
In the following we shall mainly focus on the first and third definition and we shall label the corresponding results as (M1,1) and (M1,3), respectively. A proper definition of $\hat{R}_g$ is relevant for two purposes: first, length distributions are universal only if one expresses the lengths in terms of $\hat{R}_g$ (the relevant scale is $\rho = r/\hat{R}_g$); second, at finite density results should be compared at the same value of the polymer volume fraction $\Phi$ defined in (\[Phip-def\]). Changing the definition of $\hat{R}_g$ changes the definition of both $\rho$ and $\Phi$, hence it is crucial to specify which $\hat{R}_g$ one is using. Note that Coluzza [*et al.*]{} [@Coluzza:2011p1723] introduced a complex procedure to compare CGBM and polymer results. Their procedure is fully equivalent to the one we have discussed above: to analyze finite-density results, one must compare the results at the same value of the adimensional volume fraction $\Phi$. The thermodynamic behavior of model M1 was studied by Pelissetto.[@Pelissetto:2009p287] If $\hat{R}_{g,b}$ is used as reference scale as recently suggested by Coluzza [*et al.*]{},[@Coluzza:2011p1723] the model fails to reproduce the thermodynamics unless $n$ is larger than $10^3$, but of course, for such values of $n$, there are several other models — the lattice Domb-Joyce model we use is one of them — which better reproduce the universal polymer behavior both for the thermodynamics and the structural properties. For instance, for $n = 100$, which is a relatively large value, model M1 overestimates the second virial coefficient combination $A_2$ by 20%. A more detailed discussion will be presented below. We shall also consider a second coarse-grained model, [@Pelissetto:2009p287] we call it model M2. Conceptually, this was not intended to be a CGBM, but rather a polymeric model which reproduces the asymptotic (number of monomers $n\to \infty$) behavior already for small values of $n$. 
The $n$-dependent potentials were tuned so that thermodynamics was reproduced for $\Phi \lesssim 10$. For $n=26$, thermodynamics was reproduced taking potentials of the form (\[Vbond-M\]) and (\[Vnonbond-M\]) with ($\hat{r}_g$ is no longer the blob size, but simply sets the length scale) $A = 8.28$, $\alpha = 1$, $k = 0.15$ and $r_0 = 0.653$. It is important to stress that $\hat{R}_{g,b}$ was used as reference length in the optimization procedure employed to determine the optimal parameters. Therefore, for consistency, for this model it makes no sense to use definitions $\hat{R}_{g,2}$ and $\hat{R}_{g,3}$. Hence, whenever we consider model M2, $\hat{R}_g$ should always be identified with $\hat{R}_{g,b}$. We have performed simulations for model M1 for $n = 8$, 16, 30, 60 and of model M2 for $n = 30$. In the second case, one should in principle compute the appropriate parameters for $n=30$. We will use here the coefficients computed for $n=26$, as we expect the changes necessary as $n$ goes from 26 to 30 to be tiny. Numerical results and discussion -------------------------------- [c]{}\ [c]{}\ Let us first discuss the structural properties, considering the intramolecular distribution function $g_{\rm intra}(r)$. If we consider models M1 and M2 as generic polymer models — in this case we should use $\hat{R}_{g,b}$ as length scale — the corresponding results should be compared with the monomer intramolecular distribution function, which is defined in (\[gintra-def\]), taking $n = L$. Estimates of $\rho^2 g_{\rm intra}(\rho)$, which represents the average distribution of the bond lengths, are shown in figure \[gintra-mod\]. At $\Phi = 1.09$, we observe a reasonable agreement for both models: they appear to be able to reproduce the structural properties in the dilute regime. For $\Phi = 8.72$ model M1 shows some, but still reasonably small, differences for $0.4\lesssim \rho \lesssim 1$. 
Model M2 appears to be in slightly better agreement, except at small $\rho \lesssim 0.2$. -- -- -- -- -- -- -- -- For model M1, we can also consider $\hat{R}_{g,3}$, see definition (\[Rg3\]), for the zero-density radius of gyration. In doing this, we implicitly assume that model M1 is a CGBM, like the tetramer model, and not just a generic good-solvent polymer model. In figure \[gintra-mod-2\] we report the corresponding adimensional intramolecular distribution function, which should be compared in this case with the polymer results for a 30-blob coarse-grained representation (data labelled FM). Discrepancies are significantly larger than in figure \[gintra-mod\]. Clearly, structural properties are much better reproduced if $\hat{R}_{g,b}$ is used as radius of gyration, in agreement with the conclusions of Coluzza [*et al.*]{} [@Coluzza:2011p1723] Note that if one uses $\hat{R}_{g,b}$ the M1 distributions agree exactly with the polymer ones for $n\to \infty$. Indeed, model M1 is, in this limit, a generic polymer model and all models have the same infinite-length behavior as long as the same adimensional quantities are compared. As a consequence, the discrepancies we observe in figure \[gintra-mod-2\] do not decrease as $n$ increases. Similar conclusions are reached by considering the distribution of the radius of gyration, see figure \[Rgdist-mod\]. Depending on the interpretation of the models as generic polymer models or as CGBMs, one should compare the results with the polymer distributions of $R_g/{\langle}\hat{R}_g{\rangle}^{1/2}$ or of $R_{g,b}/{\langle}\hat{R}_g{\rangle}^{1/2}$, where $R_{g,b}$ is the radius of gyration of a 30-blob representation of the polymer chain. However, as shown in the inset of figure \[Rgdist-mod\], the two distributions are identical on the scale of the figure, so that this conceptual difference is not relevant in practice.
Model M2 appears to be the one which gives the best agreement, but, if $\hat{R}_{g,b}$ is used as a reference scale, the model-M1 distribution is also close to the full-monomer one. If instead $\hat{R}_{g,3}$ is used for model M1, discrepancies are quite large. If model M1 is considered as a CGBM, it also makes sense to compare the bending and torsion angle distributions. The results, reported in figure \[angles-mod\] (similar to those observed in model M2), have little relation to what is observed for the polymer case (the full-monomer distributions we report are those appropriate for a 30-blob representation of the polymer). Hence, even if the bond-length distributions are approximately reproduced, correlations between different bonds, for instance angular distributions, are not, and the true polymer shape is quite different from that predicted by model M1.

  $n$   $\hat{R}_{g,b}$   $\hat{R}_{g,2}$   $\hat{R}_{g,3}$   $\hat{R}_{g,b}$   $\hat{R}_{g,2}$   $\hat{R}_{g,3}$
  ----- ----------------- ----------------- ----------------- ----------------- ----------------- -----------------
  8     9.225(7)          7.729(5)          5.815(1)          32.0(5)           22.4(3)           12.7(1)
  16    8.258(9)          7.640(8)          5.548(1)          26.0(5)           22.2(4)           11.7(1)
  30    7.55(1)           7.28(1)           5.354(2)          21.4(7)           20.0(7)           10.8(2)
  60    6.95(1)           6.84(1)           5.183(6)          17.9(8)           17.3(7)           10.0(4)

  : Estimates of $A_2 = B_2 \hat{R}_g^{-3}$ (columns 2-4) and $A_3 = B_3 \hat{R}_g^{-6}$ (columns 5-7) using the different definitions of $\hat{R}_g$ for model M1. Numerically, we find $\hat{R}^2_{g,b}/\hat{r}^2_g = 7.987(3)$, 18.82(1), 40.83(4), 95.37(3), for $n= 8$, 16, 30, 60. The universal asymptotic values for polymers are [@Caracciolo:2006p587] $A_2 = 5.500(3)$, $A_3 = 9.80(2)$. []{data-label="table_A2A3_M1"}

[c]{}\ -- -- -- -- [c]{}\ Let us now come to the thermodynamics. For both models we have determined the second virial coefficient $B_2$ and the adimensional combination $A_2 = B_2 \hat{R}_g^{-3}$.
The parameters of model M2 were determined in such a way as to reproduce $A_2 = 5.500$, the correct result for infinitely long polymers, hence M2 gives the correct thermodynamics in the zero-density limit. The results for model M1 are reported in Table \[table\_A2A3\_M1\] for each choice of $\hat{R}_g$. As already discussed by Pelissetto,[@Pelissetto:2009p287] if $\hat{R}_{g,b}$ is used, $A_2$ differs significantly from the asymptotic result, even for $n = 60$. If $\hat{R}_{g,2}$ is used, discrepancies are smaller for $n = 8$, but substantially the same for $n\ge 30$ (not surprising, since $\hat{R}_{g,2}/\hat{R}_{g,b} \to 1$ as $n\to\infty$). Definition $\hat{R}_{g,3}$ gives apparently better results, but we believe that this apparent agreement is fortuitous. Indeed, as $n$ increases, $B_2 \hat{R}_{g,3}^{-3}$ should monotonically decrease, increasing the discrepancy with the polymer case. It is easy to compute the asymptotic value. For large $n$ model M1 is a generic good-solvent polymer model, hence standard universality arguments predict that $B_2 \hat{R}_{g,b}^{-3}$ should converge to 5.500, the result obtained for infinitely long polymers.[@Caracciolo:2006p587] Using (\[scalingRgb-M1\]) we obtain for $n\to \infty$ $$B_2 \hat{R}_{g,3}^{-3} = (1.06 A)^{3/2} B_2 \hat{R}_{g,b}^{-3} = 5.500 \, (1.06 \times 0.78)^{3/2} \approx 4.13,$$ which differs by 25% from the correct result. To further confirm that there is nothing fundamental in the observed agreement, we plot the zero-density center-of-mass distribution function $g_{CM}(\rho)$ in figure \[grcm0-mod\]. For all values of $n$ it differs significantly from the polymer one. In particular, the correlation hole $g_{CM}(0)$, which does not depend on the choice of $\hat{R}_g$, is significantly deeper in model M1 than for good-solvent polymers in the scaling limit. In Table \[table\_A2A3\_M1\] we also report the third-virial combination $A_3$.
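A quick arithmetic check (our own) of this asymptotic estimate, extended to the third-virial combination listed in the same table, reads:

```python
# Asymptotic (n -> infinity) virial combinations for model M1 when
# hat{R}_{g,3} is used, combining the polymer limits A2 = 5.500 and
# A3 = 9.80 with the amplitude A = 0.78 of eq. (scalingRgb-M1).
A2_poly, A3_poly, A = 5.500, 9.80, 0.78

ratio = 1.06 * A                  # (R_{g,b} / R_{g,3})^2 for n -> infinity
a2_m1 = A2_poly * ratio ** 1.5    # B_2 R_{g,3}^{-3} -> ~4.13
a3_m1 = A3_poly * ratio ** 3      # B_3 R_{g,3}^{-6} -> ~5.5

print(round(a2_m1, 2), round(a3_m1, 2))
```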
If $\hat{R}_{g,b}$ is used, results differ roughly by a factor of two from the polymer ones. Discrepancies decrease if $\hat{R}_{g,3}$ is used, but again this is accidental. The same argument given above for $A_2$ shows that the combination $B_3 \hat{R}_{g,3}^{-6}$ converges to 5.5 for $n\to\infty$, which is roughly a factor-of-two smaller than the correct result [@Caracciolo:2006p587] $A_3 = 9.80(2)$. We also report, see figure \[gr3-mod\], the three-body potential of mean force for three chains on an equilateral triangle. We observe significant discrepancies: results are significantly worse than those obtained by using the tetramer CGBM. [c]{}\ Let us now compare the thermodynamics at finite density. In figure \[Z-mod\] we compare the compressibility factor for polymers (data labelled by FM, taken from Pelissetto [@Pelissetto:2008p1683]) with that for model M1. As expected on the basis of the zero-density results, if $\hat{R}_{g,b}$ is used in the definition of $\Phi$, very large discrepancies are observed. Moreover, the dependence on $\Phi$ is also incorrect: $Z(\Phi)$ increases as $\Phi^{1.13}$ for $6 \lesssim \Phi \lesssim 9$, which differs significantly from the correct scaling $\Phi^{1.31}$. Discrepancies are significantly smaller (12% at most, see the right panel of figure \[Z-mod\]) if $\hat{R}_{g,3}$ is used. Again the agreement appears to be accidental, since the center-of-mass distribution function differs significantly from the polymer one, see figure \[grcmPhi-mod\]. Even worse, for $\rho \lesssim 1$, results using $\hat{R}_{g,b}$ appear to be closer to the correct full-monomer results than those obtained by using $\hat{R}_{g,3}$. Again, note that only if $\hat{R}_{g,b}$ is used does the distribution function $g_{CM}(\rho)$ computed in model M1 converge to the full-monomer one for $n\to\infty$.
If $R_{g,3}$ is used instead, the correlation hole is always (even for $n\to\infty$) deeper for model M1 than for true polymers at any given value of the polymer packing fraction $\Phi\not=0$. Moreover, $g_{CM}(\rho)$ shows more curvature, reaching approximately 1 at a slightly smaller value of $\rho$. By construction, model M2 reproduces the thermodynamics up to $\Phi = 10$. Indeed, the parameters were fixed by requiring $Z(\Phi)$ to be equal to the polymer compressibility in the dilute limit and for $\Phi = 10$. Note that it also gives the correct intermolecular pair distribution function, see figure \[grcmPhi-mod\], a result which is not [*a priori*]{} obvious. Conclusions =========== In the last two decades (but the first proposals [@FK-50] can be traced back to the ’50s) several coarse-grained models have been proposed for polymers in solution under good- or $\theta$-solvent conditions. In the simplest approaches polymer chains are mapped onto single atoms interacting by means of soft potentials. These classes of models are however unable to reproduce the structural properties and give the correct thermodynamics only in the dilute limit. To go to higher densities, density-dependent potentials [@Louis:2000p269; @Bolhuis:2001p268] may be used. However, their determination requires in any case finite-density full-monomer simulations, which is what one would like to avoid by using coarse-grained models. Moreover, it is not clear how accurate they are in more complex situations in which polymers only constitute one species in the solution. To overcome these difficulties, the multiblob approach was recently proposed,[@Pierleoni:2007p193] in which each polymer chain is mapped onto a short linear chain of $n$ blobs. This model retains some degrees of freedom and thus it should allow us to obtain the correct thermodynamics even in the semidilute regime. The main difficulty of this approach is the derivation of the intramolecular interactions. 
In Pierleoni [*et al.*]{}[@Pierleoni:2007p193] potentials were obtained for any value of $n$ on the basis of a transferability hypothesis. However, later[@Pelissetto:2009p287] it was shown that the resulting model did not have the correct thermodynamic behavior, indicating that much more work was needed to determine the intramolecular interactions. In this paper we consider again the multiblob approach, determining the intramolecular interactions by matching universal zero-density polymer distributions.[^5] We map polymer coils onto four-blob chains (tetramers), which interact by means of bonding, bending and torsional angle potentials. They are obtained by requiring the bond-length distributions and the distributions of the bending and torsion angles to be the same in the tetramer and in the full-monomer model at zero density. As for the intermolecular interactions, we only consider pairwise blob-blob interactions, neglecting many-blob potentials. This limits the validity of the model to the regime in which blob-blob overlaps are rare, i.e., to blob volume fractions $\eta_b = c_b/c_b^* \lesssim 1$ \[$c_b$ is the blob concentration and $c_b^* = 3/(4 \pi \hat{r}_g^3)$\]. For the tetramer this gives $\Phi\lesssim n^{3\nu-1}\approx 2.9$. The tetramer model turns out to be quite accurate up to $\Phi\approx 2$, in agreement with the argument given above. In this range of densities structural properties as well as the thermodynamics are correctly reproduced. For instance, for $\Phi = 2.18$ the error on $Z(\Phi)$ is 7%. If we compare the compressibility factor computed in the tetramer model to that determined in the single-blob model we observe a factor-of-three improvement, indicating that the ideas behind the multiblob approach really work. 
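The density bound quoted above can be checked numerically; a minimal sketch (the function name is ours; $\nu$ is the self-avoiding-walk exponent used throughout the paper):

```python
# Numerical check of the blob-overlap bound Phi <~ n**(3*nu - 1).
# nu is the 3D self-avoiding-walk exponent (Clisby 2010).
nu = 0.587597

def phi_max(n):
    """Packing fraction below which blob-blob overlaps are rare."""
    return n ** (3 * nu - 1)

print(f"n=4 (tetramer): Phi <~ {phi_max(4):.2f}")  # ~2.9, as quoted in the text
for n in (10, 20, 30):
    print(f"n={n}: Phi <~ {phi_max(n):.1f}")
```

The bound grows with $n$, which is the rationale for moving beyond the tetramer at large $\Phi$.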
For $\Phi \gtrsim 2$ significant deviations are observed, both for the structure — tetramers are too rigid — and for the thermodynamics — $Z(\Phi)$ in the tetramer model becomes significantly smaller than for polymers as $\Phi$ increases. We have investigated again the model proposed in [@Pierleoni:2007p193], model M1, studying in detail structure and thermodynamics. We find that the model cannot be considered as a consistent CGBM, but should rather be thought of as a generic polymer model, as recently suggested by Coluzza [*et al.*]{}[@Coluzza:2011p1723] Since the thermodynamics is poorly reproduced for small values of $n$ (we mainly investigate the case $n=30$), it is not surprising that for these numbers of blobs intermolecular correlations are significantly different from those determined in full-monomer simulations with a large number of monomers. On the other hand, internal bond distributions are quite well reproduced. Clearly, for small values of $n$, in spite of the poor thermodynamic behavior, model M1 is able to model correctly some features of the polymer shape, though not all of them — for instance, angle distributions are not reproduced. This is consistent with the results of Coluzza [*et al.*]{}[@Coluzza:2011p1723] They studied the geometric structure of polymer brushes, comparing results obtained in full-monomer simulations and in model M1. Also in that case, good agreement was observed for some structural properties. Finally, we consider the model proposed by Pelissetto.[@Pelissetto:2009p287] In this case, parameters were tuned so that the thermodynamics was exactly reproduced up to $\Phi = 10$. We find that it also reproduces well the intermolecular structure: the polymer center-of-mass distribution function is correctly reproduced in the whole density range $\Phi \lesssim 10$. As for the intramolecular structure, we find that the model gives results analogous to those obtained for model M1. 
Bond-length distributions are approximately reproduced in the density range we have investigated, indicating that also this model correctly reproduces some features of the polymer shape. In conclusion, we have shown that the newly proposed tetramer model is a significant step forward in the development of a consistent coarse-grained model of polymer chains based on zero-density interactions. To investigate the semidilute regime for large densities, i.e., for $\Phi\gtrsim 2$, multiblob models with $n>4$ must be developed. In this respect, the most important lesson of the present work is that many-body intramolecular interactions cannot be completely neglected, if one aims at a consistent multiblob model; their absence in model M1 is probably the cause of its failure in reproducing the thermodynamics of polymer solutions. Finally, it would be very important — we leave it for future work — to develop an analogous coarse-graining strategy for chains in $\theta$ conditions. Here single-blob models with pairwise intermolecular interactions fail since thermodynamic stability is only obtained by taking into account three-chain interactions. Since the tetramer model reproduces quite nicely three-chain correlations in the good-solvent regime, it is a good candidate for attacking this problem. Acknowledgements {#acknowledgements .unnumbered} ================ C.P. is supported by the Italian Institute of Technology (IIT) under the SEED project grant number 259 SIMBEDD – Advanced Computational Methods for Biophysics, Drug Design and Energy Research. The radius of gyration of the blobs: universal predictions {#App-rgblob} ========================================================== In this appendix we wish to discuss the behavior of the radius of gyration of the blobs. If $r_{g,i}(\Phi)$ is the radius of gyration of the $i$-th blob along the chain, the ratio $r_{g,i}(\Phi)/R_g(\Phi)$ is universal, being a dimensionless ratio of large-scale properties of the polymer. 
It only depends on the position $i$ of the blob along the chain, on the number $n$ of blobs, and on the density through the polymer volume fraction $\Phi$. Of course, this holds when the number of monomers $L$ is large, otherwise scaling corrections should be taken into account. In general we have $${r_{g,i}(\Phi,L,n) \over R_g(\Phi,L)} = f_i(n,\Phi) \left(1 + k g_i(n,\Phi) L^{-\Delta} + \ldots \right), \label{roverR}$$ where $f_i(n,\Phi)$ and $g_i(n,\Phi)$ are universal functions, $\Delta = 0.528(12)$, see Clisby,[@Clisby:2010p2249] is a universal exponent, and $k$ is a nonuniversal constant that does not depend on $i$, $n$, and $\Phi$, but only on the model. In the polymer model we use at finite density, the Domb-Joyce model with $w = 0.505838$, the constant $k$ is approximately zero, so that corrections decay with the next-to-leading exponent $\Delta_2 \approx 1$. An approximate expression for the $n$-dependence of the function $f_i(n,\Phi)$ which works well for $\Phi\ll 1$ is obtained as follows. In the large-$L$ limit we have standard Flory scaling, $R_g = b L^\nu$ and $r_{g,i} = b' (L/n)^\nu$, with[@Clisby:2010p2249] $\nu = 0.587597(7)$. Now assume that the blob shape and size are not influenced by the neighboring blobs, so that the size of the blob is equal to that of a free polymer with the same number of monomers. We can thus approximate $b' \approx b$, so that $r_{g,i}/R_g = n^{-\nu}$. This formula is of course not exact, since blob-blob interactions cannot be neglected. Still, as we now show, it is reasonably accurate for $\Phi \ll 1$. In order to compute $r_{g,i}/R_g$ in the asymptotic limit, we determine $Q_i(L,n) = r_{g,i}(n)/R_g$ for $L = L_1=600$ and $L = L_2 = 2400$ in the Domb-Joyce model with $w = 0.505838$. 
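The two-length determination of $Q_i$ feeds into the $L\to\infty$ extrapolation given below; a sketch of that extrapolation, with made-up $Q$ values, assuming a single correction term $\propto L^{-\Delta_2}$:

```python
# Two-point extrapolation to L -> infinity, assuming a single
# correction-to-scaling term: Q(L) = Q_as * (1 + c * L**(-Delta2)).
L1, L2, Delta2 = 600, 2400, 1.0

def extrapolate(Q1, Q2):
    w1, w2 = L1 ** Delta2, L2 ** Delta2
    return (w1 * Q1 - w2 * Q2) / (w1 - w2)

# Synthetic check with made-up numbers (Q_as = 0.45, c = 10):
# data of exactly this form are extrapolated back exactly.
Q_as, c = 0.45, 10.0
Q_asymptotic = extrapolate(Q_as * (1 + c / L1), Q_as * (1 + c / L2))
print(Q_asymptotic)  # recovers 0.45 up to rounding
```

With $\Delta_2 = 1$ this reduces to a linear fit in $1/L$ through the two data points, evaluated at $1/L = 0$.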
Assuming corrections with exponent $\Delta_2 = 1.0(1)$, we estimate the asymptotic ($L\to\infty$) value as $$Q_{i,\rm as} (n)= {L_1^{\Delta_2} Q_i(L_1,n) - L_2^{\Delta_2} Q_i(L_2,n) \over L_1^{\Delta_2} - L_2^{\Delta_2} }.$$ The combination $C(i,n,\Phi) = n^\nu Q_{i,\rm as}(n)$ for $\Phi = 0$ is reported in figure \[Q-ratio\] as a function of $(i-1/2)/n$ for several values of $n$. Note that this quantity is always larger than 1, indicating that a blob of $L/n$ monomers is more swollen than an isolated chain of the same degree of polymerization. This is due to the neighboring blobs, which are entangled with the blob one is considering, causing swelling. Moreover, this effect is smaller for the boundary blobs since they have only one neighbor. The scaling $\hat{r}_{g,i}/\hat{R}_g \sim n^{-\nu}$ holds quite well at zero density even for $n=4$, with a proportionality constant which is only slightly larger than 1. In particular, for the boundary blobs we have $\hat{r}_{g,i}/\hat{R}_g \sim 1.01 n^{-\nu}$, while for the internal blobs $\hat{r}_{g,i}/\hat{R}_g \sim 1.03 n^{-\nu}$. If we average over all blobs and neglect end effects, we obtain relation (\[scaling-rg2\]), which we used extensively in the text. The swelling effect is expected to increase as $\Phi$ increases, since the higher the density the higher the blob-blob entanglement is. In figure \[Q-ratio\] we also report $C(i,n,\Phi)$ for $\Phi = 8.72$. There are two notable differences here with respect to the case $\Phi = 0$. First of all, end effects are small, indicating that much of the swelling is due to neighboring chains, consistent with the idea that for $\Phi \gtrsim 1$ polymers are strongly intertwined. Second, the $n$ dependence of the scaling function $f_i(n,\Phi)$ defined in Eq. (\[roverR\]) is not captured by the simple scaling form $n^{-\nu}$ for our small values of $n$ (of course, $f_i(n,\Phi)$ scales as $n^{-\nu}$ for $n\to \infty$). Given the blob radii of gyration, using Eq. 
(\[Rg-Rgb\]), we can compute the ratio $R_{g,b}/R_g$. For $n = 4$ we obtain $${R_{g,b}(\Phi)\over R_{g}(\Phi)} = \cases{ 0.89210(10) & $\qquad \Phi = 0$ \cr 0.88701(10) & $\qquad \Phi = 1.09$ \cr 0.87937(11) & $\qquad \Phi = 4.36$ \cr 0.8753(4) & $\qquad \Phi = 8.72$ } \label{Rgbratio-n=4}$$ Note that the $\Phi$ dependence is tiny. At $\Phi = 0$, a good approximation for all $n\ge 4$ is given by $${\hat{R}_{g,b}\over \hat{R}_{g}} = \sqrt{1 - k n^{-2\nu} } \qquad k = \left(1.03 - 0.04/n\right)^2, \label{RgboverR-pred}$$ which predicts ${\hat{R}_{g,b}/\hat{R}_{g}}\approx 0.8922$ for $n=4$, in good agreement with the result (\[Rgbratio-n=4\]).

  $\Phi$   $n=4$   $n=10$   $n=20$   $n=30$
  -------- ------- -------- -------- --------
  1.09     0.982   0.990    0.994    0.996
  4.36     0.938   0.962    0.976    0.982
  8.72     0.898   0.933    0.955    0.965

  : Ratio $r_g(\Phi,n)/\hat{r}_g(n)$ as a function of $\Phi$ and $n$.[]{data-label="ratiorg"}

It is also interesting to consider the ratio $r_g(\Phi,n)/\hat{r}_g(n)$, where $r_g$ is the average blob size in the asymptotic limit $L\to \infty$ (we perform the same extrapolation as done before for the ratios $Q_i$). Results for several values of $n$ and $\Phi$ are shown in Table \[ratiorg\] and plotted in figure \[fig:ratiorg\] versus the blob volume fraction $\eta_b = c_b/c_b^* = 4 \pi \hat{r}_g^3 c_b/3$. At least for $\eta_b \lesssim 1$ the data appear to depend only on $\eta_b$ and to converge to 1 linearly as $\eta_b\to 0$: $r_g(\Phi,n)/\hat{r}_g(n) \approx 1 - 0.048 \eta_b$. Since $\eta_b\to 0$ for $n\to \infty$ at fixed $\Phi$, this result allows us to predict the ratio $Q(n)$, the average of the $Q_i(n)$ defined above, as $n\to \infty$. Indeed, we have $${r_g(\Phi,n) \over R_g(\Phi)} = {r_g(\Phi,n)\over \hat{r}_g(n)}\, {\hat{r}_g(n)\over \hat{R}_g} \, {\hat{R}_g \over R_g(\Phi)} \approx {1.03 n^{-\nu}} {\hat{R}_g \over R_g(\Phi)}. \label{rgoverRg-Phi}$$ The ratio $R_g(\Phi)/\hat{R}_g$ has been computed in several works. 
[@CMP-06-raggi; @Pelissetto:2008p1683] For large $\Phi$, ${\hat{R}_g/ R_g(\Phi)}$ scales [@Pelissetto:2008p1683] as $0.90\Phi^{0.115}$ so that ${r_g(\Phi,n)/R_g(\Phi)} \approx 0.93 n^{-\nu} \Phi^{0.115}$. Note that scaling (\[rgoverRg-Phi\]) sets in for quite large values of $n$ if $\Phi$ is large. For instance, for $\Phi = 8.72$ it predicts $n^\nu Q = n^\nu {r_g(\Phi,n)/R_g(\Phi)} \approx 1.23$ for $n\to \infty$, since [@CMP-06-raggi] $R_g(\Phi)/\hat{R}_g \approx 0.84$ for this value of $\Phi$. Hence, even for $n = 30$, see figure \[Q-ratio\], we are still far from the asymptotic limit. References {#references .unnumbered} ========== P. G. de Gennes, [*Scaling Concepts in Polymer Physics*]{}, Cornell University Press, Ithaca, NY, 1979. M. Doi, [*Introduction to Polymer Physics*]{}, Clarendon Press, Oxford, 1992. J. des Cloizeaux and G. Jannink, [*Polymers in Solutions. Their Modelling and Structure*]{}, Clarendon Press, Oxford, 1990. L. Schäfer, [*Excluded Volume Effects in Polymer Solutions*]{}, Springer, Berlin, 1999. N. Clisby, [*Phys. Rev. Lett.*]{}, [**104**]{}, 055702, 2010. M. M[ü]{}ller, K. Binder, and L. Sch[ä]{}fer, [*Macromolecules*]{}, [**33**]{}, 4568, 2000. A. Cavallo, M. M[ü]{}ller and K. Binder, [*Macromolecules*]{}, [**39**]{}, 9539, 2006. P. G. Bolhuis, A. A. Louis, J. P. Hansen and E. J. Meijer, [*J. Chem. Phys.*]{}, [**114**]{}, 4296, 2001. A. A. Louis, P. G. Bolhuis, R. Finken, V. Krakoviack, E. J. Meijer and J. P. Hansen, [*Physica*]{} A, [**306**]{}, 251, 2002. Q. Yan and J. J. de Pablo, [*J. Chem. Phys.*]{}, [**113**]{}, 5954, 2000. A. Pelissetto and J.-P. Hansen, [*J. Chem. Phys.*]{}, [**122**]{}, 134904, 2005. W. Poon, [*J. Phys.: Condens. Matter*]{}, [**14**]{}, R859, 2002. M. Fuchs and K. Schweizer, [*J. Phys.: Condens. Matter*]{}, [**14**]{}, R239, 2002. R. Tuinier, J. Rieger and C. G. de Kruif, [*Adv. Colloid Interface Sci.*]{}, [**103**]{}, 1, 2003. K. J. Mutch, J. S. 
van Duijneveldt and J. Eastoe, [*Soft Matter*]{}, [**3**]{}, 155, 2007. P. G. Bolhuis, A. A. Louis and J. P. Hansen, [*Phys. Rev. Lett.*]{}, [**89**]{}, 128302, 2002. C. Pierleoni, C. Addison, J.-P. Hansen and V. Krakoviack, [*Phys. Rev. Lett.*]{}, [**96**]{}, 128302, 2006. J.-P. Hansen and C. Pearson, [*Mol. Phys.*]{}, [**104**]{}, 3389, 2006. J. Sambriski and M. G. Guenza, [*Phys. Rev.*]{} E, [**76**]{}, 051801, 2007. C. Gross and W. Paul, [*Soft Matter*]{}, [**6**]{}, 3273, 2010. F. S. Bates and G. H. Fredrickson, [*Ann. Rev. Phys. Chem.*]{}, [**41**]{}, 525, 1990. C. N. Likos, [*Phys. Rep.*]{}, [**348**]{}, 267, 2001. M. Dijkstra, R. van Roij, and R. Evans, [*Phys. Rev. E*]{}, [**59**]{}, 5744, 1999. A. Y. Grosberg, P. G. Khalatur and A. R. Khokhlov, [*Makromol. Chem., Rapid Commun.*]{}, [**3**]{}, 709, 1982. J. Dautenhahn and C. K. Hall, [*Macromolecules*]{}, [**27**]{}, 5399, 1994. P. G. Bolhuis, A. A. Louis and J. P. Hansen, [*Phys. Rev. E*]{}, [**64**]{}, 021801, 2001. A. A. Louis, P. G. Bolhuis, J. P. Hansen and E. J. Meijer, [*Phys. Rev. Lett.*]{}, [**85**]{}, 2522, 2000. R. Henderson, [*Phys. Lett. A*]{}, [**49**]{}, 197, 1974. J.-P. Hansen and I. R. McDonald, [*Theory of Simple Liquids*]{}, 3rd ed., Academic Press, Amsterdam, 2006. F. H. Stillinger, H. Sakai and S. Torquato, [*J. Chem. Phys.*]{}, [**117**]{}, 288, 2002. A. A. Louis, [*J. Phys.: Condens. Matter*]{}, [**14**]{}, 9187, 2002. C. Pierleoni, B. Capone and J. P. Hansen, [*J. Chem. Phys.*]{}, [**127**]{}, 171102, 2007. C. Addison, J. P. Hansen, V. Krakoviack, and A. A. Louis, [*Mol. Phys.*]{}, [**103**]{}, 3045, 2005. A. Pelissetto, [*J. Phys.: Condens. Matter*]{}, [**21**]{}, 115108, 2009. W. Schommers, [*Phys. Rev. A*]{}, [**28**]{}, 3599, 1983. F. M[ü]{}ller-Plathe, [*Chem. Phys. Chem.*]{}, [**3**]{}, 754, 2002. D. Reith, M. P[ü]{}tz and F. M[ü]{}ller-Plathe, [*J. Comp. Chem.*]{}, [**24**]{}, 1624, 2003. D. Fritz, V. A. Harmandaris, K. Kremer and N. F. A. 
van der Vegt, [*Macromolecules*]{}, [**42**]{}, 7579, 2009. G. Milano and F. M[ü]{}ller-Plathe, [*J. Phys. Chem. B*]{}, [**109**]{}, 18609, 2005. P. Carbone, H. A. K. Varzaneh, X. Chen and F. M[ü]{}ller-Plathe, [*J. Chem. Phys.*]{}, [**128**]{}, 064904, 2008. G. Rossi, L. Monticelli, S. R. Puisto, I. Vattulainen and T. Ala-Nissila, [*Soft Matter*]{}, [**7**]{}, 698, 2010. T. Vettorel, G. Besold and K. Kremer, [*Soft Matter*]{}, [**6**]{}, 2282, 2010. M. Murat and K. Kremer, [*J. Chem. Phys.*]{}, [**108**]{}, 4340, 1998. A. J. Clark and M. G. Guenza, [*J. Chem. Phys.*]{}, [**132**]{}, 044902, 2010. R. L. C. Akkermans and W. J. Briels, [*J. Chem. Phys.*]{}, [**114**]{}, 1020, 2001. R. L. C. Akkermans and W. J. Briels, [*J. Chem. Phys.*]{}, [**115**]{}, 6210, 2001. S. Caracciolo, B. M. Mognetti and A. Pelissetto, [*J. Chem. Phys.*]{}, [**125**]{}, 094903, 2006. A. Pelissetto, [*J. Chem. Phys.*]{}, [**129**]{}, 044901, 2008. G. Ciccotti and J. P. Ryckaert, [*Comp. Phys. Rep.*]{}, [**4**]{}, 346, 1986. R. L. C. Akkermans and G. Ciccotti, [*J. Phys. Chem. B*]{}, [**108**]{}, 6866, 2004. A. A. Louis, P. G. Bolhuis and J. P. Hansen, [*Phys. Rev. E*]{}, [**62**]{}, 7961, 2000. I. Coluzza, B. Capone and J.-P. Hansen, [*Soft Matter*]{}, [**7**]{}, 5255, 2011. P. J. Flory and W. R. Krigbaum, [*J. Chem. Phys.*]{}, [**18**]{}, 1086, 1950. S. Caracciolo, B. M. Mognetti and A. Pelissetto, [*J. Chem. Phys.*]{}, [**125**]{}, 094904, 2006; (erratum) [*J. Chem. Phys.*]{}, [**126**]{}, 169901, 2007. [^1]: Note that here we use the same notation for the squared radius of gyration of a single-chain configuration and for its statistical average over all chain conformations. When we will need to distinguish between the two quantities, the average squared radius of gyration will be indicated as $\langle R^2_g\rangle$. [^2]: If we were using density-dependent potentials, the thermodynamics, hence all virial coefficients, would be exactly reproduced. 
However, here we only consider models with density-independent potentials, hence the only meaningful comparison is with the single-blob model in which the interactions are density independent. [^3]: For the single-blob model we shall always use the accurate expression of the pair potential given in Pelissetto [*et al.*]{}[@Pelissetto:2005p296] [^4]: In principle one can use this expression to determine the density derivative of $cZ$. However, since the relevant length scale for $g_{CM}(\rho)$ is the average distance $d$ between the centers of mass of the polymer chains, finite-size effects will be small only if $d/M\ll 1$, where $M$ is the size of the box containing the system. On the other hand, if one uses the intermolecular structure factor, the relevant scale is the correlation length $\xi$, which, in the semidilute regime, is significantly smaller than $d$. Therefore, $\xi/M$ is smaller than $d/M$, which implies that determinations using the intermolecular structure factor show smaller finite-size effects than those using $g_{CM}(\rho)$. [^5]: The polymer distributions are computed by using a lattice model. However, standard renormalization-group arguments allow us to conclude that exactly the same results would be obtained in the limit $L\to\infty$ by using any other — discrete or continuous — model.
--- abstract: 'We present a search for decays of $B$ mesons to final states with a $\bone$ meson and a $\rho$ or $\Kst(892)$ meson. The search is based on a data sample consisting of 465 million $B\bar{B}$ pairs collected by the  detector at the SLAC National Accelerator Laboratory. We do not observe any statistically significant signal. The upper limits we set on the branching fractions range from [$1.4$]{} to [${\ensuremath{8.0}\xspace}\times 10^{-6}$]{} at the 90% confidence level (C.L.), including systematic uncertainties.' title: '[[Search for $B$-meson decays to $\bone\rho$ and $\bone\Kst$]{}]{}' --- -[PUB]{}-[09]{}/[021]{}\ SLAC-PUB-[13727]{}\ Measurements of charmless hadronic $B$ decays are a powerful tool to test standard model predictions and search for new physics effects. One of the outstanding problems is represented by the so-called *polarization puzzle* in decays of $B$ mesons to a pair of spin-one mesons. Simple helicity arguments predict a longitudinal polarization $f_L$ close to 1. Contrary to this, several vector-vector ($VV$) decay modes such as $B\ra \phi\Kst$ [@phiKstorig], $B\ra\rhop\Kstz$ [@rhopKst0], and $B\ra\omega\Kst$ [@omegaKst] exhibit $f_L \sim 0.5$. Possible explanations for this puzzle have been proposed within the standard model [@VVBSMrefs] and in new physics scenarios [@nSMetc]. The measurement of the branching fractions and polarization of charmless decays of $B$ mesons to an axial-vector and vector meson ($AV$) may shed light on the size of the amplitudes contributing to charmless $B$-meson decays and on their helicity structure. Theoretical predictions of decay rates have been performed with the naïve factorization (NF) [@CMV] and QCD factorization (QCDF) approaches. The NF calculations find the rates of $B\ra AV$ decays to be smaller than the corresponding $B$ decays to an axial-vector and pseudo-scalar meson ($AP$). 
The more complete QCDF calculations find the reverse, primarily due to the larger decay constants ($\rho$ vs $\pi$ for instance); the expected branching fractions for the $AV$ modes are substantial in several cases, as large as $33\times10^{-6}$ for the  final state. Additionally, decays of $B$ mesons to charmless $AV$ final states may be sensitive to penguin annihilation effects, which tend to enhance certain modes while suppressing others. It is thus important to investigate the largest possible number of final states. Measurements of the branching fractions to $AP$ modes $\bone h$, where $h$ denotes a charged or neutral pion or kaon, are presented in Ref. [@babar_b1P]. The results are in good agreement with the predictions of QCDF. Searches for the $AV$ decays to the final states $a_1^{\pm}\rho^{\mp}$ and $a_1^+\Kstz$ are presented in Ref. [@babar_a1V], with upper limits on the branching fractions of $30\times 10^{-6}$ and $1.6\times 10^{-6}$ (at the 90% C.L.), respectively. In this paper we search for all charge combinations of decays of a $B$ meson to a final state containing a $\bone$ meson and a $\rho$ or $\Kst(892)$ meson. No previous searches for these decays have been reported. The data sample used for these measurements was collected with the  detector at the PEP-II asymmetric $\epem$ collider located at the SLAC National Accelerator Laboratory. The integrated luminosity taken at the $\FourS$ resonance (center-of-mass energy $\sqrt{s}=10.58\ \gev$) corresponds to 424 fb$^{-1}$ and is equivalent to $(465\pm5)\times 10^6$ $B\bar{B}$ pairs. The  detector is described in detail elsewhere [@BABARNIM]. We reconstruct $B$-meson daughter candidates through the decays $\bone\ra\omega\pi$ (we assume this branching fraction to be 1 [@PDG08]), $\omega\ra\pip\pim\piz$, $\rhop\ra\pip\piz$, $\rhoz\ra\pip\pim$, $\Kstz\ra\Kp\pim$, and $\Kstp\ra\Kp\piz$ or $\KS\pip$. 
We impose the following requirements on the masses of the selected candidates: $1000 < m(\bone) < 1550\ \mev$, $740 < m(\omega) < 820\ \mev$, $470 < m(\rho) < 1070\ \mev$, and $755 < m(\Kst) < 1035\ \mev$; these windows retain some sidebands, which help in estimating the background level. Neutral pions are reconstructed via the decay $\piz\ra\gaga$; photon candidates with a minimum energy of 50 $\mev$ are combined, and we require the pion energy to exceed 250 $\mev$ in the laboratory frame. The invariant mass of the $\piz$ candidate is required to be in the interval 120$-$150 $\mev$. We select $\KS\ra\pip\pim$ candidates in the mass range $486 < m(\KS) < 510\ \mev$; a kinematic fit constraining the two pion tracks to originate from the same vertex is performed and we require the $\KS$ flight length to be greater than three times its uncertainty. The daughters of $\bone$, $\omega$, $\rho$ and $\Kst$ are rejected if their particle identification signatures are consistent with those of protons or electrons. $\Kp$ candidates must be positively identified as kaons, while $\pip$ must fail kaon identification. Unless otherwise stated, charge-conjugate reactions are implied. The helicity angles of the (axial-) vector mesons are measured in their rest frame. For the $\bone$ candidate, the helicity angle is defined as the angle between the flight direction of the pion from the $\bone\ra\omega\pi$ decay and the direction of the boost to the $\bone$ rest frame. We define the helicity angles of the $\rho$ and $\Kst$ mesons in an analogous manner using the direction of the daughter pions \[for the $\rho^{\pm}$ ($\rho^0$) we use the (positively) charged pion\]. Finally, the helicity angle of the $\omega$ is taken as the angle between the normal to the $3\pi$ decay plane and the direction of the boost to the $\omega$ rest frame. To suppress backgrounds originating from low-momentum particles, we apply the selection criteria summarized in Table \[tab:rescuts\]. 
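For illustration, the mass windows above can be encoded in a simple candidate filter (a sketch; the dictionary and function names are ours, masses in $\mev$):

```python
# Invariant-mass windows (MeV) from the candidate selection in the text.
MASS_WINDOWS = {
    "b1":    (1000.0, 1550.0),
    "omega": (740.0, 820.0),
    "rho":   (470.0, 1070.0),
    "Kstar": (755.0, 1035.0),
    "pi0":   (120.0, 150.0),
    "KS":    (486.0, 510.0),
}

def passes_mass_cuts(candidate_masses):
    """candidate_masses: dict particle -> measured mass in MeV.

    Only the particles present in the candidate are checked.
    """
    return all(lo <= candidate_masses[p] <= hi
               for p, (lo, hi) in MASS_WINDOWS.items()
               if p in candidate_masses)

# A candidate with all masses inside the windows is kept:
print(passes_mass_cuts({"b1": 1230.0, "omega": 782.0, "rho": 775.0}))  # True
print(passes_mass_cuts({"b1": 980.0}))  # False: below the b1 window
```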
Integration over the angle between the $\bone$ and $V$ decay planes yields the following expression for the distribution $F(\theta_A,\theta_V) \propto d^2\Gamma/d\cos\theta_A d\cos\theta_V$ in the $\bone$ and $\rho/\Kst$ helicity angles $\theta_A$ and $\theta_V$: $$\begin{aligned} \label{eq:F_ang} F(\theta_A,\theta_V) &=& \fL\,\left[\cos^2\theta_A+ \left|\frac{C_1}{C_0}\right|^2\sin^2\theta_A\right]\cos^2\theta_V + \nonumber \\ &&\hspace{-21mm}(1-\fL)\frac{1}{4}\left[\sin^2\theta_A + \left|\frac{C_1}{C_0}\right|^2(1+\cos^2\theta_A)\right]\sin^2\theta_V.\end{aligned}$$ Here $\fL$ is the longitudinal polarization fraction $|A_0|^2/\sum{|A_i|^2}$, where $A_i$, $i=-1,0,1$, is a helicity amplitude of the $B\ra AV$ decay. The $C_i$ are the helicity amplitudes of $\bone\ra\omega\pi$; by parity conservation $C_{-1}=C_1$. The $\bone\ra\omega\pi$ decays have been studied in terms of the two parity-allowed $S$ and $D$ partial wave amplitudes, which have the measured ratio $D/S = 0.277 \pm 0.027$ [@PDG08]. From this we obtain the ratio of helicity amplitudes in Eq. \[eq:F\_ang\] [@jacobWick] $$\frac{C_1}{C_0} = \frac{1 + (D/S)/\sqrt{2}}{1 - \sqrt{2}(D/S)}.$$

[lcc]{}
State & $\rho$/$\Kst$ helicity & $\bone$ helicity\
& $-0.50 < \cos\theta_{\rho} < 1.00$ & $-1.0 < \cos\theta_{\bone} < 1.0$\
& $-0.50 < \cos\theta_{\rho} < 0.80$ & $-1.0 < \cos\theta_{\bone} < 0.6$\
& $\msp0.0 < |\cos\theta_{\rho}| < 0.85$ & $-1.0 < \cos\theta_{\bone} < 1.0$\
& $\msp0.0 < |\cos\theta_{\rho}| < 0.85$ & $-1.0 < \cos\theta_{\bone} < 0.7$\
$\bone^{\pm}\Kst$ & $-0.85 < \cos\theta_{\Kst} < 1.0$ & $-1.0 < \cos\theta_{\bone} < 1.0$\
$\bone^{0}\Kst$ & $-0.85 < \cos\theta_{\Kst} < 1.0$ & $-1.0 < \cos\theta_{\bone} < 0.8$\

Two kinematic variables characterize the decay of a $B$ meson: the energy-substituted mass $\mes\equiv\sqrt{s/4-\pvec_B^2}$ and the energy difference $\DE \equiv E_B-\sqrt{s}/2$, where $(E_B,\pvec_B)$ is the $B$-meson four-momentum vector expressed in the $\FourS$ rest frame. 
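The ratio $C_1/C_0$ above is fixed numerically by the measured $D/S$; a minimal evaluation (a sketch, with the function name ours):

```python
import math

# Ratio of b1 -> omega pi helicity amplitudes from the measured
# partial-wave ratio D/S = 0.277 quoted in the text.
def c1_over_c0(ds):
    return (1 + ds / math.sqrt(2)) / (1 - math.sqrt(2) * ds)

print(f"C1/C0 = {c1_over_c0(0.277):.3f}")  # ~1.97
```

Since $|C_1/C_0| \approx 2$, the transverse terms in Eq. \[eq:F\_ang\] enter with a sizable weight rather than being negligible.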
The correlation between the two variables is at the few percent level. The resolution on $\mes$ is about $2.6\ \mev$, while the resolution on $\DE$ varies between $20$ and $40\ \mev$ depending on the number of $\piz$ mesons in the final state. We select events with $5.25 < \mes < 5.29\ \gev$ and $|\DE| < 0.1\ \gev$ except that for  we require $-0.12 < \DE < 0.10\ \gev$ to allow for the broader signal distribution when two $\piz$ mesons are present. The average number of $B$ candidates per event in the data is between $1.3$ and $1.6$. We choose the candidate with the highest value of probability in the fit to the $B$ vertex. The dominant background originates from continuum $\epem\ra\qqbar$ events ($q=u,d,s,c$). The angle $\theta_T$ between the thrust axis [@thrust] of the $B$ candidate in the $\FourS$ rest frame and that of the remaining particles in the event is a powerful discriminating variable to suppress this background. Continuum events peak near 1.0 in the $|\cos\theta_T|$ distribution, while $B$ decays are almost flat. We require $|\cos\theta_T| < 0.7$ for all the decay modes except for , for which we require $|\cos\theta_T| < 0.55$, because of substantially higher backgrounds. To further reduce continuum background we define a Fisher discriminant (${\cal F}$) based on five variables related to the event topology: the polar angles, with respect to the beam axis, of the $B$ candidate momentum and the $B$ thrust axis; the zeroth and second angular moments $L_0$ and $L_2$ of the energy flow, excluding the $B$ candidate; and the flavor tagging category [@ccbarK0]. The first four variables are calculated in the $\FourS$ rest frame. The moments are defined by $L_j = \sum_i p_i\times\left|\cos\theta_i\right|^j,$ where $\theta_i$ is the angle with respect to the $B$ thrust axis of track or neutral cluster $i$, and $p_i$ is its momentum. The Fisher variable provides about one standard deviation of separation between $B$-decay events and combinatorial background. 
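The angular moments $L_0$ and $L_2$ entering the Fisher discriminant can be computed directly from the rest-of-event particles; a sketch with invented momenta and angles:

```python
import math

def angular_moment(particles, j):
    """L_j = sum_i p_i * |cos(theta_i)|**j over rest-of-event particles.

    particles: iterable of (p_i, theta_i), with theta_i measured with
    respect to the B-candidate thrust axis.
    """
    return sum(p * abs(math.cos(theta)) ** j for p, theta in particles)

# Invented rest-of-event list: (momentum, angle in radians).
roe = [(1.2, 0.3), (0.8, 1.1), (0.5, 2.0), (0.3, 1.6)]
L0 = angular_moment(roe, 0)  # just the scalar momentum sum
L2 = angular_moment(roe, 2)  # weights particles near the thrust axis more
print(L0, L2)
```

Jet-like continuum events concentrate momentum along the thrust axis and give a larger $L_2/L_0$ than isotropic $B$ decays, which is why these moments discriminate.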
The signal yields are obtained from extended maximum likelihood fits to the distribution of the data in nine observables: $\DE$, $\mes$, $\xf$, $m_k$, and $\cos\theta_k$; $m_k$ and $\theta_k$ are the mass and the helicity angle of meson $k$ ($k = \bone,\; \omega$, and either $\rho$ or $\Kst$). For each category $j$ (signal, $\qqbar$ background and backgrounds originating from $B$ decays), we define the probability density functions (PDFs) ${\cal P}_j(x)$ for the variable $x$, with the associated likelihood $\cal L$: $$\begin{aligned} &&{\cal L} = \frac{e^{-(\sum_j Y_j)}}{N!} \prod_{i=1}^N \sum_j Y_j \times \label{eq:totalL}\\ &&{\cal P}_j(\DE^i){\cal P}_j(\mes^i) {\cal P}_j(\xf^i) \prod_k \left( {\cal P}_j(m_{k}^i){\cal P}_j(\cos\theta_{k}^i) \right), \nonumber\end{aligned}$$ where $Y_j$ is the event yield for component $j$ and $N$ is the number of events entering the fit. We separately model correctly reconstructed signal events and self-crossfeed (SXF) events, which are signal events for which particles are incorrectly assigned to the intermediate resonances, or particles from the rest of the event are selected. The fraction of SXF ranges from 0.33 to 0.57 depending on the final state. The signal yields for the branching fraction measurements are extracted with the use of correctly reconstructed signal events only. Backgrounds originating from $B$ decays are modeled from Monte Carlo (MC) simulation [@geant]. We select the most significant charmless modes (20-40 for each signal final state) entering our selection and build a sample taking into account measured branching fractions or theoretical predictions. The expected charmless $B$ background yield varies between 26 and 330 events, depending on the final state. The samples include the nonresonant contributions affecting $\bone\rho$ ($\bone\Kst$), measured in our data by fitting the central regions of the $\bone \pi \pi$ ($\bone K \pi$) and $\omega \pi \rho$ ($\omega \pi \Kst$) Dalitz plots. 
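The structure of the extended likelihood of Eq. \[eq:totalL\] can be illustrated with a one-observable toy (all PDFs, yields, and names here are invented; this is not the analysis fit):

```python
import math, random

random.seed(2)

# One-observable toy of an extended ML fit: x in [0, 1], a flat
# "background" PDF and a triangular "signal" PDF peaked at x = 0.5.
def p_sig(x):
    return 4 * x if x < 0.5 else 4 * (1 - x)

def p_bkg(x):
    return 1.0

def nll(yields, data, pdfs):
    """Extended negative log-likelihood; the constant 1/N! is dropped."""
    out = sum(yields)
    for x in data:
        out -= math.log(sum(Y * p(x) for Y, p in zip(yields, pdfs)))
    return out

# 80 "signal" + 120 "background" toy events; scan the signal yield
# with the total yield held at the observed number of events.
data = ([0.5 * (random.random() + random.random()) for _ in range(80)]
        + [random.random() for _ in range(120)])
best = min(range(0, 201, 5),
           key=lambda ys: nll((ys, len(data) - ys), data, (p_sig, p_bkg)))
print("fitted signal yield ~", best)  # close to the 80 generated events
```

The Poisson prefactor $e^{-\sum_j Y_j}$ is what lets the yields themselves, and not only the fractions, be free parameters of the fit.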
We assume the probability of the four-body nonresonant contributions to pass our selections to be negligible. We do not introduce a component modeling $B$ decays to charmed mesons, since this background is effectively absorbed in the  component. For the $\Kst$ modes we consider the potential background contribution originating from $K\pi$ $S$-wave entering the $\Kst(892)$ selection. We model this component using the LASS model [@LASS; @Latham], which accounts for the interference between the $\Kst_0(1430)$ resonance and the nonresonant component. The shape of the $K\pi$ invariant mass is kept fixed to the results found in [@LASS]; we fit for the LASS yield in the range $1035 < m(K\pi) < 1550\ \mev$ and extrapolate the expected yield to the signal region $755 < m(K\pi) < 1035\ \mev$. We find yields that are consistent with zero, ranging from $-56$ to $65$ events. We fix this yield to zero if it is negative and take the estimated value otherwise. PDF shapes for signal, $K\pi$, and $B$ backgrounds are determined from fits to MC samples, while for the $\qqbar$ background we use data samples from which the signal region, $5.27 < \mes\ < 5.29\ \gev$ and $|\DE| < 0.075\ \gev$, is excluded. The calibration of $\mes$ and $\DE$ is validated using high-statistics data control samples of $B$ decays to charmed mesons with similar topologies (e.g. $B\ra D(K\pi\pi)\pi$, $B\ra D(K\pi\pi)\rho$). We use linear combinations of polynomial, exponential, and Gaussian functions to parameterize most of the PDFs. For the $\qqbar$ background, we adopt a parameterization motivated by phase-space arguments [@Argus]. We allow the most important parameters of the $\qqbar$ background to vary in the fit, along with the signal yield. Given that the signal yields we extract are small, we cannot vary the longitudinal polarization fraction $f_L$. Since no strong theoretical predictions exist about its value, we impose $f_L = 0.5$ and vary it within the physical range to evaluate the systematic uncertainty. 
We do not include the SXF component in fits with signal yields that are consistent with zero to avoid instabilities in the SXF fitted yield. In the case of the mode, where the (statistical-only) signal significance exceeds three standard deviations, we retain the SXF component, fixing its yield to the SXF-to-signal ratio predicted by the simulation. In this case, introducing the SXF component changes the signal yield by a small fraction of the statistical error. ![\[fig:proj\_mes\]Projections onto  for the modes (a) , (b) , (c) , (d) , (e) , (f) , (g) , (h) , (i) , (j) . Points with error bars represent the data, the solid (dashed) line represents the total (sum of the backgrounds) fitting function. The background is suppressed by a cut on $\ln{\cal L}$, optimized separately for each final state.](projAll_b1V.eps "fig:"){width="49.00000%"}\ To evaluate the potential bias $Y_0$ that arises from neglecting the correlations among the variables entering the fit, we perform fits to ensembles of simulated experiments. Each such experiment has the same number of signal and background events as the data; events are generated from the PDFs, while for the other categories events are taken from fully simulated MC samples.
[lcccccc]{} Mode & $Y$ & $Y_0$ &$\varepsilon~~~$ & $S~$ &  &  U.L.\ & (evts) & (evts) &(%)$~$ &$(\sigma)$ & $(10^{-6})$ & $(10^{-6})$\ & $-33\pm10$ & $\msp4\pm2$ & 3.0 & [$-$]{}& [$-1.8 \pm 0.5 \pm 1.0$]{} & [$1.4$]{}\ & $-18\pm5\:\:$ & $-4\pm2$ & 1.1 & [$-$]{}& [$-3.0\pm 0.9 \pm 1.8$]{} & [$3.3$]{}\ & $\msp37\pm25$ & $\msp8\pm4$ & 3.6 & [$0.4$]{}& [$\msp 1.5\pm 1.5 \pm 2.2$]{} & [$5.2$]{}\ & $\:\:-8\pm19$ & $\msp5\pm3$ & 2.4 & [$-$]{}& [$-1.1\pm 1.7^{+1.4}_{-0.9}$]{} $\:\:$ & [$3.4$]{}\ & & & & [$1.7$]{}& [$2.4^{+1.5}_{-1.3} \pm 1.0$]{} & [$5.0$]{}\   & $\msp3\pm8$ & $-5\pm3$ & 0.8 & [$0.9$]{}& [$\msp 1.8\pm 1.9 \pm 1.4$]{}&\   & $\:17\pm9$ & $\msp4\pm2$ & 0.9 & [$1.5$]{} & [$\msp 3.2\pm 2.1^{+1.0}_{-1.5}$]{} $\:\:$ &\ & & & & [$0.1$]{}& [$0.4^{+2.0+3.0}_{-1.5-2.6}$]{} $\:\:\,$ & [$6.7$]{}\   & $-8\pm7$ & $-3\pm2$ & 0.5 & [$-$]{}& [$-2.2\pm 3.0^{+5.0}_{-2.3}$]{}$\:\:\:\,$ &\   & $\msp3\pm4$ & $\msp0\pm0$ & 0.4 & [$0.4$]{}& [$\msp 1.6\pm 2.5 \pm 3.3$]{}&\ & $\:\:\:55\pm21$ & $\:15\pm8$ & 2.8 & [$1.5$]{}& [$\msp 2.9\pm 1.5 \pm 1.5$]{} & [$5.9$]{}\ & $\:\:\:30\pm15$ & $-6\pm3$ & 1.7 & [$2.0$]{}& [$\msp 4.8\pm 1.9^{+1.5}_{-2.2}$]{} $\:\:$ & [$8.0$]{}\ We compute the branching fraction $\cal{B}$ for each mode by subtracting $Y_0$ from the fitted signal yield $Y$ and dividing by the efficiency $\varepsilon$ and the number of  mesons in our data sample. We assume the branching fractions of the  to  and  to be each 50%, consistent with measurements [@PDG08]. We evaluate $\varepsilon$ from signal MC samples, taking into account the difference in reconstruction efficiency for longitudinally and transversely polarized events. For the $\Kstp$ modes, we combine the branching fraction results from the two sub-modes by adding their $-2\ln{\cal L}$ curves. The significance $S$ is computed from the difference between the value of $-2\ln{\cal L}$ at zero signal and its minimum value. The results are summarized in Table \[tab:results\] while in Fig. 
\[fig:proj\_mes\] we show the projection plots onto the  variable for the ten final states we investigated. We do not observe a statistically significant signal for any of the eight decay modes. We quote 90% C.L. upper limits on their branching fractions, defined for each mode as the value below which 90% of the integral of the likelihood lies, with the branching fraction constrained to be positive. The systematic uncertainties are taken into account by convolving the likelihood function with a Gaussian whose width corresponds to the total systematic uncertainty. We study the systematic uncertainties due to imperfect modeling of the signal PDFs by varying the relevant parameters by their uncertainties, derived from the consistency of fits to data and control samples (the resulting systematic uncertainty on the signal yield varies from 0.6 to 4.1 events, depending on the final state). The uncertainty due to the bias correction is taken as the sum in quadrature of half the correction itself and its statistical uncertainty (0.4-7.5 events). We vary the yield of the  backgrounds by $\pm 50\%$ (the resulting uncertainty is 0.1-8.5 events) and the yield of the $S$-wave $K\pi$ component by the larger of $\pm 100\%$ of the extrapolated yield and its statistical uncertainty (0.2-14.3 events). The asymmetric uncertainty associated with  is estimated by taking the difference in the measured ${\cal B}$ between the nominal fit ($\fL = 0.5$) and the maximum and minimum values found in the scan along the range $[0,\,1]$. We divide these values by $\sqrt{3}$, motivated by our assumption of a flat prior for  in its physical range; this is one of the largest sources of systematic uncertainty, ranging from 0.1 to 3.6 .
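The 90% C.L. upper-limit construction described above can be sketched numerically. The sketch uses a Gaussian stand-in for the likelihood curve and naively combines statistical and systematic errors in quadrature, which is a simplification of the convolution used in the analysis, so it will not reproduce the published limits exactly:

```python
import numpy as np

def upper_limit_90(mu, sigma, n_grid=200001, span=12.0):
    """B_UL such that 90% of the likelihood integral over B >= 0 lies
    below it; 'mu' is the central value (may be negative) and 'sigma'
    the width of a Gaussian stand-in for the likelihood curve."""
    b = np.linspace(0.0, max(mu, 0.0) + span * sigma, n_grid)
    like = np.exp(-0.5 * ((b - mu) / sigma) ** 2)
    cdf = np.cumsum(like)
    cdf /= cdf[-1]                      # normalize over the physical region
    return b[np.searchsorted(cdf, 0.90)]

# E.g. a central value of -1.8 with 0.5 (stat) and 1.0 (syst) errors,
# combined in quadrature here only for illustration:
ul = upper_limit_90(-1.8, np.hypot(0.5, 1.0))
```

Constraining the integral to the physical region is what keeps the limit meaningful even when the fitted central value is negative.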
Another large source of uncertainty is imperfect knowledge of the SXF fraction; based on studies of control samples performed in similar analyses, we assign a 5% multiplicative systematic uncertainty on the SXF fraction (relative to correctly reconstructed signal) for each $\piz$ in the final state. Other uncertainties arise from the reconstruction of charged particles (0.4% per track), $\KS$ (1.5%), and $\piz$ mesons (3% for $\piz$); the uncertainty in the number of $B$ mesons is 1.1%. In summary, we present a search for decays of $B$ mesons to $\bone\rho$ and $\bone\Kst$ final states. We find no significant signals and determine upper limits at 90% C.L. between [$1.4$]{} and [${\ensuremath{8.0}\xspace}\times 10^{-6}$]{}, including systematic uncertainties. Though these results are in agreement with the small predictions from naïve factorization calculations [@CMV], they are much smaller than the predictions from the more complete QCD factorization calculations . The fact that the branching fractions for these $AV$ modes are smaller than our previously measured $AP$ modes [@babar_b1P] is surprising given that the opposite is expected based on the ratio of the vector and pseudoscalar decay constants. pubboard/acknow\_PRL [99]{}  Collaboration, B. Aubert , , 171802 (2003); Belle Collaboration, K.F. Chen , , 201801 (2003). Belle Collaboration, K. Abe , , 141801 (2005);  Collaboration, B. Aubert , , 201801 (2006).  Collaboration, B. Aubert , , 052005 (2009). C.W. Bauer , , 054015 (2004); P. Colangelo, F. De Fazio, and T.N. Pham, , 291 (2004); A.L. Kagan, , 151 (2004); M. Ladisa , , 114025 (2004); H. Y. Cheng, C. K. Chua, and A. Soni, , 014030 (2005); H.-n. Li and S. Mishima, , 054025 (2005); H.-n. Li, , 63 (2005). A. K. Giri and R. Mohanta, , 014008 (2004); E. Alvarez , , 115014 (2004); P. K. Das and K. C. Yang, , 094002 (2005); C.-H. Chen and C.-Q. Geng, , 115004 (2005); Y.-D. Yang, R. M. Wang and G. R. Lu, , 015009 (2005); A. K. Giri and R. Mohanta, , 249 (2005); S. 
Baek , , 094008 (2005); W. Zou and Z. Xiao, , 094026 (2005); Q. Chang, X.-Q. Li, and Y. D. Yang, , 038 (2007). G. Calderon, J. H. Munoz, and C. E. Vera, , 094019 (2007). H.-Y. Cheng and K.-C. Yang, , 094001 (2008).  Collaboration: B. Aubert , , 241803 (2007); , 011104(R) (2008). H.-Y. Cheng and K.-C. Yang, , 114020 (2007).  Collaboration: B. Aubert , , 031104 (2006); arXiv:0808.0579 \[hep-ex\] (2008). C. Amsler , , 1 (2008).  Collaboration: B. Aubert , , 1 (2002). M. Jacob and G. C. Wick, , 774 (2000) (originally [*op. cit.*]{}, [**7**]{}, 404 (1959)). S. Brandt , , 57 (1964); E. Farhi, , 1587 (1977).  Collaboration, B. Aubert , , 171803 (2007). The  detector Monte Carlo simulation is based on GEANT4 \[S. Agostinelli , , 250 (2003)\] and EvtGen \[D. J. Lange, , 152 (2001)\]. LASS Collaboration, D. Aston , , 493 (1988).  Collaboration, B. Aubert , , 072003 (2005); [**74**]{}, 099903(E) (2006). ARGUS Collaboration, H. Albrecht , , 278 (1990).
--- abstract: | We report the discovery of X-ray emission from NGC 7027, a prototypical object for the study of the formation and evolution of planetary nebulae (PNs). Observations with the Advanced CCD Imaging Spectrometer (ACIS) aboard the Chandra X-ray Observatory show that the X-ray emission from NGC 7027 is extended and is bipolar in morphology. The ACIS spectrum displays strong emission from highly ionized Ne and weaker emission features which we attribute to O, Mg, and Si. Model fits to this spectrum suggest a characteristic temperature $T_x \sim 3\times10^6$ K and an intrinsic (unabsorbed) X-ray luminosity of $L_x \sim 1.3\times10^{32}$ ergs s$^{-1}$. The intranebular absorption of X-ray emission is highly nonuniform, but the modeling indicates an average column density $N_H \sim 6\times10^{21}$ cm$^{-2}$, consistent with previous measurements of relatively large visual extinction within the nebula. We suggest that the X-ray emission from NGC 7027 is or was generated by a hitherto undetected fast wind from the central star of NGC 7027, or from a companion to this star. Chandra’s detection of extended, high-temperature X-ray emission from BD +30$^\circ$ 3639, NGC 6543, and now NGC 7027 suggests that such emission is a common feature of young planetary nebulae. author: - | Joel H. Kastner\ Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 54 Lomb Memorial Dr., Rochester, NY 14623; jhk@cis.rit.edu\ Saeqa D. 
Vrtilek\ Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138; saku@cfa.harvard.edu\ Noam Soker\ Department of Physics, University of Haifa at Oranim, Oranim, Tivon 36006, ISRAEL; soker@physics.technion.ac.il title: 'Discovery of Extended X-ray Emission from the Planetary Nebula NGC 7027 by the Chandra X-ray Observatory ' --- Introduction ============ The shaping of planetary nebulae (PNs) is a topic of considerable contemporary interest in astronomy (Kastner, Soker, & Rappaport 2000a). It has long been understood that a PN is the ejected envelope of an expired red giant star, which has been subsequently ionized and accelerated by the combination of UV radiation and fast winds from the emerging white dwarf that was the core of the former star. However, despite the appeal and widespread acceptance of the so-called “interacting winds” model of PN evolution (Kwok, Purton, & Fitzgerald 1978), crucial details of the morphologies and kinematics of many PNs do not appear to be explained by such a model (Frank 2000; Soker 2000). The early stages of evolution of PNs (and/or proto-PNs) appear to hold the key to an understanding of the mechanisms ultimately responsible for shaping these objects (e.g., Sahai & Trauger 1998). The nearby (distance $880\pm150$ pc; Masson 1989), well-studied NGC 7027 represents a particularly intriguing and important object in this regard. It is evidently a young PN (dynamical age $\sim600$ yr; Masson 1989) and displays a remarkably complex morphology. Optical imaging by the Hubble Space Telescope (Bond et al. 1997; Ciardullo et al. 1999) reveals a bright, compact ($\sim 5''$ radius) core region encircled by concentric rings of reflection nebulosity that extend to $15''$ in radius (Fig. 1a).
The core region clearly suffers a large degree of spatially irregular (clumpy) extinction in the optical whereas, in the near-infrared (Kastner et al.1994; Latter et al. 2000, hereafter L2000) and radio (Masson 1989), the ionized region is revealed to be an elliptical shell. Surrounding this shell, but largely interior to the system of concentric rings, is a photodissociation region with a remarkable cloverleaf or double-ring morphology that is best seen in near-infrared H$_2$ and PAH emission (Graham et al. 1993; Kastner et al. 1996; L2000). Further complicating this picture is a system of at least two pairs of lobes protruding out from the main photodissociation region. L2000 speculate that these features, which can be seen both in H$_2$ emission and in reflection (Fig. 1a), are formed by fast, well-collimated outflows impinging on the relatively slowly expanding ($V_{\rm exp} \sim 20$ km s$^{-1}$) shell of molecular gas surrounding the photodissociation region. If such fast, collimated flows are present, then wind interactions and/or magnetic fields may be important in shaping this young PN (Blackman, Frank, & Welch 2000). To investigate this possibility, we used the Chandra X-ray Observatory to observe NGC 7027, with the goal of detecting extended, high-temperature gas within this PN. According to theory (e.g., Mellema & Frank 1995; Soker 1994), very hot ($>10^6$ K) gas is likely to be present if interacting winds or magnetic fields play an important role in sculpting a PN. Chandra imaging has already demonstrated the presence of an asymmetric “bubble” of hot ($\sim3\times10^6$ K) gas in BD +30$^\circ$3639 (Kastner et al. 2000b, hereafter KSVD) and an elongated “bubble” of X-ray emitting plasma in NGC 6543 (Chu et al. 2001). Whereas X-rays were detected from both BD +30$^\circ$3639 and NGC 6543 prior to the Chandra observations of these nebulae, NGC 7027 had not been detected previously in X-ray emission. 
Given the many similarities between BD +30$^\circ$3639 and NGC 7027, however, we anticipated that the latter also should be an X-ray source. Observations and Data Reduction =============================== Chandra observed NGC 7027, with ACIS as the focal plane instrument, on 2000 June 1. The duration of the observation was 18.2 ks. The Science Instrument Module was translated and the telescope was pointed such that the telescope boresight was positioned near the center of the spectroscopy CCD array (ACIS-S) and the coordinates of the PN fell on the central back-illuminated CCD (device S3). The ACIS-S3 pixel size is $0.49''$, similar to the spatial resolution of Chandra’s High Resolution Mirror Assembly. The Chandra X-ray Center (CXC) carried out standard pipeline processing on the raw ACIS event data, producing an aspect-corrected, bias-subtracted, graded, energy-calibrated event list, limited to grade 02346 events (ASCA system). From this list, we constructed a broad-band (0.3–10.0 keV) image. To improve its signal to noise ratio, the image was convolved with a Gaussian function with full-width at half maximum of 2.0 pixels, such that the resultant image (Fig. 1b) has an effective resolution of $\sim1''$. [*The image in Fig. 1b represents the first detection of X-rays from this well-studied PN.*]{} The J2000 coordinates of the center of the X-ray nebula (as determined from the CXC-processed event list), RA $=$ 21$^h$07$^m$01.75$^s$, dec $=$ +42$^\circ$14$'$10.0$''$ ($\pm$0.5$''$), are the same, to within the uncertainties, as the coordinates of the optical nebula (as listed in the SIMBAD[^1] database). We also extracted the ACIS pulse height spectrum of NGC 7027. To do so, we used CXC software to construct a histogram of pulse heights for events contained within a circle of radius 20 pixels ($\sim10''$), an aperture judged to include all of the X-ray flux from NGC 7027. The broad-band (0.2–3 keV) ACIS-S3 count rate within this aperture was 0.014 counts sec$^{-1}$. 
The background count rate in a nearby, off-source region of equivalent area was $\sim0.001$ counts sec$^{-1}$ (0.2–3 keV). The ACIS-S3 count rate is consistent with the non-detection of NGC 7027 by ROSAT All-Sky Survey (RASS) observations in that, adopting a plausible X-ray emission model (§3.2), the ACIS count rate implies a ROSAT Position-Sensitive Proportional Counter (PSPC) count rate of $\sim2\times10^{-3}$ counts sec$^{-1}$, well below the typical sensitivity limits of both the bright and faint source catalogs (Voges et al. 1999, 2000). Discussion ========== X-ray Image ----------- The Chandra image of NGC 7027 reveals that the nebula is clearly extended in X-rays, and shows a distinct butterfly morphology (Fig. 1b). While this morphology differs sharply from that of the clumpy, more or less elliptical nebula seen in the optical, the emitting region distributions correspond in several key respects (Fig. 1). First, the narrow “waist” just southeast of the center of the X-ray nebula — which appears to divide the nebula into a bright, northwest lobe and a fainter, southeast lobe – corresponds to a conspicuous “dark lane” located just southeast of the geometric center of the optical nebula. The optical dark lane marks the equatorial plane of the system or, perhaps, a ring of neutral material at high latitude (L2000) that is seen in absorption against the optical nebula. This “sense of perspective” is confirmed by the kinematics of molecular hydrogen emission from NGC 7027, which indicate that the NW side of the nebula is pointed toward the observer and the SE side is pointed away (L2000). Second, the brighter (NW) X-ray lobe is located on the side of the optical nebula that is closer to the observer. Third, the X-ray brightness peak located along the western edge of the NW lobe corresponds to the brightness peak of the optical nebulosity. 
Finally, the principal direction of alignment of the X-ray emission (NW-SE) follows that of projectile-like protrusions that appear in scattered light in the HST/WFPC2 color composite (Fig. 1a). To better illustrate this last correspondence, we show in Fig. 2 an overlay of contours of X-ray emission on a near-infrared (2.12 $\mu$m) H$_2$ $+$ continuum image obtained by HST/NICMOS (L2000). The images have been registered such that the compact source of X-ray emission near the center of the Chandra image coincides with the position of the central star (see below). This registration requires the application of offsets of $+1.5''$, $+0.5''$ (in RA, Decl.) to the Chandra image, which are well within the absolute pointing uncertainties of Chandra. The overlay demonstrates that the X-ray emission is largely contained within the central, elliptical shell of bright nebulosity seen in the near-infrared. However, the brightest X-ray emission is detected along those directions extending from the central star toward the outermost H$_2$ filaments, especially along the direction toward the H$_2$ feature that extends 10$''$ to the northwest of the central star (axis 1 in Fig. 3 of L2000). The image overlay in Fig. 2 indicates the possible presence of X-ray emission from the vicinity of the central star. However, the image alignment in Fig. 2, while suggestive, is not unique and cannot be used to conclude that the central star is an X-ray source. Such emission, if present, is not nearly so prominent as in the case of the central star of NGC 6543 (Chu et al. 2001). X-ray Spectroscopy ------------------ The Chandra pulse height spectrum of NGC 7027 shows that almost all detected photons have energies between $\sim0.2$ keV and $\sim 2.5$ keV (Fig. 3). The spectrum, which peaks at $\sim1$ keV, is evidently somewhat harder than that of BD $+30^\circ$ 3639, which peaks at $\sim 0.5$ keV (KSVD).
However, the spectra of both young PNs share a prominent feature at $\sim0.9$ keV, which is likely due to a blend of Ne lines. The ACIS spectrum of NGC 7027 also displays weak features at $\sim0.6$ keV, $\sim1.3$ keV and $\sim1.8$ keV, which we tentatively attribute to emission lines of O, Mg, and Si, respectively. Guided by these identifications, we performed fits of a variable-abundance plasma emission model (the VMEKAL model) using the CXC’s Sherpa software (v1.1). The results, though highly uncertain due to the relatively small number of counts ($\sim250$) in the spectrum, indicate an approximate emitting region temperature $T_x \sim 3\times10^6$ K and volume emission measure $\sim 2\times10^{54}$ cm$^{-3}$. The temperature is reasonably well constrained by the shape of the Ne feature, which suggests that emission from the Ne IX complex at 13.5 Å (0.89 keV) is stronger than that from the Ne X line at 12.1 Å (1.0 keV). The fit suggests the X-ray emitting region has near-solar abundances of O and Ne, and overabundances of He, C, N, Mg, and Si. As in the case of BD $+30^\circ$ 3639 (KSVD), no Fe emission is evident in the ACIS spectrum of NGC 7027. The intervening absorbing column derived from the fitting is $N_H \sim 6 \times 10^{21}$ cm$^{-2}$, which is in good agreement with measurements of visual extinction within the nebula (e.g., $A_V = 2.97$ mag; Robberto et al. 1993). It is apparent, however, that the absorption is highly nonuniform. In Fig. 4 we present spectra extracted for $5''\times5''$ square regions encompassing the NW and SE X-ray lobes of the nebula. The spectrum of the fainter SE lobe is evidently harder than that of its brighter NW counterpart, suggesting considerably larger intervening absorption toward the SE lobe. Thus, the above result for $N_H$ represents the average absorbing column toward the dominant emission source, i.e., the NW lobe.
Adopting the model results for mean $N_H$ and for $T_x$, we derive a total observed flux of $F_x = 3.1 \times 10^{-14}$ ergs cm$^{-2}$ s$^{-1}$ and total unabsorbed (intrinsic) source luminosity of $L_x = 1.3 \times 10^{32}$ ergs s$^{-1}$. Conclusions =========== The detection by Chandra of extended, high-temperature X-ray emission in the central regions of BD +30$^\circ$ 3639 (KSVD), NGC 6543 (Chu et al. 2001), and now NGC 7027 suggests that many or even most young planetary nebulae may harbor very high temperature inner regions. Such emitting regions, including that in NGC 7027, likely have escaped detection by previous X-ray telescopes due to a combination of their compact source sizes and obscuration along our lines of sight. Indeed, the values of $T_x$, emission measure, and $L_x$ derived for NGC 7027 (§3.2) are very similar to those found for BD $+30^\circ$ 3639 (KSVD). Given this similarity, and the similar apparent X-ray emitting volumes of the two PNs, it is apparent that the large difference in their Chandra/ACIS-S count rates is due primarily to the larger absorbing column characterizing the emission from NGC 7027. Furthermore, the correspondence between the optical and X-ray morphologies of NGC 7027 (Fig. 1), and the sharp differences between the surface brightnesses and spectral energy distributions of the two X-ray lobes (Fig. 4), strongly suggest that the “patchy” X-ray appearance of NGC 7027 is determined in large part by foreground extinction in the nebula itself. We conclude, therefore, that the same mechanism is responsible for the X-ray emission from both NGC 7027 and BD $+30^\circ$ 3639, and that the differences between their X-ray spectra and morphologies are largely a result of viewing angle. That is, the two nebulae have very similar intrinsic structures — prolate ellipsoidal shells with multiple protrusions along specific directions at high latitude — but BD $+30^\circ$ 3639 is viewed more nearly pole-on than NGC 7027.
This interpretation is consistent with the measurement of large molecular outflow velocities in BD $+30^\circ$ 3639 (Bachiller et al. 2000) and with the smaller visual extinction measured toward its central star ($A_V \sim 0.75$ mag; Leuenhagen, Hamann, & Jeffery 1996). It is very likely that the X-ray emission from both PNs originates in shocks formed by the collision of fast outflows from the central star(s) with slower-moving material within the elliptical shells detected in radio, infrared, and optical imaging. A similar mechanism likely explains the X-ray bubble that fills the central region of NGC 6543 (Chu et al. 2001). Both BD $+30^\circ$ 3639 and NGC 6543 present evidence for fast winds, with velocities of $v_f = 700$ km s$^{-1}$ (Leuenhagen et al.  1996) and $v_f = 1700$ km s$^{-1}$ (Perinotto, Cerruti-Sola, & Lamers 1989), respectively; no such fast wind has been detected in NGC 7027. One prediction of the foregoing model, therefore, is that the central star of NGC 7027 — or a companion to this central star (Soker & Rappaport 2000 and references therein) — drives a fast wind. If this fast wind were highly collimated it would explain simultaneously the “patchy” X-ray emission morphology of the nebula as well as its outer loops of H$_2$ emission (L2000). For the X-ray emitting gas to be shocked to a temperature of $3\times10^6~{\rm K}$ the minimum preshock fast wind velocity would be $\sim 400~{\rm km}~{\rm s}^{-1}$. The electron density $n_e$ can be obtained from the emission measure ${\rm EM} \equiv n_e n_p V_x \simeq 2 \times 10^{54}$ cm$^{-3}$, where $n_p$ is the proton density and $V_x$ the volume of the X-ray emitting gas. Assuming that the X-ray emitting gas occupies half the volume of the inner cavity (Kastner et al. 2001), we estimate $V_x=10^{50}$ cm$^3$. This suggests an average electron density $n_e \simeq 150$ cm$^{-3}$. 
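Two of the numbers above can be reproduced with back-of-the-envelope arithmetic. The strong-shock relation and the mean molecular weight $\mu \approx 0.6$ below are standard assumptions introduced here, not values from the text, so the velocity threshold agrees only to within the order-unity uncertainty in $\mu$:

```python
import numpy as np

k_B = 1.380649e-16   # Boltzmann constant, erg/K
m_H = 1.6726e-24     # hydrogen mass, g

def strong_shock_T(v_kms, mu=0.6):
    """Postshock temperature T = (3/16) mu m_H v^2 / k_B for a strong
    adiabatic shock; mu = 0.6 is an assumed mean molecular weight."""
    v = v_kms * 1e5  # km/s -> cm/s
    return 3.0 * mu * m_H * v ** 2 / (16.0 * k_B)

# A 400 km/s wind shocks to ~2e6 K for mu = 0.6, i.e. the few-times-10^6 K
# regime quoted in the text (the exact threshold depends on mu).
T_400 = strong_shock_T(400.0)

# Electron density from EM = n_e n_p V_x with n_e ~ n_p:
EM, V_x = 2e54, 1e50          # cm^-3 and cm^3, values from the text
n_e = np.sqrt(EM / V_x)       # ~140 cm^-3, matching the quoted ~150 cm^-3
```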
The cooling time for gas at $3\times10^6~{\rm K}$ is then given by $$t_{\rm cool} \simeq 7000 \left( \frac {n_e}{150~{\rm cm}^{-3}} \right)^{-1} {\rm yr},$$ which, given the derived electron density, is much longer than the dynamical age of the nebula ($\sim 600~{\rm yr}$; Masson 1989). So, if the present wind speed is well in excess of $\sim 400~{\rm km}~{\rm s}^{-1}$, as would be expected for the present central white dwarf (see below), this would suggest heat conduction along magnetic field lines and/or mixing of the fast wind with nebular material acts to moderate the temperature of the X-ray emitting region (KSVD and references therein; Chu et al. 2001). The total mass of hot ($T \sim 3\times10^6$ K) gas implied by our observations and modeling, $\sim 10^{-5} M_\odot$, suggests a duration of only $\sim100$ yr for the episode of mass loss via a fast wind — less than the dynamical age of the nebula — assuming a mass loss rate typical of the central stars of young PNs (i.e., $\sim 10^{-7} M_\odot ~{\rm yr}^{-1}$) and that the X-ray-emitting gas is dominated by fast wind (rather than nebular) material. The presence of a luminous source of X-rays within NGC 7027 raises the intriguing possibility that this emission is ultimately responsible for the excitation of its infrared H$_2$ line emission (e.g., Gredel & Dalgarno 1995) and, perhaps, for the presence of certain key species in its molecular envelope (e.g., HCO$^+$; Deguchi et al. 1990). In a subsequent paper (Kastner et al. 2001), we further pursue these and other suggestions raised in this paper and in KSVD. Bachiller, R., Forveille, T., Huggins, P. J., Cox, P., & Maillard, J. P. 2000, A&A, 353, L5 Blackman, E., Frank, A., & Welch, C. 2001, ApJ, 546, 288 Bond, H. E., Fullton, L. K., Schaefer, K. G., Ciardullo, R., & Sipior, M. 1997, in IAU Symp. 180, Planetary Nebulae, ed. H. J. Habing & H. J. G. L. M. 
Lamers (Dordrecht: Kluwer), 211 Ciardullo R., Bond H.E., Sipior M.S., Fullton L.K., Zhang C.-Y., & Schaefer K.G. 1999, AJ, 118, 488 Chu, Y.-H., Guerrero, M.A., Gruendl, R.A., Williams, R.M., & Kaler, J.B. 2001, ApJL, in press Deguchi, S., Izumiura, H., Kaifu, N., Mao, X., Nguyen-Q-Rieu, Ukita, N. 1990, ApJ, 351, 522 Frank, A. 2000, in “Asymmetrical Planetary Nebulae II: From Origins to Microstructures,” eds. J.H. Kastner, N. Soker, & S. Rappaport, ASP Conf. Ser. Vol. 199, p. 225 (astro-ph/9912191) Graham, J. R., Serabyn, E., Herbst, T. M., Matthews, K., Neugebauer, G., Soifer, B. T., Wilson, T. D., & Beckwith, S. 1993, AJ, 105, 250 Gredel, R., & Dalgarno, A. 1995, ApJ, 446, 852 Kastner, J. H., Gatley, I., Merrill, K. M., Probst, R., & Weintraub, D. A. 1994, ApJ, 421, 600 Kastner, J.H., Soker, N., & Rappaport, S., eds. 2000a, “Asymmetrical Planetary Nebulae II: From Origins to Microstructures,”, ASP Conf. Ser. Vol. 199 Kastner, J.H., Soker, N., Vrtilek, S.D., & Dgani, R. 2000b, ApJL, 545, L57 (KSVD) Kastner, J.H., Soker, N., & Vrtilek, S.D. 2001, in preparation Kastner, J. H., Weintraub, D. A., Gatley, I., Merrill, K. M., & Probst, R. 1996, ApJ, 462, 777 Kwok, S., Purton, C. R., & Fitzgerald, P. M. 1978, ApJ, 219, L125 Latter, W. B., Dayal, A., Bieging, J. H., Meakin, C., Hora, J. L., Kelly, D. M., Tielens, A. G. G. M. 2000, ApJ, 539, 783 (L2000) Leuenhagen, U., Hamann, W.-R., & Jeffery, C. S. 1996, A&A, 312, 167 Masson, C. R. 1989, ApJ, 336, 294 Mellema, G. & Frank, A. 1995, MNRAS, 273, 401 Perinotto, M., Cerruti-Sola, M., Lamers, H.J.G.L.M. 1989, ApJ, 337, 382 Robberto, M., Clampin, M., Ligori, S., Paresce, F., & Staude, H. J. 1993, A&A, 280, 241 Sahai, R., & Trauger, J.T. 1998, AJ, 116, 1357 Soker, N. 1994, AJ, 107, 276. Soker, N. 2000, in “Asymmetrical Planetary Nebulae II: From Origins to Microstructures,” eds. J.H. Kastner, N. Soker, & S. Rappaport, ASP Conf. Ser. Vol. 199, p. 71 (astro-ph/9909258) Soker, N., & Rappaport, S. 2000, ApJ, 538, 241 Voges, W. 
et al. 1999, A&A, 349,38 Voges, W. et al. 2000, http://www.xray.mpe.mpg.de/rosat/survey/rass-fsc/ Figure Captions {#figure-captions .unnumbered} --------------- Figure 1. : Comparison of [*a)*]{} Hubble Space Telescope (HST) optical (Ciardullo et al.1999) and [*b)*]{} Chandra X-ray Observatory images of NGC 7027. The HST color composite was generated from images obtained through filters F555W (5550 Å) and F814W (8140 Å). Figure 2. : Overlay of Chandra X-ray Observatory image of NGC 7027 (contours; levels at 3, 5, 7, and 10 counts per pixel integrated over the 18.2 ksec observation) on a Hubble Space Telescope near-infrared image, obtained at 2.12 $\mu$m (Latter et al. 2000). The HST image is dominated by H$_2$ rovibrational emission, but also includes contributions from continuum and He I emission. Figure 3. : [*Top:*]{} Chandra/ACIS spectrum of NGC 7027 (solid histogram), with “best-fit” model overlaid (dotted curve). The spectrum is presented for a bin size of 52 eV. [*Bottom:*]{} Residuals of the fit, in units of the measurement uncertainty for each spectral bin. Figure 4. : Comparison of spectra extracted for NW and SE X-ray lobes of NGC 7027. Both spectra are presented for a bin size of 52 eV. [^1]: http://simbad.u-strasbg.fr/Simbad
--- abstract: | We investigate the possible occurrence of extra spatial dimensions ($D = 3+\epsilon$) in the early universe. A detailed calculation is presented which shows that the crucial signal is the apparent inequality of the cosmological $Z$-term between matching Lyman alpha (Ly$_{\alpha}$) and Lyman beta (Ly$_{\beta}$) spectral lines, both emission and absorption, when using the present epoch (laboratory) wavelengths. We present preliminary upper limits to the value of epsilon, to be improved by direct, more careful analysis of the spectra. We take catalogued quasar Ly$_{\alpha}$ forest data and perform Student’s t-test to determine whether we should reject the null hypothesis (no fractal dimensions). Finally, a $\chi^{2}$ analysis is done for fitting $\epsilon$ in the early universe. The statistical tests and experimental data are all consistent with $\epsilon = 0$ for $Z \leq 3.3$, but the experimental data support non-zero $\epsilon$ values for $Z \geq 4$. However, it should be emphasized that the non-zero values of epsilon found for $Z \geq 4$ may be due to undiscovered systematic errors in the original data. --- USM-TH-83 **Extra Dimensions in the Early Universe** Douglas J. Buettner\ XonTech Corporation\ 6862 Hayvenhurst Avenue, Van Nuys, CA 91406 P.D. Morley\ EIS International\ 555 Herndon Parkway, Herndon, VA 20170 Ivan Schmidt\ Department of Physics, Universidad Técnica Federico Santa María\ Casilla 110-V, Valparaíso, Chile PACS: 98.62.Ra, 11.10.Kk, 98.62.Py, 98.80.Es (To be published in Physics Letters B.) Many modern physical theories predict [@KK] that space has more than 3 spatial dimensions, some of which would reveal themselves only at small distances. This means that because of an expanding universe, the effective dimension of space may be a time-dependent parameter. In fact, the dimension of space is an experimental quantity [@AZ], whose present epoch value and past distant value may be different.
Since we are interested in the possibility that the early universe had fractal ($D = 3+\epsilon$ with $\epsilon \ll 1$) spatial dimensions, the natural place for investigation is in the spectra of distant ($Z = \frac{\Delta \lambda}{\lambda}> 2.5$) quasars. In this regard, the quasar’s Lyman spectra and the existence of spectroscopic Ly$_{\alpha}$ forests [@MR] provide an ideal opportunity to probe the fractal nature of early universe space on an atomic length scale, provided we can identify a suitable experimental signal. Past researchers have investigated the possibility of a time-varying fine structure constant ($\alpha$), using the hyperfine multiplets of absorption lines in quasar ionized iron and magnesium spectra, and obtaining an upper bound of $\delta \alpha / \alpha < 1.1 \times 10^{-5}$ [@JKW]. In this paper we use ancient quasar light to probe the dimension of space in the early universe, specifically matching (same $Z$) Ly$_{\alpha}$ and Ly$_{\beta}$ hydrogen lines. As far as we know, both the main idea of this paper and the specific matching lines technique have not been considered before. Therefore we discuss in detail the errors involved, which are both experimental and theoretical. To lowest order in quantum corrections, the atomic electromagnetic potential is the solution of the Poisson equation, whose form depends on the spatial dimension. In [@BM] the D(spatial)-dimensional Schrödinger equation is derived. From a mathematical point-of-view [@DH], expectation values in quantum mechanics can be analytically continued into continuous dimensions and it becomes meaningful to construct a Taylor’s expansion for $D = 3+\epsilon$ with $\epsilon \ll 1$ $$<|H|>\big|_{D=3+ \epsilon} =<|H|>\big|_{D=3} + \frac{d <|H|> }{d D}\big|_{D=3}\epsilon + \cdots \; .$$ Reference [@BM] has proven the generalized Hellmann-Feynman theorem $$\frac{d <|H|> }{d D}\big|_{D=3}=<| \frac{\partial H}{\partial D}\big|_{D=3} |>$$ where $H$ is the D-dimensional Hamiltonian.
An interesting aspect of fractal dimensions is that the electric charge $e$ becomes a dimensional constant. The scaling is $e \sim l^{(D-3)/2}_{o}$. Thus a length parameter $l_{o}$ enters into the problem. In discussing the present-epoch Lamb shift, reference [@BM] has shown that if this length parameter is not very much smaller than the Planck length ($10^{-33}$ cm), then it makes a negligible contribution to atomic energy levels. Using eqn. (2) we have calculated the shift in energy levels due to the first order $\epsilon$ contributions to atomic hydrogen, and these are given in Table 1. The atomic energy levels are $$E(nl) = E(nl)|_{laboratory} + \Delta E(nl) \times \epsilon \; .$$ From these energy levels we obtain the Ly$_{\alpha}$ ($\alpha$) and Ly$_{\beta}$ ($\beta$) rest frame transition wavelengths $$\lambda _{\alpha }=\lambda _{\alpha }^{lab}+a\,\epsilon, \quad \; {\rm and}\quad \; \lambda _{\beta }=\lambda _{\beta }^{lab}+b\,\epsilon \; ,$$ where $\lambda_{\alpha}^{lab} = 1215.67$ Å and $\lambda_{\beta}^{lab} = 1025.72$ Å are the laboratory (present epoch) wavelengths, and $a=1418.27$ Å and $b=1111.18$ Å are the corresponding shifts due to the extra dimensions. Equation (4) demonstrates that a fractal dimension of space affects the two primary cosmological transitions differently. From the physics viewpoint, the kinetic energy and centripetal terms in the D-dimensional Schrödinger equation are most sensitive to the presence of fractal dimensions and make the atomic energy levels ideal indicators of additional space-time dimensions.

  state vector $|nl(j)>$          $\Delta E(nl)$
  ----------------------------- ----------------
  $|1s(1/2)>$                                1/2
  $|2s(1/2)>$                               1/16
  $|2p(1/2)>$ and $|2p(3/2)>$               1/16
  $|3p(1/2)>$ and $|3p(3/2)>$               1/54
  Lyman alpha                              -7/16
  Lyman beta                              -13/27

  : First-order energy shifts $\Delta E(nl)$ for atomic hydrogen.

The effect of a non-zero fractal dimension is as follows.
When measurements are taken of matching (same hydrogen cloud) Ly$_{\alpha}$ and Ly$_{\beta}$ lines (both absorption and emission), it will not be possible to obtain the same $Z$ shift using present epoch (i.e. laboratory) rest wavelengths. Conversely, since matching Ly$_{\alpha}$ and Ly$_{\beta}$ lines must have the same cosmological $Z$, the measurement of the two red-shifted lines allows us to obtain the original early universe rest frame frequencies, and determine statistically whether early universe fractal dimensions exist. By equating $Z$ and $\epsilon$ for the two transitions, we obtain two equations in the two unknowns $\lambda_{\alpha}$, $\lambda_{\beta}$ which are the original early universe rest frame transitions. From either solution, using eqns. (4), $\epsilon$ can be deduced: $$\epsilon = \frac{\lambda_{\alpha}^{obs}}{a} \frac{[ \frac{\lambda_{\alpha}^{lab}}{a} - \frac{\lambda_{\beta}^{lab}}{b}]} {[ \frac{\lambda_{\alpha}^{obs}}{a} - \frac{\lambda_{\beta}^{obs}}{b} ]} - \frac{\lambda_{\alpha}^{lab}}{a}$$ where $\lambda_{\alpha}^{obs}$, $\lambda_{\beta}^{obs}$ are the respective Ly$_{\alpha}$, Ly$_{\beta}$ measured red-shifted transition wavelengths, and $\lambda_{\alpha}^{lab}$, $\lambda_{\beta}^{lab}$ are the laboratory measured wavelengths. In order to obtain the unknown quasar rest frame transitions from the measured cosmological red shifted lines, one must have strict equality of the $Z$ factors. By choosing matching Ly$_{\alpha}$ and Ly$_{\beta}$ lines, this is assured. Other spectra, such as metal ions, have narrower lines than the hydrogen Lyman, but finding two matching transitions in the same element is difficult. Using two different elements will introduce uncertainty as to the equality of the two $Z$. Finally, computing the epsilon expansion coefficients for elements other than hydrogen involves a considerable effort, with greater error in the theoretical $a$ and $b$ values compared to hydrogen. 
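To make the pipeline concrete, the following sketch (our own illustration, not code from the paper) first re-derives the expansion coefficients $a$ and $b$ of eqn. (4) from the Table 1 shifts, assuming the $\Delta E(nl)$ values are in Hartree atomic units (since $\lambda \propto 1/E$, a first-order shift $dE$ in a transition energy $E$ gives $d\lambda = -\lambda^{lab}\,dE/E$), and then evaluates eqn. (5) for the Q0014+813 pair of Table 2:

```python
# Sketch (not the paper's code): reproduce the eqn. (4) coefficients a, b
# from the Table 1 energy shifts, then evaluate eqn. (5) for one line pair.
# Assumption: the Delta E(nl) values are in Hartree atomic units, so a
# first-order shift dE in a transition energy E gives dLambda = -lam*dE/E.

LAM_A_LAB, LAM_B_LAB = 1215.67, 1025.72  # lab wavelengths, Angstrom

# Hydrogen transition energies (Hartree) and Table 1 shifts per unit epsilon.
E_LY_A, DE_LY_A = 1/2 - 1/8, -7/16    # 1s -> 2p (Lyman alpha)
E_LY_B, DE_LY_B = 1/2 - 1/18, -13/27  # 1s -> 3p (Lyman beta)

a = -LAM_A_LAB * DE_LY_A / E_LY_A  # ~1418.3 A, cf. a = 1418.27 A in the text
b = -LAM_B_LAB * DE_LY_B / E_LY_B  # ~1111.2 A, cf. b = 1111.18 A in the text

def epsilon_from_pair(lam_a_obs, lam_b_obs):
    """Eqn. (5), rewritten in cross-multiplied (algebraically identical) form."""
    return ((lam_a_obs * LAM_B_LAB - lam_b_obs * LAM_A_LAB)
            / (lam_b_obs * a - lam_a_obs * b))

# Q0014+813 from Table 2; Table 3 quotes 0.00008 using hyperfine-weighted
# lab wavelengths, so the value here agrees only to within that choice.
eps = epsilon_from_pair(5253.29, 4432.44)
print(a, b, eps)
```

The cross-multiplied form avoids the near-cancellation of the two bracketed ratios in eqn. (5) while remaining algebraically identical to it.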
Using standard error propagation, the error associated with $\epsilon$, $\delta \epsilon$, can be calculated. Writing $\Delta^{obs}=\lambda_{\alpha}^{obs}/a-\lambda_{\beta}^{obs}/b$ and $\Delta^{lab}=\lambda_{\alpha}^{lab}/a-\lambda_{\beta}^{lab}/b$ for brevity, $$\delta \epsilon^{2}=\left\{ \left[-\frac{1}{a}+\frac{\lambda_{\alpha}^{obs}}{a^{2}\,\Delta^{obs}}\right]\delta \lambda_{\alpha}^{lab}\right\} ^{2} + \left\{ \left[\frac{\lambda_{\alpha}^{obs}}{ab\,\Delta^{obs}}\right]\delta \lambda_{\beta}^{lab}\right\} ^{2}$$ $$+ \left\{ \left[-\frac{\lambda_{\alpha}^{obs}\,\Delta^{lab}}{a^{2}\,(\Delta^{obs})^{2}}+\frac{\Delta^{lab}}{a\,\Delta^{obs}}\right]\sigma _{\lambda_{\alpha}^{obs}}\right\} ^{2}$$ $$+\left\{ \left[\frac{\lambda_{\alpha}^{obs}\,\Delta^{lab}}{ab\,(\Delta^{obs})^{2}}\right]\sigma _{\lambda_{\beta}^{obs}}\right\} ^{2}$$ $$+ \left\{ \left[\frac{\lambda_{\alpha}^{lab}}{a^{2}}+\frac{(\lambda_{\alpha}^{obs})^{2}\,\Delta^{lab}}{a^{3}\,(\Delta^{obs})^{2}}-\frac{\lambda_{\alpha}^{lab}\lambda_{\alpha}^{obs}}{a^{3}\,\Delta^{obs}}-\frac{\lambda_{\alpha}^{obs}\,\Delta^{lab}}{a^{2}\,\Delta^{obs}}\right]\delta a\right\} ^{2}$$ $$+ \left\{ \left[-\frac{\lambda_{\alpha}^{obs}\,\Delta^{lab}\,\lambda_{\beta}^{obs}}{ab^{2}\,(\Delta^{obs})^{2}}+\frac{\lambda_{\alpha}^{obs}\,\lambda_{\beta}^{lab}}{ab^{2}\,\Delta^{obs}}\right]\delta b\right\} ^{2}.$$ Here $\delta\lambda^{lab}_{\alpha}$ and $\delta\lambda^{lab}_{\beta}$ are the uncertainties in the lab (present epoch) wavelengths (taken to be $0.01$ Å, since we average over the hyperfine Lyman doublet), $\sigma_{\lambda_{\alpha}^{obs}}$ and $\sigma_{\lambda_{\beta}^{obs}}$ are the standard deviations of the measured red-shifted lines, and $\delta a, \delta b$ are the theoretical uncertainties in the epsilon expansion values (taken as $\delta a/a = \alpha = \delta b/b$). This equation includes all the errors associated with the quantities determining the value of $\epsilon$, for each matching Ly$_{\alpha}$ and Ly$_{\beta}$ data pair. For example, the absorption widths of Ly$_{\beta}$ are larger than those of Ly$_{\alpha}$ due to contamination by lower redshift Ly$_{\alpha}$ lines. Thus the centroid of the line has an ambiguity, resulting in a possible non-zero $\epsilon$ for that pair, having nothing to do with $D \neq 3$.
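The analytic partial derivatives above can be cross-checked numerically; the sketch below (ours, not the paper's code) propagates the same six uncertainties through eqn. (5) with central finite differences, using the Q0014+813 entries of Table 2 and $\delta a/a = \delta b/b = \alpha$:

```python
# Cross-check of the error-propagation formula (our own sketch): propagate
# the uncertainties through eqn. (5) with central finite differences
# instead of analytic partial derivatives. Inputs are for Q0014+813.
import math

def epsilon(lam_a_obs, lam_b_obs, lam_a_lab, lam_b_lab, a, b):
    """Eqn. (5) in cross-multiplied form."""
    return ((lam_a_obs * lam_b_lab - lam_b_obs * lam_a_lab)
            / (lam_b_obs * a - lam_a_obs * b))

ALPHA_FS = 1 / 137.036  # fine structure constant, delta_a/a = delta_b/b

# (value, 1-sigma uncertainty) for each input, in the epsilon() argument order.
inputs = [
    (5253.29, 0.02),                 # lambda_alpha^obs, sigma (Table 2)
    (4432.44, 0.04),                 # lambda_beta^obs, sigma (Table 2)
    (1215.67, 0.01),                 # lambda_alpha^lab, hyperfine-averaging error
    (1025.72, 0.01),                 # lambda_beta^lab
    (1418.27, 1418.27 * ALPHA_FS),   # a, delta_a
    (1111.18, 1111.18 * ALPHA_FS),   # b, delta_b
]

def delta_epsilon(inputs, h=1e-4):
    """Quadrature sum of sigma_i * d(eps)/dx_i, derivatives by central differences."""
    vals = [v for v, _ in inputs]
    total = 0.0
    for i, (v, sigma) in enumerate(inputs):
        up = vals.copy(); up[i] = v + h
        dn = vals.copy(); dn[i] = v - h
        deriv = (epsilon(*up) - epsilon(*dn)) / (2 * h)
        total += (deriv * sigma) ** 2
    return math.sqrt(total)

print(delta_epsilon(inputs))  # cf. delta_epsilon = 0.00019 quoted in Table 3
```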
This spectral contamination leads to an increase in the line width, so even though $\epsilon$ picks up a non-zero contribution, so too does $\delta \epsilon$, and it is the relative magnitude of these two quantities which determines the significance of the data pair. The SIMBAD database was searched for papers having either emission or absorption spectra containing matching Ly$_{\alpha}$ and Ly$_{\beta}$ lines. Those spectra which have multiple very closely grouped Ly$_{\alpha}$ and Ly$_{\beta}$ lines, indicating the presence of closely spaced hydrogen cloud groupings, were not used due to the possible ambiguity of assigning which Ly$_{\alpha}$ line goes with which Ly$_{\beta}$ line. This non-ambiguity filter, together with the fact that very few of the many references identified in the database contained tabular identifications of both Ly$_{\alpha}$ and Ly$_{\beta}$ lines, restricted the number of matching pairs to 11, given in Table 2. In all cases, the original authors (astronomers) identified the Lyman lines.

  QSO             $\lambda^{obs}_{\alpha}$ (Å)   $\lambda^{obs}_{\beta}$ (Å)   FWHM      $\sigma_{\alpha}$ (Å)   $\sigma_{\beta}$ (Å)   ref
  --------------- ------------------------------ ----------------------------- --------- ----------------------- ---------------------- --------
  Q1206+119       4890.79                        4126.63                       50 km/s   0.3                     0.2                    [@AD]
  Q1315+472       4342.76                        3664.28                       50 km/s   0.3                     0.2                    [@AD]
  Q1315+472       4322.87                        3647.27                       50 km/s   0.3                     0.2                    [@AD]
  Q1315+472       4220.69                        3561.08                       50 km/s   0.3                     0.2                    [@AD]
  Q1623+268       4289.70                        3619.42                       50 km/s   0.3                     0.2                    [@AD]
  BR 0401-1711    6383.5                         5396.3                        5 Å       1.8                     1.8                    [@DT]
  BRI 1108-0747   6000.5                         5050.7                        5 Å       1.8                     1.8                    [@DT]
  BRI 1114-0822   6755.5                         5659.7                        5 Å       1.8                     1.8                    [@DT]
  BR 1117-1329    6130.6                         5111.2                        5 Å       1.8                     1.8                    [@DT]
  Q0010+008       4963.7                         4190.2                        3.6 Å     1.3                     1.3                    [@LJS]
  Q0014+813       5253.29                        4432.44                       0.06 Å    0.02                    0.04                   [@CJH]

  : QSOs with matching Ly$_{\alpha}$ and Ly$_{\beta}$.
Here $\lambda^{obs}_{\alpha}$ and $\lambda^{obs}_{\beta}$ are the measured redshifted transition wavelengths, FWHM is the spectral resolution of the data, and $\sigma_{\alpha}$ and $\sigma_{\beta}$ are the standard deviations. We then transferred each author’s data into Mathematica for analysis, and $\epsilon$ was computed for each matched pair using (5). We used weighted averages based on the quantum mechanical intensities for the laboratory wavelengths $\lambda_{\alpha}^{lab}$, $\lambda_{\beta}^{lab}$ from the (1/2-1/2) and (1/2-3/2) transitions. Next, we computed the uncertainty for each $\epsilon$, $\delta \epsilon$, based on the listed spectral resolution from each reference and the order($\alpha$) uncertainty due to higher order quantum corrections. Table 3 lists these results.

  QSO               $<Z>$   $\epsilon$   $\delta \epsilon$
  --------------- ------- ------------ -------------------
  Q1206+119        3.0222    -0.000059             0.00095
  Q1315+472        2.5723     -0.00023              0.0011
  Q1315+472         2.556      0.00052              0.0011
  Q1315+472          2.48      0.00045              0.0011
  Q1623+268        2.5287     0.000064              0.0011
  BR 0401-1711      4.236       -0.022              0.0058
  BRI 1108-0747     3.922        0.030              0.0074
  BRI 1114-0822     4.495        0.094               0.016
  BR 1117-1329      3.958         0.17               0.030
  Q0010+008         3.076      -0.0059              0.0049
  Q0014+813          3.32      0.00008             0.00019

  : $\epsilon$ with its uncertainty $\delta \epsilon$ for each mean QSO $Z$.

These data have a mean for $\epsilon$ of 0.02436, with a standard deviation of 0.057. Figure 1 shows $\epsilon$ for each of the emission and absorption line pairs, against the mean QSO $Z$ value for the pair. We next employed Student’s t-test (10 degrees of freedom) to check the null hypothesis $\epsilon$ = 0. Computing the value for t gave 1.4056. Statistically, this means that to the level of significance of 0.05, we cannot reject $\epsilon$ = 0, but to the level of significance of 0.10, it can be rejected [@GKK].
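The quoted statistics can be reproduced from the tabulated $\epsilon$ values with the standard library alone (our sketch; small differences arise from rounding in Table 3):

```python
# Reproduce the one-sample Student's t-test on the Table 3 epsilon values
# (null hypothesis: epsilon = 0), standard library only.
import math
import statistics

eps_values = [-0.000059, -0.00023, 0.00052, 0.00045, 0.000064,
              -0.022, 0.030, 0.094, 0.17, -0.0059, 0.00008]

mean = statistics.mean(eps_values)            # paper quotes 0.02436
sd = statistics.stdev(eps_values)             # paper quotes 0.057
t = mean / (sd / math.sqrt(len(eps_values)))  # 10 degrees of freedom

print(f"mean = {mean:.5f}, sd = {sd:.3f}, t = {t:.3f}")
# One-sided critical values for 10 d.o.f.: 1.812 (5% level) and 1.372
# (10% level), so t ~ 1.41 cannot reject epsilon = 0 at 5% but can at
# 10%, as stated in the text.
```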
Finally, a $\chi^{2}$ fitting gave $\epsilon$ = 0 with a goodness-of-fit probability of 0.2867. Concerning the data, [@CJH] ($Z$ = 3.32), taken with the Keck 10 m instrument, is of the highest quality. The uncertainty in $\epsilon$, $\delta\epsilon$, due to the error measurements in the red-shifted lines is considerably smaller than $\epsilon$ itself for the data of reference [@DT]. This can be an indicator that $\epsilon$ really is non-zero for high $Z \geq 4$ objects, or that the $Z \geq 4$ data are contaminated with some undetermined systematic error. In conclusion, the statistical tests and experimental data are all consistent with $\epsilon = 0$ for $Z \leq 3.3$, but the experimental data support non-zero $\epsilon$ values for $Z \geq 4$. High spectral resolution data for $Z \geq 4$ would allow $\epsilon$ in the early universe to be better determined. [**Acknowledgments:** ]{} This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. We would also like to thank the SIMBAD staff for their assistance, and for providing one of us (DJB) with an account to search their database for high $Z$ QSOs. [99]{} See the recent review: “Early History of Gauge Theories and Kaluza-Klein Theories”, Lochlain O’Raifeartaigh and Norbert Straumann, hep-ph/9810524; Also: [*Modern Kaluza-Klein Theories*]{}, T. Appelquist, A. Chodos and P. G. O. Freund (Eds.), Frontiers in Physics, vol. 65, Addison-Wesley, Reading, MA, 1987. Anton Zeilinger and Karl Svozil, Phys. Rev. Lett. [**54**]{} (1985) 2553. M. Rauch, Annu. Rev. Astron. Astrophys. [**36**]{} (1998) 267; C. R. Lynds, Astrophys. J. [**164**]{} (1971) L73; W. L. W. Sargent et al., Astrophys. J. Suppl. Ser. [**42**]{} (1980) 41. J. K. Webb et al., Phys. Rev. Lett. [**82**]{} (1999) 884; See also: M. J. Drinkwater et al., Mon. Not. R. Astron. Soc. [**295**]{} (1998) 457; S. A. Levshakov, Mon. Not. R. Astron. Soc. [**269**]{} (1994) 339. Berndt Müller and Andreas Schäfer, Phys. Rev. Lett.
[**56**]{} (1986) 1215; J. Phys. [**A19**]{} (1986) 3891. David Hochberg and James T. Wheeler, Phys. Rev. [**D43**]{} (1991) 2617. Adam Dobrzycki, Astrophys. J. [**457**]{} (1996) 102. David Tytler et al., Astron. J. [**106**]{} (1993) 426. L. J. Storrie-Lombardi et al., Astrophys. J. [**468**]{} (1996) 121. M. Rugers and C. J. Hogan, Astrophys. J. [**459**]{} (1996) L1. [*100 Statistical Tests*]{}, Gopal K. Kanji, Sage Publications, Thousand Oaks, California, 1999.
--- abstract: 'When we elastically impose a homogeneous, affine deformation on amorphous solids, they also undergo an inhomogeneous, non-affine deformation, which can have a crucial impact on the overall elastic response. To correctly understand the elastic modulus $M$, it is therefore necessary to take into account not only the affine modulus $M_A$, but also the non-affine modulus $M_N$ that arises from the non-affine deformation. In the present work, we study the bulk ($M=K$) and shear ($M=G$) moduli in static jammed particulate packings over a range of packing fractions $\varphi$. The affine $M_A$ is determined essentially by the static structural arrangement of particles, whereas the non-affine $M_N$ is related to the vibrational eigenmodes. One novelty of this work is to elucidate the contribution of each vibrational mode to the non-affine $M_N$ through a modal decomposition of the displacement and force fields. In the vicinity of the (un)jamming transition, $\varphi_{c}$, the vibrational density of states, $g(\omega)$, shows a plateau in the intermediate frequency regime above a characteristic frequency $\omega^\ast$. We illustrate that this unusual feature apparent in $g(\omega)$ is reflected in the behavior of $M_N$: As $\varphi \rightarrow \varphi_c$, where $\omega^\ast \rightarrow 0$, those modes for $\omega < \omega^\ast$ contribute less and less, while contributions from those for $\omega > \omega^\ast$ approach a constant value, which results in $M_N$ approaching a critical value $M_{Nc}$, as $M_N-M_{Nc} \sim \omega^\ast$. At $\varphi_c$ itself, the bulk modulus attains a finite value $K_c=K_{Ac}-K_{Nc} > 0$, such that $K_{Nc}$ remains below $K_{Ac}$. In contrast, for the critical shear modulus $G_c$, $G_{Nc}$ and $G_{Ac}$ approach the same value so that the total value becomes exactly zero, $G_c = G_{Ac}-G_{Nc} =0$.
We explore what features of the configurational and vibrational properties cause such a distinction between $K$ and $G$, allowing us to validate analytical expressions for their critical values.' author: - Hideyuki Mizuno - Kuniyasu Saitoh - 'Leonardo E. Silbert' bibliography: - 'manuscript.bib' title: Elastic Moduli and Vibrational Modes in Jammed Particulate Packings --- Introduction ============ A theoretical foundation to determine and predict the elastic response of amorphous solids persists as an ongoing problem in the soft condensed matter community [@Alexander_1998]. As developed, the classical theory of linear elasticity of solids is based on the concept of affineness [@elastictheory2; @elastictheory; @Ashcroft; @kettel]: The elastic response of solids is inferred on assuming an affine deformation, i.e., the constituent particles are assumed to follow the imposed, homogeneous, affine deformation field. For that case, the elastic modulus can be formulated through the so-called Born-Huang expression, which we denote as the affine modulus in this paper. In contrast, amorphous solids, such as molecular, polymer, and colloidal glasses [@monaco_2009; @monaco2_2009; @tsamados_2009; @Fan_2014; @Wagner_2011; @Hufnagel_2015; @Mayr_2009; @Mizuno_2013; @Wittmer_2013; @Wittmer2_2013; @Wittmer_2015; @yoshimoto_2004; @Zaccone_2013; @makke_2011; @Klix_2012], disordered crystals [@Kaya_2010; @Mizuno2_2013; @Mizuno_2014], and athermal jammed or granular packings [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014; @Lerner_2014; @Karimi_2015], exhibit inhomogeneous, non-affine deformations or relaxations, which cause the system to deviate from the homogeneous affine state, significantly impacting the elastic response.
In such cases, the Born-Huang expression for the elastic modulus requires the addition of a non-negligible correction arising from the non-affine deformation. Therefore, the key to determining the mechanical properties of amorphous solids lies in understanding the role played by their non-affine response [@Makse_1999; @Wittmer_2002; @Tanguy_2002; @leonforte_2005; @DiDonna_2005]. Here, it should be noted that the presence of disorder is not necessary for observing non-affine behavior. While a perfectly ordered crystalline solid with a single atom per unit cell shows a true affine response, such that the Born-Huang expression becomes exact in this case, crystals with a multi-atom unit cell generally exhibit non-affine responses [@Jaric_1988]. Thus, investigating the fundamental mechanisms that lead to non-affine behavior is a topic of interest to the broader community concerned with materials characterization. When all the constituent particles in an amorphous solid are displaced according to a homogeneous affine strain field, its immediate elastic response is described by the affine deformation with its associated, affine modulus (or the Born-Huang expression) [@elastictheory; @elastictheory2; @Ashcroft; @kettel; @Alexander_1998]. However, due to the amorphous structure, whereby the local environment of each particle is slightly different from every other particle, the imposed affine deformation actually causes the forces on individual particles to become unbalanced in a heterogeneous manner [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006]. Thus, as the particles seek pathways to relax back towards a new state of mechanical equilibrium, they adopt a configuration that is different from the originally imposed affine deformation field [@Wittmer_2002; @Tanguy_2002; @leonforte_2005; @DiDonna_2005; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006].
Consequently, the elastic response of an amorphous solid cannot be described by the affine deformation response alone. It also becomes necessary to take into account the non-affine deformation (relaxation). The elastic modulus is therefore composed of two components [@Mayr_2009; @Mizuno_2013; @Wittmer_2013; @Wittmer2_2013; @Wittmer_2015; @yoshimoto_2004; @Zaccone_2013; @Mizuno2_2013; @Mizuno_2014; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014]: (i) The affine modulus, which comes from the imposed affine deformation, and (ii) the non-affine modulus, which is considered as an energy dissipation term during non-affine relaxation, or more specifically regarded as an inhomogeneous repartitioning of the interaction potential energy during the relaxation process as work done along the non-affine pathways. In the harmonic limit, the affine modulus essentially derives directly from the static configuration of the constituent particles and the interaction potential between them. In contrast, the non-affine modulus is formulated in terms of the vibrational eigenmodes (eigenvalues and eigenvectors) of the system [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014], which can be obtained by performing a normal mode analysis on the dynamical matrix [@Ashcroft; @kettel; @McGaughey]. Physically this means that the vibrational eigenmodes are excited during the non-affine deformation process, contributing to the energy relaxation (the non-affine elastic modulus) [@Wittmer_2002; @Tanguy_2002]. In this sense, the non-affine modulus can be constructed as a product of the inherent displacement field and corresponding force field [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006], which are defined through the eigenmodes.
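Schematically, this standard construction [@Lutsko_1989; @Lemaitre_2006] reduces to a sum over modes, $M_N = (1/V)\sum_k \Xi_k^2/\lambda_k$, where $\Xi_k$ is the projection of the affine force field onto eigenmode $k$ and $\lambda_k = \omega_k^2$ for unit masses. A minimal sketch of the bookkeeping (the mode frequencies and projections below are invented toy numbers, not the paper's data):

```python
# Minimal sketch (not the paper's code) of the eigenmode decomposition of
# the non-affine modulus: M_N = (1/V) * sum_k Xi_k^2 / omega_k^2, where
# Xi_k is the projection of the affine force field onto eigenmode k and
# lambda_k = omega_k^2 for unit masses. Toy numbers for illustration only.

def non_affine_modulus(mode_data, volume):
    """mode_data: list of (omega_k, Xi_k) pairs; returns M_N."""
    return sum(xi**2 / omega**2 for omega, xi in mode_data) / volume

# Toy spectrum: the 1/omega^2 weight means low-frequency modes dominate,
# consistent with the divergence at a vanishing eigenfrequency noted below.
modes = [(0.1, 0.05), (0.5, 0.2), (1.0, 0.3), (2.0, 0.4)]
contributions = [xi**2 / w**2 for w, xi in modes]
M_N = non_affine_modulus(modes, volume=1.0)
print(M_N)
```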
Thus, we expect that any unusual features expressed by the vibrational properties of amorphous solids should be reflected in their elastic properties. Indeed, it is well known that (both thermal and athermal) amorphous materials exhibit anomalous features in their vibrational states, such as an excess of low-frequency modes (Boson peak) [@monaco_2009; @monaco2_2009; @Kaya_2010; @Mizuno2_2013; @Mizuno_2014] and localizations of modes [@mazzacurati_1996; @Allen_1999; @Schober_2004; @Silbert_2009; @Xu_2010], which should be reflected in the behavior of the non-affine modulus. In addition, Maloney and Lemaître [@Maloney_2004; @Maloney2_2006] demonstrated that at the onset of a plastic event in an overcompressed disc packing under shear, a single eigenmode frequency goes to zero, which causes the non-affine modulus to diverge (toward $-\infty$), initiating the plastic event. A paradigmatic system that expresses the generic features of amorphous materials is the case of an isotropically, overcompressed, static, jammed packing of particles [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009]. As we decompress the jammed system, it unjams, going from the solid to the fluid phase, at a particular packing fraction of particles, $\varphi_c$; this is the unjamming transition. The jamming (unjamming) point, $\varphi_{c}$, signals the transition between a mechanically robust solid phase and a collection of non-contacting particles unable to support mechanical perturbations. In such athermal solids, peculiar vibrational features are readily apparent in the vibrational density of states (vDOS), $g(\omega)$ [@Silbert_2009; @Xu_2010; @Silbert_2005]. The vDOS exhibits a plateau in the intermediate frequency regime, $\omega > \omega^\ast$, above some characteristic frequency $\omega^\ast$ (see also Fig. \[fig.vibration\](a)).
On approach to the transition point $\varphi_c$, this plateau regime extends down to zero frequency, as the onset frequency $\omega^\ast$ goes to zero, $\omega^\ast \rightarrow 0$ [@Silbert_2005]. Wyart *et al.* [@Wyart_2005; @Wyart_2006; @Xu_2007] described the vibrational modes in the plateau regime of $g(\omega)$ in terms of “anomalous” modes emerging from the isostatic feature of marginally stable packings. More recent work [@Goodrich_2013] proposed an alternative description based on the concept of a rigidity length scale. Either way, the progressive development of vibrational modes in the plateau regime seems to play a crucial role in controlling the mechanical properties of marginally jammed solids, e.g., in the loss of rigidity at the transition $\varphi_c$. ![image](fig1.eps){width="98.00000%"} In the present work, by using a model jammed packing of particles interacting via a finite-range, repulsive potential (see Fig. \[fig.panel\] and Eq. (\[interaction\])), we study the compressive, bulk modulus $K$ and the shear modulus $G$, close to the transition point $\varphi_c$. We execute a comprehensive analysis of the affine and non-affine components of these two elastic moduli. A main novelty of the present work is to elucidate the contribution to the non-affine moduli from each vibrational mode, particularly those in the plateau regime of $g(\omega)$. To achieve this, we perform a normal mode analysis of the dynamical matrix [@Ashcroft; @kettel; @McGaughey], and then an eigenmode decomposition of the non-affine moduli [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014]. Thereby, we avoid the need to explicitly apply a deformation to the packings, which can be troublesome for very fragile systems close to $\varphi_{c}$.
We demonstrate that in the plateau regime above $\omega^\ast$, each vibrational mode contributes similarly to the non-affine elastic moduli, i.e., the contribution is independent of the eigenmode frequency. This behavior derives from the competing influences of the displacement and force fields that are in turn largely set by low-frequency modes and high-frequency modes, respectively. In addition, the modal contribution shows a crossover at $\omega^\ast$, from the plateau independence for $\omega > \omega^\ast$, to a growing behavior $\sim \omega^{-2}$ (with decreasing $\omega$) for $\omega < \omega^\ast$. We show that this crossover at $\omega^\ast$ is controlled by the competition between compressing/stretching and sliding vibrational energies. As the system approaches the unjamming transition from above, and passes into the fluid phase, the two elastic moduli, $K$ and $G$, show distinct critical behaviors: The bulk modulus $K$ discontinuously drops to zero, whereas the shear modulus $G$ continuously goes to zero, $G \rightarrow 0$ [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart]. At the transition packing fraction $\varphi_c$ itself, the critical value of the affine component of the bulk modulus remains above that of the non-affine counterpart, whence the total modulus $K$ takes on a finite, positive value. In contrast, for the shear modulus, the non-affine modulus cancels out the affine modulus, leading to the shear modulus becoming identically zero at the transition. Here, we explore what features in the configurational and vibrational properties of jammed solids cause such a distinction between these critical behaviors, which leads us to derive the critical values of $K$ and $G$ analytically. An overview of our study is shown in Fig. \[fig.panel\]. The rest of this paper is organized as follows. In Sec. \[NM\] we outline the simulation method.
We describe the system of jammed packings and the method for vibrational eigenmode analysis. We also discuss in detail the linear response formulation for obtaining the linear elastic moduli and their modal decomposition. Section \[results\] contains a comprehensive presentation of our results. This section is broken down into several subsections that focus on the affine and non-affine moduli, characterization of the eigenmodes themselves, the modal contributions to elastic moduli, and derivations of the critical values of the elastic moduli. We summarize our results in Sec. \[summary\], and end with an extensive set of conclusive remarks in Sec. \[conclusions\]. Numerical method {#NM} ================ System description ------------------ We study a 3-dimensional ($d=3$) athermal jammed solid, which is composed of mono-disperse, frictionless, deformable particles with diameter $\sigma$ and mass $m$. Configurations of static, mechanically stable states are prepared over a wide range in packing pressure in a cubic simulation box with periodic boundary conditions in all three ($x,y,z$) directions, using a compression/decompression protocol [@Silbert_2010] implemented within the open-source, molecular dynamics package LAMMPS [@plimpton_1995]. Particles, $i$ and $j$, interact via a finite-range, purely repulsive, harmonic potential; $$\label{interaction} \phi(r_{ij}) = \left\{ \begin{aligned} & \frac{ \textrm{k} }{2} \left(\sigma-r_{ij} \right)^2 & (r_{ij} < \sigma), \\ & 0 & (r_{ij} \ge \sigma), \end{aligned} \right.$$ where $r_{ij}=|\vec{r}_i-\vec{r}_j|$ is the distance between particles $i$ and $j$, $\vec{r}_i$ is the position vector of particle $i$, and $\textrm{k}$ parameterizes the particle stiffness and sets an energy scale through $\textrm{k}\sigma^{2}$. In the following, we use $\sigma$, $m$, and $\tau=(m/\textrm{k})^{1/2}$ as units of length, mass, and time, respectively, i.e., we set $\sigma=m=\textrm{k}=1$.
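In the reduced units just introduced, Eq. (\[interaction\]) transcribes directly to code (our own sketch, not the simulation source):

```python
# Direct transcription of the pairwise interaction, Eq. (interaction),
# in the paper's reduced units (sigma = m = k = 1): a finite-range,
# purely repulsive harmonic potential.

SIGMA = 1.0
K_STIFF = 1.0

def pair_potential(r):
    """phi(r): harmonic repulsion for overlapping particles, zero otherwise."""
    if r < SIGMA:
        return 0.5 * K_STIFF * (SIGMA - r) ** 2
    return 0.0

# Overlapping pair (r = 0.9) feels a finite energy; a separated pair does not.
print(pair_potential(0.9), pair_potential(1.1))
```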
When $r_{ij} < \sigma$, the pair of particles, $(i,j)$, feels a finite potential, i.e., particles are connected. In the present study, we always removed rattler particles, which have fewer than $3$ contacting neighbors, and the total number of particles is $N \simeq 1000$ (the precise number $N$ depends on the configuration realizations that we used to average our data). We denote the number of connected pairs of particles as $N^\text{ct}=Nz/2$, where $z$ is the average contact number per particle (or the coordination number). At the transition point $\varphi_c$, where the system is in the isostatic state [@Wyart; @Wyart_2005; @Wyart_2006; @Maxwell_1864], the number of connections (constraints) is precisely balanced by the number of degrees of freedom, i.e., $N^\text{ct}_c=3N-3$ (three ($x,y,z$) translational degrees of freedom are removed), and the contact number is $$z_c = \frac{2N^\text{ct}_c}{N} = 6 \left( 1- \frac{1}{N} \right),$$ which is $6=2d$ in the thermodynamic limit, $N \rightarrow \infty$. The total potential energy $E$ of the system is then given by (using $\sigma = \textrm{k} = 1$) $$\label{energy1} E = \sum_{(i,j)} \phi(r_{ij}) = \sum_{(i,j)} \frac{1}{2} \left(1-{r_{ij}} \right)^2,$$ where the summation, $\sum_{(i,j)}$, runs over all connected pairs of particles, $(i,j) \in N^\text{ct}$. The temperature is zero, $T=0$, and the packing fraction of particles, $\varphi$, is the control parameter that we use to systematically probe static packings of varying rigidity [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009]; $$\varphi = \frac{\pi N}{6V} = \frac{\pi}{6} \hat{\rho},$$ where $V=L^3$ is the total volume ($L$ is the system length), and $\hat{\rho} = N/V$ is the number density. The critical value of $\varphi$ at the transition is found to coincide with the value of random close packing, $\varphi_c \simeq 0.64$, in $d=3$ dimensions [@OHern_2002; @OHern_2003].
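The isostatic counting and the critical density quoted in this section can be checked in a few lines (our own sketch):

```python
# Quick numerical check (our own sketch) of the isostatic counting in the
# text: z_c = 2*N_c^ct/N = 2d*(1 - 1/N), approaching 2d = 6 as N -> infinity
# in d = 3, and the critical number density rho_c = (6/pi)*phi_c.
import math

def isostatic_contact_number(n_particles, d=3):
    """Contacts per particle when constraints balance d.o.f. (d translations removed)."""
    n_contacts = d * n_particles - d  # N_c^ct = dN - d
    return 2 * n_contacts / n_particles

z_1000 = isostatic_contact_number(1000)  # 5.994 for N = 1000, per eqn. for z_c
rho_c = (6 / math.pi) * 0.64             # ~1.2 for phi_c ~ 0.64, as in the text

print(z_1000, rho_c)
```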
The critical value of $\hat{\rho}$ is then given as $\hat{\rho}_c = (6/\pi)\varphi_c \simeq 1.2$. We study the jammed solid phase above the transition point $\varphi_c$, and characterize the rigidity of the system by the distance from $\varphi_c$, i.e., $\Delta \varphi = \varphi - \varphi_c \ge 0$. In the present work, we varied $\Delta \varphi$ by five decades, $10^{-6} \le \Delta \varphi \le 10^{-1}$. At each $\Delta \varphi$, $100$ configuration realizations were prepared, and the values of quantities were obtained by averaging over those $100$ realizations. Unstressed system ----------------- In the harmonic limit, the energy variation, $\delta E$, due to the displacements of particles from the equilibrium positions $\{\vec{r}_1,\vec{r}_2,...,\vec{r}_N \}$ by $\{\delta \vec{R}_{1},\delta \vec{R}_{2},...,\delta \vec{R}_{N} \}$ is formulated as [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart_2005; @Wyart_2006; @Xu_2007] $$\begin{aligned} \label{energy2} \delta E &= \sum_{(i,j)} \left[ \frac{\phi''(r_{ij})}{2} {\delta \vec{R}_{ij}^{\parallel}}^2 + \frac{\phi'(r_{ij})}{2 r_{ij}} {\delta \vec{R}_{ij}^{\perp}}^2 \right], \\ &:= \delta E^{\parallel} - \delta E^{\perp}, \end{aligned}$$ where $\phi'(r_{ij})$ and $\phi''(r_{ij})$ are respectively the first and second derivatives of the potential $\phi(r_{ij})$ with respect to $r_{ij}$. 
The vectors, $\delta \vec{R}_{ij}^{\parallel}$ and $\delta \vec{R}_{ij}^{\perp}$, are projections of $\delta \vec{R}_{ij} = \delta \vec{R}_{i}-\delta \vec{R}_{j}$ onto the planes parallel and perpendicular to $\vec{r}_{ij}=\vec{r}_{i}-\vec{r}_{j}$ (the equilibrium separation vector), respectively; $$\begin{aligned} \label{energy3} \delta \vec{R}_{ij}^{\parallel} &= \left(\delta \vec{R}_{ij} \cdot {\vec{n}_{ij}} \right) \vec{n}_{ij}, \\ \delta \vec{R}_{ij}^{\perp} &= \delta \vec{R}_{ij} - \left(\delta \vec{R}_{ij} \cdot {\vec{n}_{ij}} \right) \vec{n}_{ij}, \end{aligned}$$ with $\vec{n}_{ij} = {\vec{r}_{ij}}/{{r}_{ij}}$, the unit vector of $\vec{r}_{ij}$. In the present paper, we call $\vec{n}_{ij}$ the “bond vector" of contact $(i,j)$. As in Eq. (\[energy2\]), $\delta E$ is decomposed into two terms, $\delta E^{\parallel}\ (\ge 0)$ and $-\delta E^{\perp}\ (\le 0)$, which are energy variations due to compressing/stretching motions, $\delta \vec{R}_{ij}^{\parallel}$, and transverse sliding motions, $\delta \vec{R}_{ij}^{\perp}$, respectively [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart_2005; @Wyart_2006; @Xu_2007]. In the jammed solid state $\Delta \varphi > 0$, the pressure $p > 0$ is finite (positive), and the first derivative of the potential, $\phi'(r_{ij})$, which corresponds to the contact force, takes a finite (negative) value for each connected pair of particles, $(i,j)$. For this reason we refer to such a state as the “stressed” state. Besides this original stressed system, we have also studied the “unstressed” system [@Wyart_2005; @Wyart_2006; @Xu_2007], where we keep the second derivative $\phi''(r_{ij})$ but drop the first derivative $\phi'(r_{ij}) \equiv 0$, i.e., we replace stretched springs between connected particles by unstretched (relaxed) springs of the same stiffness $\phi''(r_{ij})$. Note that the unstressed system is mechanically stable in exactly the same configuration as the original stressed system, but with zero pressure, $p=0$.
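The projections of Eq. (\[energy3\]) amount to a few dot products; the following stdlib-only sketch (ours, with invented toy numbers) performs the decomposition and checks orthogonality:

```python
# Sketch of the decomposition in Eq. (energy3): split a relative
# displacement dR_ij into components parallel and perpendicular to the
# bond vector n_ij. Vectors are plain 3-tuples; toy numbers only.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def decompose(dR, r_ij):
    """Return (parallel, perpendicular) parts of dR w.r.t. bond r_ij."""
    norm = math.sqrt(dot(r_ij, r_ij))
    n = tuple(c / norm for c in r_ij)           # unit bond vector n_ij
    proj = dot(dR, n)
    par = tuple(proj * c for c in n)            # (dR . n) n
    perp = tuple(a - b for a, b in zip(dR, par))  # dR - (dR . n) n
    return par, perp

# Bond along x: the parallel part is the x-component, the rest is sliding.
par, perp = decompose((0.3, -0.1, 0.2), (1.0, 0.0, 0.0))
print(par, perp)
```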
In the stressed system, the sliding motion $\delta \vec{R}_{ij}^{\perp}$ reduces the potential energy by $\delta E^{\perp} >0$ (see Eq. (\[energy2\])) and destabilizes the system [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart_2005; @Wyart_2006; @Xu_2007], whereas $\delta \vec{R}_{ij}^{\perp}$ in the unstressed system does *not* contribute to the energy variation, i.e., $\delta E^{\perp} \equiv 0$. Thus, by comparing the stressed and unstressed systems, we can separately investigate the effects of these two types of motions, the normal $\delta \vec{R}_{ij}^{\parallel}$ and tangential $\delta \vec{R}_{ij}^{\perp}$ motions, on energy-related quantities such as the elastic moduli. Vibrational eigenmodes {#sec.vibration} ---------------------- The vibrational eigenmodes are obtained by means of the standard normal mode analysis [@Ashcroft; @kettel; @McGaughey]. We have solved the eigenvalue problem of the dynamical matrix $H$, $$H = \frac{\partial^2 E}{\partial\vec{r} \partial \vec{r}} = \left[ \frac{\partial^2 E}{\partial\vec{r}_i \partial \vec{r}_j} \right]_{i,j=1,2,...,N},$$ with $\vec{r}=\left[\vec{r}_1,\vec{r}_2,...,\vec{r}_N \right]$, in order to obtain the eigenvalues, $\lambda^k$, and the eigenvectors, $\vec{e}^k=\left[\vec{e}^{k}_1,\vec{e}^{k}_2,...,\vec{e}^{k}_N \right]$, for vibrational modes $k=1,2,...,3N-3$ (the three ($x,y,z$) zero-frequency translational modes are removed). Note that $\vec{r}$ and $\vec{e}^k$ are $3N$-dimensional vectors, and $H$ is the $3N \times 3N$ Hessian matrix. Since we always remove rattler particles, there are no zero-frequency modes associated with them; thus all $3N-3$ eigenvalues are strictly positive, $\lambda^k > 0$.
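The diagonalization step can be sketched with dense linear algebra as follows. This is a minimal sketch: assembling $H$ from the pair potential and the removal of rattlers are omitted, and the toy matrix in the check is an assumption. It relies on the fact that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the zero-frequency translational modes are the first entries.

```python
import numpy as np

def normal_modes(H, n_zero=3):
    """Diagonalize the dynamical (Hessian) matrix H and return the
    eigenfrequencies omega^k = sqrt(lambda^k) and eigenvectors e^k,
    discarding the n_zero zero-frequency translational modes.
    eigh sorts eigenvalues in ascending order, so the zero modes
    occupy the first n_zero slots."""
    lam, vecs = np.linalg.eigh(H)
    lam, vecs = lam[n_zero:], vecs[:, n_zero:]
    return np.sqrt(lam), vecs
```

The returned eigenvectors are orthonormal, $\vec{e}^k \cdot \vec{e}^l = \delta_{kl}$, as required for the modal decompositions used below.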
The quantity, $\omega^k = \sqrt{\lambda^k}$, is the eigenfrequency of the mode $k$ [@Ashcroft; @kettel; @McGaughey], from which we calculate the vDOS $g(\omega)$; $$\label{vdoseq} g(\omega) = \frac{1}{3N-3} \sum_{k=1}^{3N-3} \delta \left( \omega-\omega^k \right),$$ where $\delta(x)$ is the Dirac delta function. The eigenvector $\vec{e}^k=\left[\vec{e}^{k}_1,\vec{e}^{k}_2,...,\vec{e}^{k}_N \right]$, which is normalized as $\vec{e}^k \cdot \vec{e}^l = \sum_{i=1}^N \vec{e}^k_i \cdot \vec{e}^l_i = \delta_{kl}$ ($\delta_{kl}$ is the Kronecker delta), is the polarization field of particles in mode $k$, i.e., each particle $i$ ($=1,2,...,N$) vibrates along its polarization vector $\vec{e}^k_i$. The vector, $\vec{e}_{ij}^k=\vec{e}_{i}^k-\vec{e}_{j}^k$, represents the vibrational motion between particle pair, $(i,j)$. Like $\delta \vec{R}_{ij}$ in Eq. (\[energy3\]), $\vec{e}_{ij}^k$ can also be decomposed into the normal $\vec{e}_{ij}^{k \parallel}$ and tangential $\vec{e}_{ij}^{k \perp}$ vibrational motions with respect to the connecting bond vector $\vec{n}_{ij}$; $$\begin{aligned} \label{vs2} \vec{e}_{ij}^{k \parallel} &= \left( \vec{e}^k_{ij} \cdot {\vec{n}_{ij}} \right) \vec{n}_{ij}, \\ \vec{e}_{ij}^{k \perp} &= \vec{e}^k_{ij} - \left( \vec{e}^k_{ij} \cdot {\vec{n}_{ij}} \right) \vec{n}_{ij}. \end{aligned}$$ By substituting $\vec{e}_{ij}^{k \parallel}$ and $\vec{e}_{ij}^{k \perp}$ into $\delta \vec{R}_{ij}^{\parallel}$ and $\delta \vec{R}_{ij}^{\perp}$ in Eq. (\[energy2\]), we obtain the vibrational energy $\delta E^k$ of the mode $k$; $$\begin{aligned} \label{vs1} \delta E^k &= \sum_{(i,j)} \left[ \frac{\phi''(r_{ij})}{2} {\vec{e}_{ij}^{k \parallel}}^2 + \frac{\phi'(r_{ij})}{2 r_{ij}} {\vec{e}_{ij}^{k \perp}}^2 \right], \\ &:= \delta E^{k \parallel} - \delta E^{k \perp}. 
\end{aligned}$$ $\delta E^{k \parallel}\ (\ge 0)$ and $-\delta E^{k \perp}\ (\le 0)$ are energies due to the compressing/stretching, $\vec{e}_{ij}^{k \parallel}$, and sliding, $\vec{e}_{ij}^{k \perp}$, vibrational motions, respectively. $\delta E^k$ is also formulated as [@Ashcroft; @kettel; @McGaughey] $$\label{vs3} \delta E^k = \frac{1}{2} \left( \vec{e}^k \cdot H \cdot \vec{e}^k \right) = \frac{ \lambda^k }{2} = \frac{ {\omega^k}^2 }{2}.$$ Thus, Eqs. (\[vs1\]) and (\[vs3\]) give $$\label{vs4} \sum_{(i,j)} \left[ \phi''(r_{ij}) {\vec{e}_{ij}^{k \parallel}}^2 + \frac{\phi'(r_{ij})}{r_{ij}} {\vec{e}_{ij}^{k \perp}}^2 \right] = {\omega^k}^2.$$ In the present work, we characterize the vibrational mode $k$ in terms of the quantities described above, i.e., $\omega^k, \vec{e}_{ij}^{k \parallel}, \vec{e}_{ij}^{k \perp}, \delta E^{k \parallel}, \delta E^{k \perp}$, which will be presented in Sec. \[sec.vibrationalstate\]. We note that those quantities differ between the original stressed system and the unstressed system, since the dynamical matrix differs between them. In Sec. \[sec.vibrationalstate\], we will also compare the vibrational modes of the two systems. Elastic moduli {#sec.moduli} -------------- The linear elastic response of the isotropic systems studied here is characterized by two elastic moduli: the bulk modulus $K$, associated with volume-changing bulk deformation ${\epsilon}_K$, and the shear modulus $G$, associated with volume-preserving shear deformation ${\epsilon}_G$, where ${\epsilon}_K$ and ${\epsilon}_G$ are the strains representing the global affine deformations [@elastictheory2; @elastictheory; @Ashcroft; @kettel; @Alexander_1998]. In the present paper, we use $M$ to denote either of these two elastic moduli, i.e., $M=K, G$.
Rather than explicitly applying a deformation field to the systems at hand, we calculate the elastic modulus $M$ through the harmonic formulation, which has been established and employed in previous studies [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Karmakar_2010; @Lemaitre_2006; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014]. In the following, we introduce the formulation and notations for modulus $M$. We show the formulation of only $C_{xyxy}$ (Voigt notation) for the shear modulus $G$, but the other shear moduli, e.g., $C_{xzxz}, C_{yzyz}$, coincide with $C_{xyxy}$ in the isotropic system and give the same results. As we described in the introduction, the elastic modulus, $M=K,G$, has two components, the affine modulus, $M_A = K_A,G_A$, and the non-affine modulus, $M_N = K_N,G_N$, such that $$\label{modulus1} M = M_{A} - M_{N}.$$ The affine modulus $M_A$ is formulated as the second derivative of the energy $E$ with respect to the homogeneous affine strain ${\epsilon}_M$ ($={\epsilon}_K,{\epsilon}_G$) [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014]; $$\begin{aligned} \label{modulus2a} M_{A} & = \frac{1}{V} \frac{\partial^2 E} {\partial {\epsilon_M}^{2}} = \frac{1}{V} \sum_{(i,j)} \frac{\partial^2 \phi(r_{ij})} {\partial {\epsilon_M}^{2}}, \\ &:= \frac{1}{V} \sum_{(i,j)} M_{A}^{ij}. 
\end{aligned}$$ Specifically, when we use the Green-Lagrange strain for $\epsilon_M$, then $M_A$ is formulated as the so-called Born term; $$\begin{aligned} \label{modulus2} K_{A} & = \frac{1}{V} \sum_{(i,j)} \left( \phi''(r_{ij}) - \frac{ \phi'(r_{ij}) }{ r_{ij} } \right) \frac{{ r_{ij} }^2}{9}, \\ G_{A} & = \frac{1}{V} \sum_{(i,j)} \left( \phi''(r_{ij}) - \frac{ \phi'(r_{ij}) }{ r_{ij} } \right) \frac{ {r^x_{ij}}^2 {r^y_{ij}}^2}{{r_{ij}}^2}, \end{aligned}$$ where $r_{ij}^x, r_{ij}^y, r_{ij}^z$ are the Cartesian coordinates of $\vec{r}_{ij}$; $\vec{r}_{ij}=(r_{ij}^x, r_{ij}^y, r_{ij}^z)$. Here we note that we can also use the linear strain for $\epsilon_M$, instead of the Green-Lagrange strain [@Barron_1965]. In this case, if any component of the stress tensor is finite, a stress correction term is necessary in $M_A$ [@Lemaitre_2006; @Mizuno_2013; @Wittmer_2013; @Wittmer2_2013; @Wittmer_2015; @Mizuno2_2013; @Mizuno_2014; @Barron_1965], which is of the same order as $\phi' \sim \Delta \varphi$. As in Eqs. (\[modulus2a\]) and (\[modulus2\]), the affine modulus $M_A$ can be decomposed into contributions from connected pairs $(i,j)$, $M_A^{ij}$, which will be shown in Sec. \[sec.affine\]. On the other hand, the non-affine modulus $M_N$ is formulated in terms of the dynamical matrix $H$ [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014]; $$\label{modulus3} M_{N} = \frac{1}{V} \left( \vec{\Sigma}_M \cdot H^{-1} \cdot \vec{\Sigma}_M \right),$$ with $$\label{modulus4} \vec{\Sigma}_M = - \frac{\partial^2 E}{\partial \epsilon_M \partial \vec{r}} = - V \frac{\partial \sigma_M}{\partial \vec{r}},$$ where $\sigma_M=(1/V) (\partial E / \partial \epsilon_M)$ is the stress conjugate to the strain $\epsilon_M$, that is, the (negative) pressure $\sigma_M = -p$ for $\epsilon_M=\epsilon_K$, and the shear stress $\sigma_M=\sigma_{s}$ for $\epsilon_M=\epsilon_G$.
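As a sketch of how the Born term of Eq. (\[modulus2\]) is evaluated in practice (the function and argument names are our own, hypothetical choices; `phi1` and `phi2` stand for $\phi'$ and $\phi''$):

```python
import numpy as np

def affine_moduli(r_vecs, V, phi1, phi2):
    """Affine (Born) moduli K_A and G_A of Eq. (modulus2).
    r_vecs: separation vectors r_ij for all contacts (i, j);
    V: system volume; phi1, phi2: callables for phi'(r) and phi''(r)."""
    K_A = G_A = 0.0
    for rv in r_vecs:
        r = np.linalg.norm(rv)
        c = phi2(r) - phi1(r) / r          # common bond prefactor
        K_A += c * r**2 / 9.0
        G_A += c * rv[0]**2 * rv[1]**2 / r**2
    return K_A / V, G_A / V
```

In the unstressed system (`phi1` identically zero) only the stiffness $\phi''$ survives in the prefactor, consistent with the discussion above.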
The pressure $p$ and the shear stress $\sigma_{s}$ are formulated through the Irving-Kirkwood expression (without the kinetic term for the static systems under study here) [@Irving_1950; @Allen1986]; $$\begin{aligned} \label{modulus5} p &= - \frac{1}{V} \frac{\partial E}{\partial \epsilon_K} = -\frac{1}{V} \sum_{(i,j)} \phi'(r_{ij}) \frac{{ r_{ij} }}{3}, \\ \sigma_{s} &= \frac{1}{V} \frac{\partial E}{\partial \epsilon_G} = \frac{1}{V} \sum_{(i,j)} \phi'(r_{ij}) \frac{ r^x_{ij} r^y_{ij} }{r_{ij} }. \end{aligned}$$ Note that $\vec{\Sigma}_M=\left[ -{\partial^2 E}/{\partial \epsilon_M \partial \vec{r}_1},...,-{\partial^2 E}/{\partial \epsilon_M \partial \vec{r}_N} \right]$ is a $3N$-dimensional vector field. Following the discussions by Maloney and Lemaître [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006], $\vec{\Sigma}_M$ is interpreted as the field of forces which results from an elementary affine deformation $\epsilon_M$. This is understood when we write $\vec{\Sigma}_M$ as $$\label{modulus4a} \vec{\Sigma}_M = \frac{\partial \vec{F}}{\partial \epsilon_M},$$ where $\vec{F} = -\partial E/\partial{\vec{r}}$ is the interparticle force field acting on the $N$ particles. In amorphous solids, $\vec{\Sigma}_M$ generally causes a force imbalance on particles, leading to an additional non-affine displacement field of the particles, $\delta \vec{R}_{\text{na} M}$ ($3N$-dimensional vector field). Indeed, $\delta \vec{R}_{\text{na} M}$ is formulated as the linear response to the force field $\vec{\Sigma}_M$ [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006]; $$\label{modulus6} \delta \vec{R}_{\text{na} M} = H^{-1} \cdot \vec{\Sigma}_M.$$ From Eq. 
(\[modulus3\]), the non-affine modulus $M_N$ is the inner product of these two vector fields, $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$; $$\label{modulus3x} M_{N} = \frac{1}{V} \left( \vec{\Sigma}_M \cdot \delta \vec{R}_{\text{na} M} \right).$$ Therefore $M_N$ is interpreted as an energy relaxation during the non-affine deformation, or more precisely the work done in moving the particles along the non-affine displacement field, which corresponds to a repartitioning of the contact forces between particles as a result of the relaxation process. In order to study the relation between the vibrational modes $k$ and the non-affine modulus $M_N$, we formulate $M_N$ explicitly by using $\omega^k$ and $\vec{e}^k$ ($k=1,2,...,3N-3$) [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006; @Karmakar_2010; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014], instead of the dynamical matrix $H$. To do this, $\vec{\Sigma}_M$ is decomposed as $$\label{modulus7} \vec{\Sigma}_M = \sum_{k=1}^{3N-3} \Sigma_M^k \vec{e}^k.$$ The component $\Sigma_M^k$ is formulated as $$\begin{aligned} \label{modulus8} \Sigma_M^k &= \vec{\Sigma}_M \cdot \vec{e}^k = -V \sum_{i=1}^{N} \frac{\partial \sigma_M}{\partial \vec{r}_{i}} \cdot \vec{e}^k_{i}, \\ &= -V \sum_{(i,j)} \frac{\partial \sigma_M}{\partial \vec{r}_{ij}} \cdot \vec{e}^k_{ij}. \end{aligned}$$ Here we note that the stress $\sigma_M$ is a function of $\vec{r}_{ij}$, which leads to the last equality in Eq. (\[modulus8\]).
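The equivalence between the matrix form of Eq. (\[modulus3\]) and a sum over the eigenmodes of $H$ follows from the eigendecomposition $H^{-1} = \sum_k (\vec{e}^k \otimes \vec{e}^k)/\lambda^k$, and can be verified numerically. The small symmetric matrix in the check is an assumption, chosen positive definite so that no zero modes need to be removed.

```python
import numpy as np

def nonaffine_modulus(H, Sigma, V):
    """Non-affine modulus M_N = (1/V) Sigma . H^{-1} . Sigma of
    Eq. (modulus3), evaluated as a sum over eigenmodes:
    M_N = (1/V) sum_k (Sigma . e^k)^2 / lambda^k."""
    lam, vecs = np.linalg.eigh(H)
    Sigma_k = vecs.T @ Sigma          # modal force components Sigma_M^k
    MN_k = Sigma_k**2 / lam           # per-mode contributions (times V)
    return MN_k.sum() / V
```

The per-mode terms `MN_k` are exactly the contributions that low-frequency modes dominate, since each is weighted by $1/\lambda^k = 1/{\omega^k}^2$.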
Similarly $\delta \vec{R}_{\text{na} M}$ is $$\label{modulus7a} \delta \vec{R}_{\text{na} M} = \sum_{k=1}^{3N-3} \delta {R}_{\text{na} M}^{k} \vec{e}^k,$$ with $$\label{modulus9} \delta {R}_{\text{na} M}^k = \delta \vec{R}_{\text{na} M} \cdot \vec{e}^k = \frac{\Sigma_M^k}{{\omega^k}^2}.$$ The non-affine modulus $M_{N}$ can then be expressed as $$\begin{aligned} \label{modulus10} M_{N} &= \frac{1}{V} \sum_{k=1}^{3N-3} \Sigma_M^k \delta {R}_{\text{na} M}^k= \frac{1}{V} \sum_{k=1}^{3N-3} \frac{{\Sigma_M^k}^2}{{\omega^k}^2}, \\ &:= \frac{1}{V} \sum_{k=1}^{3N-3} M_{N}^{k}. \end{aligned}$$ Therefore, (i) the non-affine modulus $M_N$ is decomposed into normal mode $k$ contributions, $M_{N}^{k}$, and (ii) $M_{N}^{k}$ is described as the product of the force field $\Sigma_M^k$ and the non-affine displacement field $\delta {R}_{\text{na} M}^k$, which is interpreted as an energy relaxation by the mode $k$ excitation. In addition, from Eq. (\[modulus8\]), ${\Sigma_M^k}$ is interpreted as the fluctuation of the stress $\sigma_M$, induced by the mode $k$; $$\label{modulus8x} \Sigma_M^k = -V \delta {\sigma}^k_M,$$ where $\delta \sigma^k_M = \sum_{(i,j)} \left( \partial {\sigma_M}/ {\partial \vec{r}_{ij}}\right) \cdot \vec{e}^{k}_{ij}$. Then Eq. (\[modulus10\]) becomes $$\label{modulus11} M_{N} = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{\left( V \delta {\sigma}^k_M \right)^2}{{\omega^k}^2}.$$ Thus, (iii) the non-affine modulus $M_N$ is seen as a summation of the stress fluctuations (the pressure or shear stress fluctuations). In fact, at finite temperatures $T$, the non-affine modulus is formulated in terms of thermal fluctuations of the stress [@Lutsko_1989; @Mayr_2009; @Mizuno_2013; @Wittmer_2013; @Wittmer2_2013; @Wittmer_2015; @yoshimoto_2004; @Zaccone_2013; @Mizuno2_2013; @Mizuno_2014]. Eqs. (\[modulus10\]) and (\[modulus11\]) allow us to directly relate the vibrational normal modes $k$ to the non-affine modulus $M_N$, which will be done in Secs. 
\[sec.nonaffine1\] and \[sec.nonaffine2\]. ![\[fig.phidependence\] (Color online) Dependence on the packing fraction, $\Delta \varphi = \varphi-\varphi_c$, of the (a) bulk modulus $K, K_A, K_N$, (b) shear modulus $G, G_A, G_N$, (c) potential energy per particle $E/N$, pressure $p$, and the (d) excess contact number $\Delta z=z-z_c$. In (a) and (b), we plot values from the unstressed system (closed symbols), in addition to values from the original stressed system (open symbols). The inset to (a) presents $K, K_A, K_N$ on a linear scale. The lines indicate power-law scalings with respect to $\Delta \varphi$. The error bars were calculated from $100$ configuration realizations.](fig2.eps){width="48.00000%"} Results ======= Dependence of elastic moduli on packing fraction $\Delta \varphi$ ----------------------------------------------------------------- [**Scaling laws with packing fraction $\boldsymbol{\Delta \varphi}$.**]{} Figure \[fig.phidependence\] shows the elastic moduli $K, G$, potential energy per particle $E/N$, pressure $p$, and the excess contact number $\Delta z=z-z_c$, as functions of $\Delta \varphi$. Our values as well as the power-law scalings are consistent with previous works on the harmonic system [@OHern_2002; @OHern_2003]; $$\begin{aligned} \label{plmacro} & K \sim \Delta \varphi^0, \qquad G \sim \Delta \varphi^{1/2}, \\ & E \sim \Delta \varphi^2, \qquad p \sim \Delta \varphi, \\ & \Delta z \sim \Delta \varphi^{1/2}. \end{aligned}$$ As $\Delta \varphi \rightarrow 0$, the affine shear modulus $G_A$ and the non-affine shear modulus $G_N$ converge to the same value, and consequently the total shear modulus $G$ vanishes according to $G \sim \Delta \varphi^{1/2} \rightarrow 0$. On the other hand, the affine bulk modulus $K_A$ is always larger than the non-affine value $K_N$, i.e., $K_A > K_N$, and the total bulk modulus $K$ does not vanish, approaching a finite constant value. 
[**Comparison between stressed and unstressed systems.**]{} The stressed and unstressed systems show similar values of $K$ and $G$, as well as consistent exponents for the power-law scalings (compare open and closed symbols in Fig. \[fig.phidependence\](a),(b)). Close to the transition point ($\Delta \varphi \ll 1$), the interparticle force, $\sim \phi'(r_{ij}) \sim \mathcal{O}(\Delta \varphi)$, becomes very small, as manifested in the pressure, $p \sim \phi'(r_{ij}) \sim \Delta \varphi \ll 1$. In this situation, the unstressed system is a good approximation to the original stressed system [@Wyart_2005; @Wyart_2006; @Xu_2007]. However, as we will see in Figs. \[fig.decomposition\] and \[fig.decompositiona\] and discuss in Sec. \[sec.nonaffine1\], differences between the two systems visibly appear in the non-affine modulus contributions, $M_{N}^{k}$, from the low-$\omega$ normal modes $k$. These differences are hidden by a summation of $M_N^k$ over all $3N-3$ normal modes, and as a result, only tiny differences are noticeable in the total moduli, $K$ and $G$ (or $K_N$ and $G_N$) (Fig. \[fig.phidependence\](a),(b)). ![\[fig.angle\] (Color online) Probability distribution, $P(\phi_{ij},\theta_{ij})$, of the orientation angles of the unit bond vector, $\vec{n}_{ij}=\left(\cos\phi_{ij} \sin \theta_{ij},\sin\phi_{ij} \sin \theta_{ij},\cos \theta_{ij} \right)$. We plot $P(\phi_{ij},\theta_{ij})$ as a function of $\phi_{ij}$ in (a),(b), and $\theta_{ij}$ in (c),(d). Note $0\le \phi_{ij} < 2\pi$, and $0\le \theta_{ij}\le \pi$, and in the figures, $\phi_{ij}$ and $\theta_{ij}$ are normalized by $2\pi$ and $\pi$, respectively. The packing fraction is $\Delta \varphi = 10^{-1}$ in left panels and $10^{-6}$ in right panels. The solid lines indicate $P(\phi_{ij},\theta_{ij})$ in Eq. 
(\[affine2\]), which coincides with numerical results (symbols), thereby demonstrating the isotropic distribution of the orientation of $\vec{n}_{ij}$.](fig3.eps){width="48.00000%"} ![\[fig.affine\] (Color online) Contributions to the affine moduli, $K_A^{ij}$ and $G_A^{ij}$, from each connected pair of particles, $(i,j)$ (see Eqs. (\[modulus2a\]) and (\[modulus2\])). Shown are the probability distributions, (a) $P(K^{ij}_A)$ and (b) $P(G^{ij}_A)$, for the range of $10^{-6} \le \Delta \varphi \le 10^{-1}$. It is seen that $P(K^{ij}_A) \simeq \delta (K^{ij}_A -1/9)$ (delta function), and $P(G^{ij}_A) \sim {G^{ij}_A}^{-1/2}$ (power-law function). The solid line in (b) presents $P(G_A^{ij})$ calculated from Eq. (\[affine2a\]). In (c), the average values over all $N^\text{ct}$ contacts, $\left< K^{ij}_A \right>$ and $\left< G^{ij}_A \right>$, are plotted as functions of $\Delta \varphi$. The horizontal lines indicate the values of $\left< K^{ij}_A \right>=1/9$ and $\left< G^{ij}_A \right>=1/15$ (see Eq. (\[affine3\])). In (d), we compare $K_A$ and $G_A$ from Eq. (\[affine5\]) (lines) to numerical values presented in Fig. \[fig.phidependence\](a),(b) (symbols).](fig4.eps){width="48.00000%"} Affine moduli {#sec.affine} ------------- Firstly we study the affine modulus $M_A$, which is decomposed into contributions from each contact $(i,j)$, $M_{A}^{ij}$, as in Eqs. (\[modulus2a\]) and (\[modulus2\]). Close to the transition point $\varphi_c$, $r_{ij} = 1 + \mathcal{O}(\Delta \varphi)$, $\phi'(r_{ij}) = \mathcal{O}(\Delta \varphi)$, and $\phi''(r_{ij}) = 1$ for all contacts, $(i,j) \in N^\text{ct}$. 
Therefore, we get $$\begin{aligned} \label{affine1} K_{A}^{ij} &= \left( \phi''(r_{ij}) - \frac{ \phi'(r_{ij}) }{ r_{ij} } \right) \frac{ { r_{ij} }^2}{9},\\ &= \frac{1}{9} + \mathcal{O}(\Delta \varphi),\\ G_{A}^{ij} &= \left( \phi''(r_{ij}) - \frac{ \phi'(r_{ij}) }{ r_{ij} } \right) \frac{ {r^x_{ij}}^2 {r^y_{ij}}^2}{{r_{ij}}^2},\\ &= {n_{ij}^{x}}^2 {n_{ij}^{y}}^2 + \mathcal{O} (\Delta \varphi),\\ &= \cos^2\phi_{ij} \sin^2 \phi_{ij} \sin^4 \theta_{ij} + \mathcal{O} (\Delta \varphi). \end{aligned}$$ In the last equality for $G_A^{ij}$ of Eq. (\[affine1\]), we write the unit bond vector, $\vec{n}_{ij}= \left( n_{ij}^x,n_{ij}^y,n_{ij}^z \right)$, as $$\left( n_{ij}^x,n_{ij}^y,n_{ij}^z \right) = \left(\cos\phi_{ij} \sin \theta_{ij},\sin\phi_{ij} \sin \theta_{ij},\cos \theta_{ij} \right),$$ where the pair of angles, $(\phi_{ij},\ \theta_{ij})$, are the polar coordinates specifying the orientation of $\vec{n}_{ij}$, and $0 \le \phi_{ij} < 2\pi$, $0 \le \theta_{ij}\le \pi$. The bulk modulus, $K_A^{ij} \simeq 1/9$ ($=\phi''(r_{ij})/9$), just picks up the stiffness of bond $\vec{n}_{ij}$, which is the same for all contacts. In contrast, the shear modulus, $G_A^{ij} \simeq {n_{ij}^{x}}^2 {n_{ij}^{y}}^2 =\cos^2\phi_{ij} \sin^2 \phi_{ij} \sin^4 \theta_{ij}$, depends on the orientation of $\vec{n}_{ij}$. In the present work, we follow Zaccone *et al.* [@Zaccone_2011; @Zaccone2_2011] and assume an isotropic distribution of the orientation of $\vec{n}_{ij}$: The joint probability distribution of $\phi_{ij},\theta_{ij}$ is assumed to be $$\label{affine2} P(\phi_{ij},\theta_{ij}) = \frac{1}{2\pi} \times \frac{\sin \theta_{ij}}{2}.$$ We plot numerical results of $P(\phi_{ij},\theta_{ij})$ for the packing fractions of high $\Delta \varphi = 10^{-1}$ and low $\Delta \varphi = 10^{-6}$ in Fig. \[fig.angle\], which are in good agreement with Eq. (\[affine2\]).
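The isotropic distribution of Eq. (\[affine2\]) is easy to realize numerically: sampling $\phi_{ij}$ uniformly on $[0,2\pi)$ and $\cos\theta_{ij}$ uniformly on $[-1,1]$ produces the $\sin\theta_{ij}$ weight automatically. The sketch below (sample size and seed are arbitrary choices) also estimates the orientational average of the leading term of $G_A^{ij}$, ${n^x_{ij}}^2{n^y_{ij}}^2$, which for a uniform distribution on the sphere equals $1/15$.

```python
import numpy as np

rng = np.random.default_rng(0)

def isotropic_bonds(n):
    """Unit bond vectors drawn from P(phi, theta) of Eq. (affine2):
    phi uniform on [0, 2 pi), cos(theta) uniform on [-1, 1]."""
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    cos_t = rng.uniform(-1.0, 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    return np.stack([np.cos(phi) * sin_t,
                     np.sin(phi) * sin_t,
                     cos_t], axis=1)

n_ij = isotropic_bonds(200_000)
G_ij = n_ij[:, 0]**2 * n_ij[:, 1]**2   # leading term of G_A^ij in Eq. (affine1)
```

The Monte Carlo mean of `G_ij` converges to $1/15 \simeq 0.0667$ as the sample grows.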
[**Probability distribution of $\boldsymbol{M_A^{ij}}$.**]{} Figure \[fig.affine\] presents the probability distributions, $P(K_A^{ij})$ in (a) and $P(G_A^{ij})$ in (b). We see that $P(K_A^{ij})$ and $P(G_A^{ij})$ are both insensitive to $\Delta \varphi$. As expected from Eq. (\[affine1\]), $P(K_A^{ij})$ shows a delta function, $P(K_A^{ij}) \simeq \delta(K_A^{ij}-1/9)$. On the other hand, $P(G_A^{ij})$ is a power-law function, $P(G_A^{ij}) \sim {G_A^{ij}}^{-1/2}$, with a finite range of $0 \le G^{ij}_A \le 1/4$. The power-law behavior of $P(G_A^{ij})$ is obtained using the isotropic distribution of the bond-orientation, i.e., $P(\phi_{ij},\theta_{ij})$ in Eq. (\[affine2\]), as $$\begin{aligned} \label{affine2a} & P(G_A^{ij}) dG_A^{ij} \\ & = \int_{G_A^{ij} < \cos^2\phi_{ij} \sin^2 \phi_{ij} \sin^4 \theta_{ij} < G_A^{ij}+dG_A^{ij}} P(\phi_{ij},\theta_{ij}) d\phi_{ij} d\theta_{ij}, \\ & \Longrightarrow\\ & P(G_A^{ij}) \\ &=\frac{{G_A^{ij}}^{-1/2}}{\pi} \int_{2{G^{ij}_A}^{1/2}}^{1} \left[ x(1-x^2) \left(x- 2 {G_A^{ij}}^{1/2} \right) \right]^{-1/2} dx. \end{aligned}$$ We note that $G^{ij}_A$ takes values in the range of $0 \le G^{ij}_A \le 1/4 \Leftrightarrow 0 \le 2{G^{ij}_A}^{1/2} \le 1$. Eq. (\[affine2a\]) is numerically verified in Fig. \[fig.affine\](b) (see solid line), and demonstrates that the power-law behavior, $P(G_A^{ij}) \sim {G_A^{ij}}^{-1/2}$, comes from its prefactor. 
[**Average value $\boldsymbol{\left< M_A^{ij} \right>}$.**]{} From the distribution function $P(M^{ij}_A)$, we obtain the average value $\left< M_A^{ij} \right>$; $$\begin{aligned} \label{affine3} \left< K_A^{ij} \right> &= \frac{1}{N^\text{ct}} \sum_{(i,j) \in N^\text{ct}} K_A^{ij} = \int K_A^{ij} P(K_A^{ij}) dK_A^{ij}, \\ &= \frac{1}{9} + \mathcal{O} (\Delta \varphi),\\ \left< G_A^{ij} \right> &= \frac{1}{N^\text{ct}} \sum_{(i,j) \in N^\text{ct}} G_A^{ij} = \int G_A^{ij} P(G_A^{ij}) dG_A^{ij}, \\ &= \frac{1}{15} + \mathcal{O} (\Delta \varphi), \end{aligned}$$ where $\left< \right>$ denotes the average over all the $N^\text{ct}$ contacts, $(i,j)$. $\left< G_A^{ij} \right> ={1}/{15}$ can be also calculated by using $P(\phi_{ij},\theta_{ij})$ in Eq. (\[affine2\]) as $$\begin{aligned} \left< G_A^{ij} \right> &= \int_{0}^{2\pi} d\phi_{ij} \int_0^\pi d\theta_{ij} P(\phi_{ij},\theta_{ij}) G_A^{ij},\\ &= \int_{0}^{2\pi} \frac{d\phi_{ij}}{2\pi} \int_0^\pi \frac{\sin \theta_{ij} d\theta_{ij}}{2} \cos^2\phi_{ij} \sin^2 \phi_{ij} \sin^4 \theta_{ij},\\ &= \frac{1}{15}. \end{aligned}$$ Panel (c) of Fig. \[fig.affine\] plots numerical values of $\left< M_A^{ij} \right>$ as a function of $\Delta \varphi$, and verifies Eq. (\[affine3\]). [**Formulation of the affine modulus $\boldsymbol{M_A}$.**]{} The total affine modulus $M_{A}$ is therefore formulated as $$\begin{aligned} \label{affine5x} M_{A} &= \frac{1}{V} \left< M_{A}^{ij} \right> N^\text{ct} = \frac{\hat{\rho}}{2} \left< M_{A}^{ij} \right> (z_c + \Delta z),\\ &= M_{Ac} + \frac{\hat{\rho}_c}{2} \left< M_{A}^{ij} \right>_c \Delta z + \mathcal{O} (\Delta \varphi), \end{aligned}$$ where $M_{Ac} = \left({\hat{\rho}_c}/{2}\right) \left< M_{A}^{ij} \right>_c z_c $ is the critical value at the transition point $\varphi_c$. 
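The angular average above can be cross-checked by direct numerical quadrature of $P(\phi_{ij},\theta_{ij})\,G_A^{ij}$; the midpoint grid and its resolution are arbitrary choices for the sketch.

```python
import numpy as np

n = 1000
# midpoint grids over phi in [0, 2 pi) and theta in [0, pi]
phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
theta = (np.arange(n) + 0.5) * np.pi / n
PH, TH = np.meshgrid(phi, theta, indexing="ij")

# integrand: P(phi, theta) * G_A^ij, as in the average above
f = (1.0 / (2.0 * np.pi)) * (np.sin(TH) / 2.0) \
    * np.cos(PH)**2 * np.sin(PH)**2 * np.sin(TH)**4

# midpoint-rule double integral, close to 1/15
G_avg = f.sum() * (2.0 * np.pi / n) * (np.pi / n)
```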
Specifically, we get $$\begin{aligned} \label{affine5} K_{A} &= K_{Ac} + \frac{\hat{\rho}_c}{18} \Delta z + \mathcal{O}(\Delta \varphi),\\ G_{A} &= G_{Ac} + \frac{\hat{\rho}_c}{30} \Delta z + \mathcal{O}(\Delta \varphi), \end{aligned}$$ with $$\begin{aligned} \label{affine5a} K_{Ac} &= \frac{\hat{\rho}_c}{18}z_c \simeq 0.40, \\ G_{Ac} &= \frac{\hat{\rho}_c}{30}z_c \simeq 0.24. \end{aligned}$$ We note that $\Delta z \sim \Delta \varphi^{1/2}$ is the leading order term of $M_A$ in Eqs. (\[affine5x\]) and (\[affine5\]). Eq. (\[affine5\]) is the same formulation obtained by Zaccone *et al.* [@Zaccone_2011; @Zaccone2_2011] for $d=3$ dimensions, which is based on the isotropic distribution of the bond orientations, $P(\phi_{ij},\theta_{ij})$ in Eq. (\[affine2\]). Figure \[fig.affine\](d) demonstrates that Eq. (\[affine5\]) matches the numerical values of $M_A$ presented in Fig. \[fig.phidependence\](a),(b). On approach to the transition point $\varphi_c$, the excess contact number $\Delta z$ vanishes, which reduces the affine modulus $M_A$ towards the critical value $M_{Ac}$. It is worth mentioning that the critical values of both $K_{Ac}$ and $G_{Ac}$ are finite and positive (see Eq. (\[affine5a\])). Therefore, similar to the coordination number $z$, $M_A$ discontinuously drops to zero through the transition to the fluid phase, $\varphi < \varphi_c$, where $M_A \equiv 0$. ![\[fig.vibration\] (Color online) Vibrational eigenmodes in the stressed (left panels) and the unstressed (right panels) systems. The vDOS $g(\omega)$ in (a),(b), displacements ${e}^{k \parallel}$ (solid lines), ${e}^{k \perp}$ (dashed lines) in (c),(d), net displacement ${e}^{k \parallel}_\text{net}$ in (e),(f), and the mode energies $\delta E^{k \parallel}$ (solid), $\delta E^{k \perp}$ (dashed) in (g),(h), are plotted as functions of the eigenfrequency $\omega$. See Eqs. (\[displace1\]) and (\[displace2\]) for the definitions of ${e}^{k \parallel}, {e}^{k \perp}, {e}^{k \parallel}_\text{net}$.
The values of ${e}^{k \parallel}, {e}^{k \perp}, {e}^{k \parallel}_\text{net}, \delta E^{k \parallel}, \delta E^{k \perp}$ are averaged over frequency bins of $\log_{10} \omega^k \in [\log_{10} \omega-\Delta \omega/2,\log_{10} \omega + \Delta \omega/2]$ with $\Delta \omega = 0.07$. The different lines indicate different packing fractions, $\Delta \varphi = 10^{-1}$ (red), $10^{-2}$ (green), $10^{-3}$ (blue), $10^{-4}$ (orange), $10^{-5}$ (magenta), $10^{-6}$ (black), from right to left or from top to bottom. Details of the presented quantities are given in Sec. \[sec.vibration\].](fig5.eps){width="48.00000%"} ![\[fig.frequency\] (Color online) Characteristic frequencies, $\omega^\ast$, $\omega^h$, $\omega^\ast_{M}$, and $\omega_{M}^h$ ($M=K, G$), as functions of $\Delta \varphi$. $\omega^\ast, \omega^h$ characterize the vDOS $g(\omega)$, and $\omega^\ast_{M}, \omega^h_{M}$ are from the modal contribution to the non-affine moduli, $M_N^k = K_N^k, G_N^k$. In (a), we compare $\omega^\ast$, $\omega^h$ between the stressed (open symbols) and unstressed (closed symbols) systems, which are seen to coincide with each other. In (b), $\omega^\ast$, $\omega^h$ are compared to $\omega^\ast_{M}$, $\omega_{M}^h$ for the stressed system. We observe that $\omega^\ast \simeq \omega^\ast_{M} \sim \Delta \varphi^{1/2}$ whereas $\omega^h \simeq \omega^h_{M} \simeq 1.0$ is insensitive to $\Delta \varphi$. Note that for the bulk modulus $M=K$, only $\omega^\ast_{K}$ is determined in $\Delta \varphi \le 5 \times 10^{-3}$ ($\omega_{K}^h$ is not). 
A more detailed discussion of these frequencies is given in the main text.](fig6.eps){width="48.00000%"} ![\[fig.alpha\] (Color online) Probability distribution $P(\alpha_{ij}^k)$ of the sliding angle, $\alpha^k_{ij} = \arctan \left( \left| \vec{e}_{ij}^{k \perp} \right| / \left| \vec{e}_{ij}^{k \parallel} \right| \right)$, for several different vibrational modes $k$, at (a) $\Delta \varphi=10^{-1}$ and (b) $\Delta \varphi=10^{-6}$ (inset is a zoom of the central portion). The number of the label indicates the eigenfrequency $\omega^k$. The value of $\alpha^k_{ij}$ is normalized by $\pi$, and the vertical solid line indicates $\alpha^k_{ij}=\pi/2$.](fig7.eps){width="48.00000%"} Vibrational eigenmodes {#sec.vibrationalstate} ---------------------- Before studying the non-affine modulus $M_N$, we report on the vibrational eigenmodes in this section. As explained in Sec. \[sec.vibration\], we characterize vibrational mode $k$ in terms of its eigenfrequency $\omega^k$, eigenvectors $\vec{e}_{ij}^{k \parallel}, \vec{e}_{ij}^{k \perp}$, and mode energies $\delta E^{k \parallel}, \delta E^{k \perp}$. Regarding the eigenvectors $\vec{e}_{ij}^{k \parallel}, \vec{e}_{ij}^{k \perp}$ (see Eq. 
(\[vs2\])), we introduce the “absolute” displacement ${e}^{k \parallel}, {e}^{k \perp}$ (root mean square); $$\begin{aligned} \label{displace1} {e}^{k \parallel} &= \sqrt{\frac{1}{N^\text{ct}} \sum_{(i,j) \in N^\text{ct}} {\vec{e}^{k \parallel}_{ij}}^2 } = \sqrt{ \left< {\vec{e}^{k \parallel}_{ij}}^2 \right>}, \\ {e}^{k \perp} &= \sqrt{\frac{1}{N^\text{ct}} \sum_{(i,j) \in N^\text{ct}} {\vec{e}^{k \perp}_{ij}}^2} = \sqrt{ \left< {\vec{e}^{k \perp}_{ij}}^2 \right>}, \end{aligned}$$ and the “net” displacement ${e}^{k \parallel}_\text{net}$; $$\label{displace2} {e}^{k \parallel}_\text{net} = \left| \frac{1}{N^\text{ct}} \sum_{(i,j) \in N^\text{ct}} \vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij} \right| = \left| \left< \vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij} \right> \right|.$$ In this way, the net displacement ${e}^{k \parallel}_\text{net}$ is a measure of vibrational motions $\vec{e}_{ij}^{k \parallel}$ along the bond vector $\vec{n}_{ij}$ that distinguishes between compressing ($\vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij}<0$) and stretching ($\vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij}>0$) motions, while ${e}^{k \parallel}$ merely picks up the “absolute” amplitude. The absolute amplitudes of ${e}^{k \parallel}, {e}^{k \perp}$ are directly related to the energies $\delta E^{k \parallel}$ and $\delta E^{k \perp}$ (see Eq. (\[vs1\])); $$\begin{aligned} \label{displace3} \delta E^{k \parallel} & = N^\text{ct} \left< \frac{\phi''(r_{ij})}{2} {\vec{e}^{k \parallel}_{ij}}^2 \right> \sim {{e}^{k \parallel}}^2,\\ \delta E^{k \perp} & = N^\text{ct} \left< -\frac{\phi'(r_{ij})}{2 r_{ij}} {\vec{e}^{k \perp}_{ij}}^2 \right> \sim \Delta \varphi {{e}^{k \perp}}^2, \end{aligned}$$ whereas the net amplitude of ${e}^{k \parallel}_\text{net}$ is related to the force $\left| \Sigma_M^k \right|$ and the non-affine displacement $\left| \delta {R}_{\text{na} M}^k \right|$ (see Eqs. 
(\[modulus8\]) and (\[modulus9\])); $$\begin{aligned} \label{displace4} & \left| \Sigma_M^k \right| \sim \left| N^\text{ct} \left< \phi''(r_{ij}) \left( \vec{e}^{k \parallel}_{ij}\cdot \vec{n}_{ij} \right) \right> + \mathcal{O}(\Delta \varphi) \right| \sim {e}^{k \parallel}_\text{net}, \\ & \left| \delta {R}_{\text{na} M}^k \right| = \frac{ \left| \Sigma_M^k \right| }{ {\omega}^{2} } \sim \frac{ {e}^{k \parallel}_\text{net} }{ {\omega}^{2} }. \end{aligned}$$ Figure \[fig.vibration\] shows $g(\omega)$ (vDOS), ${e}^{k \parallel}, {e}^{k \perp}, {e}^{k \parallel}_\text{net}, \delta E^{k \parallel}, \delta E^{k \perp}$ as functions of the eigenfrequency $\omega$, for the range of $\Delta \varphi = 10^{-1}$ to $10^{-6}$. In the figure, the values of ${e}^{k \parallel}, {e}^{k \perp}, {e}^{k \parallel}_\text{net}, \delta E^{k \parallel}, \delta E^{k \perp}$ are averaged over frequency bins of $\log_{10} \omega^k \in [\log_{10} \omega-\Delta \omega/2,\log_{10} \omega + \Delta \omega/2]$ with $\Delta \omega = 0.07$. Results from the original stressed system (left panels) as well as the unstressed system (right panels) are presented. [**Vibrational density of states $\boldsymbol{g(\omega)}$.**]{} As reported in previous studies [@Silbert_2009; @Xu_2010; @Silbert_2005], the vDOS $g(\omega)$, presented in Fig. \[fig.vibration\](a),(b), is divided into three regimes distinguishable by two characteristic frequencies $\omega^\ast$ and $\omega^h$; (i) intermediate $\omega^\ast< \omega < \omega^h$ regime, (ii) low $\omega < \omega^\ast$ regime, and (iii) high $\omega > \omega^h$ regime. Over the intermediate regime, $\omega^\ast< \omega < \omega^h$, $g(\omega)$ is nearly constant, i.e., $g(\omega)$ exhibits a plateau. At the low-frequency end, $\omega < \omega^\ast$, $g(\omega)$ decreases to zero as $\omega \rightarrow 0$, following Debye-like, power-law behavior, $g(\omega) \sim \omega^{a}$. 
Here, however, we find exponents of $a \simeq 3/2$ in the stressed system and $a \simeq 1$ in the unstressed system, both smaller than the exact Debye exponent, $a = d-1 = 2$ [@Ashcroft; @kettel; @McGaughey]. (We would expect to recover the Debye behavior, $g(\omega) \sim \omega^{2}$, in the low-frequency limit.) Finally, at high frequencies $\omega > \omega^h$, $g(\omega)$ goes to zero as $\omega$ increases to $\omega_\text{max} \simeq 3$, where the vibrational modes are highly localized [@Silbert_2009; @Xu_2010]. In Fig. \[fig.frequency\](a), we show the characteristic frequencies, $\omega^\ast$ and $\omega^h$, as functions of $\Delta \varphi$. As $\Delta \varphi \rightarrow 0$, $\omega^\ast$ goes to zero, following the power-law scaling $\omega^\ast \sim \Delta \varphi^{1/2} \rightarrow 0$ [@Silbert_2005; @Wyart_2005; @Wyart_2006], whereas $\omega^h \simeq 1.0$ is almost constant, independent of $\Delta \varphi$, and is set by the particle stiffness (recall, $\textrm{k}=1.0$). Thus, as demonstrated in Figs. \[fig.vibration\](a),(b) and \[fig.frequency\](a), on approach to the transition point $\varphi_c$, (i) the intermediate plateau regime extends towards zero frequency, (ii) the low $\omega <\omega^\ast$ region shrinks and disappears, and (iii) the high $\omega >\omega^h$ regime remains unchanged. Figure \[fig.frequency\](a) also compares $\omega^\ast, \omega^h$ between the stressed (open symbols) and the unstressed (closed symbols) systems, and demonstrates that the two systems show identical values of $\omega^\ast, \omega^h$. Thus, the three regimes, (i) to (iii), in $g(\omega)$ practically coincide between the two systems. However, we note that the crossover at $\omega = \omega^\ast$ between regimes (i) and (ii) is milder in the stressed system than in the unstressed system, which is clearly observed in Fig. \[fig.vibration\](a),(b) and was reported in previous works [@Wyart_2005; @Wyart_2006; @Xu_2007].
The stress, $\sim \phi'(r_{ij})$, reduces the mode energy $\delta E^k$ by $\delta E^{k \perp}$ (see Eq. (\[vs1\])), and shifts the vibrational modes to the low $\omega$ side [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart_2005; @Wyart_2006; @Xu_2007]. Thus, the “anomalous modes”, which lie in the plateau regime, move into the Debye-like regime, and as a result, the crossover becomes less abrupt in the stressed system. [**Displacements $\boldsymbol{{e}^{k \parallel}, {e}^{k \perp}}$.**]{} We now focus on the stressed system in the left panels of Fig. \[fig.vibration\]. Looking at ${e}^{k \parallel}$ (solid lines) and ${e}^{k \perp}$ (dashed lines) in (c), the sliding displacement ${e}^{k \perp}$ is almost constant, i.e., ${e}^{k \perp} \simeq {A}^{\perp}$. In the tangential direction, particles are displaced by the same magnitude in each mode $k$, independent of the eigenfrequency $\omega^k$. Since there are few constraints in the tangential direction close to the jamming transition, the sliding motion ${e}^{k \perp}$ dominates over the normal motion ${e}^{k \parallel}$ and determines the whole vibrational motion regardless of the mode frequency $\omega^k$ (except at the highest-frequency end). On the other hand, the compressing/stretching displacement ${e}^{k \parallel}$ is comparable to ${e}^{k \perp}$ at high $\omega$, and as $\omega$ is lowered, it monotonically decreases, following ${e}^{k \parallel} \sim \omega$. Around $\omega = \omega^\ast$, ${e}^{k \parallel}$ shows a functional crossover, from ${e}^{k \parallel} \sim \omega$ to $\sim \omega^0$. As $\omega \rightarrow 0$, ${e}^{k \parallel}$ converges to a constant value, ${A}^{\parallel}$, which depends on $\Delta \varphi$; ${e}^{k \parallel} \rightarrow {A}^{\parallel}(\Delta \varphi)$.
Here we note that as $\omega$ decreases, ${e}^{k \perp}$ increases relative to ${e}^{k \parallel}$, indicating that the sliding angle, $\alpha^k_{ij} := \arctan \left(\left| \vec{e}_{ij}^{k \perp} \right| / \left| \vec{e}_{ij}^{k \parallel} \right| \right)$, approaches $\pi/2$ for each contact $(i,j)$, and vibrational motions become more floppy-like [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009]. To illustrate this point more explicitly, Fig. \[fig.alpha\] plots the probability distribution $P(\alpha_{ij}^k)$ for several different normal modes $k$, and shows that lower-$\omega^k$ modes exhibit a higher probability at $\alpha_{ij}^k = \pi/2$. At low packing fraction $\Delta \varphi=10^{-6}$ (Fig. \[fig.alpha\](b)), the distribution for the lowest $\omega^k$ modes approaches a delta function, $P(\alpha_{ij}^k) \simeq \delta(\alpha_{ij}^k-\pi/2)$, where sliding ${e}^{k \perp}$ is orders of magnitude larger than compressing/stretching ${e}^{k \parallel}$. [**Mode energies $\boldsymbol{\delta E^{k \parallel}, \delta E^{k \perp}}$.**]{} We next turn to the mode energies, $\delta E^{k \parallel}$ (solid lines) and $\delta E^{k \perp}$ (dashed lines), in Fig. \[fig.vibration\](g). From Eq. (\[displace3\]) and ${e}^{k \perp} \simeq A^{\perp}$, the transverse energy $\delta E^{k \perp}$ is described as $\delta E^{k \perp} \sim \Delta \varphi {A^{\perp}}^2 \sim \Delta \varphi$. Thus, $\delta E^{k \perp}$ is independent of $\omega$ and is proportional to $\Delta \varphi$, which is indeed numerically demonstrated in (g). On the other hand, the compressing/stretching energy $\delta E^{k \parallel}$ dominates over $\delta E^{k \perp}$ at high $\omega$, and the total mode energy is determined by $\delta E^{k \parallel}$ only; $\delta E^{k \parallel} \simeq \delta E^{k}$. As $\omega$ is lowered, $\delta E^{k \parallel}$ decreases as $\delta E^{k \parallel} \simeq \delta E^{k} = {\omega}^2/2$ (see Eq. (\[vs3\])). From Eq.
(\[displace3\]) we obtain $\delta E^{k \parallel} \sim {{e}^{k \parallel}}^2 \sim {\omega}^2$, which explains the behavior of ${e}^{k \parallel} \sim \omega$ in (c). At the crossover $\omega = \omega^\ast$, $\delta E^{k \parallel} \simeq {{\omega}^\ast}^2/2$ reaches the same order of magnitude as $\delta E^{k \perp}$, from which we obtain the scaling law of $\omega^\ast$ with respect to $\Delta \varphi$ as $$\label{resultvs1} \begin{aligned} & \delta E^{k \parallel} \simeq \frac{{{\omega}^\ast}^2}{2} \sim \delta E^{k \perp} \sim \Delta \varphi, \\ & \Longleftrightarrow \ {\omega}^\ast \sim {\delta E^{k \perp}}^{1/2} \sim \Delta \varphi^{1/2}. \end{aligned}$$ Eq. (\[resultvs1\]) is indeed what we observed in Fig. \[fig.frequency\] and is consistent with previous works [@Silbert_2005; @Wyart_2005; @Wyart_2006]. The crossover in ${e}^{k \parallel}$ at $\omega=\omega^\ast$ corresponds to that in $\delta E^{k \parallel}$. As $\omega$ further decreases towards zero frequency, $\delta E^{k \parallel}$ converges to $\delta E^{k \perp} \sim \Delta \varphi$ such that the total $\delta E^k = \delta E^{k \parallel} - \delta E^{k \perp} \rightarrow 0$, and accordingly ${e}^{k \parallel}$ converges to ${A}^{\parallel} \sim {\delta E^{k \perp}}^{1/2} \sim \Delta \varphi^{1/2}$ as observed in (c). Therefore, in the stressed system, we identify $\omega^\ast$ as the frequency at which $\delta E^{k \parallel}$ becomes comparable to $\delta E^{k \perp}$. Even though the transverse energy, $\delta E^{k \perp} \sim \Delta \varphi$, becomes very small close to the transition point ($\Delta \varphi \ll 1$), it cannot be neglected in the low-frequency regime $\omega < \omega^\ast$. [**Net displacement $\boldsymbol{{e}^{k \parallel}_\text{net}}$.**]{} The net displacement ${e}^{k \parallel}_\text{net}$ in Fig.
\[fig.vibration\](e), which is roughly two orders of magnitude smaller than the absolute displacement ${e}^{k \parallel}$, shows a similar $\omega$-dependence to ${e}^{k \parallel}$. In particular, ${e}^{k \parallel}_\text{net}$ similarly exhibits a functional crossover at $\omega^\ast$, from ${e}^{k \parallel}_\text{net} \sim \omega$ to $\sim \omega^0$. As $\omega \rightarrow 0$, ${e}^{k \parallel}_\text{net} \rightarrow A^{\parallel}_\text{net}$, which depends on $\Delta \varphi$ as $A^{\parallel}_\text{net} \sim \Delta \varphi^{1/2}$ in the same manner as $A^{\parallel}$. Thus, we conclude that, as for ${e}^{k \parallel}$, the crossover in ${e}^{k \parallel}_\text{net}$ at $\omega=\omega^\ast$ is also controlled by the competition between the two mode energies, $\delta E^{k \parallel}$ and $\delta E^{k \perp}$. However, we see a difference between ${e}^{k \parallel}$ and ${e}^{k \parallel}_\text{net}$ at high frequencies $\omega>\omega^h$: ${e}^{k \parallel}_\text{net}$ shows a crossover from ${e}^{k \parallel}_\text{net} \sim \omega$ to $\sim \omega^0$, while ${e}^{k \parallel}$ retains the scaling ${e}^{k \parallel} \sim \omega$ with no crossover.
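The half-power scaling implied by the energy balance in Eq. (\[resultvs1\]) can be made concrete with a minimal numerical sketch (Python/NumPy; the prefactor and the data points are purely illustrative, not the simulation data):

```python
import numpy as np

c = 2.7                                  # arbitrary illustrative prefactor
dphi = np.logspace(-6, -1, 6)            # Delta phi range studied in the text

# energy balance of Eq. (resultvs1): omega*^2 / 2 ~ delta E_perp ~ c * dphi,
# hence omega* ~ (2 c dphi)^(1/2)
omega_star = np.sqrt(2.0 * c * dphi)

# the log-log slope of omega* versus dphi recovers the exponent 1/2
slope = np.polyfit(np.log10(dphi), np.log10(omega_star), 1)[0]
assert abs(slope - 0.5) < 1e-8
```

The slope extracted in log-log coordinates is exactly $1/2$ regardless of the prefactor, which is the sense in which the exponent, but not the amplitude, is fixed by the balance $\delta E^{k \parallel} \sim \delta E^{k \perp}$.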
In order to characterize the crossover in ${e}^{k \parallel}_\text{net}$ at $\omega=\omega^h$, we divide ${e}^{k \parallel}_\text{net}$ into two terms, ${e}^{k \parallel}_\text{com}$ and ${e}^{k \parallel}_\text{str}$, which originate from the compressing ($\vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij}<0$) and the stretching ($\vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij}>0$) motions, respectively; $$\begin{aligned} \label{displace5} {e}^{k \parallel}_\text{net} &= \left| \frac{1}{N^\text{ct}} \left( \sum_{\vec{e}^{k \parallel}_{ij}\cdot \vec{n}_{ij}<0} + \sum_{\vec{e}^{k \parallel}_{ij}\cdot \vec{n}_{ij}>0} \right) \vec{e}^{k \parallel}_{ij}\cdot \vec{n}_{ij} \right|,\\ & := \left| -{e}^{k \parallel}_\text{com} + {e}^{k \parallel}_\text{str} \right|, \end{aligned}$$ where ${e}^{k \parallel}_\text{com} > 0$ and ${e}^{k \parallel}_\text{str} > 0$ are both positive quantities. The absolute displacement ${e}^{k \parallel}$ can be approximated by the sum of these two terms; ${e}^{k \parallel} \approx {e}^{k \parallel}_\text{com} + {e}^{k \parallel}_\text{str}$. We have confirmed that below $\omega^h$, the two terms increase with $\omega$ at different rates, i.e., ${e}^{k \parallel}_\text{com} \approx R_\text{com} \omega$ and ${e}^{k \parallel}_\text{str} \approx R_\text{str} \omega$ ($R_\text{com} \neq R_\text{str}$), and as a result, the net value ${e}^{k \parallel}_\text{net}$ increases as ${e}^{k \parallel}_\text{net} \approx \left| R_\text{str}-R_\text{com} \right| \omega$. On the other hand, above $\omega^h$, they increase at the same rate, $R_\text{com} \approx R_\text{str} \approx R$, so that the net value does not vary with $\omega$. The absolute ${e}^{k \parallel}$ increases as ${e}^{k \parallel} \approx (R_\text{str} + R_\text{com}) \omega$, both below and above $\omega^h$.
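The sign-split in Eq. (\[displace5\]) amounts to separating the signed bond projections by their sign; a minimal sketch (Python/NumPy, with synthetic projections standing in for the measured $\vec{e}^{k \parallel}_{ij} \cdot \vec{n}_{ij}$):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic signed projections e_ij^{k||} . n_ij over the contacts
proj = rng.normal(loc=0.02, scale=1.0, size=4096)

# Eq. (displace5): split into compressing (<0) and stretching (>0) parts
e_com = -np.mean(np.where(proj < 0, proj, 0.0))   # > 0 by construction
e_str = np.mean(np.where(proj > 0, proj, 0.0))    # > 0 by construction
e_net = abs(-e_com + e_str)

assert e_com > 0 and e_str > 0
# the net value equals |mean of all signed projections|
assert np.isclose(e_net, abs(np.mean(proj)))
# while the mean absolute projection is exactly e_com + e_str
assert np.isclose(e_com + e_str, np.mean(np.abs(proj)))
```

The last identity makes explicit why ${e}^{k \parallel}_\text{net}$ is insensitive to balanced compressing/stretching motions ($R_\text{com} \approx R_\text{str}$), while the absolute amplitude keeps growing.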
Therefore, we conclude that the crossover in ${e}^{k \parallel}_\text{net}$ at $\omega = \omega^h$ is determined by the balance between the compressing (${e}^{k \parallel}_\text{com}$) and the stretching (${e}^{k \parallel}_\text{str}$) motions. The net displacement ${e}^{k \parallel}_\text{net}$ exhibits two crossovers at $\omega^\ast$ and $\omega^h$, such that the three regimes defined in $g(\omega)$ [@Silbert_2009; @Xu_2010; @Silbert_2005] can be distinguished by the scaling behaviors of ${e}^{k \parallel}_\text{net}$ as $$\begin{aligned} \label{resultvs2} & {e}^{k \parallel}_\text{net} \sim \left\{ \begin{aligned} & \omega^0 & (\omega > \omega^h), \\ & \omega & (\omega^\ast < \omega < \omega^h), \\ & \omega^0 & (\omega < \omega^\ast). \end{aligned} \right. \\ \end{aligned}$$ [**Comparison to unstressed system.**]{} Finally, we look at the unstressed system in the right panels of Fig. \[fig.vibration\]. Above $\omega^\ast$, where $\delta E^{k \parallel}$ controls the total mode energy $\delta E^k$ in the stressed system, the unstressed system exhibits the same behaviors and power-law scalings as the stressed system. However, since $\delta E^{k \perp} \equiv 0$ and $\delta E^{k \parallel} \equiv \delta E^k$, the unstressed system shows no crossover at $\omega = \omega^\ast$, and no distinct behaviors between $\omega>\omega^\ast$ and $\omega<\omega^\ast$. Therefore, although the unstressed system is a good approximation to the original stressed system, the low $\omega < \omega^\ast$ modes (low-energy modes) behave differently between the two systems. ![\[fig.decomposition\] (Color online) Eigenmode decomposition of non-affine *bulk* modulus in the stressed (left panels) and the unstressed (right panels) systems. We plot the non-affine modulus $K_N^k$ in (a),(b), force field $\left| \Sigma_K^k \right|$ in (c),(d), and the non-affine displacement field $\left| \delta R^k_{\text{na} K} \right|$ in (e),(f), as functions of the eigenfrequency $\omega$.
The values are averaged over the frequency bins of $\log_{10} \omega^k \in [\log_{10} \omega-\Delta \omega/2,\log_{10} \omega + \Delta \omega/2]$ with $\Delta \omega = 0.07$. The different lines indicate different packing fractions, $\Delta \varphi = 10^{-1}$ (red), $10^{-2}$ (green), $10^{-3}$ (blue), $10^{-4}$ (orange), $10^{-5}$ (magenta), $10^{-6}$ (black), from right to left or from top to bottom. A detailed description of the presented quantities is given in Sec. \[sec.moduli\].](fig8.eps){width="47.50000%"} ![\[fig.decompositiona\] (Color online) Eigenmode decomposition of non-affine *shear* modulus in the stressed (left panels) and the unstressed (right panels) systems. We plot the non-affine modulus $G_N^k$ in (a),(b), force field $\left| \Sigma_G^k \right|$ in (c),(d), and the non-affine displacement field $\left| \delta R^k_{\text{na} G} \right|$ in (e),(f), as functions of the eigenfrequency $\omega$. See the caption of Fig. \[fig.decomposition\].](fig9.eps){width="47.50000%"} ![\[fig.picture\] (Color online) Spatial maps of the force field $\vec{\Sigma}_M$ ((a),(b)) and the non-affine displacement field $\delta \vec{R}_{\text{na} M}$ ((c),(d)) in real space, corresponding to bulk $M=K$ (left panels) and shear $M=G$ (right panels) deformations. The packing fraction is $\Delta \varphi = 10^{-5}$. We plot the vector fields at a fixed plane within the packing of thickness $\approx 1 [\sigma]$, which includes around $100$ particles ($10\%$ of all the particles). $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$ are formulated as a superposition of the eigenvectors $\vec{e}^k$ weighted by the components of $\Sigma_M^k$ and $\delta {R}_{\text{na} M}^k$, respectively (see Eqs. (\[modulus7\]) and (\[modulus7a\])).
In the figure, we show the fields obtained by a summation of all the eigenmodes $k=1,2,...,3N-3$ (red solid vectors), and those obtained by a partial summation over $\omega^k > \omega^h$ for $\vec{\Sigma}_M$, and $\omega^k < \omega^\ast$ for $\delta \vec{R}_{\text{na} M}$ (blue dashed vectors).](fig10.eps){width="48.00000%"} ![\[fig.decomposition2\] (Color online) Comparison of $M_N^k, \left| \Sigma_M^k \right|, \left| \delta R^k_{\text{na} M} \right|$ between the bulk $M=K$ (red solid line) and the shear $M=G$ (blue dashed line) moduli, for the stressed (left panels) and the unstressed (right panels) systems. We plot $M_N^k$ in (a),(b), $\left| \Sigma_M^k \right|$ in (c),(d), and $\left| \delta R^k_{\text{na} M} \right|$ in (e),(f), as functions of the eigenfrequency $\omega$. The packing fraction is $\Delta \varphi = 10^{-5}$. The data are the same as those presented in Figs. \[fig.decomposition\] and \[fig.decompositiona\].](fig11.eps){width="48.00000%"} Eigenmode decomposition of non-affine moduli {#sec.nonaffine1} -------------------------------------------- In this section, we study the non-affine modulus $M_N$, which is decomposed into eigenmode $k$ contributions, $M_{N}^{k}$ ($k=1,2,...,3N-3$), as in Eq. (\[modulus10\]). Each component $M_{N}^{k}$ is formulated as the product of force $\Sigma_M^k$ and non-affine displacement $\delta {R}_{\text{na} M}^k$, and thus can be interpreted as an energy relaxation by the eigenmode $k$ excitation during the non-affine deformation process. The values of $M_{N}^{k}$, $\left| \Sigma_M^k \right|$, $\left| \delta {R}_{\text{na} M}^k \right|$ are presented as functions of the eigenfrequency $\omega$, for the range of packing fractions, $\Delta \varphi = 10^{-1}$ to $10^{-6}$, in Fig. \[fig.decomposition\] for the bulk $M=K$ and Fig. \[fig.decompositiona\] for the shear $M=G$. Note that since $M_{N}^{k}$ is positive for all the modes $k$, $M_{N}^{k} = \left| \Sigma_M^k \right| \times \left| \delta {R}_{\text{na} M}^k \right|$ holds.
The presented values are averaged over the frequency bins of $\log_{10} \omega^k \in [\log_{10} \omega-\Delta \omega/2,\log_{10} \omega + \Delta \omega/2]$ with $\Delta \omega = 0.07$. [**Eigenmode contribution $\boldsymbol{M_{N}^{k}}$.**]{} We first focus on the stressed system in the left panels of Figs. \[fig.decomposition\] and \[fig.decompositiona\]. Like the vibrational modes in Fig. \[fig.vibration\], the non-affine modulus $M_N^k$, in (a), also shows three distinct frequency regimes; (i) intermediate $\omega^\ast_M < \omega < \omega^h_M$ regime, (ii) low $\omega < \omega^\ast_M$ regime, and (iii) high $\omega > \omega^h_M$ regime. At intermediate frequencies, $\omega^\ast_M < \omega < \omega^h_M$, $M_N^k$ is practically $\omega$-independent and shows a plateau. In the low-frequency regime, $\omega < \omega^\ast_M$, $M_N^k$ increases from the plateau value as $M_N^k \sim \omega^{-2}$. Finally, in the high-frequency regime, $\omega > \omega^h_M$, $M_N^k$ drops and decreases as $\omega \rightarrow \omega_\text{max} \simeq 3$. Here, we remark that the bulk modulus $K_N^k$ is not strictly a plateau in the intermediate regime but slightly decreases at higher $\omega$, so that we cannot cleanly identify $\omega_K^\ast$ at higher $\Delta \varphi$, nor $\omega_K^h$. Thus, we determined $\omega^\ast_{K}$ only for the lower $\Delta \varphi \le 5\times 10^{-3}$, and did not identify a specific $\omega_{K}^h$. The shear modulus $G_N^k$, in contrast, shows a clear plateau region, and we can determine both $\omega^\ast_G$ and $\omega^h_G$ without ambiguity. We discuss this difference between $K_N^k$ and $G_N^k$ at the end of this section, but here we emphasize that at a qualitative level, $K_N^k$ can also be divided into three regimes as described above. In order to check if the crossover points coincide between the vDOS $g(\omega)$ and $M_N^k$, we compare $\omega^\ast, \omega^h$ from $g(\omega)$, to $\omega^\ast_{M}, \omega^h_{M}$ from $M_N^k$ in Fig. \[fig.frequency\](b).
Figure \[fig.frequency\](b) indeed demonstrates that $g(\omega)$ and $M_N^k$ indicate the same crossover frequencies: $\omega^\ast \simeq \omega^\ast_M \sim \Delta \varphi^{1/2}$ and $\omega^h \simeq \omega^h_M \simeq 1.0$. [**Force $\boldsymbol{\left| \Sigma_M^k \right|}$ and non-affine displacement $\boldsymbol{\left| \delta {R}_{\text{na} M}^k \right|}$.**]{} We turn to the force $\left| \Sigma_M^k \right|$ in (c) of Figs. \[fig.decomposition\] and \[fig.decompositiona\], and the non-affine displacement $\left| \delta {R}_{\text{na} M}^k \right|$ in (e). As in Eq. (\[displace4\]), $\left| \Sigma_M^k \right|$ and $\left| \delta {R}_{\text{na} M}^k \right|$ are directly related to the net (compressing/stretching) displacement ${e}^{k \parallel}_\text{net}$. Indeed, we observe the following power-law behaviors of $\left| \Sigma_M^k \right|, \left| \delta {R}_{\text{na} M}^k \right|, M_N^k = \left| \Sigma_M^k \right| \times \left| \delta {R}_{\text{na} M}^k \right|$; $$\begin{aligned} \label{resultenedis} & \left| \Sigma_M^k \right| \sim {e}^{k \parallel}_\text{net} \sim \left\{ \begin{aligned} & \omega^0 & (\omega > \omega^h), \\ & \omega & (\omega^\ast < \omega < \omega^h), \\ & \omega^0 & (\omega < \omega^\ast), \end{aligned} \right. \\ % & \left| \delta {R}_{\text{na} M}^k \right| \sim \frac{ {e}^{k \parallel}_\text{net} }{ {\omega}^{2} } \sim \left\{ \begin{aligned} & \omega^{-2} & (\omega > \omega^h), \\ & \omega^{-1} & (\omega^\ast < \omega < \omega^h), \\ & \omega^{-2} & (\omega < \omega^\ast), \end{aligned} \right. \\ % & M_N^k \sim \frac{ { {e}^{k \parallel}_\text{net} }^2 }{ {\omega}^{2} } \sim \left\{ \begin{aligned} & \omega^{-2} & (\omega > \omega^h), \\ & \omega^{0} & (\omega^\ast < \omega < \omega^h), \\ & \omega^{-2} & (\omega < \omega^\ast), \end{aligned} \right. \end{aligned}$$ all of which are consistent with the behavior of ${e}^{k \parallel}_\text{net}$ in Eq. (\[resultvs2\]). 
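The exponent pattern in Eq. (\[resultenedis\]) follows mechanically from the $\omega$-exponents of ${e}^{k \parallel}_\text{net}$ in Eq. (\[resultvs2\]) through $\left| \Sigma_M^k \right| \sim {e}^{k \parallel}_\text{net}$, $\left| \delta {R}_{\text{na} M}^k \right| = \left| \Sigma_M^k \right| / \omega^2$, and $M_N^k = \left| \Sigma_M^k \right| \left| \delta {R}_{\text{na} M}^k \right|$; the small bookkeeping sketch below (Python; the regime labels are ours) makes this explicit:

```python
# exponents s of e_net^{k||} ~ omega^s in the three regimes of Eq. (resultvs2)
e_net_exp = {"high (w > w_h)": 0, "mid (w* < w < w_h)": 1, "low (w < w*)": 0}

results = {}
for regime, s in e_net_exp.items():
    sigma_exp = s        # |Sigma_M^k|  ~ e_net                  ~ omega^s
    dr_exp = s - 2       # |dR_na_M^k|  = |Sigma_M^k| / omega^2  ~ omega^(s-2)
    mn_exp = 2 * s - 2   # M_N^k        = |Sigma| * |dR|         ~ omega^(2s-2)
    results[regime] = (sigma_exp, dr_exp, mn_exp)

# reproduces Eq. (resultenedis): high/low give (0, -2, -2), mid gives (1, -1, 0)
assert results["mid (w* < w < w_h)"] == (1, -1, 0)
assert results["high (w > w_h)"] == results["low (w < w*)"] == (0, -2, -2)
```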
As $\omega \rightarrow 0$, ${e}^{k \parallel}_\text{net} \rightarrow A^{\parallel}_\text{net} \sim \Delta \varphi^{1/2}$, leading to $\left| \Sigma_M^k \right| \sim A^{\parallel}_\text{net} \sim \Delta \varphi^{1/2}$, $\left| \delta {R}_{\text{na} M}^k \right| \sim A^{\parallel}_\text{net} \omega^{-2} \sim \Delta \varphi^{1/2} \omega^{-2}$, and $M_N^k \sim {A^{\parallel 2}_\text{net}} \omega^{-2} \sim \Delta \varphi \omega^{-2}$. Therefore, all of $\left| \Sigma_M^k \right|, \left| \delta {R}_{\text{na} M}^k \right|, M_N^k$ follow the net displacement ${e}^{k \parallel}_\text{net}$. In particular, their crossovers at $\omega^\ast$ are controlled by the competition between the compressing/stretching $\delta E^{k \parallel}$ and sliding $\delta E^{k \perp}$ energies, whereas those at $\omega^h$ are determined by the balance between the compressing ${e}^{k \parallel}_\text{com}$ and stretching ${e}^{k \parallel}_\text{str}$ motions. [**Comparison to unstressed system.**]{} When comparing the stressed system (left panels of Figs. \[fig.decomposition\] and \[fig.decompositiona\]) to the unstressed system (right panels), both systems show the same behaviors of $\left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|,M_N^k$ at $\omega > \omega^\ast$, in particular the same power-law scalings. However, since the unstressed system shows no crossover in ${e}^{k \parallel}_\text{net}$ (and ${e}^{k \parallel}$, $\delta E^{k \parallel}$) at $\omega=\omega^\ast$, as discussed in the previous Sec. \[sec.vibrationalstate\], it retains the same behaviors of $\left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|,M_N^k$ at $\omega^\ast < \omega < \omega^h$ down to $\omega = 0$, i.e., at $0 <\omega <\omega^h$. Hence, below $\omega^\ast$, the two systems show distinct behaviors and scalings in their vibrational modes as well as the non-affine elastic moduli.
This result is a direct consequence of the fact that the transverse energy $\delta E^{k \perp}$ in the stressed system is effective below $\omega^\ast$ but negligible above it. [**Physical interpretation of $\boldsymbol{M_{N}^{k}}$.**]{} We can interpret our results of $M_N^k = \left| \Sigma_M^k \right| \times \left| \delta {R}_{\text{na} M}^k \right|$ in Figs. \[fig.decomposition\] and \[fig.decompositiona\], and Eq. (\[resultenedis\]), in terms of energy relaxation during the non-affine deformation process. At the highest frequencies, $\omega > \omega^h$, there exist many closely spaced, localized eigenmodes whose energies are high enough that they are only weakly activated. As a result, their associated non-affine displacement fields are small, leading to minimal energy relaxation and a small $M_N^k$. At intermediate frequencies, $\omega^\ast < \omega < \omega^h$, the modes are of lower energies and are more readily excited. As a result, the non-affine displacement grows as $\left| \delta {R}_{\text{na} M}^k \right| \sim \omega^{-1}$, whereas at the same time, the force $\left| \Sigma_M^k \right| \sim \omega$ becomes smaller with decreasing frequency. These two competing effects balance, resulting in the constant, plateau value of energy relaxation, $M^k_N \sim \omega^0$. Finally, at the low end of the frequency spectrum, $\omega < \omega^\ast$, for the stressed system, the stress, $\sim \phi'(r_{ij})$, enhances the force $\left| \Sigma_M^k \right|$ and drives the non-affine displacement $\left| \delta {R}_{\text{na} M}^k \right|$. Since the stress term, $\sim \phi'(r_{ij})$, reduces the mode energy by $\delta E^{k \perp}$ (see Eq. (\[vs1\])), the compressing/stretching energy $\delta E^{k \parallel}$ compensates for this destabilization of the system, leading to larger values of ${e}^{k \parallel}_\text{net}$ (and also ${e}^{k \parallel}$), and hence to the enhancement of $\left| \Sigma_M^k \right|$ and $\left| \delta {R}_{\text{na} M}^k \right|$.
As a result, the energy relaxation grows with decreasing $\omega$ as $M^k_N \sim \omega^{-2}$. In contrast, the unstressed system, with zero stress, $\phi'(r_{ij}) \equiv 0$, has a constant energy relaxation, $M_N^k \sim \omega^0$, at $\omega < \omega^\ast$, just as at $\omega^\ast < \omega < \omega^h$. [**Spatial structures of $\boldsymbol{\left| \Sigma_M^k \right|}$ and $\boldsymbol{\left| \delta {R}_{\text{na} M}^k \right|}$.**]{} As reported by Maloney and Lemaître [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006], the force field $\vec{\Sigma}_M$ exhibits a random structure (without any apparent spatial correlation) in real space, while the non-affine displacement field $\delta \vec{R}_{\text{na} M}$ shows a vortex-like structure (with apparent long-range spatial correlation). Indeed, such features are observed in Fig. \[fig.picture\], where $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$ are visualized in real space, at a fixed plane within a slice one particle diameter thick. As in Eqs. (\[modulus7\]) and (\[modulus7a\]), the real-space structures of $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$ are constructed as a superposition of the eigenvectors $\vec{e}^k$ weighted by the components of $\Sigma_M^k$ and $\delta {R}_{\text{na} M}^k$. Figure \[fig.picture\] also compares the total contributions (red solid vectors) to those obtained by a partial summation over $\omega^k > \omega^h$ for $\vec{\Sigma}_M$, and $\omega^k < \omega^\ast$ for $\delta \vec{R}_{\text{na} M}$ (blue dashed vectors). It is seen that the partial summations can well reproduce the true fields (full summations) of $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$. Therefore, our results indicate that the eigenvectors $\vec{e}^k$ at high frequencies $\omega^k > \omega^h$, which are highly localized fields [@Silbert_2009; @Xu_2010], mainly contribute to the random structure of $\vec{\Sigma}_M$ (Fig. \[fig.picture\](a),(b)).
The vortex-like structure of $\delta \vec{R}_{\text{na} M}$ (Fig. \[fig.picture\](c),(d)), in contrast, comes from the transverse fields with vortex features apparent in the eigenvectors $\vec{e}^k$ at low frequencies $\omega^k < \omega^\ast$ [@Silbert_2005; @Silbert_2009]. Here we should remark that on approach to the transition point, $\Delta \varphi \rightarrow 0$ and $\omega^\ast \rightarrow 0$, the contributions to $\delta \vec{R}_{\text{na} M}$ at $\omega^k < \omega^\ast$ diminish, and eventually the modes at $\omega > \omega^\ast$ also start to play a role in determining $\delta \vec{R}_{\text{na} M}$. [**Comparison between bulk $\boldsymbol{M=K}$ and shear $\boldsymbol{M=G}$ moduli.**]{} We close this section with a comparison of $M_{N}^{k},\left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|$ between the bulk $M=K$ (Fig. \[fig.decomposition\]) and shear $M=G$ (Fig. \[fig.decompositiona\]) moduli. All of $M_{N}^{k}$, $\left| \Sigma_M^k \right|$, $\left| \delta {R}_{\text{na} M}^k \right|$ show similar behaviors and power-law scalings between $M=K$ and $G$, for both the stressed and unstressed systems. However, we observe some differences: At $\omega^\ast < \omega < \omega^h$, $G_N^k$ shows a clear plateau, while $K_N^k$ slightly depends on $\omega$. We focus on these differences in Fig. \[fig.decomposition2\], where we compare $M_N^k, \left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|$ between $M=K$ and $G$. At lower frequencies $\omega \lesssim 10^{-1}$, the quantities coincide well between $M=K$ and $G$ [^1], whereas at higher frequencies $\omega \gtrsim 10^{-1}$, they are larger for $G$ than for $K$. Here we note that $K_N^k$ starts to deviate from its plateau value at $\omega \approx 10^{-1}$. Thus, eigenmodes with $\omega \gtrsim 10^{-1}$ are excited more under shear deformation than under compressional deformation, which results in more energy relaxation and a larger non-affine modulus $G_N$ than $K_N$.
As we will see in Eq. (\[nonaffine7\]) in the next section, the critical value of $G_{Nc} \simeq 0.24$ is larger than $K_{Nc} \simeq 0.15$, which comes from the eigenmode contributions at $\omega \gtrsim 10^{-1}$. Ellenbroek *et al.* [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009] have demonstrated a distinction in non-affine responses under compression and shear: The non-affine response under shear is considered to be governed by more floppy-like motions than that under compression. From their result, we might expect that the floppy-like vibrational modes at low frequencies are more enhanced under shear than under compression. However, our results indicate that this issue is more subtle and involves an interplay between the modes over the entire vibrational spectrum. While it is true that the large-scale non-affine field, $\delta \vec{R}_{\text{na} M}$, comes from the lower-frequency portion of the spectrum for both compression and shear, the difference between them appears at relatively high frequencies $\omega \gtrsim 10^{-1}$, *not* really low frequencies (for the example $\Delta\varphi = 10^{-5}$ shown in Fig. \[fig.decomposition2\]). Therefore, if one associates “floppiness” with a more non-affine or softer response under shear than under compression, this is not a property restricted to just the low-frequency modes. ![\[fig.nonaffine\] (Color online) Non-affine moduli $K_N, G_N$ in the stressed (left panels) and the unstressed (right panels) systems. We plot $K_{N}^{\omega < \omega^\ast}, G_{N}^{\omega < \omega^\ast}$ in (a),(b), $K_{N}^{\omega > \omega^\ast}, G_{N}^{\omega > \omega^\ast}$ in (c),(d), and total $K_{N}, G_{N}$ in (e),(f). In the figures, we compare numerical values presented in Fig. \[fig.phidependence\](a),(b) (symbols), to the formulations (solid lines) which are described in the main text (see Eqs. (\[nonaffine2\]), (\[nonaffine6\]), (\[nonaffine9\])).
Note that the numerical values of $M_{N}^{\omega < \omega^\ast}$ and $M_{N}^{\omega > \omega^\ast}$ are obtained by replacing $\sum_{k=1}^{3N-3}$ in Eq. (\[modulus10\]) with partial summations, $\sum_{\omega^k < \omega^\ast}$ and $\sum_{\omega^k > \omega^\ast}$, respectively. In (a) and (e) for the stressed system, dashed lines indicate the formulation where we use the exponents of $a=1.5$ and $b=1.3$ (see Eqs. (\[nonaffine2x\]), (\[nonaffine9x\])).](fig12.eps){width="48.00000%"} Formulation of non-affine moduli {#sec.nonaffine2} -------------------------------- Based on the observations in the previous Secs. \[sec.vibrationalstate\] and \[sec.nonaffine1\], we attempt to formulate the non-affine modulus $M_N=K_{N},G_N$. Following Refs. [@Lemaitre_2006; @Zaccone_2011], we assume that $M_{N}^{k}$ (also $\left| \Sigma_M^k \right|, \left| \delta {R}_{\text{na} M}^k \right|$) is a self-averaging quantity: In the thermodynamic limit $N \rightarrow \infty$, $M_{N}^{k}$ converges to a well-defined continuous function of $\omega$, i.e., $M_{N}^{k}(\omega)$, which can then be obtained by averaging over the frequency shells and different realizations, as we have done in Figs. \[fig.decomposition\] and \[fig.decompositiona\] for $M_{N}^{k} = K_{N}^{k}$ and $G_{N}^{k}$, respectively. Thus we replace the summation, $\sum_{k=1}^{3N-3}$, in $M_N$ of Eq. (\[modulus10\]) by the integral operator, $\int d\omega (3N-3) g(\omega) \simeq \int d\omega 3N g(\omega)$; $$\label{nonaffine1x} M_{N} = \frac{1}{V} \sum_{k=1}^{3N-3} M_{N}^{k} = 3\hat{\rho} \int d\omega g(\omega) M_{N}^k(\omega),$$ where we note $(3N-3) g(\omega) \simeq 3N g(\omega)$ is the total number of the eigenmodes $k$ per unit frequency at $\omega$.
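The sum-to-integral replacement of Eq. (\[nonaffine1x\]) can be checked numerically on a synthetic spectrum; the sketch below (Python/NumPy) uses a toy flat density of states and a toy smooth $M_N^k(\omega)$, both our own assumptions rather than the measured quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000                              # number of particles (synthetic)
w_max = 3.0

# synthetic eigenfrequencies drawn from a flat toy vDOS on (0, w_max],
# standing in for the 3N-3 modes of the packing
omega_k = rng.uniform(0.0, w_max, size=3 * N - 3)

def m_k(w):
    """toy smooth mode contribution M_N^k(omega) (illustrative only)"""
    return 1.0 / (1.0 + w**2)

V = 1.0
# discrete mode sum, as in Eq. (modulus10)
m_sum = np.sum(m_k(omega_k)) / V

# continuum version, Eq. (nonaffine1x): (3N-3) g(w) modes per unit frequency,
# with the normalized flat toy vDOS g(w) = 1/w_max (trapezoidal quadrature)
w = np.linspace(1e-6, w_max, 20001)
f = (1.0 / w_max) * m_k(w)
m_int = (3 * N - 3) / V * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w))

# the two agree up to statistical fluctuations of the finite mode set
assert abs(m_sum - m_int) / m_int < 0.05
```

For a self-averaging $M_N^k(\omega)$, the relative discrepancy shrinks as $\sim 1/\sqrt{3N-3}$, which is what justifies the replacement in the thermodynamic limit.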
We then separate $M_N$ into two terms, by dividing the integral regime into $\omega < \omega^\ast$ and $\omega > \omega^\ast$; $$\begin{aligned} \label{nonaffine1} M_{N} &= 3\hat{\rho} \left( \int_{\omega<\omega^\ast} d\omega + \int_{\omega>\omega^\ast} d\omega \right) g(\omega) M_{N}^k(\omega),\\ &:= M_{N}^{\omega < \omega^\ast} + M_{N}^{\omega > \omega^\ast}. \end{aligned}$$ In the following, we deal with those two terms in turn. [**Formulation of $\boldsymbol{M_{N}^{\omega < \omega^\ast}}$.**]{} For $\omega < \omega^\ast$, we suppose a Debye-like density of states, as observed in Fig. \[fig.vibration\](a),(b); $$\label{assume1} g(\omega) = g^\ast \left( \frac{\omega}{\omega^\ast} \right)^a,$$ where $g^\ast$ is the plateau value of $g(\omega)$, and the exponent $a$ depends on the stressed or unstressed systems; $$a = \left\{ \begin{aligned} & \frac{3}{2} & \text{(stressed)}, \\ & 1 & \text{(unstressed)}. \end{aligned} \right.$$ In addition, from Figs. \[fig.decomposition\](a),(b) and \[fig.decompositiona\](a),(b), we also reasonably assume $$\label{assume2} M_{N}^k(\omega) = M_{N}^{\ast} \left( \frac{\omega}{\omega^\ast} \right)^{-b},$$ where $M_{N}^{\ast}$ represents the plateau value of $M_{N}^k (\omega)$, and the exponent $b$ is $$b = \left\{ \begin{aligned} & 2 & \text{(stressed)}, \\ & 0 & \text{(unstressed)}. \end{aligned} \right.$$ On performing the integral $\int_{\omega < \omega^\ast} d\omega$ in Eq. (\[nonaffine1\]), we obtain $M_{N}^{\omega < \omega^\ast}$ as $$\begin{aligned} \label{nonaffine2} M_{N}^{\omega < \omega^\ast} &= \left( \frac{ 1 }{a-b+1} \right) 3\hat{\rho} g^\ast M_N^{\ast} {\omega^\ast}, \\ & = \left\{ \begin{aligned} & 6 \hat{\rho}_c g^\ast M_N^{\ast} {\omega^\ast} + \mathcal{O}(\Delta \varphi) & \text{(stressed)}, \\ & \frac{3}{2} \hat{\rho}_c g^\ast M_N^{\ast} {\omega^\ast} + \mathcal{O}(\Delta \varphi) & \text{(unstressed)}. \end{aligned} \right. 
\end{aligned}$$ Note that in the stressed case, the integrand, $g(\omega) M_{N}^k(\omega) \sim \omega^{a-b} \sim \omega^{-1/2}$, diverges to $+\infty$ as $\omega \rightarrow 0$, but its integral from $\omega=0$ to $\omega^\ast$ converges to a finite value. As $\Delta \varphi \rightarrow 0$, $\omega^\ast$ goes to zero, i.e., the Debye-like region disappears, and $M_{N}^{\omega < \omega^\ast}$ vanishes as $M_{N}^{\omega < \omega^\ast} \sim \omega^\ast \sim \Delta \varphi^{1/2} \rightarrow 0$.

[**Formulation of $\boldsymbol{M_{N}^{\omega > \omega^\ast}}$.**]{} Next we consider the integral $\int_{\omega > \omega^\ast} d\omega$ in Eq. (\[nonaffine1\]), i.e., $M_{N}^{\omega > \omega^\ast}$. Since $g(\omega)$ and $M_{N}^k(\omega)$ are independent of $\Delta \varphi$ at $\omega > \omega^h$, the integral over $\omega > \omega^h$ gives a constant; $$\label{nonaffine4} \int_{\omega > \omega^h} d\omega g(\omega) M_{N}^k(\omega) = M_{N}^h \ \text{(constant)}.$$ In the regime $\omega^\ast < \omega < \omega^h$, both $g(\omega)$ and $M_{N}^k(\omega)$ show plateaus, so we write $$\label{nonaffine5} \int_{\omega^\ast < \omega <\omega^h} d\omega g(\omega) M_{N}^k(\omega) = g^\ast M_{N}^{\ast} \left( \omega^h-\omega^\ast \right).$$ Therefore, we arrive at $$\label{nonaffine6} \begin{aligned} M_{N}^{\omega > \omega^\ast} &= 3\hat{\rho} \left(M_{N}^h + g^\ast M_{N}^{\ast} \omega^h \right) - 3\hat{\rho} g^\ast M_{N}^{\ast} \omega^\ast, \\ &= M_{Nc} - 3\hat{\rho}_c g^\ast M_{N}^{\ast} \omega^\ast + \mathcal{O}(\Delta \varphi), \end{aligned}$$ where $M_{Nc} = 3\hat{\rho}_c \left(M_{N}^h + g^\ast M_{N}^{\ast} \omega^h \right)$ is the critical value at $\varphi_c$. Thus, as $\Delta \varphi \rightarrow 0$ and $\omega^\ast \rightarrow 0$, the plateau region extends down to zero frequency, and $M_{N}^{\omega > \omega^\ast} \rightarrow M_{Nc}$.
We note that $M_{Nc}$ is the critical value not only for $M_{N}^{\omega > \omega^\ast}$ but also for the total non-affine modulus $M_N$, since $M_{N}^{\omega < \omega^\ast} \rightarrow 0$ as $\Delta \varphi \rightarrow 0$.

[**Summation of $\boldsymbol{M_{N}^{\omega < \omega^\ast}}$ and $\boldsymbol{M_{N}^{\omega > \omega^\ast}}$.**]{} Finally, we sum the two terms, $M_{N}^{\omega < \omega^\ast}$ and $M_{N}^{\omega > \omega^\ast}$, to obtain the total modulus $M_N$ as $$\begin{aligned} \label{nonaffine9} M_{N} &= M_{Nc} - \left( \frac{ a-b }{a-b+1} \right) 3\hat{\rho}_c g^\ast M_{N}^{\ast} \omega^\ast + \mathcal{O}(\Delta \varphi), \\ & = \left\{ \begin{aligned} & M_{Nc} + 3 \hat{\rho}_c g^\ast M_N^{\ast} {\omega^\ast} + \mathcal{O}(\Delta \varphi) & \text{(stressed)}, \\ & M_{Nc} - \frac{3}{2} \hat{\rho}_c g^\ast M_N^{\ast} {\omega^\ast} + \mathcal{O}(\Delta \varphi) & \text{(unstressed)}. \end{aligned} \right. \end{aligned}$$ Here we note that the $\omega^\ast$ term, with $\omega^\ast \sim \Delta \varphi^{1/2}$, is the leading-order correction in $M_{N}^{\omega < \omega^\ast}$, $M_{N}^{\omega > \omega^\ast}$, and $M_{N}$ of Eqs. (\[nonaffine2\]), (\[nonaffine6\]), (\[nonaffine9\]), respectively. We have extracted the values of the parameters in Eq. (\[nonaffine9\]) from the data presented in Figs. \[fig.vibration\], \[fig.decomposition\], and \[fig.decompositiona\]; $$\begin{aligned} & g^\ast = 0.390,\qquad K_N^{\ast}=0.0740,\qquad G_N^{\ast}=0.118,\\ & K_N^h = 0.0135,\qquad G_N^h = 0.0219, \end{aligned}$$ which are common to the stressed and unstressed systems. As mentioned in the previous Sec. \[sec.nonaffine1\] and Figs. \[fig.decomposition\] and \[fig.decompositiona\], $G_N^k (\omega)$ shows a clear plateau over the intermediate frequency range, $\omega^\ast < \omega < \omega^h$, while $K_N^k (\omega)$ depends slightly on $\omega$.
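The prefactors in Eqs. (\[nonaffine2\]) and (\[nonaffine9\]) follow from elementary integration of the assumed power laws. The short sketch below evaluates $3/(a-b+1)$ and $3(a-b)/(a-b+1)$ in exact arithmetic for the exponent pairs used in the text, including the tuned pair $a=1.5$, $b=1.3$ of Eqs. (\[nonaffine2x\]), (\[nonaffine9x\]).

```python
from fractions import Fraction

def prefactors(a, b):
    """Prefactors from integrating 3*rho*g*M ~ omega^(a-b) over 0 < omega < omega*.

    low   multiplies rho_c g* M* omega* in M^(omega < omega*), Eq. (nonaffine2);
    total is the coefficient subtracted in the total modulus, Eq. (nonaffine9).
    """
    low = 3 / (a - b + 1)
    total = 3 * (a - b) / (a - b + 1)
    return low, total

print(prefactors(Fraction(3, 2), Fraction(2)))       # stressed:   (6, -3)
print(prefactors(Fraction(1), Fraction(0)))          # unstressed: (3/2, 3/2)
print(prefactors(Fraction(3, 2), Fraction(13, 10)))  # tuned:      (5/2, 1/2)
```

The stressed value $-3$ of the subtracted coefficient reproduces the $+3\hat{\rho}_c g^\ast M_N^{\ast} \omega^\ast$ sign in Eq. (\[nonaffine9\]), while the tuned pair yields the factors $2.5$ and $0.5$ of Eqs. (\[nonaffine2x\]) and (\[nonaffine9x\]).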
Therefore, to take this dependence of $K_N^k (\omega)$ into account, we determined the plateau value $K_N^{\ast}$ as the average of $K_N^{k} (\omega)$ over $\omega^\ast < \omega < \omega^h$ at the lowest studied packing fraction, $\Delta \varphi = 10^{-6}$. From the above parameter values, we obtain the critical value, $M_{Nc}=3\hat{\rho}_c \left(M_{N}^h + g^\ast M_{N}^{\ast} \omega^h \right)$; $$\label{nonaffine7} K_{Nc} \simeq 0.15, \qquad G_{Nc} \simeq 0.24.$$ Figure \[fig.nonaffine\] compares the simulation values (symbols) with the formulations of Eqs. (\[nonaffine2\]), (\[nonaffine6\]), (\[nonaffine9\]) (solid lines), for $M_{N}^{\omega < \omega^\ast}$ in (a),(b), $M_{N}^{\omega > \omega^\ast}$ in (c),(d), and the total $M_N$ in (e),(f). We note that the simulation values of $M_{N}^{\omega < \omega^\ast}$ and $M_{N}^{\omega > \omega^\ast}$ are obtained by replacing $\sum_{k=1}^{3N-3}$ in Eq. (\[modulus10\]) with partial summations, $\sum_{\omega^k < \omega^\ast}$ and $\sum_{\omega^k > \omega^\ast}$, respectively. It is seen that our formulation accurately captures $M_{N}^{\omega > \omega^\ast}$, while there is a discrepancy in $M_{N}^{\omega < \omega^\ast}$ for the stressed system (see Fig. \[fig.nonaffine\](a)). This discrepancy comes from the *smooth* crossovers at $\omega = \omega^\ast$ in $g(\omega)$ and $M^k_N (\omega)$ (see Figs. \[fig.vibration\](a), \[fig.decomposition\](a), \[fig.decompositiona\](a)), around which the assumptions of Eqs. (\[assume1\]) and (\[assume2\]) do not strictly hold. In the unstressed system, there is a sharp crossover in $g(\omega)$ (Fig. \[fig.vibration\](b)) and no crossover in $M^k_N (\omega)$ (Figs. \[fig.decomposition\](b) and \[fig.decompositiona\](b)), which leads to good agreement for $M_{N}^{\omega < \omega^\ast}$. The discrepancy in $M_{N}^{\omega < \omega^\ast}$ for the stressed system can be reduced by tuning the exponents $a$ and $b$ to account for the smooth crossovers. In Fig.
\[fig.nonaffine\](a), we also plot Eq. (\[nonaffine2\]) with $a=1.5$ and $b=1.3$ (dashed lines); $$\label{nonaffine2x} M_{N}^{\omega < \omega^\ast} = \left(2.5 \right) \hat{\rho}_c g^\ast M_N^{\ast} {\omega^\ast} + \mathcal{O}(\Delta \varphi) \quad \text{(stressed)},$$ which better captures the simulation values. The total modulus, $M_N=M_{N}^{\omega < \omega^\ast}+M_{N}^{\omega > \omega^\ast}$, is then obtained from Eq. (\[nonaffine9\]), as demonstrated in Fig. \[fig.nonaffine\](e),(f). Again, for the stressed system in (e), the dashed line plots Eq. (\[nonaffine9\]) with $a=1.5$ and $b=1.3$; $$\label{nonaffine9x} M_{N} = M_{Nc} - \left(0.5 \right) \hat{\rho}_c g^\ast M_{N}^{\ast} \omega^\ast + \mathcal{O}(\Delta \varphi) \quad \text{(stressed)}.$$ On approach to the transition point $\varphi_c$, the frequency $\omega^\ast$ goes to zero, hence the non-affine modulus $M_N$ tends towards the critical value $M_{Nc}$, as $M_N-M_{Nc} \sim \omega^\ast \sim \Delta \varphi^{1/2} \rightarrow 0$. We note that the critical value $M_{Nc}$ is finite and positive (see Eq. (\[nonaffine7\])), like the affine modulus $M_{Ac}$ in Eq. (\[affine5a\]); thus $M_N$ also drops discontinuously to zero through the transition to the fluid phase, $\varphi < \varphi_c$, where $M_N \equiv 0$.

![\[fig.correlation\] (Color online) Correlation between two quantities $X$ and $Y$; $X=( n^x_{ij} n^y_{ij} )^2$, $Y=(\vec{e}^k_{ij}\cdot \vec{n}_{ij} )^2$ in (a),(b) and $X=( n^x_{ij} n^y_{ij} )^2$, $Y=(n^x_{i'j'} n^y_{i'j'} )^2$ in (c). In the main panel, we plot $\left< XY \right>$ and $\left< X \right> \left< Y \right>$, as a function of the eigenfrequency $\omega$ in (a),(b) and the packing fraction $\Delta \varphi$ in (c).
In (a),(b), the values are averaged over frequency bins of $\log_{10} \omega^k \in [\log_{10} \omega-\Delta \omega/2,\log_{10} \omega + \Delta \omega/2]$ with $\Delta \omega = 0.07$, and the packing fraction is (a) $\Delta \varphi = 10^{-1}$ and (b) $\Delta \varphi = 10^{-6}$. If $X$ and $Y$ are uncorrelated, $\left< XY \right> = \left< X \right> \left< Y \right>$ holds. To see this quantitatively, we plot the relative error, $\left| \left< XY \right> - \left< X \right> \left< Y \right> \right|/\left| \left< X \right> \left< Y \right> \right|$, in the insets. We observe no correlation between $X$ and $Y$ in all the cases of (a), (b), and (c).](fig13.eps){width="48.00000%"}

Critical values of elastic moduli at the transition {#sec.critical}
---------------------------------------------------

Until now, we have shown that the affine modulus $M_A$ approaches the critical value $M_{Ac}$ as the excess contact number $\Delta z \sim \Delta \varphi^{1/2}$ vanishes, while the non-affine modulus $M_N$ likewise goes to $M_{Nc}$ as the crossover frequency $\omega^\ast \sim \Delta \varphi^{1/2}$ goes to zero. It is worth noting that $\Delta z$ and $\omega^\ast$ have the same power-law exponent $1/2$ with respect to $\Delta \varphi$; $\Delta z \sim \omega^\ast \sim \Delta \varphi^{1/2}$ [@Silbert_2005; @Wyart_2005; @Wyart_2006]. The behaviors of the affine, $M_A$, and non-affine, $M_N$, moduli are similar between the bulk, $M_{A,N}=K_{A,N}$, and the shear, $M_{A,N}=G_{A,N}$, moduli. However, the total moduli, $K=K_A-K_N$ and $G=G_A-G_N$, show distinct critical behaviors through the transition $\varphi_c$ to the fluid phase [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart]: The total bulk modulus $K$ discontinuously drops to zero, while the total shear modulus $G$ continuously goes to zero, as described by the power-law scalings $K \sim \Delta \varphi^0$ and $G \sim \Delta \varphi^{1/2}$ in Eq.
(\[plmacro\]) and Fig. \[fig.phidependence\](a),(b). This difference is due to the distinct critical values $K_c=K_{Ac}-K_{Nc}$ and $G_c=G_{Ac}-G_{Nc}$ at the transition $\varphi_c$. $K_{Ac}$ is larger than $K_{Nc}$, $K_{Ac}\simeq 0.40 >K_{Nc} \simeq 0.15$, leading to a finite value $K_c \simeq 0.25$. On the other hand, $G_{Ac}$ and $G_{Nc}$ coincide, $G_{Ac}=G_{Nc} \simeq 0.24$, resulting in a zero total shear modulus, $G_c=0$. Our final goal in this section is to derive these critical values, using Eq. (\[modulus2\]) for $K_{Ac}, G_{Ac}$, and Eq. (\[modulus11\]) for $K_{Nc}, G_{Nc}$.

[**Critical values of affine moduli $\boldsymbol{M_{Ac}}$.**]{} At the transition point $\varphi_c$, the system is in the isostatic state [@Wyart; @Wyart_2005; @Wyart_2006; @Maxwell_1864], where the number of contacts precisely equals the number of degrees of freedom $3N-3$; $$N^\text{ct}_c=3N-3 \left( =\frac{N z_c}{2} \right).$$ In addition, since the pressure is zero, $p=0$, there are no overlaps at any of the particle contacts $(i,j)$, i.e., $$\vec{r}_{ij} \equiv \vec{n}_{ij}, \qquad r_{ij} \equiv 1, \qquad \phi'(r_{ij}) \equiv 0,$$ hold for all $N^\text{ct}_c$ contacts $(i,j)$. Note that at $\varphi_c$, the stressed and unstressed systems are exactly the same. We therefore use Eq. (\[modulus2\]) to evaluate the critical values $K_{Ac}, G_{Ac}$ as $$\begin{aligned} \label{loss1} K_{Ac} &= \frac{1}{V} \sum_{ (i,j) \in N^\text{ct}_c } \frac{1}{9} = \frac{ N^\text{ct}_c }{9V},\\ G_{Ac} &= \frac{1}{V} \sum_{ (i,j) \in N^\text{ct}_c } \left( {n_{ij}^{x}} {n_{ij}^{y}} \right)^2 = \frac{N^\text{ct}_c}{V} \left< \left( {n_{ij}^{x}} {n_{ij}^{y}} \right)^2 \right>, \end{aligned}$$ where $\left< \cdot \right>$ denotes the average over all $N^\text{ct}_c$ contacts. $K_{Ac}$ is exactly the same as that in Eq. (\[affine5a\]). Also, the isotropic distribution of the bond vector $\vec{n}_{ij}$, Eq. (\[affine2\]), recovers $G_{Ac}$ in Eq. (\[affine5a\]), as done in Sec. \[sec.affine\].
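The angular averages entering Eq. (\[loss1\]) can be checked with a quick Monte Carlo over isotropic unit vectors (mirroring the assumption of Eq. (\[affine2\]); the sample size and seed below are arbitrary). In three dimensions, isotropy gives $\langle (n^x n^y)^2 \rangle = 1/15$ and $\langle n^x n^y \rangle = 0$, so $G_{Ac}/K_{Ac} = 9/15 = 0.6$, consistent with the quoted $G_{Ac} \simeq 0.24$ and $K_{Ac} \simeq 0.40$.

```python
import numpy as np

# Isotropic unit bond vectors: normalize 3D Gaussian samples.
rng = np.random.default_rng(0)
n = rng.normal(size=(2_000_000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)

nxny = n[:, 0] * n[:, 1]
mean = nxny.mean()            # <n^x n^y> -> 0 (kills the cross term in G_Nc later)
mean_sq = (nxny ** 2).mean()  # <(n^x n^y)^2> -> 1/15 ~ 0.0667 for isotropy

print(mean, mean_sq)
# With K_Ac = N^ct/(9V) and G_Ac = (N^ct/V) <(n^x n^y)^2>, the ratio is
# G_Ac / K_Ac = 9 * <(n^x n^y)^2> -> 9/15 = 0.6.
print(9 * mean_sq)
```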
[**Critical values of non-affine moduli $\boldsymbol{M_{Nc}}$.**]{} We next formulate $K_{Nc}, G_{Nc}$ from Eq. (\[modulus11\]). The bulk modulus $K_{Nc}$ is formulated as $$\begin{aligned} \label{loss3a} & K_{Nc} = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left[ V \sum_{(i,j)} \frac{\partial p}{\partial \vec{r}_{ij}} \cdot \vec{e}^k_{ij} \right]^2, \\ & = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left[ \sum_{(i,j)} \frac{1}{3} \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \right]^2, \\ & = \frac{1}{9V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \sum_{(i,j)} \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right)^2 \\ & + \frac{1}{9V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \sum_{(i,j)} \sum_{(i',j') \neq (i,j)} \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right),\\ & = K_{Ac} \\ & + \frac{N^\text{ct}_c(N^\text{ct}_c-1)}{9V} \left[ \sum_{k=1}^{3N-3} \frac{\left< \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right) \right>}{{\omega^k}^2} \right]. \end{aligned}$$ In the derivation of Eq. (\[loss3a\]), we use Eq. 
(\[vs4\]) at the transition point $\varphi_c$, i.e., $$\label{loss2a} \sum_{(i,j) \in N^\text{ct}_c} \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right)^2 = {\omega^k}^2.$$ To formulate the shear modulus $G_{Nc}$, we assume that (i) ${n_{ij}^{x}} {n_{ij}^{y}}$ and $\left( \vec{e}^k_{ij} \cdot \vec{n}_{ij} \right)$ are uncorrelated in each mode $k$; $$\label{lossassume1} \left< \left( {n_{ij}^{x}} {n_{ij}^{y}} \right) \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \right> = \left< {n_{ij}^{x}} {n_{ij}^{y}} \right> \left< \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right>,$$ and (ii) ${n_{ij}^{x}} {n_{ij}^{y}}$ and ${n_{i'j'}^{x}} {n_{i'j'}^{y}}$ at different contacts, $(i,j) \neq (i',j')$, are also uncorrelated; $$\label{lossassume2} \left< \left({n_{ij}^{x}} {n_{ij}^{y}} \right) \left({n_{i'j'}^{x}} {n_{i'j'}^{y}} \right) \right> = \left< {n_{ij}^{x}} {n_{ij}^{y}} \right>^2.$$ These two assumptions are numerically verified in Fig. \[fig.correlation\], for (i) in (a),(b) and (ii) in (c), where, for convenience, we study correlations of the quantities $\left( n_{ij}^{x} n_{ij}^{y} \right)^2$ and $\left( \vec{e}^k_{ij} \cdot \vec{n}_{ij} \right)^2$, instead of $n_{ij}^{x} n_{ij}^{y}$ and $\vec{e}^k_{ij} \cdot \vec{n}_{ij}$. We have also confirmed that the assumptions (i) and (ii) hold over the range of packing fractions $\Delta \varphi = 10^{-1}$ to $10^{-6}$. Using Eqs.
(\[lossassume1\]) and (\[lossassume2\]), we can formulate the shear modulus $G_{Nc}$ as $$\begin{aligned} \label{loss3b} & G_{Nc} = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left[ V \sum_{(i,j)} \frac{\partial \sigma_s}{\partial \vec{r}_{ij}} \cdot \vec{e}^k_{ij} \right]^2, \\ & = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left[ \sum_{(i,j)} {n_{ij}^{x}} {n_{ij}^{y}} \left( \vec{e}^k_{ij} \cdot \vec{n}_{ij} \right) \right]^2, \\ & = \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \sum_{(i,j)} \left( {n_{ij}^{x}} {n_{ij}^{y}} \right)^2 \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right)^2 \\ & + \frac{1}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \sum_{(i,j)} \sum_{(i',j') \neq (i,j)} {n_{ij}^{x}} {n_{ij}^{y}} {n_{i'j'}^{x}} {n_{i'j'}^{y}} \\ & \times \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right), \\ & = \frac{N^\text{ct}_c}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left< \left( {n_{ij}^{x}} {n_{ij}^{y}} \right)^2 \right> \left<\left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right)^2 \right> \\ & + \frac{N^\text{ct}_c(N^\text{ct}_c-1)}{V} \sum_{k=1}^{3N-3} \frac{1}{{\omega^k}^2} \left< {n_{ij}^{x}} {n_{ij}^{y}} {n_{i'j'}^{x}} {n_{i'j'}^{y}} \right> \\ & \times \left< \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right) \right>, \\ & = \frac{N^\text{ct}_c}{V} \left< \left( {n_{ij}^{x}} {n_{ij}^{y}} \right)^2 \right> + \left< {n_{ij}^{x}} {n_{ij}^{y}} \right>^2 \\ & \times \left\{ \frac{N^\text{ct}_c (N^\text{ct}_c-1)}{V} \left[ \sum_{k=1}^{3N-3} \frac{\left< \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right) \right>}{{\omega^k}^2} \right] \right\},\\ & = G_{Ac}. \end{aligned}$$ In the final equality of Eq. (\[loss3b\]), we use $\left< {n_{ij}^{x}} {n_{ij}^{y}} \right> = 0$, which is obtained by the isotropic distribution of $\vec{n}_{ij}$, Eq. (\[affine2\]). 
Therefore, the non-affine value $G_{Nc}$ *exactly* coincides with the affine value $G_{Ac}$.

[**Critical values of total moduli $\boldsymbol{M_{c}}$.**]{} From Eqs. (\[loss3a\]) and (\[loss3b\]), we obtain $$\begin{aligned} \label{loss4a} & K_c = K_{Ac} - K_{Nc}, \\ &= - \frac{N^\text{ct}_c(N^\text{ct}_c-1)}{9V} \left[ \sum_{k=1}^{3N-3} \frac{\left< \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right) \right>}{{\omega^k}^2} \right], \\ & G_c = G_{Ac} - G_{Nc}, \\ &= \left< {n_{ij}^{x}} {n_{ij}^{y}} \right>^2 \times \left(9 K_c \right) = 0. \end{aligned}$$ The finite value of the bulk modulus $K_c$ is given by the correlations of the vibrational motion relative to the bond vector, $\left< \left( \vec{e}^k_{ij} \cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'} \cdot \vec{n}_{i'j'} \right) \right>$, between different contacts $(i,j) \neq (i',j')$. We numerically get $$\label{loss5} \left[ \sum_{k=1}^{3N-3} \frac{\left< \left( \vec{e}^k_{ij}\cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'}\cdot \vec{n}_{i'j'} \right) \right>}{{\omega^k}^2} \right] = -2.1\times 10^{-4},$$ which confirms the value $K_c=\left[ {N^\text{ct}_c(N^\text{ct}_c-1)}/{9V} \right] \times \left( 2.1\times 10^{-4} \right) \simeq 0.25$. For the shear modulus $G_c$, the correlation term is cancelled by the factor $\left< {n_{ij}^{x}} {n_{ij}^{y}} \right> = 0$, giving $G_c=0$. The zero shear modulus $G_c$ rests on two features of jammed solids: (i) the bond vector $\vec{n}_{ij}$ and the contact vibration $\vec{e}_{ij} \cdot \vec{n}_{ij}$ are uncorrelated (see Eq. (\[lossassume1\])), and (ii) the bond vector $\vec{n}_{ij}$ is randomly and isotropically distributed (see Eqs. (\[affine2\]) and (\[lossassume2\])). Thus, it is these two features, (i) and (ii), that cause the distinction between the critical values and behaviors of the bulk, $K$, and shear, $G$, moduli in marginally jammed solids.
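As a rough numerical cross-check of Eq. (\[loss4a\]) against the measured correlation sum of Eq. (\[loss5\]): taking the simulation parameters $N \approx 1000$ and $L \approx 9$ quoted later in the text (assumed values here, since the exact ones are not restated in this section) and the isostatic contact number $N^\text{ct}_c = 3N-3$, the prefactor puts $K_c$ in the right ballpark.

```python
# Assumed system parameters (N ~ 1000, L ~ 9 as quoted for the simulations);
# isostatic contact count N^ct_c = 3N - 3 at the transition.
N = 1000
V = 9.0 ** 3
Nct = 3 * N - 3

# Measured correlation sum, Eq. (loss5).
corr_sum = -2.1e-4

# Eq. (loss4a): K_c = -[N^ct(N^ct - 1)/(9V)] * corr_sum
K_c = -(Nct * (Nct - 1) / (9.0 * V)) * corr_sum
print(K_c)  # same sign and order as the quoted K_c ~ 0.25
```

With rounded inputs the result lands within ~15% of the quoted $K_c \simeq 0.25$; the residual depends on the exact $N$ and $V$ of the packings.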
Interestingly, Zaccone and Terentjev [@Zaccone_2014] have theoretically explained the finite value of the bulk modulus $K_c$ by taking into account the excluded-volume correlations between different contacts, $(i,j) \neq (i',j')$. They also demonstrated that the excluded-volume correlations are weaker under shear, leading to a smaller value of the shear modulus $G_c$. The correlation term, $\left< \left( \vec{e}^k_{ij} \cdot \vec{n}_{ij} \right) \left( \vec{e}^k_{i'j'} \cdot \vec{n}_{i'j'} \right) \right>$, in Eq. (\[loss4a\]) may be related to such excluded-volume correlations.

Summary
=======

[**Scaling behaviors with $\boldsymbol{\Delta z}$, $\boldsymbol{\omega^\ast}$, and $\boldsymbol{\Delta \varphi}$.**]{} In the present paper, using the harmonic formulation [@Lutsko_1989; @Maloney_2004; @Maloney2_2006; @Maloney_2006; @Karmakar_2010; @Lemaitre_2006; @Hentschel_2011; @Zaccone_2011; @Zaccone2_2011; @Zaccone_2014], we have studied the elastic moduli $M=K,G$ of a model jammed solid with a linear interaction force law, close to the (un)jamming transition point $\varphi_c$. As we approach the transition point $\varphi_c$, $\Delta \varphi \rightarrow 0$, the excess contact number goes to zero, $\Delta z \rightarrow 0$, and at the same time, the vibrational eigenmodes in the plateau regime of $g(\omega)$ extend towards zero frequency, $\omega^\ast \rightarrow 0$. Accordingly, the affine modulus, $M_A=K_A,G_A$, tends towards the critical value, $M_{Ac}=K_{Ac},G_{Ac}$, as $M_A - M_{Ac} \sim \Delta z \rightarrow 0$ (Eqs. (\[affine5x\]), (\[affine5\]), Fig. \[fig.affine\]), whereas the non-affine modulus, $M_N=K_N,G_N$, converges to $M_{Nc}=K_{Nc},G_{Nc}$, following $M_N - M_{Nc} \sim \omega^\ast \rightarrow 0$ (Eqs. (\[nonaffine2\]), (\[nonaffine6\]), (\[nonaffine9\]), Fig. \[fig.nonaffine\]).
Thus, the total modulus, $M = M_A-M_N$, is $$\label{rconclusion1} M = M_{c} + \alpha_M \Delta z - \beta_M \omega^\ast = M_{c} + \gamma_M \Delta \varphi^{1/2},$$ where $M_c=M_{Ac}-M_{Nc}$ is the critical value of $M$, and $\alpha_M, \beta_M, \gamma_M$ are coefficients. As demonstrated numerically [@Silbert_2005] and theoretically [@Wyart_2005; @Wyart_2006], $\Delta z$ and $\omega^\ast$ have the same power-law scaling with $\Delta \varphi$, i.e., $\Delta z \sim \omega^\ast \sim \Delta \varphi^{1/2}$, which gives the second equality in Eq. (\[rconclusion1\]) and $M-M_c \sim \Delta z \sim \Delta \varphi^{1/2}$.

[**Origin of distinct critical values between bulk and shear moduli.**]{} Both the bulk, $M=K$, and shear, $M=G$, moduli share the same behavior of Eq. (\[rconclusion1\]), but, crucially, a difference between the two elastic moduli appears in their critical values, $K_c, G_c$. For the bulk modulus, $K_{Ac}$ is larger than $K_{Nc}$, and the total value $K_c$ is a finite, positive constant. In contrast, $G_{Ac}$ and $G_{Nc}$ exactly match, and the total shear modulus $G_c$ is zero. This difference causes the distinct critical behaviors $K = K_c + \gamma_K \Delta \varphi^{1/2} \sim \Delta \varphi^{0}$ and $G = \gamma_G \Delta \varphi^{1/2} \sim \Delta \varphi^{1/2}$ (Eq. (\[plmacro\]), Fig. \[fig.phidependence\]). Thus, through the unjamming transition into the fluid phase ($\varphi < \varphi_c $), $K$ discontinuously drops to zero, whereas $G$ vanishes continuously [@OHern_2002; @OHern_2003; @Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009; @Wyart]. In the present work, we showed that the finite bulk modulus $K_c$ is controlled by correlations between contact vibrational motions, $\vec{e}^k_{ij} \cdot \vec{n}_{ij}$ and $\vec{e}^k_{i'j'} \cdot \vec{n}_{i'j'}$, at different contacts $(i,j) \neq (i',j')$ (Eq. (\[loss4a\])), which might be related to excluded-volume correlations as suggested by Zaccone and Terentjev [@Zaccone_2014].
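The two scaling behaviors encoded in Eq. (\[rconclusion1\]) can be illustrated on synthetic data; the $\gamma_M$ coefficients below are placeholders, not fitted values from the simulations.

```python
import numpy as np

# Toy realization of Eq. (rconclusion1): M = M_c + gamma_M * dphi^(1/2),
# with K_c > 0 (bulk) and G_c = 0 (shear); gamma values are placeholders.
dphi = np.logspace(-6, -1, 30)
K = 0.25 + 1.0 * np.sqrt(dphi)  # K_c > 0: raw K flattens to a constant
G = 0.00 + 1.0 * np.sqrt(dphi)  # G_c = 0: G itself vanishes continuously

# Log-log slopes: the *excess* bulk modulus and the raw shear modulus both
# carry the 1/2 exponent, while raw K saturates at K_c (exponent 0).
slope_excess_K = np.polyfit(np.log(dphi), np.log(K - 0.25), 1)[0]
slope_G = np.polyfit(np.log(dphi), np.log(G), 1)[0]
print(slope_excess_K, slope_G)  # both ~ 0.5
```

This is exactly the sense in which $K \sim \Delta\varphi^0$ and $G \sim \Delta\varphi^{1/2}$ coexist with the common $\Delta\varphi^{1/2}$ correction.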
In the case of the shear modulus $G_c$, such correlations are washed out by two key features of jammed, disordered solids: (i) the contact bond $\vec{n}_{ij}$ and the contact vibrational motions $\vec{e}^k_{ij} \cdot \vec{n}_{ij}$ are uncorrelated, and (ii) the contact bond $\vec{n}_{ij}$ is randomly and isotropically distributed (Eqs. (\[affine2\]), (\[lossassume1\]), (\[lossassume2\]), Figs. \[fig.angle\], \[fig.correlation\]). In the end, the critical value $G_c$ becomes *exactly* zero (Eq. (\[loss4a\])).

[**Eigenmode decomposition of non-affine moduli $\boldsymbol{M_N}$.**]{} A main result of the present work is the eigenmode decomposition of the non-affine elastic moduli $M_N$, as presented in Figs. \[fig.decomposition\] and \[fig.decompositiona\] for $M_N=K_N$ and $G_N$, respectively. The modal contribution to the non-affine modulus, $M^k_N$, shows three distinct regimes in frequency $\omega$ space, with two crossovers at $\omega =\omega^\ast$ and $\omega = \omega^h$, which precisely match the regimes already apparent in the vDOS $g(\omega)$ (Figs. \[fig.vibration\] and \[fig.frequency\]). We showed that the crossover point $\omega^\ast$ is controlled by the competition between two vibrational energies, the compressing/stretching energy, $\delta E^{k \parallel}$, and the sliding energy, $\delta E^{k \perp}$, whereas the crossover at $\omega^h$ is determined by the balance between two vibrational motions along the bond vector $\vec{n}_{ij}$, the compressing motion, ${e}^{k \parallel}_\text{com}$, and the stretching motion, ${e}^{k \parallel}_\text{str}$. The behavior of $M^k_N = \left| \Sigma_M^k \right| \times \left| \delta {R}_{\text{na} M}^k \right|$ (i.e., the dependence of $M^k_N$ on $\omega$) is understood in terms of the energy relaxation during the non-affine deformation process. During the non-affine deformation, high-frequency modes with $\omega > \omega^h$ are only weakly activated, leading to a relatively small contribution to the non-affine modulus.
At intermediate frequencies, $\omega^\ast < \omega < \omega^h$, modes of lower energy are more readily activated, which increases $\left| \delta {R}_{\text{na} M}^k \right| \sim \omega^{-1}$ and thereby enhances $M_N^k$. However, the lower-$\omega$ modes also generate smaller forcings, $\left| \Sigma_M^k \right| \sim \omega$, reducing $M_N^k$ with decreasing frequency. These two opposite $\omega$-dependences of $\left| \Sigma_M^k \right|$ and $\left| \delta {R}_{\text{na} M}^k \right|$ lead to the frequency-independent behavior $M^k_N = \left| \Sigma_M^k \right| \times \left| \delta {R}_{\text{na} M}^k \right| \sim \omega^{0}$. Finally, at the lower end of the spectrum, $\omega < \omega^\ast$, for the stressed system, the stress, $\sim \phi'(r_{ij}) \sim \Delta \varphi$, enhances the force $\left| \Sigma_M^k \right| \sim \omega^0$ and drives the non-affine displacement $\left| \delta {R}_{\text{na} M}^k \right| \sim \omega^{-2}$. As a result, the energy relaxation $M_N^k$ grows with decreasing $\omega$ as $M^k_N \sim \omega^{-2}$. Such effects are not observed for the unstressed system, where the stress vanishes, $\phi'(r_{ij}) \equiv 0$; i.e., the unstressed system retains the frequency-independent behavior, $M^k_N \sim \omega^{0}$. In all cases, the above behaviors of $M_N^k$ (and $\left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|$) are controlled by the net compressional/stretching motions $e^{k \parallel}_\text{net}$ (Eqs. (\[resultvs2\]),(\[resultenedis\]), Figs. \[fig.vibration\], \[fig.decomposition\], \[fig.decompositiona\]).

[**Non-affine motions and low-frequency mode excitations.**]{} Large-scale, non-affine motions of particles have been reported for athermal jammed solids [@Maloney_2004; @Maloney2_2006; @Maloney_2006; @Lemaitre_2006], and also for thermal glasses [@Wittmer_2002; @Tanguy_2002; @leonforte_2005; @DiDonna_2005].
Our results indicate that such large-scale non-affine displacement fields are induced through the low-frequency eigenmode excitations (Eq. (\[resultenedis\]), Figs. \[fig.decomposition\], \[fig.decompositiona\], and \[fig.picture\]): On approach to the transition point $\varphi_c$, lower-frequency modes $k$ are more readily activated, resulting in larger non-affine displacements $\left| \delta {R}_{\text{na} M}^k \right|$. Since the lower-frequency modes exhibit more floppy-like vibrational motions (Fig. \[fig.alpha\]), the non-affine motions correspondingly exhibit a floppy-like character closer to $\varphi_c$, which is consistent with previous works [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009]. As reported in Refs. [@Ellenbroek_2006; @Ellenbroek_2009; @Ellenbroek2_2009], the floppy-like, non-affine motions are more prominent under shear deformation than under compression, which thereby distinguishes these two elastic responses. Thus, at first sight, it seems natural to regard the low-frequency, floppy-like modes as wholly responsible for the distinction between compression and shear. However, we have shown that the difference in the non-affine responses between compression and shear is largely controlled by relatively high-frequency eigenmodes with $\omega \gtrsim 10^{-1}$, not solely by low-frequency modes (Fig. \[fig.decomposition2\]). Low-frequency mode excitations for $\omega \lesssim 10^{-1}$ are very similar between bulk and shear deformations, while it is the modes with $\omega \gtrsim 10^{-1}$ that are more readily activated under shear than under compression. Thus, the mode excitations at $\omega \gtrsim 10^{-1}$ contribute significantly to the non-affine shear modulus $G_N$, enhancing it over the bulk counterpart $K_N$. Ultimately, the critical value $G_{Nc} \simeq 0.24$ is larger than $K_{Nc} \simeq 0.15$.
Conclusions and perspectives {#conclusions}
============================

[**Characterization of the frequency $\boldsymbol{\omega^\ast}$.**]{} The vibrational modes are directly related to the elastic properties (non-affine elastic moduli) of the system. In the case of the marginally jammed packings studied here, the modal contribution $M^k_N$ to the non-affine elastic modulus shows a frequency-independent plateau, $M^k_N \sim \omega^0$, above the frequency $\omega^\ast$. This characteristic feature is attributed to the fact that only compressing/stretching vibrational motions contribute to the mode energy, whereas the sliding vibrations feel few constraints, making a negligible contribution to the vibrational energy. Below $\omega^\ast$, however, sliding motions do play a role in the total mode energy, causing the crossover behavior of $M^k_N$ at $\omega^\ast$, from $\sim \omega^0$ to $\sim \omega^{-2}$. Wyart *et al.* [@Wyart_2005; @Wyart_2006; @Xu_2007; @Goodrich_2013] have characterized the frequency $\omega^\ast$ in terms of a purely geometric property (variational arguments), where the excess contact number $\Delta z$ controls $\omega^\ast$. In addition, the energy diffusivity in heat transport as well as the dynamical structure factor show crossover behaviors at $\omega^\ast$ in the *unstressed* system [@Xu_2009; @Vitelli_2010], which have then been described theoretically using the effective medium approach [@Wyart_2010; @DeGiuli_2014]. In the present work, we have identified $\omega^\ast$ as the characteristic frequency where the two vibrational energies, the compressing/stretching and the sliding vibrational energies, become comparable to each other in the *stressed* system, which induces a crossover in the energy-related quantities, including the elastic modulus $M^k_N$.

[**Debye regime in vDOS and continuum limit.**]{} As shown in Fig.
\[fig.vibration\](a),(b) of the vDOS $g(\omega)$, we do not observe the expected Debye scaling regime, $g(\omega) \sim \omega^{2}$, in the low-frequency limit. We have also confirmed that even the lowest eigenmodes in our frequency window are far from plane-wave modes, which are also expected to appear at low frequencies. The Debye scaling and the plane-wave modes would likely be observed by employing larger system sizes to access lower frequencies. Yet this aspect of the vDOS remains an open issue for jammed particulate systems, in which the so-called Boson peak appears to extend down to zero frequency $\omega = 0$ as $\Delta\varphi \to 0$. We might expect that the deviations from traditional Debye scaling in the low-$\omega$ tail of the vDOS are generic to amorphous materials and also tunable through the packing structure [@Schreck2_2011; @Mizuno3_2016]. In a different, yet related, context, recent numerical works [@Ellenbroek_2006; @Ellenbroek_2009; @Lerner_2014; @Karimi_2015] have discussed the continuum limit by studying the mechanical response to *local* forcing. This continuum limit corresponds to a scale above which the elastic properties match those of the entire, bulk system, whereas below this length scale the elastic response differs from the bulk average and local elasticity becomes apparent. At the low frequencies corresponding to wavelengths comparable to this continuum limit, we might expect the vibrational modes to be compatible with the plane-wave modes described by continuum mechanics. We caution, however, that the length scale at which a local elastic description coincides with bulk behavior diverges as $\Delta \varphi \to 0$ [@Ellenbroek_2006; @Ellenbroek_2009; @Lerner_2014; @Karimi_2015].

[**System size effects on elastic moduli values.**]{} In the present work, we have employed relatively small systems with $N \approx 1000$ ($L \approx 9$).
As mentioned above, we do not access the very low-frequency modes where one might expect to observe the Debye scaling regime in the vDOS. We therefore consider that the lack of lower-frequency modes may cause some finite-size effects in the non-affine elastic moduli values, $M_N$. Indeed, Ref. [@Tanguy_2002] has reported system-size effects in two-dimensional Lennard-Jones glasses with small system sizes. For the present jammed systems, recent numerical work [@Goodrich_2012] calculated the elastic moduli, varying the system size from $N=64$ to $4096$. In the results of Ref. [@Goodrich_2012], for the pressure regime studied here, we do not find any noticeable differences in the elastic moduli values between system sizes of $N \gtrsim 1000$. In particular, the scaling laws with packing fraction $\Delta \varphi$ are consistent for all system sizes of $N \gtrsim 1000$. Also, we have confirmed that our values of the elastic moduli and the scaling laws with $\Delta \varphi$ are consistent with the values for $N \gtrsim 1000$ in Ref. [@Goodrich_2012]. This observation indicates that our moduli values are not influenced by system-size effects. Thus, we conclude that for system sizes $N \gtrsim 1000$, the lack of access to lower-frequency modes, including those in the Debye regime, does not qualitatively impact our results for the elastic moduli, and therefore does not change the scaling laws with $\Delta \varphi$. In order to demonstrate this conclusion more explicitly, it could be an interesting future work to measure the modal contribution $M^k_N$ in the Debye regime, using larger systems.
[**Effects of friction, particle-size ratio, particle shape, and deeply jammed state.**]{} It has been reported that jammed packings composed of frictional particles [@Somfai_2007; @Silbert_2010; @Still_2014], mixtures with large particle-size ratio [@Xu2_2010], and non-spherical particles (e.g., ellipse-shaped particles) [@Zeravcic_2009; @Mailman_2009] show vibrational and mechanical properties distinct from those of the frictionless sphere packings studied in the present work. Effects of friction, particle-size ratio, and particle shape on the mechanical properties are a timely subject. The modal decomposition of the non-affine moduli allows us to connect unusual features apparent in the vibrational spectrum to the elastic moduli properties, as we have performed here on the sphere packings. Another interesting study could be on deeply jammed systems at very high packing fractions [@Zhao_2011; @Ciamarra_2013]. Deeply jammed systems show anomalous vibrational and mechanical properties, in particular power-law scalings different from those of the marginally jammed solids [@Zhao_2011]. In addition, high-order jamming transitions accompanying the mechanical and density anomalies have been reported [@Ciamarra_2013]. It would be an interesting subject to explore the role of vibrational anomalies in the mechanical properties of such systems. [**Local elastic moduli distribution, soft spot, and low frequency modes.**]{} Amorphous materials exhibit spatially heterogeneous distributions of local elastic moduli, as has been demonstrated by simulations [@Mayr_2009; @tsamados_2009; @Mizuno_2013; @yoshimoto_2004; @makke_2011; @Mizuno2_2013; @Mizuno_2014] and experiments [@Wagner_2011]. Recent numerical works [@Mizuno_2016; @Cakir_2016] have studied the local elastic moduli distributions in jammed packings.
Manning and co-workers [@Manning_2011; @chen_2011] proposed that “soft spots” can be associated with regions of atypically large displacements of particles in the quasi-localized, low-frequency vibrational modes. It has been reported that particle rearrangements, which are activated by mechanical load [@Manning_2011; @Tanguy_2010] and by thermal energy [@chen_2011; @widmer_2008], tend to occur in those so-called soft spots. Thus, one might expect that the soft spots, which are detected by the low frequency (localized) modes, are linked to regions of low elastic moduli. In the present work, we demonstrated that the non-affine elastic modulus is determined mainly by the vibrational mode excitations at $\omega >\omega^\ast$, whereas the low frequency modes with $\omega < \omega^\ast$ make only small contributions to the elastic moduli. Our result therefore indicates that the low frequency modes themselves do not influence the elastic properties, but rather are driven by the elastic moduli distributions constructed by the modes with $\omega > \omega^\ast$. In the case of marginally jammed solids, the shear modulus becomes orders of magnitude smaller than the bulk modulus (see Fig. \[fig.phidependence\]), thus the low frequency modes are most likely related to the shear modulus. In addition, in our study [@Mizuno_2016], we have demonstrated that spatial fluctuations of the local shear modulus grow on approach to the jamming transition $\varphi_c$. Those observations therefore suggest that the growing *shear* modulus heterogeneities drive the excitation of the low frequency modes, and particularly their localization. Schirmacher *et al.* [@Marruzzo_2013; @Schirmacher_2015; @Schirmacher2_2015] have constructed such a heterogeneous-elasticity theory based on this picture, where the shear modulus heterogeneities determine the behavior of low frequency modes, e.g., the Boson peak.
Those topics, focusing on the local elastic moduli, soft spots, and low frequency modes, could be an important future work. [**Generalization to other contacts, and non-linear effects.**]{} We have studied the linear elastic properties throughout the present paper. As long as we stay in the linear elastic regime, our results, which have been obtained from the harmonic potential, can be extended to other potentials; $$\phi(r_{ij}) = \left\{ \begin{aligned} & \frac{\varepsilon}{a} \left(1-\frac{r_{ij}}{\sigma} \right)^a & (r_{ij} < \sigma), \\ & 0 & (r_{ij} \ge \sigma), \end{aligned} \right.$$ where $a>0$ characterizes the potential, e.g., $a=2$ is the present harmonic potential case, while $a=2.5$ provides Hertzian contacts. The extension proceeds by considering “normalized variables”, e.g., the normalized elastic modulus and frequency; $$\begin{aligned} \hat{M}= \frac{M}{k_\text{eff}}, \qquad \hat{\omega} = \frac{\omega}{k_\text{eff}^{1/2}}, \end{aligned}$$ where the values are normalized by the effective spring constant, $k_\text{eff} \sim \phi'' \sim \Delta \varphi^{a-2}$ [@Vitelli_2010]. However, marginally jammed solids are highly sensitive to non-linear effects caused by thermal agitation or finite large strain, as actively discussed in recent works [@Schreck_2011; @Henkes_2012; @Ikeda_2013; @Goodrich2_2014; @Mizuno2_2016; @Otsuki_2014; @Coulais_2014]. Indeed, the elastic regime itself shrinks and disappears on approach to the jamming transition $\varphi_c$. Thus, to understand more generally the mechanical and vibrational properties of systems on the edge of marginal stability, it might inevitably be necessary to take into account non-linear effects, which should be distinct between different potentials, i.e., different values of $a$. Finally, we highlight a prescient feature of our findings. Our results show that the linear elastic response of the systems studied here reflects the nature of the vibrational spectrum.
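The normalization above amounts to a simple rescaling by the effective spring constant. A minimal sketch of that rescaling (with the dimensional prefactor of $k_\text{eff}$ set to unity, an assumption made here purely for illustration, not a reproduction of the paper's code):

```python
# Normalized elastic modulus and frequency, M_hat = M / k_eff and
# omega_hat = omega / k_eff**0.5, with k_eff ~ phi'' ~ dphi**(a - 2).
# a = 2 is the harmonic contact of the paper; a = 2.5 gives Hertzian contacts.
# Prefactors are set to 1 (illustrative assumption).

def k_eff(dphi, a):
    """Effective spring constant scaling k_eff ~ dphi**(a - 2)."""
    return dphi ** (a - 2.0)

def normalize(modulus, omega, dphi, a):
    """Return (M_hat, omega_hat) normalized by the effective spring constant."""
    k = k_eff(dphi, a)
    return modulus / k, omega / k ** 0.5

# For harmonic contacts (a = 2), k_eff is independent of dphi and the
# normalization is trivial; for Hertzian contacts (a = 2.5) it softens
# as the jamming transition is approached (dphi -> 0).
m_hat, w_hat = normalize(3.0, 0.5, dphi=1e-3, a=2.0)
```

This makes explicit why results obtained with the harmonic potential carry over to other values of $a$ in the linear regime: the $a$-dependence enters only through $k_\text{eff}$.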
Therefore, it is reasonable to expect that materials with different distributions of vibrational modes should exhibit different elastic responses [@Schreck_2011; @Koeze_2016; @Cakir_2016]. Given that the density of vibrational states is accessible through various scattering [@Sette_1998; @ruffle_2003; @Monaco_2006; @bruna_2011] and covariance matrix measurement [@brito_2010; @Ghosh_2010; @Chen_2010; @Henkes_2012] techniques, it should be possible to pin down the expected elastic behavior through such measurements. Also, our results highlight the concept that materials of desired functionality or tunable mechanical behavior may be fashioned through adaptive manufacturing techniques whereby desirable constituent motifs are structured to achieve designer vibrational mode distributions. This work was initiated at the Deutsches Zentrum für Luft- und Raumfahrt (DLR), Cologne for which L.E.S. greatly appreciates the support of the German Science Foundation DFG during a hospitable stay at the DLR under Grant No. FG1394. We also thank Th. Voigtmann, S. Luding, M. Otsuki, A. Zaccone, and A. Ikeda for useful discussions. H.M. acknowledges support from DAAD (German Academic Exchange Service). K.S. is supported by the NWO-STW VICI grant 10828. [^1]: Here we remark that the average values of $\left| \Sigma_M^k \right|,\left| \delta {R}_{\text{na} M}^k \right|,M_N^k$ over different realizations and frequency shells coincide between $M=K$ and $G$ at $\omega \lesssim 10^{-1}$. However, those quantities of one realization and one mode $k$ show different values between $M=K$ and $G$. Thus, the vector fields of $\vec{\Sigma}_M$ and $\delta \vec{R}_{\text{na} M}$ of one realization are different between $M=K$ and $G$, even if they are constructed by a partial summation over $\omega^k \lesssim 10^{-1}$. This point is indeed seen by comparing $\delta \vec{R}_{\text{na} K}$ and $\delta \vec{R}_{\text{na} G}$ in Fig. 
\[fig.picture\](c),(d), where we plot $\delta \vec{R}_{\text{na} M}$ constructed by the modes with $\omega < \omega^\ast (\ll 10^{-1})$ (see the blue dashed vectors).
--- author: - 'Joel H. Kastner' - 'B. Zuckerman' - Thierry Forveille date: 'Received ...; accepted ...' title: | Molecules in the Circumstellar Disk Orbiting BP Piscium\ [*Research Note*]{} --- [BP Psc is a puzzling late-type, emission-line field star with large infrared excess. The star is encircled and enshrouded by a nearly edge-on, dusty circumstellar disk, and displays an extensive jet system similar to those associated with pre-main sequence (pre-MS) stars. However, the photospheric absorption features of the star itself appear more consistent with post-main sequence status.]{} [We seek to characterize the molecular gas component of the BP Psc disk, so as to compare the properties of its molecular disk with those of well-studied pre-main sequence stars.]{} [We conducted a mm-wave molecular line survey of BP Psc with the 30 m telescope of the Institut de Radio Astronomie Millimetrique (IRAM). We use these data to investigate the kinematics, gas mass, and chemical constituents of the BP Psc disk.]{} [We detected lines of $^{12}$CO and $^{13}$CO and, possibly, very weak emission from HCO$^+$ and CN; HCN, H$_2$CO, and SiO are not detected. The CO line profiles of BP Psc are well fit by a model invoking a disk in Keplerian rotation. The minimum disk gas mass, inferred from the $^{12}$CO line intensity and $^{13}$CO/$^{12}$CO line ratio, is $\sim$0.1 Jupiter masses. ]{} [The weakness of HCO$^+$ and CN (relative to $^{13}$CO) stands in sharp contrast to the strong HCO$^+$ and CN emission that characterizes most low-mass, pre-main sequence stars that have been the subjects of molecular emission-line surveys, and is suggestive of a very low level of X-ray-induced molecular ionization within the BP Psc disk.
These results lend some support to the notion that BP Psc is an evolved star whose circumstellar disk has its origins in a catastrophic interaction with a close companion.]{} Introduction ============ Circumstellar disks around young stars serve both as the sources of material for accreting young stars and as the sites of nascent planets orbiting such stars. Circumstellar disks around main sequence and evolved stars may represent debris resulting from planetary collisions, the destruction of planetary-mass companions, or similarly devastating interactions with stellar-mass companions. Studies of such disks typically rely on dust emission (i.e., infrared excess) to ascertain fundamental disk properties (e.g., disk dimensions and mass; e.g., Backman & Paresce 1993; Lagrange et al. 2000; Zuckerman 2001). Complementary, sensitive measurements of (sub)millimeter emission from CO (after H$_2$, the second-most abundant molecular species) as well as less abundant molecular species (such as HCN, CN, and HCO$^+$) toward disk-enshrouded stars provide the best available means to determine the residual molecular gas content and chemistry of circumstellar disks (e.g., Zuckerman, Forveille & Kastner 1995; Kastner et al. 1997; Dutrey et al. 1997; Thi et al. 2004). Such measurements provide unique tests for theories describing disk chemical and thermal evolution, particularly for those models concerned with the effects of irradiation of protoplanetary disks by high-energy photons – UV and (perhaps more importantly) X-rays – on circumstellar disk chemistry and energetics (e.g., Glassgold et al. 2004, 2007). The enigmatic H$\alpha$ emission-line field star BP Psc (= StH$\alpha$ 202; Stephenson 1986) stands as an important object in this regard (Zuckerman et al. 2008, hereafter Z08).
Z08 established that this little-studied system consists of a late G-type or early K-type star surrounded by a compact, dusty, gaseous disk, and that the star-disk system is the source of highly collimated jets. Z08 furthermore infer, on the basis of both its classical, double-peaked CO line profiles and the dark-lane morphology of near-IR adaptive optics images, that the BP Psc disk is viewed nearly edge-on ($i\sim75^\circ$) and that the disk absorbs and reradiates 75% of the incident stellar luminosity as seen from Earth. Like the intensively-studied, nearby classical T Tauri star (cTTS) TW Hya (Kastner et al. 1997; Webb et al. 1999; and references therein), BP Psc is isolated and found at high galactic latitude. Z08 show that if BP Psc is an early K-type pre-main sequence (pre-MS) star then, like TW Hya, it would be one of the closest (and perhaps oldest) classical T Tauri stars known. However, unlike TW Hya, no young stellar association has been identified in the vicinity of BP Psc (for a list of nearby \[$D\stackrel{<}{\sim}100$ pc \] young associations, see review in Zuckerman & Song 2004). Indeed, Z08 also present two lines of evidence suggesting that BP Psc may be an evolved star — most likely, a first-ascent giant: its $\lambda$6709.6 Li absorption line is far weaker than in early K-type stars of age $<100$ Myr, and gravity-sensitive lines in its optical spectrum suggest it is of luminosity class IV or III, i.e., its surface gravity is lower than a typical 10 Myr-old, K-type, pre-MS star. Whether BP Psc is an isolated, relatively old cTTS or a post-MS star undergoing an episode of collimated mass loss, its nearly edge-on disk and its system of jets and Herbig-Haro objects — which is as spectacular as the jet systems commonly associated with very young (still cloud-embedded) pre-MS stars — make BP Psc an exceedingly unusual object. 
To better characterize the disk orbiting BP Psc, we undertook a mm-wave molecular line survey with the 30 m telescope of the Institut de Radio Astronomie Millimetrique (IRAM[^1]). In addition to constraining the mass, kinematics, and chemistry of the BP Psc disk, the results point out significant differences between this system and those pre-MS star molecular disks that have also been the subjects of extensive radio emission-line surveys. Observations ============ Data Acquisition and Reduction ------------------------------ We conducted our molecular line survey of BP Psc with the IRAM 30 m telescope during the period 4–6 Dec. 2007. The molecules and transitions observed are listed in Table 1. We observed simultaneously in either the 100 GHz (3 mm) and 230 GHz (1 mm) or 150 GHz (2 mm) and 270 GHz (1 mm) bands, and in both polarizations in each band, using receiver combinations A100+B100 and C230+D230 or A150+B150 and C270+D270 (all in SSB mode), respectively. The 1 MHz filter banks served as the spectral line backends. The weather was excellent to good ($\tau_{225} \sim 0.1$ to 0.3) throughout the period; time-averaged system temperatures in both the 3 mm and 1 mm bands were in the range 250–400 K. We checked pointing and focus (using Uranus as the reference) every 1-2 hours, and both were found to be stable and reliable; typical pointing errors were $<3''$, i.e., less than 1/8 beamwidth for 3 mm (FWHP beamwidth $21''$) and less than 1/4 beamwidth for 1 mm (FWHP beamwidth $12''$). Individual spectral scans were of duration 200 s, with total integration times (per polarization) ranging from $\sim40$ minutes (for HCN(3–2)) to $\sim8.5$ hours (for HCO$^+$(3–2)). We used the CLASS[^2] radio spectral line data reduction package to sum all individual spectral scans obtained in both polarizations for a given transition, and then to subtract a linear-fit baseline from each of these integrated spectra, calculating channel-to-channel noise levels in the process. 
A few individual scans were discarded due to baseline anomalies. All antenna temperature measurements reported in Table 1 (i.e., $T_{B,max}$ and $I$; see §3.1) have been corrected for beam efficiency assuming $B_{eff} = 0.76, 0.70, 0.57$ and 0.45 at observing frequencies of 86, 115, 230, and 270 GHz, respectively[^3]. Results ------- ![Radio (mm-wave) molecular spectra of BP Psc. Abscissa is velocity with respect to the Local Standard of Rest (LSR) and ordinate is antenna temperature corrected for beam efficiency. Spectral baselines are offset in T$_B$ for clarity. All spectra except $^{12}$CO(3–2) were obtained with the IRAM 30 m; $^{12}$CO(3–2) was obtained at the JCMT 15 m. See text and Table 1. []{data-label="fig:specFig"}](9742fig1.ps){width="9cm"} Results are summarized in Table 1 and Fig. 1. Including the previous observations obtained with the IRAM 30 m and the 15 m James Clerk Maxwell Telescope (JCMT) reported in Z08, the BP Psc molecular line survey has yielded detections of $^{12}$CO(1–0), $^{12}$CO(2–1), $^{12}$CO(3–2) and $^{13}$CO(2–1) emission; only tentative detections of HCO$^+$(3–2) and CN(2–1) emission; and nondetections of HCN, H$_2$CO, and SiO (maser) emission. Given the marginal ($\sim$2$\sigma$) significance of the possible detections of the HCO$^+$(3–2) and CN(2–1) lines, these observations are reported as upper limits in Table 1 and are considered as such in the discussion below. For HCN, H$_2$CO, and SiO, the upper limits on peak antenna temperature $T_{B,max}$ and integrated line intensity $I$ listed in Table 1 were obtained from the channel-to-channel noise level measured via spectral baseline fitting, assuming a linewidth of 15 km s$^{-1}$ FWHM (as estimated from CO line profile fitting; §3.1). The measurements of the CO line profile parameters and of the upper limits on $T_{B,max}$ and $I$ for the marginal detections of HCO$^+$ and CN are described in §3.1.
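The beam-efficiency correction described above divides each measured antenna temperature by the $B_{eff}$ value appropriate to its observing frequency. A minimal sketch of that bookkeeping (the function name and lookup-table form are our own, not from the authors' reduction scripts; the $B_{eff}$ values are those quoted in the text):

```python
# Beam-efficiency correction: T_corrected = T_antenna / B_eff(frequency).
# B_eff values are the ones quoted in the text for the IRAM 30 m bands.
B_EFF = {86: 0.76, 115: 0.70, 230: 0.57, 270: 0.45}

def corrected_temperature(t_antenna, freq_ghz):
    """Correct an antenna temperature (K) for beam efficiency at freq_ghz."""
    return t_antenna / B_EFF[freq_ghz]
```

Since $B_{eff} < 1$, the correction always increases the reported temperatures, with the largest boost (a factor $\sim2.2$) at 270 GHz.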
Relative to the intensities of the CO lines, the HCO$^+$ and CN emission from the BP Psc disk is evidently quite weak in comparison to pre-MS star disks (Dutrey et al. 1997; Thi et al. 2004; see §3.3). The upper limits on HCN emission in Table 1 are not similarly constraining. Analysis and Discussion ======================= CO line profiles: constraints on disk structure and kinematics -------------------------------------------------------------- Some of the CO lines displayed in Fig. 1 (see also Z08) appear to display the double-peaked profile characteristic of a circumstellar disk in Keplerian rotation (e.g., Beckwith & Sargent 1993, hereafter BS93; Omodaka et al. 1992). In contrast, the $^{13}$CO(2–1) line appears flat-topped, and is reminiscent of the $^{12}$CO(3–2) line profile measured with the SubMillimeter Array (Z08). All of the CO lines display broad wings extending $\pm\sim15$ km s$^{-1}$ to either side of the systemic velocity of BP Psc ($V_{LSR} = -15$ km s$^{-1}$). In addition, the $^{12}$CO(1–0) line appears to display a broad negative-velocity shoulder; a feature near $V_{LSR} \sim -25$ km s$^{-1}$ may be present in the other CO profiles as well. The CO line profile asymmetries, and the slight variation from transition to transition, suggest departures from the “ideal” flattened disk with sharp outer edge in Keplerian rotation. Nevertheless, the BP Psc CO profiles can be reasonably well described by a simple parameterization of the results of detailed numerical models of Keplerian molecular disks surrounding T Tauri stars (Eqs. 27, 28 of BS93).
The parameterization applies in the case of a disk viewed edge-on — a reasonable approximation of the BP Psc viewing geometry — and is in terms of the Keplerian velocity $v_d$ at the outer edge of the disk ($v_d = (GM_\star/R_d)^{1/2}$, where $M_\star$ is the stellar mass and $R_d$ the disk outer radius) and the index $q$ of the (assumed) power law dependence of disk gas temperature $T$ on radial position $r$ (i.e., $T(r) \propto r^{-q}$). Hence, $v_d$ defines the peaks of the “twin horns” in the ideal line profile, with the value of $v_d$ roughly equal to half the peak-to-peak velocity separation of the horns. Because the velocity dependence of line flux is $F(v) \propto v^{3q-5}$ for $v>v_d$ (BS93), $q$ dictates the steepness of the line wings in the model profiles. Values of $q$ larger than the canonical $q=1/2$ yield an “excess” of high-$v$ disk material relative to the “standard” BS93 disk models and, hence, broader wings than those of the BS93 profiles. In modeling the BP Psc CO line profiles, we modified the low-$v$ BS93 profile parameterization (BS93, Eq. 28) by introducing a variable index for the power-law dependence of the line intensity on velocity (i.e., $F(v) \propto v^{p_d}$ for $v<v_d$). This parameter, $p_d$, in effect accounts for the fact that the disk likely does not have a sharp cutoff at $R_d$; values of $p_d<1.0$ tend to fill in the central regions of the line profile. Fitting the BP Psc CO lines with this simple model thereby allows the empirical determination of $v_d$ and the peak line intensity $T_{B,max}$, as well as the temperature profile and outer edge cutoff power-law indices $q$ and $p_d$. For the three $^{12}$CO lines, all four parameters were left free during the model fitting. For the (lower signal-to-noise) $^{13}$CO(2–1) line, the value of $p_d$ was fixed to the value estimated from the $^{12}$CO(2–1) profile fitting (see below). 
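The two power-law branches of this modified BS93 parameterization can be sketched as follows (an illustration of the functional form described in the text, not the authors' fitting code; the branch matching at $v_d$ is our simplifying assumption):

```python
# Modified BS93 edge-on profile: F(v) ~ (v/v_d)**p_d for v < v_d
# (filled-in core, soft outer edge) and F(v) ~ (v/v_d)**(3q - 5) for
# v > v_d (Keplerian wing), both branches pinned to t_max at v = v_d.

def line_profile(v, v_d, q, p_d, t_max):
    """Model brightness at absolute velocity offset v from systemic."""
    v = abs(v)
    if v < v_d:
        return t_max * (v / v_d) ** p_d          # core: p_d < 1 fills it in
    return t_max * (v / v_d) ** (3.0 * q - 5.0)  # wing: steeper for smaller q

# With the canonical q = 1/2 the wings fall off as v**-3.5;
# larger q (as fitted for BP Psc) gives shallower, broader wings.
```

This makes the roles of the fit parameters concrete: $v_d$ sets the peak positions, $q$ the wing steepness, and $p_d$ how strongly the profile center is filled in.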
The total CO line intensities ($I$) were then obtained by integrating the best-fit models over the velocity range $-15$ to $+15$ km s$^{-1}$ with respect to the systemic velocity of BP Psc. For the (marginal significance) CN and HCO$^+$ lines, all parameters except $T_{B,max}$ were fixed to the values obtained from the fit to the $^{12}$CO(2–1) profile, such that $T_{B,max}$ was left as the only free parameter; the 3$\sigma$ upper limits on $T_{B,max}$ and $I$ in Table 1 are then based on the formal uncertainties in the resulting fit to $T_{B,max}$ (these upper limits are similar to those estimated from the baseline fitting procedure). ![image](9742fg2a.ps) ![image](9742fg2b.ps) ![image](9742fg2c.ps) ![image](9742fg2d.ps) The results of this profile fitting exercise are listed in cols. 3–7 of Table 1. In Fig. 2 we display the best-fit model CO profiles overlaid on the IRAM and JCMT spectra. For three of the four CO lines measured thus far, we obtain best-fit values of $v_d$ in the range 4.0–4.7 km s$^{-1}$. The $^{12}$CO(1–0) line yields a somewhat larger best-fit value of $v_d = 5.3\pm0.3$ km s$^{-1}$. These values of $v_d$ are all systematically larger than those obtained from simple, two-Gaussian model fits to the single-dish (Fig. 1) and interferometer (Z08) CO profiles, which yield peak-to-peak separations in the range 6.0–7.5 km s$^{-1}$ (i.e., $v_d$ in the range 3.0–3.75 km s$^{-1}$). The discrepancy is due to the fact that, in the BS93 model parameterization, the value of $v_d$ corresponds to the outer edges of the “twin peaks” in the line profile, whereas the Gaussian fits find the velocity centroids of these peaks. We (tentatively) conclude that $v_d$ lies in the range 3.0–4.0 km s$^{-1}$. For such a range of $v_d$, the mass of the central star $M_\star$ would lie in the range 0.5–0.9 (1.0–1.8) $M_\odot$, given a distance of 100 (300) pc (Z08). 
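The quoted stellar mass range follows directly from inverting the Keplerian relation, $M_\star = v_d^2 R_d / G$. A back-of-envelope check (constants and the $R_d = 50$ AU pre-MS disk radius from Z08 are our inputs; this is an illustrative recomputation, not the authors' code):

```python
# Central mass implied by a Keplerian edge velocity v_d at disk radius R_d.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11      # astronomical unit, m
M_SUN = 1.989e30   # solar mass, kg

def stellar_mass(v_d_kms, r_d_au):
    """M_star (in M_sun) from v_d = (G * M_star / R_d)**0.5."""
    v = v_d_kms * 1e3   # km/s -> m/s
    r = r_d_au * AU     # AU -> m
    return v ** 2 * r / G / M_SUN

# For R_d = 50 AU (pre-MS case, d ~ 100 pc), v_d = 3.0-4.0 km/s
# reproduces the quoted 0.5-0.9 M_sun range.
m_lo = stellar_mass(3.0, 50.0)
m_hi = stellar_mass(4.0, 50.0)
```

At a larger assumed distance the disk's linear radius scales up proportionally, which is why the inferred mass range also increases in the 300 pc (post-MS) case.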
The Keplerian profile fits also indicate values of $q$ and $p_d$ in the ranges 0.7–0.9 and 0.1–0.3, respectively (Table 1), suggesting that the gas temperature in the BP Psc disk falls off somewhat more steeply than $r^{-1/2}$ and that the disk does not have a sharp outer edge. Disk gas mass, density, and gas-to-dust ratio --------------------------------------------- Due to lack of knowledge of the optical depth in $^{12}$CO(2–1), Z08 were only able to estimate an approximate lower limit to the mass of gas in the BP Psc disk. Detection of $^{13}$CO(2–1) emission (Fig. 1; Table 1), in combination with our previous measurement of $^{12}$CO(2–1) at the 30 m (Z08), allows us to refine this disk gas mass estimate. Specifically, the $^{12}$CO(2–1)/$^{13}$CO(2–1) line ratio is now determined to be 9$\pm$2 (Table 1). Assuming the solar value of 89 for the number ratio of $^{12}$C:$^{13}$C, and that the $^{13}$CO(2–1) emission is optically thin (see below), this implies an optical depth in the $^{12}$CO(2–1) line of $\tau\sim10$. With this $\tau$ estimate, and adopting standard formalism (e.g., Eq. 4 in Z08) and standard assumptions (i.e., a mean gas temperature of $\sim40$ K and a CO:H$_2$ number ratio of 10$^{-4}$), we estimate a disk gas mass of $\sim10^{-4}$ $M_\odot$ ($\sim0.1$ Jupiter masses) assuming that BP Psc is a pre-MS star at a distance of $\sim$100 pc. The disk would contain about a Jupiter mass of gas if BP Psc were instead a first-ascent giant at 300 pc. Adopting the Z08 estimate for the mass of cold ($T<200$ K) dust in the disk, 0.7 Earth masses (for an assumed distance of 100 pc), these gas mass estimates imply a (distance-independent) gas-to-dust ratio of $\sim50$.
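The optical-depth step above can be made explicit: if the two isotopologues share the excitation conditions and $^{13}$CO is optically thin, the observed line ratio is $R = (1 - e^{-\tau})/(1 - e^{-\tau/89})$, which can be inverted numerically for $\tau$. A sketch of that inversion (our own illustrative code, under the stated assumptions):

```python
import math

# Observed 12CO/13CO line ratio as a function of 12CO optical depth tau,
# for an assumed 12C/13C isotope ratio of 89 and optically thin 13CO.
def line_ratio(tau, isotope_ratio=89.0):
    return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / isotope_ratio))

def tau_from_ratio(r_obs, lo=1e-3, hi=100.0):
    """Bisect for tau: line_ratio decreases monotonically from ~89 to ~1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if line_ratio(mid) > r_obs:
            lo = mid   # ratio too high -> tau too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# An observed ratio of ~9 implies tau(12CO) ~ 10, as quoted in the text.
tau_12co = tau_from_ratio(9.0)
```

The same routine shows why a ratio near 89 would indicate both lines optically thin, while ratios approaching unity would indicate both optically thick.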
Meanwhile, assuming BP Psc is pre-MS, such that its disk outer radius is $R_d = 50$ AU and disk scale height is 5 AU (the latter based on the heavy obscuration of the central star for an assumed inclination $\sim75^\circ$; Z08), the implied mean H$_2$ number density is $n_{H_2}\sim2\times10^8$ cm$^{-3}$. This mean density — which depends only weakly on the assumed distance to, hence evolutionary status of, BP Psc — is significantly larger than the critical densities of excitation of all of the molecular transitions in Table 1 (see, e.g., Table 1 of Dutrey et al. 1997). Our disk gas mass, gas-to-dust ratio, and mean H$_2$ number density estimates remain uncertain — and may still only represent lower limits — given the possibility that most CO molecules in the disk are frozen out onto (i.e., trapped in icy mantles surrounding) dust grains, or are preferentially photodissociated relative to H$_2$. Indeed, the assumptions invoked above for the BP Psc disk — optically thin $^{13}$CO emission and CO:H$_2$ number ratio of 10$^{-4}$ — may be mutually exclusive. Adopting the standard ISM gas-to-dust ratio of 100, Thi et al. (2004) and Dutrey et al. (1997) estimate that, in pre-MS disks, CO gas is depleted by factors $\sim$10–200. Thi et al. (2004) further estimate $^{13}$CO optical depths $< 1$ for each of the 4 pre-MS disks that they observed with the JCMT, whereas the BS93 CO line profile models indicate that the $^{13}$CO optical depth should be $>> 1$ given a gas-to-dust mass ratio of 100 and CO:H$_2$ $= 10^{-4}$ by number. Weakness of HCO$^+$ and CN: inefficient molecular ionization? ------------------------------------------------------------- Models indicate that both HCO$^+$ and CN should be sensitive tracers of X-ray molecular ionization rate at a given gas column density. Glassgold et al. (2004) note that the abundance of HCO$^+$ is likely to be sharply elevated in pre-MS circumstellar disks that are irradiated by X-rays from the vicinity of the central star. 
Although the Glassgold et al. disk models do not include HCN or CN, the abundance of the latter (the dissociation product of HCN) should also be enhanced by exposure to stellar X-rays (Lepp & Dalgarno 1996). Kastner et al. (1997) and Thi et al. (2004) considered both X-rays and UV as potential drivers of large HCO$^+$/CO and CN/HCN abundance ratios (relative to the molecular cloud values of these ratios) measured in T Tauri and Ae/Be star disks. Ionization of H$_2$ by soft ($\sim1$ keV) X-rays was also implicated as the source of enhanced HCO$^+$ in the molecular envelope surrounding the planetary nebula NGC 7027 (Deguchi et al. 1990), an interpretation supported by the subsequent detection of an extended region of luminous, soft X-ray emission within this object (Kastner et al. 2001). ![HCO$^+$/$^{13}$CO vs. $^{13}$CO/$^{12}$CO line ratios for BP Psc and for the (6) pre-MS stars measured to date (including the evolved cTTS TW Hya). Data points corresponding to JCMT 15 m measurements of the integrated line intensities in the HCO$^+$(4–3) and $^{13}$CO(3–2) transitions (Thi et al. 2004) are indicated with asterisks; points corresponding to IRAM 30 m measurements of the integrated line intensities in the HCO$^+$(3–2) and $^{13}$CO(2–1) transitions are indicated with diamonds (BP Psc, this paper; other stars, Dutrey et al. 1997).[]{data-label="fig:CO_HCO+_Ratios"}](9742fig3.ps){width="6cm"} ![CN/$^{13}$CO vs. HCO$^+$/$^{13}$CO line ratios for BP Psc and pre-MS stars. Data points corresponding to JCMT 15 m measurements of the integrated line intensities in the HCO$^+$(4–3), CN(3–2) and $^{13}$CO(3–2) transitions (Thi et al. 2004) are indicated with asterisks; points corresponding to IRAM 30 m measurements of the HCO$^+$(3–2), CN(2–1), and $^{13}$CO(2–1) transitions are indicated with diamonds (BP Psc, this paper; other stars, Dutrey et al. 1997).
The lone data point near the BP Psc upper limits is the Herbig Ae star MWC 480.[]{data-label="fig:HCO+_CN_ratios"}](9742fig4.ps){width="6cm"} In Figs. \[fig:CO\_HCO+\_Ratios\], \[fig:HCO+\_CN\_ratios\] we plot the Table 1 results for $^{13}$CO/$^{12}$CO, HCO$^+$/$^{13}$CO, and CN/$^{13}$CO line ratios measured for BP Psc along with the same ratios for all (6) other circumstellar disk sources for which line intensities have been published to date (TW Hya: Kastner et al. 1997 and Thi et al. 2004; DM Tau and GG Tau: Dutrey et al. 1997; LkCa 15, HD 163296, and MWC 480: Thi et al. 2004). The ratios have been calculated from line intensity data obtained at transitions that lie within $\sim15$% or less in frequency, so are relatively insensitive to beam dilution effects; these and other systematic errors in the line ratios (e.g., corrections for the different relative contributions from unmeasured CN hyperfine structure lines at each rotation transition; Skatrud et al. 1983) are similar to or smaller than the typical measurement uncertainties. Assuming the $^{13}$CO emission from all stars is optically thin, the $^{13}$CO/$^{12}$CO line ratio should provide a measure of $^{12}$CO column density (§3.2). Meanwhile, because the typical disk gas densities are sufficient to well-excite the observed transitions (§3.2), the HCO$^+$/$^{13}$CO and CN/$^{13}$CO ratios serve as measures of the relative abundances (as opposed to the ease of excitation) of HCO$^+$ and CN, respectively (again assuming $^{13}$CO is optically thin). In both Fig. \[fig:CO\_HCO+\_Ratios\] and Fig. \[fig:HCO+\_CN\_ratios\], five of the six previously observed pre-MS stars appear clustered together. The outlier among the pre-MS stars is the “old” (age $\sim8$ Myr) cTTS TW Hya. Although $^{12}$CO is optically thick in all of the objects, the $^{12}$CO optical depth of the TW Hya disk is smaller than that of the other pre-MS stars (see also Table 8 of Thi et al. 
2004), likely reflecting its relatively evolved state. TW Hya also displays the largest HCO$^+$/$^{13}$CO and CN/$^{13}$CO line ratios among the pre-MS stars. Fig. \[fig:HCO+\_CN\_ratios\] is indicative of a correlation between the HCO$^+$/$^{13}$CO and CN/$^{13}$CO line ratios in pre-MS circumstellar molecular disks. Furthermore, the star with the largest HCO$^+$/$^{13}$CO and CN/$^{13}$CO line ratios, TW Hya, exhibits the largest quiescent X-ray luminosity: $L_X = 1.4\times10^{30}$ erg s$^{-1}$ (Kastner et al. 2002), compared with $L_X\stackrel{<}{\sim}5\times10^{29}$ erg s$^{-1}$ for those stars for which X-ray data have been published to date (HD 163296, Stelzer et al. 2006; DM Tau, Güdel et al. 2006; GG Tau, Stelzer & Neuhauser 2001). TW Hya also possesses the smallest molecular disk radius and mass (Dutrey et al. 1997, Thi et al. 2004, and references therein). Hence the apparent correlation of HCO$^+$/$^{13}$CO and CN/$^{13}$CO, combined with the inferred large (2–3 order of magnitude) enhancement of the CN/HCN and HCO$^+$/CO abundance ratios in all of the pre-MS disks, relative to values of these ratios in molecular cloud cores (Thi et al 2004), supports the interpretation that high molecular ionization rates — most likely due to irradiation by X-rays emitted from stellar coronae and/or from star-disk interfaces — enhance the abundances of both HCO$^+$ and CN in these disks. If disk ionization by central X-ray sources is responsible for the potential correlation between HCO$^+$/$^{13}$CO and CN/$^{13}$CO apparent in Fig. \[fig:HCO+\_CN\_ratios\], then this correlation indicates that the molecular gas disks lie in a regime where both HCO$^+$ and CN abundances are roughly proportional to X-ray ionization rate (Lepp & Dalgarno 1996, their Figs. 2, 3). More specifically — noting that the HCO$^+$/$^{12}$CO and CN/$^{12}$CO number ratios in the pre-MS disks are inferred to be as large as $\sim3\times10^{-4}$ and $\sim2\times10^{-3}$, respectively (Thi et al. 
2004) — the Lepp & Dalgarno models indicate that ionization rates in the 6 previously measured pre-MS disks lie in the range $10^{-15}$–$10^{-13}$ s$^{-1}$ (given a representative disk number density $n\sim10^7$ cm$^{-3}$ for the molecular line-emitting regions; Thi et al. 2004). This range is several orders of magnitude larger than the canonical molecular ionization rate due to cosmic rays. The $^{13}$CO/$^{12}$CO ratio of BP Psc is similar to that of TW Hya, and is smaller than the ratios characteristic of the (younger) pre-MS stars in Fig. \[fig:CO\_HCO+\_Ratios\]. If BP Psc were pre-MS, this comparison would suggest that the BP Psc disk is as highly evolved as the disk orbiting TW Hya. However, the HCO$^+$/$^{13}$CO and CN/$^{13}$CO line ratio upper limits measured for BP Psc are at least a factor $\sim$30 lower than those of TW Hya and a factor $\sim$3–10 lower than all but one of the other pre-MS stars. The only pre-MS star near which BP Psc may lie in Fig. \[fig:HCO+\_CN\_ratios\] is the (intermediate-mass) Herbig Ae star MWC 480 — a star with which BP Psc, if a (K-type, low-mass) pre-MS star, otherwise would have little in common. If the trend observed in Fig. \[fig:HCO+\_CN\_ratios\] is indeed indicative of X-ray ionization of H$_2$, then its low HCO$^+$/$^{13}$CO and CN/$^{13}$CO ratios imply that BP Psc has an anomalously low X-ray flux at its disk surface, compared with the other (pre-MS) star-disk systems (with the possible exception of MWC 480). Unfortunately, the only X-ray observation of BP Psc obtained thus far — a nondetection in the ROSAT All-Sky Survey (RASS) — cannot be used to test this hypothesis.
The RASS nondetection (PSPC count rate $\stackrel{<}{\sim} 0.1$ s$^{-1}$) implies an intrinsic X-ray flux upper limit $F_X \stackrel{<}{\sim}9\times10^{-12}$ erg s$^{-1}$ cm$^{-2}$ (0.1–2.0 keV) assuming[^4] $T_X = 10^7$ K and an intervening absorbing column $N_H = 8\times10^{21}$ cm$^{-2}$ (adopting the $N_{H_2}$ value obtained by Z08 and correcting for the $^{12}$CO optical depth determined in §3.1), or an X-ray luminosity $L_X \stackrel{<}{\sim}10^{31}$ erg s$^{-1}$ for an assumed source distance of $\le100$ pc. Hence, if BP Psc is a pre-MS star, the RASS nondetection would be consistent with a quiescent $L_X$ even larger than that of TW Hya ($1.4\times10^{30}$ erg s$^{-1}$; Kastner et al. 2002). If BP Psc is, instead, a post-MS G star at a distance $\sim300$ pc (Z08), the RASS nondetection does not preclude the possibility that its $L_X$ is comparable to that of the more X-ray-luminous G-type giants (Gondoin 2005).

Conclusions
===========

The suite of molecular line data obtained thus far for BP Psc (Table 1; Fig. 1) confirms that its circumstellar disk in certain respects resembles those of pre-MS stars — but also reveals some fundamental differences. The BP Psc CO line profiles indicate Keplerian rotation of at least $\sim0.1$ Jupiter masses of disk gas around a central star(s) whose mass lies in the range 0.5–0.9 $M_\odot$, assuming pre-main sequence status (§§3.1, 3.2 and Z08), although the profiles are also suggestive of a disk temperature gradient that is somewhat steeper than the canonical $r^{-1/2}$ characteristic of pre-MS star molecular disks (BS93). The $^{12}$CO optical depth of the BP Psc disk (as inferred from its $^{13}$CO(2–1)/$^{12}$CO(2–1) line ratio) is similar to that of the highly evolved (age $\sim8$ Myr) cTTS TW Hya, consistent with the interpretation that — like TW Hya — BP Psc is an isolated, “old” (yet actively accreting) TTS.
On the other hand, the HCO$^+$/$^{13}$CO and CN/$^{13}$CO line ratios of BP Psc are smaller than those of most (if not all) pre-MS star disks observed to date in these molecules. In particular, its disk chemistry differs sharply from that of TW Hya, its presumed closest pre-MS analog (Fig.4). Indeed, in this key respect, the circumstellar molecular disk of BP Psc would appear to have more in common with, e.g., the expanding envelopes of yellow supergiants (which display HCO$^+$(3–2)/$^{13}$CO(2–1) ratios $\stackrel{<}{\sim}10$ and CN(2–1)/$^{13}$CO(2–1) ratios $\stackrel{<}{\sim}5$; Quintana-Lacaci et al. 2007) than with pre-MS disks. These results therefore are consistent with the notion that the BP Psc disk may have its origins not in the star formation process but, rather, in a catastrophic interaction with a close companion during the primary’s ascent of the red giant branch (Z08). The minimum disk gas mass and angular momentum inferred under the assumption that BP Psc is a post-MS star at a distance of 300 pc — i.e., $\sim$1 Jupiter mass (§3.2) distributed over a disk with radius $\sim$150 AU — is consistent with such a companion-engulfing scenario (see discussion in §4.7 of Z08). The feeble output from the BP Psc disk in the HCO$^+$ and CN lines further implies a low molecular ionization rate, suggesting that — in contrast to pre-MS star-disk systems — the BP Psc system lacks a strong, central X-ray source. A deep [*Chandra*]{} observation of BP Psc, presently scheduled for late 2008, should result in a sensitive measurement of the “hard” (1.0–10 keV) X-ray flux incident on the BP Psc disk, providing a test of this interpretation. However, detection of a large X-ray flux from BP Psc — while leaving open the question of the origin of its anomalous molecular emission line ratios — would shed little additional light on its evolutionary status. 
This is because its projected rotational velocity may be as large as $v\sin{i} \sim 32$ km s$^{-1}$ (Z08) which, given a radius typical of late-G giants ($\sim10 R_\odot$), would imply a period of only $\sim2$ days. This period is similar both to those of rapidly rotating pre-MS stars [*and*]{} to those of X-ray-luminous, G-type giants of the FK Com class (Gondoin 2005 and references therein). The approximate upper limit on the kinematic mass of BP Psc assuming a distance of 300 pc, 1.8 $M_\odot$ (§3.1), would be consistent with FK Com status. As FK Com stars are thought to be the products of stellar mergers (Huenemoerder et al. 1993 and references therein), the comparison has interesting ramifications for the recent history of BP Psc, under the hypothesis that it is a post-MS star: if BP Psc is indeed a giant now engulfing a close companion (see Soker 1998) — forming a disk and driving jets in the process (e.g., Nordhaus & Blackman 2006 and references therein) — we may be witnessing the “birth” of FK Com.

Backman, D.E., & Paresce, F. 1993, in [*Protostars and Planets III*]{}, eds. E.H. Levy & J.I. Lunine, p. 1253

Beckwith, S., & Sargent, A. 1993, ApJ, 402, 280 (BS93)

Deguchi, S., Izumiura, H., Kaifu, N., Mao, X., Nguyen-Q-Rieu, & Ukita, N. 1990, ApJ, 351, 522

Dutrey, A., Guilloteau, S., & Guelin, M. 1997, A&A, 317, L55

Glassgold, A.E., Najita, J.R., & Igea, J. 2004, ApJ, 615, 972

Glassgold, A.E., Najita, J.R., & Igea, J. 2007, ApJ, 656, 515

Gondoin, P. 2005, A&A, 444, 531

Güdel, M., Briggs, K. R., Arzner, K., et al. 2007, A&A, 483, 353

Huenemoerder, D. P., Ramsey, L. W., Buzasi, D. L., & Nations, H. L. 1993, ApJ, 404, 316

Kastner, J.H., Zuckerman, B., Forveille, T., & Weintraub, D.A. 1997, Science, 277, 67

Kastner, J.H., Vrtilek, S. D., & Soker, N. 2001, ApJ, 550, L189

Kastner, J.H., Huenemoerder, D.P., Schulz, N., Canizares, C.R., & Weintraub, D.A. 2002, ApJ, 567, 434

Lagrange, A.-M., Backman, D. E., & Artymowicz, P. 2000, in [*Protostars and Planets IV*]{}, eds. V. Mannings, A.P. Boss, & S. Russell, p. 639

Lepp, S., & Dalgarno, A. 1996, A&A, 306, L21

Nordhaus, J., & Blackman, E. G. 2006, MNRAS, 370, 2004

Omodaka, T., Kitamura, Y., & Kawazoe, E. 1992, ApJ, 396, L87

Quintana-Lacaci, G., Bujarrabal, V., Castro-Carrizo, A., & Alcolea, J. 2007, A&A, 471, 551

Skatrud, D.D., De Lucia, F.C., Blake, G.A., & Sastry, K.V.L.N. 1983, J. Mol. Spec., 99, 35

Soker, N. 1998, ApJ, 496, 833

Stelzer, B., & Neuhauser, R. 2001, A&A, 377, 538

Stelzer, B., Micela, G., Hamaguchi, K., & Schmitt, J. H. M. M. 2006, A&A, 457, 223

Stephenson, C.B. 1986, ApJ, 300, 779

Thi, W.-F., van Zadelhoff, G.-J., & van Dishoeck, E.F. 2004, A&A, 425, 955

Webb, R., Zuckerman, B., Platais, I., Patience, J., White, R. J., Schwartz, M. J., & McCarthy, C. 1999, ApJ, 512, L63

Zuckerman, B., Forveille, T., & Kastner, J.H. 1995, Nature, 373, 494

Zuckerman, B. 2001, ARAA, 39, 549

Zuckerman, B., & Song, I. 2004, ARAA, 42, 685

Zuckerman, B., et al. 2008, ApJ, in press (Z08; astro-ph/0802.0226)

[^1]: http://iram.fr/

[^2]: See http://iram.fr/IRAMFR/GILDAS/

[^3]: See http://iram.fr/IRAMFR/ARN/aug05/node6.html

[^4]: The characteristic $T_X$ of BP Psc would be lower than $10^7$ K if its X-ray spectrum resembles that of TW Hya ($T_X\sim3\times10^6$ K; Kastner et al. 2002). If so, the RASS nondetection of BP Psc would provide even poorer constraints on its intrinsic X-ray flux $F_X$ and, hence, $L_X$.
--- abstract: 'We introduce a generalized approach to one-dimensional (1D) conduction based on Haldane’s concept of fractional exclusion statistics (FES) and the Landauer formulation of transport theory. We show that the 1D ballistic thermal conductance is independent of the statistics obeyed by the carriers and is governed by the universal quantum $\kappa^{univ}=\frac{\pi^2}{3}\frac{k_B^2 T}{h}$ in the degenerate regime. By contrast, the electrical conductance of FES systems is statistics-dependent. This work unifies previous theories of electron and phonon systems and explains an interesting commonality in their behavior.' address: 'Department of Physics, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6' author: - 'Luis G. C. Rego and George Kirczenow' title: 'Fractional Exclusion Statistics and the Universal Quantum of Thermal Conductance: A Unifying Approach' ---

Introduction
============

Recent theoretical investigations of quantum transport have revealed an intriguing commonality in the behavior of some apparently very dissimilar systems: It has been predicted that in one dimension the low temperature ballistic thermal conductances of ideal electron gases,[@Gut; @extra] of phonons,[@Reg] and of interacting electrons that form chiral[@Kan] or normal[@Fazio] Luttinger liquids should all be quantized in integer multiples of a universal quantum $\kappa^{univ}=\frac{\pi^2}{3} \frac{k_B^2 T}{h}$, where $T$ is the temperature, $k_B$ is the Boltzmann constant, and $h$ is Planck’s constant. That is, a 1D band populated with bosons described by a Planck distribution (phonon modes) has been predicted to transport the [*same*]{} amount of heat as one populated by fermions (the ideal electron gas) or a Luttinger liquid. Also, experimental evidence has been reported that ropes of single walled nanotubes conduct heat in amounts proportional to $\kappa^{univ}$ [@Hone]. However, each of these systems was studied separately using a different theoretical approach.
Thus it has been unclear whether this convergence of results is simply a coincidence or whether it has a deeper significance and broad ramifications. The purpose of this work is to resolve this question with the help of the concept of fractional exclusion statistics (FES), proposed by Haldane,[@Hal] which allows one to discuss the behavior of bosons, fermions and particles having fractional statistical properties, all on the same footing. Besides the universal thermal conductance, we also obtain naturally from this theory the quantized electrical conductance for ballistic electrons in 1D quantum wires and in the fractional quantum Hall (FQH) regime. FES[@Hal] extends the concept of anyons,[@Wil] i.e., particles with fractional statistics, from two dimensions to arbitrary spatial dimensions by introducing a generalization of the Pauli exclusion principle, and has yielded novel insights into fractional quantum Hall systems,[@Hal; @Elb] spinons in antiferromagnetic spin chains,[@Hal] systems of interacting electrons in 2D quantum dots,[@Bha] and the Calogero-Sutherland model.[@Mur1] In Haldane’s sense the statistics of a system composed of different species of particles (or quasi-particles) is defined by the relation $\Delta d_i=-\sum_j g_{ij} \Delta N_j$, where $N_i$ is the number of particles of species $i$ and $d_i$ is the dimension of the $N_i$-particle Hilbert space, holding the coordinates of the $N_i-1$ particles fixed. The parameter $g_{ij}$ is the statistical interaction. For a system of identical particles [*g*]{} is a scalar quantity, with $g=1$ (0) for fermions (bosons). Wu [@Wu] has used this definition of FES to establish the statistical distribution function for an ideal gas of particles with fractional statistics.
It has been proposed that such ideal FES gases provide an accurate representation of the physics of a number of interacting electron systems.[@Bha; @Mur1] While much attention has been given to the thermodynamic properties of FES systems,[@Mur1; @Wu; @Nay; @Raj; @Huang1; @Igu] their transport properties have not received the same consideration. In this paper we use the Landauer formulation of transport theory[@Lan] to study conduction in ideal one-dimensional FES systems. Remarkably, we find that their low temperature thermal conductance is quantized in integer multiples of the universal quantum $\kappa^{univ}=\frac{\pi^2}{3} \frac{k_B^2 T}{h}$, [*irrespective of the value of the statistical parameter*]{} $g_{ij}$. Thus we demonstrate that the quantization of thermal conductance and the associated quantum are statistics-independent and truly universal. By contrast we find the electrical conductances of FES systems to be statistics-dependent.

Single Species {#ss}
==============

Consider a two-terminal transport experiment where two infinite reservoirs are adiabatically connected to each other by a one-dimensional channel. Each reservoir is characterized by a temperature ($T$) and a chemical potential ($\mu$), considered to be independent variables. In the case of reservoirs with charged particles $\mu$ can be redefined as the electrochemical potential, that is, a combination of the chemical potential and an electrostatic particle energy governed by an external field. In terms of $T$ and $\mu$ the electric ($I$) and energy ($\dot{U}$) currents in the linear response regime are $$\begin{aligned} \delta I &=& \left. \frac{\partial I}{\partial \mu} \right|_T \delta\mu + \left. \frac{\partial I}{\partial T} \right|_{\mu} \delta T \label{e1} \\ \delta \dot{U} &=& \left. \frac{\partial \dot{U}}{\partial \mu} \right|_T \delta \mu + \left.
\frac{\partial \dot{U}}{\partial T} \right|_{\mu} \delta T \ , \label{e2}\end{aligned}$$ where $\delta T = T_R - T_L$ and $\delta \mu = \mu_R - \mu_L$, with $R$ ($L$) representing the right (left) reservoir. Using Landauer theory we write the fluxes between the two reservoirs as $$\begin{aligned} I &=& \sum_{n} \ q \int_0^{\infty} \frac{dk}{2\pi} \ v_n(k) \ \left[\eta_R -\eta_L \right]\ \zeta_n(k) \label{e3} \\ \dot{U} &=& \sum_{n} \int_0^{\infty} \frac{dk}{2\pi} \varepsilon_n(k) \ v_n(k) \left[ \eta_R - \eta_L \right] \ \zeta_n(k) \ . \label{e4}\end{aligned}$$ The sum over $n$ takes into account the independent propagating modes admitted by the channel. $\varepsilon_{n}(k)$ and $v_{n}(k)$ are the energy and velocity of the particle with wave-vector $k$, $\zeta_{n}(k)$ is the particle transmission probability through the channel, $\eta_i$ represents the statistical distribution functions in the reservoirs and $q$ is the particle charge. In one dimension the particle velocity $v_n(k) = \hbar^{-1}(\partial \varepsilon_n/\partial k)$ is canceled by the 1D density of states ${\cal D} (\varepsilon_n)=\partial k/\partial\varepsilon_n$ and the fluxes become independent of the dispersion $$\begin{aligned} I &=& \frac{q}{h} \sum_{n} \ \int_{\varepsilon_{n}(0)}^{\infty} d\varepsilon \ \left[ \eta_R - \eta_L \right] \zeta_n(\varepsilon) \label{e5} \\ \dot{U} &=& \frac{1}{h} \sum_{n} \ \int_{\varepsilon_{n}(0)}^{\infty} d\varepsilon\ \varepsilon \left[ \eta_R- \eta_L \right] \zeta_n(\varepsilon) \ . \label{e6}\end{aligned}$$ Throughout the remainder of this paper we will assume $\zeta_n(\varepsilon)=1$, which corresponds to ballistic transport and a perfectly adiabatic coupling between the reservoirs and the 1D system. This assumption is realistic in view of the present state of mesoscopic technology.
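Before specializing to fractional statistics, Eqs. (\[e5\])–(\[e6\]) can be checked numerically in the familiar fermionic case. The sketch below is our illustration, not part of the paper; the quadrature scheme and parameter values are ad hoc choices. It evaluates the single-mode current of Eq. (\[e5\]) with $\zeta_n = 1$ and Fermi-Dirac reservoirs, in units $q = h = 1$, and recovers the ballistic conductance $q^2/h$ per mode in the degenerate limit $\beta\mu \gg 1$.

```python
import math

def fermi(e, mu, beta):
    """Fermi-Dirac occupation (the g = 1 case of FES)."""
    return 1.0 / (math.exp(beta * (e - mu)) + 1.0)

def current(mu_L, mu_R, beta, e_max=200.0, steps=100000):
    """Single-mode current of Eq. (e5) with zeta = 1, in units q = h = 1:
    I = integral_0^inf [eta_R - eta_L] d(eps), by the trapezoid rule."""
    h = e_max / steps
    s = 0.0
    for i in range(steps + 1):
        e = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * (fermi(e, mu_R, beta) - fermi(e, mu_L, beta))
    return s * h

# Degenerate regime: G = dI/d(mu) -> 1 per mode, i.e. q^2/h
beta, mu, dmu = 1.0, 50.0, 0.01
G = current(mu, mu + dmu, beta) / dmu
assert abs(G - 1.0) < 1e-6
```

Because the integrand is concentrated within a few $k_B T$ of $\mu$, the result is insensitive to the (arbitrary) upper cutoff `e_max`.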
Substitution of expressions (\[e5\]) and (\[e6\]) for the fluxes into Eqs.(\[e1\]) and (\[e2\]), while taking the limit $\delta T \rightarrow 0$ and $\delta \mu \rightarrow 0$, gives us the transport coefficients. Having introduced the model, we consider systems of generalized statistics, which can be investigated within FES theory. Initially we concentrate on identical particle systems and the distribution function derived by Wu [@Wu] for an ideal gas of particles obeying FES $$\begin{aligned} \eta_g = \frac{1}{{\cal W}(x,g)+g} \label{e12}\end{aligned}$$ with $x \equiv \beta(\varepsilon -\mu)$, $\beta \equiv 1/(k_B T)$ and ${\cal W}(x,g)$ given by the implicit equation $$\begin{aligned} {\cal W}^g(x,g)[1+{\cal W}(x,g)]^{1-g} = e^{x} \ . \label{e13}\end{aligned}$$ Setting $g=0$ or $g=1$, Eq.(\[e12\]) reduces to the Bose-Einstein or the Fermi-Dirac distribution function, respectively. For a system of generalized statistics the transport coefficients are $$\begin{aligned} L_{11} &=& \left. \frac{\partial I}{\partial \mu} \right|_T = \frac{q}{h} \sum_{n} \int_{x_{0n}}^{\infty} dx\ F(x,g) \label{e14} \\ L_{12} &=& \left. \frac{\partial I}{\partial T} \right|_{\mu} = \frac{q}{h}k_B \sum_{n} \int_{x_{0n}}^{\infty} dx\ x\ F(x,g) \label{e15} \\ L_{21} &=& \left. \frac{\partial \dot{U}}{\partial \mu} \right|_T = \frac{1}{h\beta} \sum_{n} \int_{x_{0n}}^{\infty} dx\ (x + \mu\beta) \ F(x,g) \label{e16} \\ L_{22} &=& \left. \frac{\partial \dot{U}}{\partial T} \right|_{\mu} = \frac{k_B}{h\beta} \sum_{n} \int_{x_{0n}}^{\infty} dx\ (x^2+x\mu\beta)\ F(x,g) \label{e17}\end{aligned}$$ with $x_{0n} \equiv \beta(\varepsilon_{n}(0) -\mu)$ and $$\begin{aligned} F(x,g) = \frac{{\cal W}(x,g)[{\cal W}(x,g) + 1]}{[{\cal W}(x,g) + g]^3} \ . \label{e18}\end{aligned}$$ Fermions and bosons are special cases of the theory; our interest, however, is to develop a formalism able to treat all FES systems.
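Although Eq.(\[e13\]) has no closed-form solution for general $g$, it is straightforward to solve numerically, since its left-hand side is monotone in ${\cal W}$. The following sketch is ours, not the paper's; the bisection scheme and tolerances are illustrative choices. It recovers the Fermi-Dirac and Bose-Einstein limits of Eq.(\[e12\]).

```python
import math

def wu_occupation(x, g):
    """eta_g = 1/(W + g), Eq. (e12), with W solving Wu's implicit equation
    W^g (1 + W)^(1-g) = e^x, Eq. (e13), taken here in logarithmic form."""
    if g == 0.0:                        # Bose-Einstein limit (requires x > 0)
        return 1.0 / (math.exp(x) - 1.0)
    f = lambda W: g * math.log(W) + (1.0 - g) * math.log1p(W) - x
    lo, hi = 1e-30, 1e30                # f is monotone increasing in W
    while f(lo) > 0.0:
        lo *= 0.1
    while f(hi) < 0.0:
        hi *= 10.0
    for _ in range(200):                # bisection in log space
        mid = math.sqrt(lo * hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi) + g)

x = 0.7
assert abs(wu_occupation(x, 1.0) - 1.0 / (math.exp(x) + 1.0)) < 1e-9  # Fermi-Dirac
assert abs(wu_occupation(x, 0.0) - 1.0 / (math.exp(x) - 1.0)) < 1e-9  # Bose-Einstein
# Intermediate g interpolates monotonically between the two limits:
assert 1.0 / (math.exp(x) + 1.0) < wu_occupation(x, 0.5) < 1.0 / (math.exp(x) - 1.0)
```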
Analytic solutions of Eq.(\[e13\]) can also be obtained for the special cases $g=\frac{1}{4},\frac{1}{3},\frac{1}{2},2,3$ and 4, but for general $g$ the approach of analytically solving the equation for ${\cal W}$ is not possible. We now present a comprehensive method to treat this problem. We first solve Eq.(\[e13\]) for $x=\beta(\varepsilon-\mu)$ $$\begin{aligned} x({\cal W},g) = \ln({\cal W}+1) + \left[ \ln({\cal W}) - \ln({\cal W}+1) \right] g \ . \label{ex}\end{aligned}$$ We notice that $\lim_{{\cal W}\rightarrow 0} x = - \infty$ for $g \not = 0$, which corresponds to the lowest energy for the degenerate non-bosons. Moreover, $\lim_{{\cal W}\rightarrow 0} x = 0$ when $g=0$, which corresponds to the lowest energy modes of bosons described by the Planck distribution. On the other hand, $\lim_{{\cal W}\rightarrow \infty} x = \infty$ for any $g \geq 0$. This shows that $x$ can be supplanted by $\cal{W}$ as the variable of integration in our general FES expressions for $L_{ij}$, with $\cal{W}$ ranging from 0 to infinity. Notice that no other specification on the functional form of the particle spectra is made in this derivation. Then, using Eq.(\[ex\]), we can write $$\begin{aligned} F(x,g)dx = \frac{d{\cal W}}{({\cal W} + g)^2} \ . \label{FW}\end{aligned}$$ The transport coefficients $L_{ij}$ can then be evaluated analytically for arbitrary $g$.
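The change of variables (\[FW\]) makes the $L_{ij}$ integrals elementary: in the degenerate limit ($x_{0n}\to-\infty$, so that ${\cal W}$ runs from $0$ to $\infty$) the moments $\int_0^\infty x^n({\cal W},g)\, d{\cal W}/({\cal W}+g)^2$ equal $1/g$, $0$ and $\pi^2/3$ for $n=0,1,2$. This can be confirmed numerically; the sketch below is our illustration with ad hoc quadrature parameters, substituting ${\cal W}=e^t$ so that the integrand decays exponentially in both directions.

```python
import math

def x_of_W(W, g):
    """Eq. (ex): x(W, g) = ln(W + 1) + [ln W - ln(W + 1)] g."""
    return math.log1p(W) + g * (math.log(W) - math.log1p(W))

def moment(n, g, t_lo=-60.0, t_hi=60.0, steps=40000):
    """integral_0^inf x(W, g)^n dW / (W + g)^2, via W = e^t (trapezoid rule)."""
    h = (t_hi - t_lo) / steps
    s = 0.0
    for i in range(steps + 1):
        W = math.exp(t_lo + i * h)
        w = 0.5 if i in (0, steps) else 1.0
        s += w * x_of_W(W, g) ** n * W / (W + g) ** 2
    return s * h

for g in (0.5, 1.0, 3.0):       # the n = 1, 2 moments do not depend on g ...
    assert abs(moment(1, g)) < 1e-6                      # integral behind L12 = 0
    assert abs(moment(2, g) - math.pi ** 2 / 3) < 1e-6   # behind L22: pi^2/3
    assert abs(moment(0, g) - 1.0 / g) < 1e-6            # ... but L11 scales as 1/g
```

The $g$-independence of the $n=2$ moment, which controls $L_{22}$, is the statistics independence of the thermal conductance, while the $n=0$ moment, which controls the electrical conductance, retains an explicit $1/g$.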
When $g > 0$ $$\begin{aligned} L_{11} &=& M \frac{q}{h} \ \int_0^{\infty} \frac{d{\cal W}}{({\cal W} + g)^2} = M \frac{q}{h}\ \frac{1}{g} \label{e34} \\ L_{12} &=& M \frac{q}{h}k_B \int_0^{\infty} d{\cal W} \ \frac{x({\cal W},g)}{({\cal W} + g)^2} = 0 \label{e35} \\ L_{21} &=& M \frac{1}{h\beta}\ \int_0^{\infty} d{\cal W} \ \frac{x({\cal W},g) + \mu\beta}{({\cal W} + g)^2} = M \frac{\mu}{h}\ \frac{1}{g} \label{e36}\end{aligned}$$ and for all $g \ge 0$ $$\begin{aligned} L_{22} &=& M \frac{k^2_B T}{h}\ \int_0^{\infty} d{\cal W} \ \frac{x^2({\cal W},g)+\mu\beta x({\cal W},g)}{({\cal W} + g)^2} \nonumber \\ &=& M \frac{k^2_B T}{h}\ \frac{\pi^2}{3} \label{e37}\end{aligned}$$ with $x({\cal W},g)$ given by Eq.(\[ex\]). Here $M$ is an integer number that takes into account the number of occupied modes (assuming a degenerate population in each one of them). Therefore the transport equations for a system of identical particles of generalized statistics are $$\begin{aligned} \delta I &=& \frac{1}{g} \frac{q}{h} M \delta \mu \label{e27} \\ \delta \dot{U} &=& \frac{1}{g} \frac{\mu}{h} M \delta \mu + \frac{\pi^2}{3}\frac{k_B^2 T}{h} M \delta T \ . \label{e28}\end{aligned}$$ One important result that we obtain with this formalism is the universal thermal conductance, [*valid for all ballistic FES systems*]{}. Since the electro-chemical potential is an independent variable in this model, we can set $\delta \mu =0$ so that no electric current flows between the reservoirs. This also eliminates the energy flow that is due to a net flux of particles between the two reservoirs, leaving us with only the coefficient $L_{22}$. In this case the energy current is equal to the heat current that is generated by $\delta T$, and so the 1D universal thermal conductance is $$\begin{aligned} \kappa^{univ} = \frac{\pi^2}{3}\frac{k_B^2 T}{h} \ . 
\label{univ}\end{aligned}$$ Therefore a 1D subband populated with bosons described by the Planck distribution transports the same amount of heat as one populated by fermions, despite the fact that these systems have very different statistical behaviors. The thermopower vanishes because of the assumptions made: degenerate systems and unitary transmission coefficients independent of the energy. For Planck bosons $\mu$ is not a parameter describing the system; therefore only $L_{22}$ is present and the result (\[univ\]) is recovered. We note that, in contrast to the thermal conductance, the 1D ballistic electrical conductance is [*not*]{} statistics-independent, since $g$ appears explicitly in Eq.(\[e27\]) for the electric current. For instance, when $\delta T = 0$, the fermion case is readily obtained by setting $g=1$ and we obtain the well known 1D electrical conductance $G = (e^2/h)M$ for ballistic electrons. Equations (\[e27\]) and (\[e28\]) should also describe the transport properties of the Laughlin states of the FQHE, for which the Landau level filling fraction $\nu = 1/(2m+1)$ with $m$ integer. We use the composite fermion (CF) picture [@Jain1] to derive the statistical interaction parameter [*g*]{} of these particles. Integrating the expression $\Delta d = - g \Delta N$ for $g = g^{CF}$ we get $d_N = d_0^{CF} - g^{CF}(N-1)$, where $d_N$ is the dimension of the one particle Hilbert space when $N$ composite fermions exist in the system whereas $d_0^{CF}$ is its analog in the absence of CF. The term $d_0^{CF}$ is the degeneracy of the CF Landau level, which can be written in terms of the CF density as $d_0^{CF} = (eB/hc) - 2mN$, with $B$ representing the external magnetic field. Using this relation in the expression for $d_N$ along with the fact that the CF behave like fermions ($g^{CF} = 1$) we get $d_N = (eB/hc) - (2m+1)N + 1$.
This means that, from the perspective of the FES theory, the transport properties of these states are due to particles of charge $|q|=e$ and a fractional exclusion rule given by $g=(2m+1)$ in the thermodynamic limit. In other words, each electron added to the system excludes $2m+1$ single particle states. Returning to the transport equations, substitution of $g=(2m+1)$ in Eq.(\[e27\]) gives us immediately the well known values of the conductance plateaus of the Laughlin states, $G = \frac{1}{2m+1}\frac{e^2}{h}$. Since on the FQH plateaus the two-terminal conductance is equal to the quantized Hall conductance, this result is in agreement with experimental data [@Ts] on FQH devices. It is important to mention that, since we are concerned with transport, our analysis applies to the electrons themselves as distinct from the quasi-particle excitations studied in reference [@Wu98], which was concerned with thermodynamics. The universality presented by the degenerate 1D systems at finite $T$ can be physically understood if we consider the total energy flux for a single band $$\begin{aligned} \dot{U} = \frac{\mu^2}{2gh} + \frac{\pi^2}{6}\frac{(k_B T)^2}{h} = \dot{U}_{pot} + \dot{U}_{thermal}\ , \label{U}\end{aligned}$$ which shows that the energy current flowing through the one-dimensional system can be divided into two independent components: one due exclusively to the flux of particles and carrying no heat ($\dot{U}_{pot}$) and the other entirely determined by the temperature of the emitting reservoir irrespective of the number of particles ($\dot{U}_{thermal}$). The last term gives rise to the thermal conductance being the same for Planck bosons and all other FES particles. This division is possible because of the cancellation of the density of states by the particle velocity in the 1D system along with the degenerate condition of the system.
On the other hand, the electric current for degenerate systems depends only on the number of particles regardless of their temperature, which leads to $L_{12}=0$.

Generalized Exclusion and FQHE
==============================

In the previous section we have shown that the generalized exclusion approach for a system of identical particles leads naturally to the transport coefficients of the Laughlin fractions of the FQHE. In the remainder of this paper we extend this formalism to treat systems composed of multiple species with a mutual statistical interaction acting among them.

Exclusion Statistics for Various Species
----------------------------------------

In its most general form the occupation numbers $\eta_i$ of each species that assembles into an ideal gas of FES particles are given by $$\begin{aligned} {\cal W}_i = \frac{1}{\eta_i} - \sum_{j=1}^S g_{ij} \frac{\eta_j}{\eta_i} \label{multi1}\end{aligned}$$ and $$\begin{aligned} (1+{\cal W}_i) \prod_{j=1}^S \left( \frac{{\cal W}_j}{{\cal W}_j + 1} \right)^{g_{ji}} = e^{x_i} \ , \label{multi2}\end{aligned}$$ where $x_i = \beta_i(\epsilon_i - \mu_i)$ and $S$ is the number of species. Details of this derivation can be found in reference [@Wu]. To proceed with the construction of the statistics of the model it is convenient to introduce the actual values of $g_{ij}$. To do so we use Jain’s Composite Fermion picture [@Jain1] and the generalized exclusion principle $$\begin{aligned} G_{eff,i} = G_i - \sum_j g_{ij} (N_j - \delta_{ij}) \ . \label{w5}\end{aligned}$$ The main properties of the FQHE can be understood if we attach an even number of fictitious flux quanta to each single electron by a Chern-Simons gauge transformation [@Lopez]. In this picture a dressed particle is formed which has the same charge and the statistical properties of the electron. In the mean field approximation the CFs form a Fermi liquid.
From this perspective the FQHE is then seen as the IQHE of the CF particles, which experience an [*effective*]{} magnetic field that depends on the density of carriers $B_{eff} = B - 2mN\Phi_0$, where $B$ is the external magnetic field, $\Phi_0 = ch/e$ is the quantum of magnetic flux and $N$ is the total density of CFs in the mean field approximation (the same as the electronic density). Therefore, according to this picture, the quasi-Landau levels (qLLs) occupied by the CF have a degeneracy that is $$\begin{aligned} G^{CF} = B_{eff}/\Phi_0 = B/\Phi_0 - 2m\sum_{j=1}^S N_j \label{cfdeg}\end{aligned}$$ with the index $j$ representing the qLL index. Moreover, because CFs are fermions we have $g_{ij}^{CF} =\delta_{ij}$ and expression (\[w5\]) can be written as $$\begin{aligned} G_{eff,i} &=& G^{CF}_i - \sum_j g^{CF}_{i,j}(N_j-\delta_{ij}) \nonumber \\ &=& \frac{B}{\Phi_0} - \sum_j (2mN_j + \delta_{ij}) + 1 \ . \label{2s5}\end{aligned}$$ Regrouping the elements according to the densities $N_j$ of each qLL we obtain $g_{ii} = 2m + 1$ and $g_{ij} = 2m$, for $i \not = j$. In the FES theory the diagonal terms are the self-interaction parameters and rule the exclusion properties among particles of the same species whereas the non-diagonal terms are the statistical mutual interaction parameters which describe the exclusion relations among particles of different species. In this case the population of each qLL is viewed as a distinct species. What expression (\[2s5\]) shows is that we can incorporate the physics of Jain’s fractions $\nu = \frac{p}{2mp+1}$ ($m$ and $p$ integers) into the generalized exclusion principle of particles in the FES theory. These new particles have the same charge as the electron ($|q|=e$), but their exclusion statistics is governed by $g_{ij}$. The knowledge of $g_{ij}$ allows us to solve equation (\[multi1\]) to obtain the occupation functions $\eta_i$ for each species. 
For a system composed of a number $S$ of species that obey the exclusion rules derived above we have $$\begin{aligned} \eta_i = \frac{\prod_{j\not=i}^S ({\cal W}_j + 1)} {\prod_{j=1}^S ({\cal W}_j + 2m + 1) - \Lambda} \ , \label{etai}\end{aligned}$$ with $$\begin{aligned} {\cal W}_i \prod_{j=1}^S \left( \frac{{\cal W}_j}{{\cal W}_j + 1} \right)^{2m} = e^{x_i} \ . \label{omegai}\end{aligned}$$ The quantity $\Lambda$ is given by the series $$\begin{aligned} \Lambda = \lambda_0 + \lambda_1\sum_{j=1}^S {\cal W}_j + \lambda_2 \sum_{j}\sum_{k<j} {\cal W}_j{\cal W}_k + \cdots \label{Lambda}\end{aligned}$$ whose coefficients are ($l=0,1,2,\cdots,S-1$) $$\begin{aligned} \lambda_l = (2m+1)^{S-l} - [2(S-l)m+1] \ . \label{lambda}\end{aligned}$$ According to the equations above $\lim_{{\cal W}_i\rightarrow \infty} \eta_i = 0$ leaving us with $S-1$ species in the system. In this case, equations (\[etai\]) and (\[omegai\]) will automatically converge to represent a system with the shortage of one species. However, due to the energy structure of the CFs there is a hierarchy in the values of ${\cal W}_i$. The ratio ${\cal W}_{i+1}/{\cal W}_i$ can be obtained from (\[omegai\]) $$\begin{aligned} \frac{ {\cal W}_{i+1} }{ {\cal W}_i } = e^{x_{i+1}-x_i} = e^{\beta[\varepsilon_{i+1}(k) - \varepsilon_i(k)]} \ , \label{fracw}\end{aligned}$$ where we have assumed common temperature and chemical potential for all species. Therefore, we see that if $\varepsilon_i(k)$ is the energy of the ith qLL then ${\cal W}_i < {\cal W}_{i+1}$. If this model is intended to reproduce the behavior of CFs some caution is necessary because the gap energies depend on the total density $N(\vec{x})$ self-consistently $$\begin{aligned} E_{\delta}= \hbar \omega_{eff} = \frac{\hbar |e|}{m^* c} B [1-2m\sum_j N_j(\vec{x})] \ . \label{gap}\end{aligned}$$ In Figure \[occupation\] we represent the occupation values of the 3 lowest qLLs as a function of the generalized chemical potential $\mu$ of the FES particles. 
The parameter $m=1$ and the density is assumed to be homogeneous. The occupation values were calculated self-consistently using equations (\[etai\]), (\[omegai\]) and (\[gap\]). The generalized chemical potential is given in units of $E_{\delta}$ and the temperature is defined by $\beta E^0_{\delta} = 30$, where $E^0_{\delta}$ is given by (\[gap\]) when $N_j(\vec{x}) = 0$. We see that the occupation values have plateaus that correspond to the fractions $1/(2pm+1)$. These become poorly defined as the chemical potential rises, since $E_{\delta}$ decreases with the increasing density. Having defined the statistical properties of these new particles, we proceed with the calculation of the transport coefficients in the next section.

Transport Coefficients
----------------------

The transport coefficients $L_{ij}$ for any filling factor $\nu = p/(2pm+1)$ can be obtained numerically using the distributions $\eta_i$, these being calculated self-consistently with the gap energy. Nonetheless, for degenerate conditions analytical solutions are possible. The result of section \[ss\] for the Laughlin fractions can be obtained from the general formalism above when we make ${\cal W}_i \rightarrow \infty$ for all $i \not = 1$. In this limiting case $$\begin{aligned} \eta_1 = \frac{1}{{\cal W}_1 + 2m + 1} \label{tc1}\end{aligned}$$ with $g=2m+1$. We now consider the situation in which two qLLs are populated. This means that $\eta_i=0$ for $i>2$ and $$\begin{aligned} \eta_1 &=& \frac{{\cal W}_2 + 1}{({\cal W}_1+2m+1)({\cal W}_2+2m+1) - 4m^2} \label{2s1} \\ \nonumber \\ \eta_2 &=& \frac{{\cal W}_1 + 1}{({\cal W}_1+2m+1)({\cal W}_2+2m+1) - 4m^2} \label{2s2}\end{aligned}$$ where $\eta_1$ describes the lowest band and $\eta_2$ the highest one.
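As a cross-check of Eqs. (\[etai\])–(\[lambda\]), note that Eq. (\[multi1\]) with $g_{ii}=2m+1$ and $g_{ij}=2m$ rearranges into the linear system $({\cal W}_i + 2m+1)\,\eta_i + 2m\sum_{j\neq i}\eta_j = 1$, which can be solved directly once the ${\cal W}_i$ are known. The sketch below is our illustration, not part of the paper; the test values of ${\cal W}_i$ are arbitrary. For $S=2$ the closed form reduces to Eqs. (\[2s1\])–(\[2s2\]).

```python
import itertools, math

def eta_closed_form(W, m):
    """Eq. (etai): eta_i = prod_{j != i}(W_j + 1) / [prod_j(W_j + 2m + 1) - Lambda],
    with Lambda built from the coefficients lambda_l of Eqs. (Lambda)-(lambda)."""
    S = len(W)
    lam = [(2 * m + 1) ** (S - l) - (2 * (S - l) * m + 1) for l in range(S)]
    # Lambda = lam_0 + lam_1 * sum_j W_j + lam_2 * sum_{j<k} W_j W_k + ...
    Lam = sum(lam[l] * sum(math.prod(c) for c in itertools.combinations(W, l))
              for l in range(S))
    denom = math.prod(w + 2 * m + 1 for w in W) - Lam
    return [math.prod(W[j] + 1 for j in range(S) if j != i) / denom
            for i in range(S)]

def eta_linear_solve(W, m):
    """Eq. (multi1) rearranged: (W_i + 2m+1) eta_i + 2m sum_{j != i} eta_j = 1,
    solved by Gaussian elimination with partial pivoting."""
    S = len(W)
    A = [[(W[i] + 2 * m + 1) if i == j else 2.0 * m for j in range(S)]
         for i in range(S)]
    b = [1.0] * S
    for c in range(S):
        p = max(range(c, S), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, S):
            f = A[r][c] / A[c][c]
            b[r] -= f * b[c]
            for k in range(c, S):
                A[r][k] -= f * A[c][k]
    eta = [0.0] * S
    for r in reversed(range(S)):
        eta[r] = (b[r] - sum(A[r][k] * eta[k] for k in range(r + 1, S))) / A[r][r]
    return eta

# Two species, m = 1 reproduces Eqs. (2s1)-(2s2); three species checks (etai)
for W, m in ([0.3, 2.0], 1), ([0.3, 2.0, 9.0], 1), ([0.5, 1.5, 4.0], 2):
    for a, b_ in zip(eta_closed_form(W, m), eta_linear_solve(W, m)):
        assert abs(a - b_) < 1e-12
```

For $S=1$ the coefficient $\lambda_0$ vanishes and the closed form collapses to Eq. (\[tc1\]), $\eta_1 = 1/({\cal W}_1 + 2m + 1)$.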
Relation (\[fracw\]) is now $$\begin{aligned} {\cal W}_1 = {\cal W}_2 e^{-\beta E_{\delta}(k)} \ , \label{2s41}\end{aligned}$$ where $E_{\delta}(k)$ is the wave-vector-dependent spectral gap between the bands $$\begin{aligned} E_{\delta}(k) = \epsilon_2(k) - \epsilon_1(k) \ . \label{2s42}\end{aligned}$$ For positive $E_{\delta}(k)$, at low temperatures, $\beta E_{\delta}(k) \gg 1$, which leads to the other important relation ${\cal W}_1 \ll {\cal W}_2$. In Figure \[sample\], a schematic band structure shows the energy dispersion of the three lowest bands as a function of position, from the bulk towards the right edge of the sample. The generalized chemical potential is indicated by the horizontal line. The vertical line is a reference that divides the band structure into regions ${\cal A}$ and ${\cal B}$, whose meaning will be discussed below. The energy values $\epsilon'_2$ and $\epsilon'_1$ indicate the points where this reference line crosses the bands. We assume that the system is degenerate, so that $\beta \epsilon'_1 \ll \beta \mu$ and $\beta \epsilon'_2 \gg \beta \mu$. Notice that the generalized chemical potential ($\mu$) is above the lowest unoccupied state and lies over the third qLL band. This does not mean, however, that this level is populated. It occurs because the statistical mutual interaction modifies the electronic chemical potential. Because of the mutual statistical interaction, the occupation functions $\eta_i$ depend on both $\epsilon_2(k)$ and $\epsilon_1(k)$. We take advantage of the degenerate condition of the system to circumvent this difficulty. Since composite fermions at the same position in space have the same $k$, if $k_B T \ll (\epsilon'_2 - \epsilon'_1)$ each ${\cal W}_i$ should go from 0 to $\infty$ at different values of $k$, although around the same energy ($\mu$). In other words, the transitions of ${\cal W}_i$ from 0 to $\infty$ are decoupled. 
We have\ [*region*]{} ${\cal A}$ : ${\cal W}_1 \ll 1$ while ${\cal W}_2$ makes the transition $0 \rightarrow \infty$,\ [*region*]{} ${\cal B}$ : ${\cal W}_2 \gg 1$ while ${\cal W}_1$ makes the transition $0 \rightarrow \infty$.\ Therefore the following approximations are possible. Focusing initially on the highest band, in region ${\cal A}$ $$\begin{aligned} \eta_2(k) = \frac{1}{(2m+1){\cal W}_2 + 4m +1} \label{eta2A} \\ \nonumber \\ \frac{{\cal W}_2^{4m+1}}{({\cal W}_2+1)^{2m}} = e^{x_2(k) + 2m\beta E_{\delta}(k)} \ , \label{w2a}\end{aligned}$$ while $\eta_2=0$ in region ${\cal B}$. So, integration over region ${\cal A}$ is enough to give us $I_2$ and $\dot{U}_2$ for the highest edge state. For the lowest band the entire energy dispersion ought to be considered. In region ${\cal A}$ $$\begin{aligned} \eta_1^{{\cal A}}(k) = \frac{{\cal W}_2 + 1}{(2m+1){\cal W}_2 + 4m +1} \label{qwe} \end{aligned}$$ with ${\cal W}_2(k)$ given by (\[w2a\]). In region ${\cal B}$ $$\begin{aligned} \eta_1^{{\cal B}}(k) = \frac{1}{{\cal W}_1 + 2m +1} \label{poi} \\ \nonumber \\ \frac{{\cal W}_1^{2m+1}}{({\cal W}_1+1)^{2m}} = e^{x_1(k)} \ . \label{lkj}\end{aligned}$$ Therefore, for this band we have $I_1 = I_1^{{\cal A}} + I_1^{{\cal B}}$ and $\dot{U}_1 = \dot{U}_1^{{\cal A}} + \dot{U}_1^{{\cal B}}$. Consider the current due to the highest band. $$\begin{aligned} I_2 = q \ \int_0^{\infty} \ \frac{dk}{2\pi} v_2(k) \eta_2(k) \ , \label{2s6}\end{aligned}$$ with $\eta_2$ given by (\[eta2A\]). It is convenient to define the new variable $E_2(k) = \epsilon_2(k) + 2m E_{\delta}(k)$, in terms of which we write the velocity $$\begin{aligned} v_2(k) = \frac{1}{\hbar} \ \frac{d\epsilon_2}{dk} = \frac{1}{\hbar} \left( \frac{dE_2}{dk} - 2m\frac{dE_{\delta}}{dk} \right) \ . \label{2s9}\end{aligned}$$ Using $E_2$ as the variable of integration $I_2$ is rewritten as $$\begin{aligned} I_2 = \frac{q}{h}\ \int_{E_2^0}^{\infty} \left[ 1 - 2m\frac{dE_{\delta}}{dE_2} \right] \eta_2(E_2) dE_2 \ . 
\label{2s10}\end{aligned}$$ The lower limit of integration $E_2^0 = \epsilon_2^0 + 2mE_{\delta}^{bulk}$ is a constant, with $E_{\delta}^{bulk}$ representing the value of the gap in the bulk of the sample. Another transformation eliminates the explicit dependence on the energy. From Eq.(\[w2a\]) we obtain $E_2$ as a function of ${\cal W}_2$ $$\begin{aligned} \frac{dE_2}{d{\cal W}_2} = \frac{1}{\beta F}\ \frac{dF}{d{\cal W}_2} \ , \label{2s11}\end{aligned}$$ with $$\begin{aligned} F({\cal W}_2) \equiv \frac{{\cal W}_2^{4m+1}}{({\cal W}_2+1)^{2m}} \ . \label{2s12}\end{aligned}$$ The gap $E_{\delta}$ can also be written in terms of ${\cal W}_2$. For this purpose we return to the CF picture, which gives us $E_{\delta} = \hbar \omega_{eff} = \hbar (eB_{eff}/m^*c)$. The effective magnetic field depends on the local density of particles and so does the gap. Therefore $$\begin{aligned} E_{\delta}(k) &=& \frac{\hbar |q|}{m^*c} B \left[ 1 - 2m\nu \right] \\ &=& E_{\delta}^0 \left[ 1 - 2m (\eta_1(k) + \eta_2(k)) \right] \ , \label{2s13}\end{aligned}$$ where we have defined $E_{\delta}^0 = (\hbar |q|B/m^*c)$. In the limit ${\cal W}_1 \ll 1$, which characterizes region ${\cal A}$, $$\begin{aligned} \eta_1+\eta_2 \approx \frac{{\cal W}_2 +2}{(2m+1){\cal W}_2 + 4m+1} \ . \label{2s14}\end{aligned}$$ We are now able to write $I_2$ in a form that is independent of the details of the particle spectra. Substituting expressions (\[2s11\]) to (\[2s14\]) into Eq.(\[2s10\]) we obtain $$\begin{aligned} I_2 = \frac{q}{h\beta} \lim_{{\cal W}_2^0 \rightarrow 0} \int_{{\cal W}_2^0}^{\infty} & &\left[ \frac{1}{F}\frac{dF}{d{\cal W}_2} \right. \nonumber \\ & & + \left. a_m\frac{d(\eta_1+\eta_2)}{d{\cal W}_2} \right] \eta_2({\cal W}_2) d{\cal W}_2 \ , \label{2s15}\end{aligned}$$ where $a_m = 4m^2\beta E^0_{\delta}$. 
Separating this expression, the integration of the first part gives us $$\begin{aligned} \frac{q}{h\beta} \ \int_{{\cal W}_2^0}^{\infty} \ \frac{\eta_2}{F}\frac{dF}{d{\cal W}_2} d{\cal W}_2 & &= \nonumber \\ & &\frac{q}{h\beta} \left[\ln{({\cal W}_2^0+1)} - \ln{({\cal W}_2^0)} \right] \ , \label{2s16}\end{aligned}$$ whereas the integration of the second term produces a quantity independent of temperature and chemical potential when we make ${\cal W}_2^0=0$, which will therefore not contribute to the transport coefficients. The limit ${\cal W}_2^0 \rightarrow 0$ of expression (\[2s16\]) can be obtained from (\[w2a\]), which gives us $$\begin{aligned} {\cal W}_2^0 = e^{\frac{\beta(E_2^0 -\mu)}{4m+1}} ({\cal W}_2^0 +1)^{\frac{2m}{4m+1}} \ . \label{2s17}\end{aligned}$$ Substituting this result in Eq.(\[2s16\]), the limit ${\cal W}_2^0 \rightarrow 0$ of $I_2$ can be easily obtained. It depends only on the chemical potential and, therefore, the coefficients that describe the electrical current for the highest band are $$\begin{aligned} \frac{\partial I_2}{\partial \mu} = \frac{1}{4m+1}\frac{q}{h} \ \ \ \ \ \ \ , \ \ \ \ \ \ \ \frac{\partial I_2}{\partial T} = 0 \ . \label{2s18}\end{aligned}$$ The electrical current $I_1$ due to the lowest band is obtained by the same approach. We perform the integration dividing the whole domain into two regions (see Fig.\[sample\]). Along the first portion, designated by ${\cal A}$, ${\cal W}_1 \ll 1$ throughout the range whereas ${\cal W}_2$ increases from ${\cal W}_2 \ll 1$ to ${\cal W}_2 \gg 1$. In the second part, represented by ${\cal B}$, ${\cal W}_2$ remains very large while ${\cal W}_1$ makes the transition from ${\cal W}_1 \ll 1$ to ${\cal W}_1 \gg 1$. Along region ${\cal A}$ the occupation $\eta_1$ is given by (\[qwe\]) and the group velocity for this band is $$\begin{aligned} v_1(k) = \frac{1}{\hbar} \ \frac{d\epsilon_1}{dk} = \frac{1}{\hbar} \left( \frac{dE_2}{dk} - (2m+1)\frac{dE_{\delta}}{dk} \right) \ . 
\label{2s19}\end{aligned}$$ Using Eq.(\[2s19\]) along with the identities (\[2s11\]) to (\[2s14\]) we are able to write $$\begin{aligned} I_1^{\cal A} &=& \frac{q}{h}\ \int_{\cal A} \left[ 1 - (2m+1)\frac{dE_{\delta}}{dE_2} \right] \eta_1^{\cal A}(E_2) dE_2 \\ &=& \frac{q}{h\beta} \lim_{{\cal W}_2^0 \rightarrow 0} \int_{{\cal W}_2^0}^{{\cal W}'_2} \left[ \frac{1}{F}\frac{dF}{d{\cal W}_2} \right. \nonumber \\ & & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \left. b_m\frac{d(\eta_1+\eta_2)}{d{\cal W}_2} \right] \eta_1^{{\cal A}}({\cal W}_2) d{\cal W}_2 \ , \label{2s21}\end{aligned}$$ where $b_m = 2m(2m+1)\beta E^0_{\delta}$ and ${\cal W}'_2 = {\cal W}_2(\varepsilon'_2)$ is independent of the thermodynamical parameters for degenerate systems. Once again the second term in the integrand will produce a quantity that is independent of temperature and chemical potential, and therefore irrelevant for the transport coefficients. On the other hand, the first term is $$\begin{aligned} \frac{q}{h\beta} \ \int_{{\cal W}_2^0}^{{\cal W}'_2} \ \frac{\eta^{{\cal A}}_1}{F}\frac{dF}{d{\cal W}_2} d{\cal W}_2 = \frac{q}{h\beta} \left[\ln{({\cal W}'_2)} - \ln{({\cal W}_2^0)} \right] \ .\end{aligned}$$ Then, using (\[2s17\]), we get $$\begin{aligned} \frac{\partial I_1^{\cal A}}{\partial \mu} = \frac{1}{4m+1}\frac{q}{h} \ \ \ \ \ \ \ , \ \ \ \ \ \ \ \frac{\partial I_1^{\cal A}}{\partial T} = 0 \ . \label{2s22}\end{aligned}$$ As for region ${\cal B}$, its contribution to the current due to the lowest band is $$\begin{aligned} I_1^{{\cal B}} = \frac{q}{h\beta}\ \int_{{\cal W}'_1}^{\infty} \frac{\eta_1^{{\cal B}}}{H}\frac{dH}{d{\cal W}_1} d{\cal W}_1 \ , \label{2s24}\end{aligned}$$ with $$\begin{aligned} H({\cal W}_1) = \frac{ {\cal W}_1^{2m+1} }{ ({\cal W}_1+1)^{2m} } \ .\end{aligned}$$ The lower limit ${\cal W}'_1 = {\cal W}_1(\varepsilon'_1)$ corresponds to a continuation from ${\cal W}'_2$ (see Fig. \[sample\]). The differentiation of (\[2s24\]) with respect to the thermodynamical parameters vanishes. 
Therefore we sum up the results of this calculation as $$\begin{aligned} L_{11} &=& \frac{\partial (I_1 + I_2)}{\partial \mu} = \frac{2}{4m+1}\frac{q}{h} \\ L_{12} &=& 0 \ .\end{aligned}$$ The calculations leading to the coefficients $L_{21}$ and $L_{22}$ are not presented since they follow the same formalism. However, we quote the final results $$\begin{aligned} \delta I &=& \frac{2}{4m+1} \frac{q}{h} \delta \mu \label{e29} \\ \delta \dot{U} &=& \left\{ \frac{2}{4m+1} \frac{\mu}{h} + C_m \frac{E_{\delta}}{h} \right\} \delta \mu + 2\ \frac{\pi^2}{3}\frac{k_B^2 T}{h} \delta T \ , \label{e30}\end{aligned}$$ where $C_m$ is a non-universal coefficient that depends on $m$. Although $\mu$ differs from the electrochemical potential, $\delta \mu$ does not when the system is in a FQHE plateau, as evidenced by (\[e29\]). Despite the statistical coupling between the two CF quasi-Landau levels, once again the universal quantum of thermal conductance is obtained, this time multiplied by 2, which reflects the presence of the two modes of propagation. The two-terminal electrical conductance $G = 2/(4m+1)(e^2/h)$ is also obtained for this family of states, in agreement with experiment. Conclusions =========== In conclusion, we have presented a generalized theory of transport in 1D systems. We have shown that the ballistic thermal conductance of one-dimensional systems is statistics-independent and thus truly universal: $\kappa^{univ} = \frac{\pi^2}{3}(k_B^2 T/h)$. This result is valid in the degenerate regime for systems of particles obeying fractional exclusion statistics, whether they present a Fermi surface or are described by a Planck distribution. Electrical conductances for ballistic electrons in 1D quantum wires and in the FQH regime, although not universal (in the sense that they depend on statistics), also follow naturally from this theory. Acknowledgements ================ This work was supported by NSERC. C.R. Proetto, Solid State Commun. [**80**]{}, 909 (1991); G. D. 
Guttman, E. Ben-Jacob and D. J. Bergman, Phys. Rev. B [**53**]{}, 15856 (1996); A. Greiner, L. Reggiani, T. Kuhn, and L. Virani, Phys. Rev. Lett. [**78**]{}, 1114 (1997). For related theoretical and experimental work establishing the validity of the Wiedemann-Franz law in 1D electron systems see also G. V. Chester and A. Thellung, Proc. Phys. Soc. London [**77**]{}, 1005 (1961), C. Castellani, C. DiCastro, G. Kotliar and P. A. Lee, Phys. Rev. Lett. [**59**]{}, 477 (1987) and L.W. Molenkamp, Th. Gravier, H. van Houten, O.J.A. Buijk, M.A.A. Mabesoone, and C.T. Foxon, Phys. Rev. Lett. [**68**]{}, 3765 (1992). Luis G.C. Rego and G. Kirczenow, Phys. Rev. Lett. [**81**]{}, 232 (1998); D.E. Angelescu, M.C. Cross, M.L. Roukes, cond-mat/9801252; M. P. Blencowe, cond-mat/9803319. C.L. Kane and M.P.A. Fisher, Phys. Rev. Lett. [**76**]{}, 3192 (1996). R. Fazio, F.W. Hekking, and D.E. Khmelnitskii, Phys. Rev. Lett. [**80**]{}, 5611 (1998). J. Hone, M. Whitney, C. Piskoti, and A. Zettl, preprint. F.D.M. Haldane, Phys. Rev. Lett. [**67**]{}, 937 (1991). J.M. Leinaas and J. Myrheim, Nuovo Cimento [**37B**]{}, 1 (1977); F. Wilczek, Phys. Rev. Lett. [**49**]{}, 957 (1982). R.A.J. van Elburg and K. Schoutens, cond-mat/9801272. R. K. Bhaduri, M.V.N. Murthy and M.K. Srivastava, Phys. Rev. Lett. [**76**]{}, 165 (1996). M.V.N. Murthy and R. Shankar, Phys. Rev. Lett. [**73**]{}, 3331 (1994); Phys. Rev. Lett. [**72**]{}, 3629 (1994). Y-S. Wu, Phys. Rev. Lett. [**73**]{}, 922 (1994). C. Nayak and F. Wilczek, Phys. Rev. Lett. [**73**]{}, 2740 (1994). A.K. Rajagopal, Phys. Rev. Lett. [**74**]{}, 1048 (1995). W-H. Huang, Phys. Rev. B [**51**]{}, 3729 (1995); Phys. Rev. B [**53**]{}, 15842 (1996). K. Iguchi, Phys. Rev. Lett. [**78**]{}, 3233 (1997). R. Landauer, IBM J. Res. Dev. [**1**]{}, 223 (1957); R. Landauer, Phys. Lett. [**85A**]{}, 91 (1981). J.K. Jain, Phys. Rev. Lett. [**63**]{}, 199 (1989). Y.S. Wu, Y. Yu, Y. Hatsugai, and M. Kohmoto, Phys. Rev. B [**57**]{}, 9907 (1998). D. C. 
Tsui, H. L. Störmer and A. C. Gossard, Phys. Rev. Lett. [**48**]{}, 1559 (1982). Ana Lopez and Eduardo Fradkin, Phys. Rev. B [**44**]{}, 5246 (1991).
--- abstract: 'Knowledge of the electron density distribution in the solar corona puts constraints on the magnetic field configurations for coronal modeling and on initial conditions for solar wind modeling. We work with polarized SOHO/LASCO-C2 images from the two most recent minima of solar activity (1996–1997 and 2008–2010), devoid of coronal mass ejections. The goals are to derive the 4D electron density distributions in the corona by applying a newly developed time-dependent tomographic reconstruction method and to compare the results between the two solar minima and with two magnetohydrodynamic models. First, we confirm that the values of the density distribution in thermodynamic models are more realistic than in polytropic ones. The tomography provides more accurate distributions in the polar regions, and we find that the density in tomographic and thermodynamic solutions varies with the solar cycle in both polar and equatorial regions. Second, we find that the highest-density structures do not always correspond to the predicted large-scale heliospheric current sheet or its helmet streamer but can follow the locations of pseudo-streamers. We deduce that tomography offers reliable density distributions in the corona, reproducing the slow time evolution of coronal structures, without prior knowledge of the coronal magnetic field over a full rotation. Finally, we suggest that the highest-density structures show a differential rotation well above the surface depending on how they are magnetically connected to the surface. Such valuable information on the rotation of large-scale structures could help to connect the sources of the solar wind to their in situ counterparts in future missions such as *Solar Orbiter* and *Solar Probe Plus*.' 
author: - 'Judith de Patoul, Claire Foullon, and Pete Riley' bibliography: - 'article.bib' title: '3D electron density distributions in the solar corona during solar minima: assessment for more realistic solar wind modeling' --- Introduction {#sec_intro} ============ The distribution of the magnetic field generated in the solar interior and connected into the solar wind influences most coronal phenomena, including large-scale and slowly evolving coronal structures. The coronal density distribution can serve as a tracer of the configuration of the magnetic field (shape and general morphology rather than field strength), since the coronal plasma is frozen into the field. Of particular interest are the observations of streamers and pseudo-streamers, referring to structures associated with large magnetic loops that separate coronal holes of opposite polarity and twin loop arcades that separate coronal holes of the same polarity, respectively [@Wang2007ApJ_pseudostreamers]. Another example is the study of magnetic structures above the solar polar regions, where the measurements of the line-of-sight (LOS) magnetograms are generally less reliable owing to the larger viewing angle with the magnetic field. The observed electron density distributions in coronal holes and polar plumes [@Barbey2008SoPh; @dePatoul2013SoPh] could provide a better understanding of how the flux emergence near the equator affects the magnetic field configuration at the pole [@dePatoul2013AA]. Finally, an accurate determination of the ambient coronal electron density provides a better estimation of the mass and the propagation of coronal mass ejections (CMEs) [@Vourlidas2000; @Feng2015ApJa; @Feng2015ApJb]. In particular, the density is important for calculating the compression ratio of CME-driven shocks and the Alfvén Mach number, which has important implications for the localization of particle acceleration sites and hence space weather forecasts [@Bemporad2011; @Chen2014]. 
The first proposed empirical approach to obtain the electron density from remote sensing observations was an inversion method using measurements from eclipses in polarized white light, with the assumption that the coronal electron density is axisymmetric [@vandeHulst1950]. [@Saito1977] used this method to calculate electron densities from polarized brightness (pB) observations obtained by *Skylab* coronagraph data during the declining phase of the solar cycle from 1973 May to 1974 February. A good agreement of the density values was found using SOHO/LASCO-C2 data during 1998 February [@Hayes2001ApJ] and 1996 December [@QuemeraisAnA2002]. Empirical methods to obtain the full 3D density distribution are given by solar rotational tomography (SRT). SRT has been specifically developed for optically thin structures and uses LOS-integrated coronal images from multiple viewpoints taking advantage of solar rotation. White-light images of the K-corona, where the radiation is dominated by Thomson scattering, can be used to reconstruct density from 1.5 [${\mathrm R}_{\odot}$]{} up to 6.5 [${\mathrm R}_{\odot}$]{} using images from the LASCO-C2 or *STEREO*/COR1 [e.g., @Frazin2000; @Barbey2013SoPh; @Kramar2014SoPh; @Peillon2014]. When the sources for a tomographic inversion are EUV images, both density and temperature can be reconstructed by applying differential emission measure tomography [@Frazin2009ApJ]. However, even in the best cases, only reconstruction close to the surface from about 1.03 [${\mathrm R}_{\odot}$]{} to 1.25 [${\mathrm R}_{\odot}$]{} can be obtained. An alternative physics-based approach to obtain a quantitative 3D density distribution is given by magnetohydrodynamic (MHD) models, which provide the global configuration of the magnetic field and the plasma parameters (i.e., density, temperature, and velocity) in the corona [@RileyJGR2001; @RileyApJ2006; @Lionello2009]. 
Here we determine the electron density distribution in the corona during the two previous solar minima: 1996–1997 (solar cycle number 22/23) and 2008–2010 (solar cycle number 23/24). In section \[sec\_meth\], we determine the 4D electron density distribution ($N_{e}$) from a newly developed time-dependent tomographic method. We look at the general morphology of the density structures in the empirical model from tomography and compare with a simple potential field source surface (PFSS) model and more advanced MHD models. In section \[sec\_resu\], we contrast the density values found by tomography and the ones predicted by MHD models; especially, we discuss (1) the temporal and radial profiles of the density, (2) the location of the helmet streamer and pseudo-streamer, and (3) the presence of a differential rotation of the structures in the corona. Determination of the electron density distribution {#sec_meth} ================================================== $N_{e}$ from Tomography {#sec_meth_Tomo} ----------------------- Since 1996 the SOHO/LASCO-C2 coronagraph has continuously produced sets of white-light and polarized images of the solar corona with a field of view ranging from about 1.5 [${\mathrm R}_{\odot}$]{} up to 6.5[${\mathrm R}_{\odot}$]{} [@BruecknerSoPh1995]. To determine the electron density distribution ($N_e$) in the corona, we use the pB images that are extracted from the total brightness LASCO-C2 images pre-processed as described by [@llebaria_2006; @gardes2013; @LamyJGR2014]. The resulting pB images are dominated by the electron-scattered K corona, which is known to be strongly polarized [@Billings1966], and not contaminated by the dust-scattered F corona, which is essentially unpolarized at low heights and has been removed during the calibration. 
The intensity measured in pB images, $I_{\rm pB}$, observed from a view direction at a rotation angle, $\vartheta$, of the Sun relative to the observer’s longitude, is the integration of the electron density, $N_e$, along the LOS direction, $\vec{e}_{\rm LOS}(\vartheta)$, $$I_{\rm pB} (\rho,\vartheta) = \int_{\rm LOS} N_{e} \bigg(\vec{r} \big(l\ \vec{e}_{\rm LOS}(\vartheta)\big) \bigg) \ K \bigg( \vec{r} \big(l\ \vec{e}_{\rm LOS}(\vartheta)\big), \ \rho\ \vec{e}^{\perp}_{\rm LOS}(\vartheta) \bigg) \ {\rm d}l, \label{eq_IpB}$$ where $\vec{e}^{\perp}_{\rm LOS}$ is the vector unit orthogonal to $\vec{e}_{\rm LOS}$, $\rho$ is the distance from the Sun center to $\vec{e}_{\rm LOS}$ and $\vec{r}$ is the radial vector. The Thomson scattering function, $K$, is defined for a point source of luminosity, $4\pi L$, by [@Frazin2010]: $$K = \frac{3\sigma_{e}}{16}\frac{L}{\rho^2} \sin^4\Theta, \label{eq_ThomScatt}$$ where $\Theta$ is the scattering angle defined by $\sin\Theta = \frac{\rho}{\|\vec{r}(l\ \vec{e}_{\rm LOS}(\vartheta))\|}$ and $\sigma_e$ is the Thomson scattering cross section for a single electron. A typical example of a pB image is shown in Figure \[fig\_lasco\] (top panel), where a background subtraction has been applied to enhance the intensity along the radial direction. In this work, we consider coronal heights above 2.5 [${\mathrm R}_{\odot}$]{} to avoid artifacts due to diffraction surrounding the occulter. The pB image shown in Figure \[fig\_lasco\] (top panel) was taken when a CME occurred at a position of 271$^\circ$. This CME had an angular width of 114.1$^\circ$ and traversed the corona from 2.5 [${\mathrm R}_{\odot}$]{} to 7.5 [${\mathrm R}_{\odot}$]{} in approximately 2.5 hr [@Boursier2006]. Solar rotational tomographic methods cannot resolve fast temporal changes, and important artifacts are produced in the reconstructions. To minimize this effect, we remove the CMEs from the pB images. 
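As a toy illustration of the line-of-sight integral (\[eq\_IpB\]) with the Thomson kernel (\[eq\_ThomScatt\]), the sketch below (Python; the spherically symmetric power-law density, the quadrature grid, and the luminosity normalization are our illustrative assumptions, with arbitrary units) parameterizes the LOS by $l$, so that $r = \sqrt{\rho^2 + l^2}$ and $\sin\Theta = \rho/r$:

```python
import math

SIGMA_E = 6.652e-25  # Thomson cross section (cm^2); enters as a normalization

def thomson_kernel(r, rho, L=1.0):
    """K = (3 sigma_e / 16) (L / rho^2) sin^4(Theta), with sin(Theta) = rho/r."""
    sin_theta = rho / r
    return (3.0 * SIGMA_E / 16.0) * (L / rho**2) * sin_theta**4

def pb_intensity(rho, n_e, l_max=50.0, n_steps=4000):
    """Midpoint-rule quadrature of I_pB = int N_e(r) K(r, rho) dl along
    the LOS, for impact parameter rho (all lengths in solar radii)."""
    dl = 2.0 * l_max / n_steps
    total = 0.0
    for i in range(n_steps):
        l = -l_max + (i + 0.5) * dl
        r = math.hypot(rho, l)
        total += n_e(r) * thomson_kernel(r, rho) * dl
    return total
```

For the particular choice $N_e \propto r^{-2}$ the integral has the closed form $I_{\rm pB} \propto \rho^{-3}$, which gives a quick consistency check of the quadrature.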
[@Morgan2010CME] proposed a method for separating CMEs from the quiescent corona in white-light coronagraph images based on the fact that the large-scale structures are close to radial, whilst CMEs are highly nonradial. Here we consider CMEs listed in the CACTus [@RobbrechtAnA2004] and the ARTEMIS [@Boursier2006] catalogs that have an intensity larger than 0.8$\times 10^3$ W sr$^{-1}$ m$^{-2}$. Using the position angle and angular width from these catalogs, we simply exclude the angular portion of the pB image affected by the CME from the tomographic reconstruction procedure (Figure \[fig\_lasco\], bottom panel). The electron density is obtained by inverting equation (\[eq\_IpB\]) using SRT. We use the newly developed time-dependent tomographic method, which has been elaborated and described by [@Peillon2014; @Peillon2014Poster]. The method involves spatio-temporal regularization (Kalman filters) to mitigate the slow temporal variation of the corona and assumes a nearly solid-body rotation of the Sun with a period of 27.2753 days, corresponding to the Carrington rotation. It requires a continuous set of view directions uniformly distributed over half a rotation, with a minimum cadence of one pB image per day, i.e., a number of $n_I\geq$13 images for a given tomography reconstruction. The corona is divided into a spherical grid $(r, \phi, \theta; t)$ with a size of ($60\times60\times120\times n_I$), covering the heliocentric distances from 2.5 to 8.5 [${\mathrm R}_{\odot}$]{}. To assess the robustness and accuracy of the technique, the method has been tested using a set of 14 projected images of a time-dependent MHD volume as “observations”. The result could successfully reproduce the slow time-varying dynamics of the model. 
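The CME exclusion step described above can be sketched as a simple angular mask over position angles (Python; the 1$^{\circ}$ binning and the function name are our illustrative assumptions — the actual pipeline flags pixels of the pB images):

```python
def cme_mask(pos_angle_deg, width_deg, n_theta=360):
    """Return one keep/exclude flag per 1-degree position-angle bin:
    True = keep for the tomographic inversion, False = bin falls inside
    the catalog-reported CME angular span."""
    half = width_deg / 2.0
    keep = []
    for t in range(n_theta):
        # circular angular distance between bin t and the CME position angle
        d = abs((t - pos_angle_deg + 180.0) % 360.0 - 180.0)
        keep.append(d > half)
    return keep
```

For the CME of Figure \[fig\_lasco\] (position angle 271$^\circ$, angular width 114.1$^\circ$), this mask removes the 115 one-degree bins centered on 271$^\circ$ and keeps the rest of the image.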
The estimated density distribution, $\tilde{\bf{x}}$, is constructed on the grid cells by solving the following least-squares minimization problem: $$\tilde{\bf{x}} = {\arg\!\min}_{\bf{x}\geqslant 0} \Big\{ \left\| \bf{y}-\bf{A}\bf{x}\right\|^{2}_{2} + \lambda_S^{2} \left\| \bf{R}_S \bf{x} \right\|^{2}_{2} + \lambda_T^{2} \left\| \bf{R}_T \bf{x} \right\|^{2}_{2} + \lambda_C^{2} \left\| \bf{R}_C \bf{x} \right\|^{2}_{2} \Big\}. \label{eq_tomo}$$ The vector $\bf{y}$ contains the intensity measured in each pixel from the set of pB images over half a rotation, i.e., the $I_{\rm pB} (\rho_{ij},\vartheta)$ defined in Equation (\[eq\_IpB\]) with $\vartheta \in [0,2\pi]$ and $\rho_{ij}$ giving the position of the pixel in the image. The vector $\bf{x}$ contains $N_e$ values defined in the spherical grid $(r, \phi, \theta; t)$. $\bf{A}$ is a diagonal-like matrix composed of blocks of projection matrices that are determined by the geometry and the physics of the problem, i.e., the relation between the volume element in $\bf{x}$ and the LOS-related pixel in the pB image defined by Thomson scattering function (\[eq\_ThomScatt\]). The matrices $\bf{R_S}, \bf{R_T}$ and $\bf{R_C}$ in equation (\[eq\_tomo\]) are the spatio-temporal regularization terms, which introduce a prior knowledge of the solution. This regularization minimizes the effects of the noise, the limited number of pB images available, and the unavoidable temporal change in the corona. The spatial regularization matrix, $\bf{R_S}$, described in [@Frazin2007], is a second derivative of the angular spherical coordinates $\theta$ and $\phi$, multiplied by $r^{-1}$ to reduce the radial distance noise. The temporal regularization matrix, $\bf{R_T}$, is a first derivative to enforce smoothness between two successive views of the Sun. The co-rotating regularization matrix, $\bf{R_C}$, is acting jointly in the space-time domain. 
Its purpose is to prevent the reconstruction from concentrating material in the vicinity of the plane of the sky (containing the Sun’s center). This is a plane that rotates in the Carrington coordinate system, and it is always orthogonal to the observer’s LOS. The regularization parameters, $(\lambda_S,\lambda_T,\lambda_C)$, are estimated by minimizing the normalized root mean square error of the time-dependent 3D MHD model and its reconstruction ($\lambda_{S}=2.2\ 10^{-6}$, $\lambda_{T}=1.7\ 10^{-6}$ and $\lambda_{C}=0.2\ 10^{-6}$). Further details about the method and the construction of these regularization operators can be found in [@Peillon2014]; see in particular the discussion on the use of the temporal regularization, including examples of 3D and 4D tomographic reconstructions. A full 4D reconstruction is performed every 4 days, provided that a minimum of 13 pB images are available. During 1996–1997, several data gaps are present for which the tomography was not carried out. Panel (a) of Figure \[fig\_tomo\_predsci\_2077\] shows a typical result from tomography during a relatively quiet period of the solar activity when the number of CMEs is reduced. It was obtained using 14 pB images from 2008 November 21 to December 4, which is included in the Carrington rotation 2077. The left panel of Figure \[fig\_tomo\_predsci\_2077\] (a) shows the 2D longitude–latitude map at 3.5 [${\mathrm R}_{\odot}$]{} centered on 2008 December 2. The right panel shows the latitude–radial average map constructed by integrating over the longitudes (a radial contrast enhancement has been applied). It helps to represent the extent to which the helmet streamer spreads over the latitudes during this particular period. Panel (a) of Figure \[fig\_tomo\_predsci\_2097\] shows another result from tomography in the later phase of the extended solar minimum, when solar activity has started to increase. 
It was obtained using 15 pB images from 2010 June 6 to 20, during the Carrington rotation 2097. The latitudinal positions of the maximum of density evaluated for each longitude in the tomographic reconstruction are indicated by the white dots. Some voxels near the higher-density structure have a density value close to zero, for example, Figure \[fig\_tomo\_predsci\_2077\] (a), the region at longitude \[30$^{\circ}$, 34$^{\circ}$\] and latitude \[-29$^{\circ}$, -32$^{\circ}$\]. These zero-density artifacts are usually caused by the unavoidable rapid change in the corona. Indeed, the inverse problem can set a negative value to account for an unexplained variation of intensity in the data from a single viewpoint [@Barbey2013SoPh]. This could also be caused by remaining instrumental artifacts in a pB image. $N_{e}$ from MHD models {#sec_meth_MHDmod} ----------------------- The PFSS model is a simple and popular current-free model capable of reproducing the basic coronal magnetic field configuration. It requires only the synoptic maps of LOS photospheric magnetic field component as lower boundary, and it assumes that all field lines become radial at the upper boundary (the source surface) at about 2.5–3.5 [${\mathrm R}_{\odot}$]{}. The global magnetic field configuration predicted by the PFSS model can be used as a proxy of the density distribution in the corona. In particular, the neutral line at the source surface, which separates the large-scale opposite-polarity regimes of the coronal magnetic field, is often used to locate the heliospheric current sheet (HCS) and the helmet streamer. The PFSS/HCS calculated for a source surface at 2.5 [${\mathrm R}_{\odot}$]{} is displayed as the black line in Figures \[fig\_tomo\_predsci\_2077\] and \[fig\_tomo\_predsci\_2097\]. A more complex and elaborate way to predict the magnetic field configuration and the density distribution in the corona is to employ global MHD models. 
We use solutions from MHD models developed by the group at Predictive Science [@RileyJGR2001; @RileyApJ2006; @Lionello2009 see online, [www.predsci.com](www.predsci.com)]. For the lower boundary condition, the models use the radial component of the magnetic field provided by the observed LOS measurements of *SOHO*/MDI magnetograms and uniform characteristic values for the plasma density and temperature. The models also assume that the electron and proton densities are equal. In the polytropic MHD model, the energy equation is approximated by a simple adiabatic energy equation with a polytropic index $\gamma=1.05$. Since this approximation significantly simplifies the problem and reduces the time necessary to complete a simulation, its solutions can be obtained more routinely and are available between 1 [${\mathrm R}_{\odot}$]{} and 30 [${\mathrm R}_{\odot}$]{} for all the Carrington rotations under study. This model reproduces well the geometrical and topological properties of the magnetic field, such as the location and evolution of coronal holes, streamer structures, and the HCS; however, such an approximation does not predict the density and temperature very accurately [@RileyApJ2006]. In particular, [@Vasquez2008ApJ] compared a static tomographic reconstruction of the density with two polytropic MHD models (Stanford: [@Hayes2001ApJ]; and Michigan: [@Cohen2007ApJ]) during Carrington rotation 2029. They found that these polytropic MHD models could reproduce the density values only below 3.5 [${\mathrm R}_{\odot}$]{} and at low latitudes, while both models had problems reproducing the correct density in the polar regions. A more recent thermodynamic MHD model uses an improved equation for energy transport in the corona that includes parallel thermal conduction along the magnetic field lines, radiative losses, and parameterized coronal heating. 
This thermodynamic MHD model produces more accurate estimates of plasma density and temperature in the corona [@Lionello2009; @Riley2011SoPh]. The electron densities estimated by the polytropic MHD model (pMHD/$N_{e}$) for Carrington rotations 2077 and 2097 are shown in panels (b) of Figures \[fig\_tomo\_predsci\_2077\] and \[fig\_tomo\_predsci\_2097\], respectively. Panels (d) show the radial field calculated by the polytropic MHD model (pMHD/$B_{r}$) for the same Carrington rotations. [The density predicted by the thermodynamic MHD model (tMHD/$N_{e}$) is shown in panel (c) of Figure \[fig\_tomo\_predsci\_2077\] for Carrington rotation 2077.]{} In the left panel, we show the longitude–latitude Carrington map at 3.5 [${\mathrm R}_{\odot}$]{}; in the right panel, we show the latitude–radial map obtained by averaging over the longitudes. The latitudinal locations of the density maximum in pMHD/$N_{e}$ are shown as a green dashed line. The latitudes of the density maximum in tMHD/$N_{e}$ are nearly identical, since both models reproduce the general observed configuration of the magnetic field. It is important to note that the PFSS and the global MHD models require a series of magnetograms providing the nearest central meridian data on the photosphere and covering a full Carrington rotation (27.2753 days), while tomography requires observations of the coronal emission covering only half a rotation, since it relies on optically thin measurements. Moreover, the photospheric measurements beyond $75^{\circ}$ absolute latitude are not reliable owing to the larger viewing angle with the magnetic field. Therefore, errors in the estimation of the polar field strength at the surface can lead to discrepancies in the modeled magnetic field configuration of the corona. This is especially true during solar minimum, when the polar fields are the strongest.
Analysis and Comparison {#sec_resu}
=======================

The overall density structure from the MHD models reproduces the essential features of tomography. Nevertheless, we can see that the results obtained from tomography are more structured, in particular at the poles. The location of the density maximum in pMHD/$N_{e}$ [and tMHD/$N_{e}$]{} (green dashed line, Figures \[fig\_tomo\_predsci\_2077\] and \[fig\_tomo\_predsci\_2097\]) follows nearly exactly the HCS predicted by pMHD/$B_{r}$, which is expected since [the models MHD/$B_{r}$ and $N_{e}$ are not independent]{}. We observe a clear mismatch between the locations of the highest densities from tomography (white dots), the PFSS/HCS (black line), and the density maximum from the MHD solution (green dashed line). Previous works showed a limitation of the PFSS model in adequately reproducing some of the observed magnetic structures, in particular when large parts of the solar atmosphere are filled with nonpotential magnetic fields owing to the presence of active regions [@Wang2007ApJ_pseudostreamers; @Kramar2014SoPh]. Here we show that this is also the case for the HCS predicted by the MHD solutions. The density values found for pMHD/$N_{e}$ spread over a narrower range (6.3 10$^5$ – 1.3 10$^6$ cm$^{-3}$) and overestimate the tomography values (3.1 10$^3$ – 3.2 10$^5$ cm$^{-3}$) by a factor of about 4 for the maximum values and by a factor of up to 10$^2$ for the minimum values. Our comparison illustrates the extent to which the plasma parameters predicted by the polytropic MHD model are [less realistic than the thermodynamic values tMHD/$N_{e}$ (1.9 10$^4$ – 1.9 10$^5$ cm$^{-3}$)]{}.
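The quoted over-estimation factors follow directly from these density ranges; the following minimal check (values transcribed from the text above, not new data) reproduces them:

```python
# Density ranges quoted in the text, in cm^-3 (transcribed, not new data).
tomo_min, tomo_max = 3.1e3, 3.2e5    # tomography
pmhd_min, pmhd_max = 6.3e5, 1.3e6    # polytropic MHD

max_ratio = pmhd_max / tomo_max      # overestimation of the maximum values
min_ratio = pmhd_min / tomo_min      # overestimation of the minimum values
print(f"max: x{max_ratio:.1f}, min: x{min_ratio:.0f}")  # max ~4, min ~2x10^2
```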
[Typical histograms of the density distributions over the radial distances in Figure \[fig\_tomo\_predsci\_histo\] show that tomography provides a larger range of density values at every solar radius.]{}

Temporal evolution and radial profiles of the density
-----------------------------------------------------

To investigate the temporal evolution of the density during the two solar cycle minima, we first average over longitude all solutions obtained from tomography, pMHD/$N_e$ and the thermodynamic MHD solutions (tMHD/$N_e$), as was done for the right panels of (a) and (b) in Figures \[fig\_tomo\_predsci\_2077\] and \[fig\_tomo\_predsci\_2097\]. We evaluate the “maximum equatorial” electron density, $P^{\rm eq}_{N_e}(r,t)$, by taking the maximum density value over the latitudes at each radial distance. We evaluate the polar electron density, $P^{\rm pl}_{N_e}(r,t)$, by averaging the density values obtained above 65$^\circ$ and below -65$^\circ$ latitude at each radial distance. Figure \[fig\_profile\_temp\] shows the temporal evolution of these densities in the equatorial (red) and polar (blue) regions at a radial distance $r=3.5$ [${\mathrm R}_{\odot}$]{}. Since the thermodynamic MHD model is more complex and takes more time to compute, we have fewer data solutions. We note first that the temporal evolution of the density distribution from tomography shows a good agreement with the solar cycle; for reference we show the daily sunspot number (SN) and the yearly smoothed SN in the top panel of Figure \[fig\_profile\_temp\]. In particular, the density values at the equator are found to be lower during the 2008–2010 solar sunspot minimum ($N_{e}\sim$ 0.8 10$^{5}$ – 1.1 10$^{5}$ cm$^{-3}$) compared to the 1996–1998 minimum ($N_{e}\sim$ 1.5 10$^{5}$ – 2.0 10$^{5}$ cm$^{-3}$). The minimum in 2008–2009 had 818 days where no sunspot was recorded and a yearly smoothed SN $\ge 2.1$, while the minimum in 1996–1997 had only 309 spotless days, with a yearly smoothed SN $\ge 10.4$.
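The two metrics defined above can be sketched in a few lines. The latitude grid, radial grid, and density cube below are illustrative assumptions; only the maximum-over-latitude and the averaging over |latitude| > 65$^\circ$ follow the definitions in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
lat = np.linspace(-90, 90, 181)               # latitude grid in degrees (assumed)
n_r = 50                                      # number of radial bins (assumed)
density = rng.random((lat.size, n_r)) * 1e5   # synthetic longitude-averaged N_e

# "Maximum equatorial" density P_eq(r): maximum over latitude at each radius.
P_eq = density.max(axis=0)

# Polar density P_pl(r): average over |latitude| > 65 deg at each radius.
polar = np.abs(lat) > 65
P_pl = density[polar].mean(axis=0)
```

By construction the latitudinal maximum bounds the polar average from above, which is one reason the equatorial and polar curves are not directly comparable in amplitude.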
To assess our methodology, we also show the values found by [@Saito1977] at $r=3.5$ [${\mathrm R}_{\odot}$]{} (squares) at the equator (1.8 10$^{5}$ cm$^{-3}$) and in the polar regions (0.5 10$^{5}$ cm$^{-3}$). Saito’s densities were evaluated during a previous minimum (solar cycle number 20/21, with 272 spotless days and a yearly smoothed SN $\ge 16.9$); nevertheless, [@Hayes2001ApJ] and [@QuemeraisAnA2002] observe good agreement during the first minimum [for polar and equatorial regions]{}. At the equator, we consider the higher-density values, while these authors estimate average values of the density. The second minimum, in 2008–2010, shows a lower SN, which reveals how tomography can reproduce the variation of the density distributions that follows the solar cycle. [At the poles,]{} the density from tomography is about 40% that of Saito’s for both minima. [The density models from [@Saito1977], [@Hayes2001ApJ] and [@QuemeraisAnA2002] are evaluated under the axisymmetric assumption, which is less reliable than a tomographic inversion.]{} During the separation of the K component in the processing and the calibration of the pB images, an overestimation of the F corona and the stray light cannot be excluded, which results in underestimating the K component and thus the estimated density. [On the other hand, these models might also suffer from a misestimation of the background, resulting in incorrectly higher values]{}. In the future, a new calibration procedure as proposed by [@Morgan2015calib] could be used to refine these results. As already noted, pMHD/$N_{e}$ overestimates the density found in the tomographic reconstruction by an order of magnitude.
On the other hand, tMHD/$N_{e}$ provides more accurate values of the density, albeit overestimated at the equator (tomo/tMHD $\sim 52$%) and underestimated at the poles (tMHD/tomo $\ge 70$%). These differences could be linked to the way the equatorial and polar values are computed: recall that the equatorial values correspond to maximum values, while the polar values are averages. It would appear more difficult to obtain a true maximum of a local parcel of plasma with the tomography than it is with the MHD simulation. The lack of resolution at the poles could explain the lower densities in the tMHD model. No significant time evolution can be observed in pMHD/$N_{e}$, while the tMHD/$N_{e}$ values show time variations that follow the variations in the tomography estimates during the minima of the two solar cycles. This is more obvious for the equatorial regions and during the second, more extended solar minimum. Therefore, we conclude that the main variations found in the tomography results are realistic and can be physically interpreted by changes in sunspot activity. We next study the differences between the two solar minima and estimate radial profiles for the tomographic, pMHD/$N_e$ and tMHD/$N_e$ results. The [equatorial]{} radial profiles are obtained by averaging the electron density profiles as follows: $$\langle P^{\rm eq}_{N_e}(r,t) \rangle_{\rm 1996<t<1997} \rm{\ \ \ and \ \ \ } \langle P^{\rm eq}_{N_e}(r,t) \rangle_{\rm 2008<t<2010}.$$ Similarly, we estimate the [polar]{} radial profiles of the density: $$\langle P^{\rm pl}_{N_e}(r,t) \rangle_{\rm 1996<t<1997} \rm{\ \ \ and \ \ \ } \langle P^{\rm pl}_{N_e}(r,t) \rangle_{\rm 2008<t<2010}.$$ Figure \[fig\_profile\_rad\] shows those radial profiles of the density for the first minimum ($1996<t<1997$) as a dashed line and for the second minimum ($2008<t<2010$) as a solid line, at the equator (red) and at the poles (blue).
Error bars represent the variance of the density values in the tomographic reconstruction over the given time period. As a reference, we also show the radial profiles found by [@Saito1977]. The general radial profile trends are in reasonably good agreement. Tomography results show slightly more complex profiles, and important changes between the two solar minima are observed. First, at the equator the densities differ by 62% along the radial profile, showing that the variations between cycles found at 3.5 [${\mathrm R}_{\odot}$]{} are present at all radial distances. Second, at the poles the profiles cross at 3.5 [${\mathrm R}_{\odot}$]{}, showing opposite variations between cycles below and above this key radial distance, with larger densities in the outer corona during the second, deeper minimum. While the tMHD/$N_e$ profiles at the equator differ by a larger factor of 92% between the two minima, there is no significant change at the poles. The tMHD/$N_e$ profiles are more consistent with tomography up to 3.4 [${\mathrm R}_{\odot}$]{} and produce lower values at larger radial distances.

Location of the highest-density structures
------------------------------------------

During the 2008–2010 minimum, comparing the two latitude–radial maps in the declining phase of cycle 23 and the rising phase of cycle 24 (right panels (a) of Figures \[fig\_tomo\_predsci\_2077\] and \[fig\_tomo\_predsci\_2097\]) helps to show that the denser region, presumably above active regions, spreads more in latitude when solar activity increases. It is not obvious that the denser regions always correspond to the helmet streamer. We investigate how the locations in latitude of the density maximum and the HCS agree or differ with time during the 2008–2010 minimum. To do so, we estimate the position in latitude of the density maximum in all the tomographic reconstructions and in the pMHD/$N_e$ models for every Carrington rotation from 2065 to 2106.
The latitude of the HCS is extracted both in the PFSS model at the source surface of 2.5 [${\mathrm R}_{\odot}$]{} and in the pMHD/$B_r$ model (as the neutral line where $B_r \simeq 0$) at 1.5 and 3.5 [${\mathrm R}_{\odot}$]{}. [Panels (a)–(c) of Figure \[fig\_lat\_temp\] show the time evolution of the spread in latitude over all longitudes from the HCS predicted by pMHD/$B_r$ and the higher-density regions in tomography.]{} [Panels (d)–(g) are longitude–time maps that show the latitudinal locations of the density maximum from tomographic reconstructions, pMHD/$N_e$, and the HCS from pMHD/$B_r$ and PFSS.]{} [While panels (a) and (b) show that the spread of the HCS predicted by pMHD/$B_r$ becomes more confined with increasing radial distance from 1.5 [${\mathrm R}_{\odot}$]{} to 3.5 [${\mathrm R}_{\odot}$]{}, the longitude–time maps of pMHD/$B_r$ were found to be the same at 1.5 [${\mathrm R}_{\odot}$]{} and 3.5 [${\mathrm R}_{\odot}$]{} in panel (f). The latitudinal spread of the tomographic highest-density region in panel (c) follows well the predicted HCS spread in panel (b), notably with a widening of the latitude range at the end of 2009. This change coincides with the rise of the new solar cycle 24, when new sunspots appear at higher latitudes, which results in the streamer belt spanning higher absolute latitudes.]{} As expected, the results from pMHD/$N_e$ and pMHD/$B_r$ in panels (e) and (f) are nearly the same, which shows a good agreement between the location of the density maximum and the location of the current sheet predicted by the MHD solution. We see a reasonably good agreement between the PFSS/HCS in panel (g) and the current sheet predicted by pMHD/$B_r$, which is expected since both are based on the observed LOS measurements of the photospheric magnetic field and uniform characteristic values for the plasma density and temperature as lower boundaries.
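Extracting the HCS latitude as the neutral line where $B_r \simeq 0$ amounts to locating the sign change of $B_r$ along each latitudinal profile. A minimal sketch follows; the grid and the synthetic $B_r$ profile are illustrative assumptions, not the actual pMHD/$B_r$ output:

```python
import numpy as np

lat = np.linspace(-90, 90, 181)        # latitude grid in degrees (assumed)
# Synthetic Br(latitude) profile with a single polarity reversal at +10.5 deg.
Br = np.tanh(np.radians(lat - 10.5))

# Neutral line: latitudes where Br changes sign, refined by linear interpolation.
idx = np.where(np.diff(np.sign(Br)) != 0)[0]
hcs_lat = lat[idx] - Br[idx] * (lat[idx + 1] - lat[idx]) / (Br[idx + 1] - Br[idx])
print(hcs_lat)   # one crossing, at ~10.5 deg
```

In practice this would be applied longitude by longitude to build the longitude–time neutral-line maps.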
The tomographic highest-density structure generally follows the predicted HCS, as observed by [@Kramar2014SoPh], especially close to the minimum of solar activity, from 2008 to mid-2009. Here this can be observed thanks to longitudinal drifts with time of the highest-density structures. However, this is less clear during the rising phase of solar cycle 24, towards the end of 2009. To investigate this difference, we show latitude–radial planes in the extended minimum and rising phases of cycle 24. Figure \[fig\_tomo\_predsci\_2077\_LON\] shows latitude–radial planes at longitude 120$^\circ$ of the tomographic reconstruction (2008 November 21 to December 14) and of the pMHD/$N_e$ and pMHD/$B_r$ solutions during Carrington rotation 2077. In this period of extended minimum, the maximum density in tomography follows the current sheet predicted by pMHD/$N_e$ and pMHD/$B_r$. On the other hand, Figure \[fig\_tomo\_predsci\_2097\_LON\] shows two examples of planes taken during the rise of solar cycle 24, at longitudes 90$^\circ$ and 170$^\circ$ during Carrington rotation 2097, where we observe that the maximum density from tomography does not follow the HCS but more likely aligns with a pseudo-streamer. Therefore, a pseudo-streamer can be found to be denser than a helmet streamer at the same longitude. We conclude that the highest-density structures do not always correspond to the predicted large-scale HCS or its helmet streamer but can follow the locations of pseudo-streamers. Since both structures contribute to the denser regions near the equator, both play a role in the wider spread in latitude as the activity increases.

Longitudinal drifts of the highest-density structures
-----------------------------------------------------

Longitudinal drifts with time of [coronal structures at 4 [${\mathrm R}_{\odot}$]{} were first reported by [@Morgan2011ApJ_LongitudinalDrifts_a; @Morgan2011ApJ_LongitudinalDrifts_b].
The author measured the rotation rate of structures within specific latitudinal regions (as opposed to the maximum of density studied here) between -80$^\circ$ and 80$^\circ$ using a back-projection tomographic method. The rotation rates were found to vary considerably between latitudes, with values between -3$^\circ$ and 3$^\circ$day$^{-1}$ relative to the Carrington rotation rate. In Figure \[fig\_lat\_temp\] we observe a longitudinal drift at 3.5 [${\mathrm R}_{\odot}$]{} of the highest-density structures that]{} are toward higher longitudes in the extended minimum phase and toward lower longitudes in the rising phase. Knowing how the denser regions spread in latitude as the activity increases, we propose that the highest-density structures show a differential rotation well above the surface, depending on how they are magnetically connected to the surface. The tomographic reconstruction method and the MHD models use the approximation of solar Carrington rotation. The Carrington rotation period of 27.2753 days corresponds to the rotation observed near $\pm 30^\circ$ latitudes on the surface of the Sun [e.g., @Snodgrass1990ApJ; @Beck2000]. Thus, depending on the latitude of a structure on the surface, its rotation rate, $\omega$ in $^\circ$day$^{-1}$, is larger or smaller than the Carrington rotation rate, $\omega_{\rm CR}=13.20^\circ$day$^{-1}$: $$\omega = \omega_{\rm CR} + \alpha \label{eq_SolRotation}$$ where $\alpha$ is [positive]{} for the structures located between latitudes $-30^\circ$ and $+30^\circ$ (showing a faster rotation), [negative]{} for the structures above $|\pm 30^\circ|$ (showing a slower rotation), and zero for structures located near $-30^\circ$ or $+30^\circ$. During the extended minimum, the helmet streamer clustered near the equator. The structure rotated faster than $\omega_{\rm CR}$ and shifted toward larger Carrington longitudes, resulting in a positive longitudinal drift.
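The relation between the drift rate $\alpha$ and the full-rotation period can be checked numerically from the equation above. The sketch below uses the Carrington period of 27.2753 days quoted in the text; the two $\alpha$ values are the drift rates reported for the extended-minimum and rising phases:

```python
omega_CR = 360.0 / 27.2753            # Carrington rotation rate, ~13.20 deg/day

def full_rotation_days(alpha):
    """Days for a full rotation given a drift rate alpha in deg/day."""
    return 360.0 / (omega_CR + alpha)

print(full_rotation_days(+0.25))      # ~26.8 days: faster, extended minimum
print(full_rotation_days(-0.75))      # ~28.9 days: slower, rising phase
```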
From 2008 up to mid-2009, we find a faster rotation rate with $\alpha\simeq 0.25^\circ$day$^{-1}$, which means that the structure took only about 26.77 days to make a full rotation. On the other hand, during the rising phase of the solar cycle, the denser regions spread over latitudes above $|\pm 30^\circ|$ and were associated with a negative longitudinal drift. We find a slower rotation rate than $\omega_{\rm CR}$, with $\alpha\simeq -0.75^\circ$day$^{-1}$, corresponding to about 28.89 days for a full rotation. [The reversal in rotation rate coincides with the observed sudden extension in latitude of the structures associated with the rise of solar activity toward the end of 2009 (panel (c) of Figure \[fig\_lat\_temp\]).]{} This result shows that the effect of the differential rotation is still visible at 3.5 [${\mathrm R}_{\odot}$]{}, although the structure might not spread above $\pm 30^\circ$ at this radial distance. It also suggests that the rotation of high-density structures is determined by where they are magnetically connected to the surface of the Sun.

Conclusion {#sec_ccl}
==========

The 3D electron density distribution in the corona was determined for two solar minima, 1996–1997 (solar cycle number 22/23) and 2008–2010 (solar cycle number 23/24), with both an empirical model from a new time-dependent tomographic method and theoretical models from [both polytropic and thermodynamic]{} MHD solutions. [The density distribution is more structured in tomography than in the MHD solutions, in particular in the polar regions. In both MHD models the predicted density distribution is strongly related to the configuration of the calculated magnetic field, and the highest-density structures always follow the HCS. In tomography, by contrast, the highest-density structures do not always correspond to the predicted current sheet, but can sometimes align with the locations of pseudo-streamers.
]{} In tomographic reconstructions, the highest density at the equator and the average density at the poles follow the temporal evolution observed in the sunspot cycle. The maximum values in the thermodynamic MHD solutions, tMHD/$N_{e}$, along the HCS also show a solar cycle variation, while there is no temporal evolution in the polytropic MHD solutions, pMHD/$N_{e}$. This confirms that tMHD/$N_{e}$ provides more realistic values than pMHD/$N_{e}$ [@Lionello2009]. The equatorial values of both tomography and tMHD/$N_{e}$ are found to be lower during 2008–2010 compared to 1996–1998, in agreement with the differences in the solar sunspot minima. The tMHD/$N_{e}$ values overestimate the tomographic values found at the equator by 52%, while at the poles the values are consistent up to 3.4 [${\mathrm R}_{\odot}$]{} and then differ. At the poles the density from tomography is about 40% lower compared to [@Saito1977] for both minima. [In 2008–2010 the highest-density structures and the HCS predicted by the MHD models show a longitudinal drift, which confirms that the structures do not perfectly follow the Carrington rotation rate but have a differential rotation also visible well above the surface. Toward the end of 2009 a drastic change in the rotation rate is observed, corresponding to the rise of the solar cycle with the emergence of sunspots at higher latitudes and the spreading of the current sheet across latitudes. The results suggest that the rotation rate of streamers and pseudo-streamers depends on how the structures are magnetically connected to the surface.]{} [The following are possibilities for future investigation: (1) One could identify the specific rotation rates of latitudinal regions or single structures in the corona independently, as done in the study by [@Morgan2011ApJ_LongitudinalDrifts_a], and contrast the results with an extrapolated radial field model.
(2) One could improve]{} the tomographic method by including a model of the rotation in the reconstruction, as already done by [@dePatoul2013SoPh], who included the solar differential rotation modeled only at the surface. [(3) Accurate knowledge of the rotation rate of [streamers and pseudo-streamers]{}]{} from the surface to higher altitudes in the corona could help to better connect the sources of the solar wind to their in situ counterparts [e.g. @Foullon2011ApJ; @RileyLuhmann2012SoPh][, which can in turn]{} provide valuable insight for future investigations with *Solar Orbiter* [@Muller2013_SolarOrbiter] and *Solar Probe Plus* [@Vourlidas2015_ProbePlus]. In particular, *Solar Orbiter* will co-rotate with the Sun and provide images of the polar regions from heliographic latitudes above 35$^\circ$. [(4)]{} Ultimately, the time-dependent tomography can be extended to the EUV and X-ray ranges to reconstruct also the electron temperature [e.g. @Frazin2009ApJ; @Vasquez2009SoPh]. It can help to constrain the radial density gradients, base densities, and temperatures of global MHD simulations. Such extensions, combined with the MHD coronal modeling efforts, have the potential to increase the reliability of future space weather forecasting.

The authors would like to thank the anonymous reviewer for his/her valuable comments and suggestions to improve the quality of the paper. J.d.P. is the beneficiary of an AXA Research Fund postdoctoral grant. C.F. acknowledges financial support from the UK Science and Technology Facilities Council (STFC) under her Advanced Fellowship ST/I003649. The SOHO/LASCO data used here are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut for Solar System Research (Germany), Laboratoire d’Astronomie (France), and the University of Birmingham (UK). SOHO is a project of international cooperation between ESA and NASA.
Figure \[fig\_tomo\_predsci\_2077\]: (a) $N_e$ from tomography (2008 November 21 to December 4). [MHD solutions for Carrington rotation 2077: (b) polytropic pMHD/$N_e$, (c) thermodynamic tMHD/$N_e$ and (d) polytropic pMHD/$B_r$.]{} The left side shows the longitude–latitude map at 3.5 [${\mathrm R}_{\odot}$]{}. See text for an explanation of the plotted lines. The right panel shows the latitude–radial maps obtained by integrating over the longitudes in tomographic and MHD/$N_e$ solutions, and by a median over the longitudes in the pMHD/$B_r$ solution (radial contrast enhancement has been applied).

Figure \[fig\_tomo\_predsci\_2097\]: Same as Figure \[fig\_tomo\_predsci\_2077\]. (a) Tomographic solution from 2010 June 6 to 20. Polytropic MHD solution for Carrington rotation 2097 in (b) and (c).

Figure \[fig\_tomo\_predsci\_histo\]: [Histograms of the density distribution for the radial distances. (a) Tomographic result for 2008 November 21 to December 4, and MHD solutions for Carrington rotation 2077. (b) Tomography: 2010 June 24 to 2010 July 8, and MHD solutions for Carrington rotation 2098.]{}

Figure \[fig\_lat\_temp\]: panels (a)–(g); see text for a description of the panels.

Figure \[fig\_tomo\_predsci\_2077\_LON\]: Latitude–radial maps at 120$^\circ$ longitude. Top to bottom: tomographic result (2008 November 21 – December 4), pMHD/$N_e$, and pMHD/$B_r$ (Carrington rotation 2077).

Figure \[fig\_tomo\_predsci\_2097\_LON\]: Latitude–radial maps at 90$^\circ$ and 170$^\circ$ longitude. Top to bottom: tomographic result (2010 Jun 6–20), pMHD/$N_e$, and pMHD/$B_r$ (Carrington rotation 2097).
---
abstract: 'The X-ray SOI pixel sensor onboard the FORCE satellite will be placed in the low earth orbit and will consequently suffer from the radiation effects mainly caused by geomagnetically trapped cosmic-ray protons. Based on previous studies on the effects of radiation on SOI pixel sensors, the positive charges trapped in the oxide layer significantly affect the performance of the sensor. To improve the radiation hardness of the SOI pixel sensors, we introduced a double-SOI (D-SOI) structure containing an additional middle Si layer in the oxide layer. The negative potential applied on the middle Si layer compensates for the radiation effects due to the trapped positive charges. Although the radiation hardness of the D-SOI pixel sensors for applications in high-energy accelerators has been evaluated, the radiation effects in the D-SOI sensors for astronomical applications have not been evaluated thus far. To evaluate the radiation effects of the D-SOI sensor, we perform an irradiation experiment using a 6-MeV proton beam with a total dose of $\sim 5{\rm ~krad}$, corresponding to a few tens of years of in-orbit operation. This experiment indicates an improvement in the radiation hardness of the X-ray D-SOI devices. After an irradiation of 5 krad on the D-SOI device, the energy resolution in full-width half maximum for the 5.9-keV X-ray increases by $7\pm2\%$, and the chip output gain decreases by $0.35\pm0.09\%$. The physical mechanism of the gain degradation is also investigated; it is found that the gain degradation is caused by an increase in the parasitic capacitance due to the enlarged buried n-well.'
address:
- 'Department of Physics, School of Science and Technology, Tokyo University of Science, 2641 Yamazaki, Noda, Chiba 278-8510, Japan'
- 'Department of Physics, Faculty of Science, Kyoto University, Kitashirakawa-Oiwakecho, Sakyo-ku, Kyoto 606-8502, Japan'
- 'Department of Applied Physics, Faculty of Engineering, University of Miyazaki, 1-1 Gakuen-Kibanodai-Nishi, Miyazaki, Miyazaki 889-2192, Japan'
- 'Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan'
- 'Department of Advanced Accelerator Technologies, High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan'
- 'National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, 4-9-1 Anagawa, Inage-ku, Chiba-City, Chiba, 263-8555, Japan'
author:
- Kouichi Hagino
- Keigo Yarita
- Kousuke Negishi
- Kenji Oono
- Mitsuki Hayashida
- Masatoshi Kitajima
- Takayoshi Kohmura
- 'Takeshi G. Tsuru'
- Takaaki Tanaka
- Hiroyuki Uchida
- Kazuho Kayama
- Yuki Amano
- Ryota Kodama
- Ayaki Takeda
- Koji Mori
- Yusuke Nishioka
- Masataka Yukumoto
- Takahiro Hida
- Yasuo Arai
- Ikuo Kurachi
- Tsuyoshi Hamano
- Hisashi Kitamura
bibliography:
- 'ref.bib'
title: 'Radiation Damage Effects on Double-SOI Pixel Sensors for X-ray Astronomy'
---

Radiation damage, SOI pixel, X-ray, Imaging spectroscopy, Astronomy, TID

Introduction {#sec:intro}
============

We propose a future wide-band X-ray astronomical satellite, FORCE (Focusing On the Relativistic universe and Cosmic Evolution) [@Mori2016; @Nakazawa2018]. The FORCE mission aims to trace cosmic formation history by observing high-energy phenomena in the universe. For this purpose, it carries three pairs of an X-ray super-mirror and a focal-plane detector with a focal length of 10 m.
The mirror is composed of thin Si substrates with multi-layer coatings, thereby achieving a high angular resolution in hard X-rays with low mass [@Zhang2018]. The focal-plane detector, termed the wideband hybrid X-ray imager (WHXI), comprises a stack of Si sensors and CdTe sensors, utilizing the same concept as the hard X-ray imager onboard Hitomi [@Sato2016; @Nakazawa2018a; @Hagino2018]. A combination of the light-weight Si mirror and the hybrid detector provides a wide energy coverage from 1 to 80 keV and a high angular resolution of $<15''$. We have been developing X-ray pixel sensors termed “XRPIX” for the Si sensor of WHXI [@Tsuru2018]. XRPIX is a monolithic active pixel sensor composed of a Si sensor layer and a CMOS pixel circuit layer with a thin oxide layer (BOX: buried oxide) in between. The sensors were fabricated using silicon-on-insulator (SOI) technology, which enables using high- and low-resistivity Si wafers as the sensor and circuit layers, respectively. Owing to the high-resistivity Si wafer, XRPIX has a depletion layer with a thickness of a few hundred ${\rm \mu m}$. In the CMOS circuit layer, each pixel circuit has a self-trigger function. This enables a timing resolution of $\sim10{\rm ~\mu s}$. Radiation hardness is one of the major challenges when developing SOI pixel sensors. SOI pixel sensors are sensitive to the total ionizing dose (TID) effect [@Hara2019]. The TID effect is caused by the positive charges accumulated in the BOX layer. Under ionizing irradiation, electron-hole pairs are created in this BOX layer. As a fraction of these holes are trapped in the BOX, they form a positive oxide-trapped charge [@Schwank2008]. These accumulated charges affect the CMOS circuit layer and alter transistor characteristics such as the threshold voltage and trans-conductance [@Hara2019]. The double-SOI (D-SOI) structure was introduced to reduce the TID effect.
The D-SOI device has an additional thin middle Si layer in the BOX layer [@Miyoshi2013]. This middle Si layer compensates for the positive potential induced by the accumulated charges by applying a small negative voltage ($-2.5{\rm ~V}$ in this work). In addition to TID compensation, the D-SOI structure is useful in terms of spectral performance. In the single SOI (S-SOI) structure, which is the conventional SOI structure, there is a capacitive coupling between the sense node in the sensor layer and the circuit layer. As the middle Si layer is biased at a fixed voltage, it acts as an electrostatic shield and reduces this capacitive coupling [@Ohmura2016]. As a consequence, the chip output gain is increased and the readout noise is reduced in the D-SOI device [@Takeda2019]. Although the radiation hardness of D-SOI devices used in high-energy accelerators has been evaluated [@Honda2014; @Hara2015; @Hara2019], that of devices for astronomical applications has not been evaluated thus far. Therefore, we irradiated the D-SOI XRPIX with a 6 MeV proton beam and evaluated its radiation hardness from the perspective of astronomical applications. The proton irradiation experiment is described in Section \[sec:exp\] and its results are reported in Section \[sec:result\]. In Section \[sec:discussion\], we discuss the physical mechanism of gain degradation, and the conclusions of this study are summarized in Section \[sec:conclusion\]. Proton Irradiation Experiment {#sec:exp} ============================= ![Schematic picture of XRPIX6C[]{data-label="fig:xrpix6c"}](figures/xrpix6c.pdf){width="\hsize"} Radiation Hardness Required for FORCE ------------------------------------- In the orbit of the FORCE satellite (altitude of $\sim500{\rm~km}$ and orbital inclination of $\sim30^\circ$), the onboard sensors experience radiation damage mainly due to the cosmic-ray protons geomagnetically trapped in the South Atlantic Anomaly.
The dose rate of the trapped protons on the XRPIX is approximately $0.1{\rm ~krad/year}$ [@Yarita2018]. As X-ray astronomical satellites operate for a few to $\sim10$ years in orbit, the total dose during the mission lifetime is less than a few krad. Compared with high-energy accelerators, the typical dose level in astronomical satellites is significantly lower. However, in terms of device performance, more stringent requirements are imposed on astronomical satellites. XRPIX is required to have an energy resolution of $300{\rm ~eV}$ at $6{\rm ~keV}$ in full-width half maximum (FWHM) and a readout noise of $10{\rm ~e^{-}}$ in root mean square (rms) [@Tsuru2018]. Thus, we investigate the degradation of spectral and noise performances with an accuracy of $\sim10{\rm ~eV}$ (a few ${\rm e^{-}}$) under a radiation dose of a few krad. D-SOI device: XRPIX6C --------------------- We performed a proton irradiation experiment on the D-SOI device called “XRPIX6C” at the Heavy Ion Medical Accelerator in Chiba (HIMAC) of the National Institute of Radiological Sciences. A schematic of the cross-sectional structure of XRPIX6C is depicted in Fig. \[fig:xrpix6c\]. The sensor layer is composed of p-type Si with a resistivity of $4{\rm ~k\Omega~cm}$, corresponding to a doping concentration of $3\times 10^{12}{\rm ~cm^{-3}}$. As the thickness is $300{\rm ~\mu m}$, the back bias voltage should be higher than $\sim200{\rm ~V}$ for full depletion. The pixel size is $36{\rm ~\mu m}\times 36{\rm ~\mu m}$. The size of the imaging area is $1.728\times1.728{\rm ~mm^2}$ in this device and will be $15\times 45{\rm ~mm^2}$ in the flight model [@Tsuru2018]. The power consumption of this device is $\sim 1{\rm ~W}$, including the readout board. Each pixel has a sense node surrounded by a buried n-well (BNW), which was originally introduced as an electric shield [@Arai2011].
In the D-SOI device, as the middle Si layer acts as an electric shield, the size of the BNW is as small as $3{\rm ~\mu m}$, which is significantly smaller than that in the S-SOI device. Owing to the smaller BNW size in the D-SOI device, the parasitic capacitance of the sense node is significantly reduced, and the signal-to-noise ratio is improved [@Takeda2019]. XRPIX6C was operated during proton irradiation because the device will be operated in the space radiation environment. The sensor layer was fully depleted by applying a back bias voltage of $-250{\rm ~V}$, and the readout circuits were operated as usual. A negative voltage of $-2.5{\rm ~V}$ was applied to the middle Si layer. The device was placed in a vacuum chamber and cooled down to $\sim -70^\circ{\rm C}$, a temperature determined by the experimental setup of the cooler and the chamber. Although this temperature is lower than that in the space environment ($\sim -15^\circ{\rm C}$), the noise due to the leakage current is negligible even at $-15^\circ{\rm C}$. Experimental Setup ------------------ ![Schematic of the proton irradiation experiment. XRPIX is irradiated with a proton beam scattered to $45^\circ$.[]{data-label="fig:setup"}](figures/setup.pdf){width="\hsize"} ![image](figures/leak.pdf){width="0.48\hsize"} ![image](figures/noise.pdf){width="0.48\hsize"} The experimental setup is shown in Fig. \[fig:setup\]. The proton beam is scattered by a thin ($2.5{\rm ~\mu m}$) gold film. XRPIX is installed at a scattering angle of $45^\circ$ at a distance of $\sim400{\rm ~mm}$ from the scatterer. One of the main advantages of this configuration is the spatial uniformity of the beam at the location of XRPIX. The device size of $\sim 1{\rm ~mm}$ corresponds to an angle difference of $\Delta \theta=0.1^\circ\textrm{--}0.2^\circ$ as seen from the scatterer.
As the differential cross section of the Rutherford scattering is written as $d\sigma/d\Omega\propto \sin^{-4}\left(\theta/2\right)$, the non-uniformity of the beam flux due to this angle difference is only a few percent. In addition, by measuring the unscattered beam using a Faraday cup, the fluctuations in the beam intensity can be monitored during the irradiation. The energy of the incident proton beam is $6{\rm ~MeV}$ at the beam line. Although the spectrum of the geomagnetically-trapped protons in the FORCE orbit is a continuous spectrum peaked at $\sim 100{\rm ~MeV}$, the energy deposited in the Si sensor is dominated by $4\textrm{--}20{\rm ~MeV}$ protons [@Yarita2018]. Therefore, $6{\rm ~MeV}$ is a good approximation of the radiation environment in the orbit. In addition, since this energy is sufficient to penetrate the BOX layer of XRPIX6C, the vertical non-uniformity of the dose level in the BOX layer is negligible. The beam intensity at the location of XRPIX was measured using an avalanche photodiode; it was found to be $0.84\textrm{--}1.52\times 10^5{\rm ~protons/s/cm^2}$, corresponding to a dose rate of $0.26\textrm{--}0.47{\rm ~krad/hour}$, assuming a stopping power in SiO$_2$ of $53.85{\rm ~MeV~cm^2/g}$ from the NIST PSTAR database [@PSTAR]. The range of values denotes the day-by-day variation of the beam intensity. Since such a variation was measured with the Faraday cup during the irradiation, it was properly corrected for in the total dose estimation. XRPIX6C was intermittently irradiated with the proton beam up to a total dose level of $\simeq 5{\rm ~krad}$, and the device performance between the irradiations was evaluated.
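The beam-uniformity and dose-rate figures quoted above can be cross-checked with a short calculation. The sketch below is illustrative only: it takes the $45^\circ$ scattering angle, the $\Delta\theta\simeq0.2^\circ$ angular spread, the measured proton fluxes, and the PSTAR stopping power from the text, and uses the standard MeV-to-erg and erg/g-per-rad conversion factors.

```python
import math

# Non-uniformity of the Rutherford-scattered flux across the device:
# d(sigma)/d(Omega) ~ sin^-4(theta/2), evaluated at 45 deg and 45.2 deg.
def rutherford(theta_deg):
    return math.sin(math.radians(theta_deg) / 2.0) ** -4

non_uniformity = rutherford(45.0) / rutherford(45.2) - 1.0
print(f"flux non-uniformity ~ {100 * non_uniformity:.1f}%")  # at the few-percent level

# Dose rate in SiO2 from the measured proton flux and the PSTAR stopping power.
MEV_TO_ERG = 1.602e-6
ERG_PER_G_PER_RAD = 100.0
STOPPING_POWER = 53.85            # MeV cm^2 / g, 6 MeV protons in SiO2 (NIST PSTAR)

def dose_rate_krad_per_hour(flux):  # flux in protons / s / cm^2
    rad_per_s = flux * STOPPING_POWER * MEV_TO_ERG / ERG_PER_G_PER_RAD
    return rad_per_s * 3600.0 / 1000.0

print(dose_rate_krad_per_hour(0.84e5))  # ~0.26 krad/hour
print(dose_rate_krad_per_hour(1.52e5))  # ~0.47 krad/hour
```

Both ends of the measured flux range reproduce the quoted $0.26\textrm{--}0.47{\rm ~krad/hour}$ dose-rate interval.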
Performance Degradation of XRPIX6C {#sec:result} ================================== Leakage Current and Readout Noise --------------------------------- ![X-ray spectra of ${\rm Mn~K\alpha}$ and ${\rm Mn~K\beta}$ from $^{55}{\rm Fe}$ measured using the D-SOI device after proton irradiation.[]{data-label="fig:spec"}](figures/specs4.pdf){width="\hsize"} ![image](figures/gain.pdf){width="0.48\hsize"} ![image](figures/fwhm.pdf){width="0.48\hsize"} We evaluated the degradation in the leakage current by analyzing the pedestal values. In the pixel circuits of XRPIX, charges collected at the sense node are stored in sampling capacitors during a predefined integration time. As the leakage current also flows into the sampling capacitor, the pedestal values should be proportional to the leakage-current charge accumulated during the integration time. Thus, we measured the pedestal values as a function of the integration time and subsequently estimated the leakage current from its derivative. The left panel of Fig. \[fig:leak\_noise\] illustrates the degradation in the leakage current. Compared with the result of the S-SOI device, which is indicated in blue [@Yarita2018], the increase ratio of the leakage current of the D-SOI device is significantly smaller. Based on a linear fit, the leakage current of the D-SOI device increases by $9.9\pm4.0\%$ for an irradiation of $5{\rm ~krad}$. We note that the reason for the large initial leakage current of the D-SOI device remains unresolved. The readout noise was also evaluated based on the pedestal values. We fitted the histogram of the pedestal values for all pixels using a Gaussian function, and its standard deviation was used as a measure of the readout noise. The right panel of Fig. \[fig:leak\_noise\] depicts the degradation in the readout noise. Similar to the leakage current, the degradation in the readout noise of the D-SOI device is significantly reduced.
The best-fit line indicates that the readout noise increases by $1.8\pm 0.5\%$ for an irradiation of $5{\rm ~krad}$ in the D-SOI device. Moreover, the readout noise is improved owing to the electrical shielding by the middle Si layer as described in Sec. \[sec:intro\]; it remains comparable with the requirement of $\sim 10{\rm ~e^{-}}$ even after irradiation. X-ray Spectral Performance -------------------------- The spectral performance was evaluated by irradiating X-rays from the $^{55}$Fe radioisotope. As shown in Fig. \[fig:spec\], the spectral performance changes slightly after a few krad of dose. For a more quantitative evaluation, we fitted the ${\rm Mn~K\alpha}$ line at 5.9 keV using a Gaussian function. The chip output gain and the energy resolution were estimated based on the peak position and FWHM of the best-fit Gaussian function, respectively. In this paper, the chip output gain is given in units of $\rm \mu V/e^{-}$, i.e., the conversion coefficient from the charge signal collected in the Si sensor ($\rm e^{-}$) to the voltage signal at the chip output ($\rm \mu V$). Degradations in the chip output gain and energy resolution are depicted in Fig. \[fig:gain\_fwhm\]. Unlike the leakage current and readout noise, the gain and energy resolution do not exhibit a significant increase or decrease in the D-SOI device. At $5{\rm ~krad}$, the gain decreases by $0.35\pm0.09\%$, and the energy resolution degrades by $7.1\pm2.2\%$. Compared with the results of S-SOI, the gain of D-SOI degrades in the opposite direction. The gain of the D-SOI device decreases, whereas that of the S-SOI device increases. Although the energy resolution degrades in both devices, the magnitude of the increase differs markedly between the D-SOI and S-SOI devices. The energy resolution of the D-SOI device increases by $\sim 20{\rm ~eV}$, whereas that of the S-SOI device increases by $\sim 200{\rm ~eV}$.
This difference does not change even if we consider the energy dependence of the energy resolution (i.e., Fano noise), because the Fano noise is only 120 eV at 5.9 keV (D-SOI XRPIX6C) and 230 eV at 22.1 keV (S-SOI XRPIX2b). Therefore, even after an irradiation of $5{\rm ~krad}$, the energy resolution is $\simeq260{\rm ~eV}$, which satisfies the requirement of the FORCE satellite. Discussion on the Degradation Mechanism {#sec:discussion} ======================================= Device Simulation ----------------- The proton irradiation experiment of the D-SOI device revealed the degradation of spectral performance. In particular, the gain degradation corresponds to a shift of a few tens of eV in the energy scale for both the S-SOI and D-SOI devices. In addition, the gain of these devices degrades in opposite directions, which is an insightful clue to the degradation mechanism. Thus, in this section, we discuss the degradation mechanism of the chip output gain in the D-SOI device. To investigate the mechanism of gain degradation, we calculated the electric field structure and carrier distributions in XRPIX6C using the semiconductor device simulator HyDeLEOS, which is a part of the TCAD system HyENEXSS [@TCAD]. The implementation of the device structure was identical to that in @Hagino2019. The sense nodes, p-stops, BPWs, BNWs, and middle Si layers were implemented based on the parameters provided by LAPIS Semiconductor Co. Ltd. In the simulation, the TID effect was reproduced by setting positive fixed charges of $\sim 10^{11}{\rm ~cm^{-2}}$ in the BOX layer, based on the previous studies on SOI pixel devices [@Hara2019]. Based on the results of the device simulation, we found that the BNW size varies with the BOX charge $N_{\rm BOX}$. Fig. \[fig:bnw\] depicts the electron density map calculated via the TCAD simulation.
As shown in the figure, the effective size of the BNW, defined by an electron concentration of $10^{16}\textrm{--}10^{18}{\rm ~cm^{-3}}$, increases from $\simeq2.6{\rm ~\mu m}$ at $N_{\rm BOX}=0{\rm ~cm^{-2}}$ to $\simeq3.4{\rm ~\mu m}$ at $N_{\rm BOX}=2\times10^{11}{\rm ~cm^{-2}}$. This enlargement of the BNW is attributed to the positive potential due to the BOX charges. Such a potential attracts electrons towards the interface between the sensor and the BOX layers, thereby enhancing the electron density at the interface. Thus, the BOX charge generated via irradiation enlarges the BNW. We believe that this effect is one of the causes of gain degradation. Relation of the BNW Size to the Gain:\ consideration with another D-SOI device XRPIX6D ----------------------------------------------- ![Two-dimensional electron density map around the BNW with BOX charges of $0$, $1\times10^{11}$, and $2\times 10^{11}{\rm ~cm^{-2}}$.[]{data-label="fig:bnw"}](figures/bnw.pdf){width="\hsize"} ![X-ray spectra of the D-SOI device with different BNW sizes. The peak position depends on the size of the BNW.[]{data-label="fig:allteg"}](figures/allteg.pdf){width="\hsize"} To investigate the relation between BNW size and chip output gain, we analyzed the experimental data of another D-SOI device called “XRPIX6D” [@Hagino2019]. Test element groups with different BNW sizes were implemented in this device. The spectra obtained for the different BNW sizes are depicted in Fig. \[fig:allteg\]. The chip output gain decreases with increasing BNW size. This indicates that BNW enlargement due to the irradiation would probably result in gain degradation. As a more detailed physical mechanism of the gain degradation, we considered the effect of the sense node capacitance on the chip output gain.
The chip output gain of XRPIX is determined using the closed-loop gain of a charge-sensitive amplifier (CSA) $G_{\rm CSA}$, source follower circuit gain $G_{\rm SF}\simeq0.82$, and gain from sample-hold to the output buffer circuit $G_{\rm SH}\simeq0.8$ [@Takeda2019]. The sense node capacitance affects the CSA gain. Considering a CSA circuit with input capacitance $C_{\rm SN}$ (in this case, sense node capacitance) and a feedback capacitance $C_{\rm FB}$, the inverse of chip output gain $G$ can be written as $$\frac{1}{G}=\frac{1}{G_{\rm CSA}G_{\rm SF}G_{\rm SH}}=\frac{1}{AG_{\rm SF}G_{\rm SH}}C_{\rm SN}+\frac{A+1}{AG_{\rm SF}G_{\rm SH}}C_{\rm FB},$$ where $A$ is the open-loop gain of the CSA. If the sense node capacitance increases from $C_{\rm SN}$ to $C_{\rm SN}+\Delta C_{\rm SN}$, the inverse of the gain changes as $$\Delta \left(\frac{1}{G}\right)=\frac{1}{AG_{\rm SF}G_{\rm SH}}\Delta C_{\rm SN}.\label{eq:dGdC}$$ As the parasitic capacitance between the BNW and the middle Si layer is considered to be a major contributor to the sense node capacitance $C_{\rm SN}$, we estimated the parasitic capacitance using the parallel-plate capacitor formula. Based on the distance between the BNW and the middle Si layer $d_{\rm BNW-MS}=0.145{\rm ~\mu m}$, the open-loop gain $A=108$ [@Takeda2019], and a permittivity of SiO$_2$ $\varepsilon=3.5\times 10^{-13}{\rm~F/cm}$, Eq. \[eq:dGdC\] is expressed as $$\Delta \left(\frac{1}{G}\right)\simeq 3.4\times10^{-3}\times \left(\frac{\Delta S_{\rm BNW}}{1{\rm ~\mu m^2}}\right) {\rm ~fF},\label{eq:dGdS}$$ where $\Delta S_{\rm BNW}$ is the change in the areas of the BNW. Utilizing the data of XRPIX6D depicted in Fig. \[fig:allteg\], we verified the validity of Eq. \[eq:dGdS\]. In this data, the BNW size changes from $3{\rm ~\mu m}$ to $13{\rm ~\mu m}$, corresponding to $\Delta S_{\rm BNW}\simeq160{\rm ~\mu m^2}$. Thus, the change in the inverse of the gain is calculated to be $\Delta (1/G)\simeq 0.54{\rm ~fF}$, using Eq. \[eq:dGdS\]. 
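The numerical coefficient in Eq. \[eq:dGdS\] and the XRPIX6D estimate can be reproduced with the parallel-plate calculation described above. The sketch below uses only the quantities quoted in the text (the SiO$_2$ permittivity, the BNW to middle-Si distance, the open-loop gain $A$, and the circuit gains $G_{\rm SF}$ and $G_{\rm SH}$).

```python
# Parallel-plate estimate of how a BNW area change shifts the inverse gain:
# Delta(1/G) = Delta(C_SN) / (A * G_SF * G_SH), with C = eps * S / d.
eps_sio2 = 3.5e-13       # F/cm, permittivity of SiO2
d_bnw_ms = 0.145e-4      # cm, BNW to middle-Si distance (0.145 um)
A, g_sf, g_sh = 108.0, 0.82, 0.8

UM2_TO_CM2 = 1e-8
F_TO_FF = 1e15

# Coefficient of Eq. (dGdS): Delta(1/G) in fF per um^2 of BNW area change
coeff = eps_sio2 / d_bnw_ms * UM2_TO_CM2 * F_TO_FF / (A * g_sf * g_sh)
print(f"{coeff:.2e} fF/um^2")    # ~3.4e-3, as in Eq. (dGdS)

# XRPIX6D: BNW size growing from 3 um to 13 um gives Delta(S) ~ 160 um^2
print(f"{coeff * 160:.2f} fF")   # ~0.54 fF, close to the measured ~0.56 fF
```

The recovered coefficient ($\simeq3.4\times10^{-3}{\rm ~fF/\mu m^2}$) and the $\simeq0.54{\rm ~fF}$ prediction for XRPIX6D agree with the values quoted in the text.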
On the other hand, based on the experimental data of XRPIX6D, the change in the inverse of the gain is calculated to be $\Delta (1/G)\simeq 0.56 {\rm ~fF}$. Thus, the change in the gain corresponding to the change in the size of the BNW can be appropriately explained by the relation in Eq. \[eq:dGdS\]. Effect of the BNW Enlargement due to the Irradiation ---------------------------------------------------- Assuming the relation in Eq. \[eq:dGdS\], we calculated the change in the area of the BNW required to explain the gain degradation observed during the proton irradiation experiment. In the experiment, the gain degradation was found to be $0.35\%$ at an irradiation of $5{\rm ~krad}$, corresponding to $\Delta (1/G)\simeq 1.1\times 10^{-2}{\rm ~fF}$. According to Eq. \[eq:dGdS\], to explain this gain degradation, the BNW area should change by $\Delta S_{\rm BNW}\simeq 3.2{\rm ~\mu m^2}$. On the other hand, the simulation of the device indicates that the BNW area changes by $\simeq2\textrm{--}5{\rm ~\mu m^2}$ for the accumulated BOX charge of $1\textrm{--}2\times 10^{11}{\rm ~cm^{-2}}$. According to a previous study [@Hara2019], this value of the BOX charge is reasonable for an irradiation of $5{\rm ~krad}$. Moreover, the opposite direction of gain degradation in the S-SOI device XRPIX2b can also be explained by this scenario because XRPIX2b consists of an n-type substrate and p-type sense nodes surrounded by BPWs rather than BNW. In the n-type substrate, the electrons attracted to the sensor/BOX interface would shrink the BPW around the sense node, thereby increasing the gain. Thus, the gain degradation observed during the proton irradiation experiment can be quantitatively explained by the increase in the sense node capacitance owing to the enlargement of the BNW. The gain degradation due to BNW enlargement must also affect the readout noise. 
In previous studies, the readout noise in XRPIX was found to have a strong correlation with the gain [@Takeda2015; @Harada2018]. According to these studies, the readout noise $\sigma$ and the gain $G$ follow a power-law relation $\sigma\propto G^{-0.7}$. Thus, the gain degradation of $\Delta G/G\simeq 0.35\%$ observed in our experiment corresponds to a readout noise increase of $\Delta \sigma\simeq 0.3\%$. In addition, the increase of the shot noise due to the degradation of the leakage current also contributes to the increase in the readout noise. As the readout noise was evaluated using an integration time of $1{\rm ~ms}$, the contribution of the increase in the shot noise to the readout noise is estimated to be $\simeq1.3\pm0.5\%$. Thus, the degradation in the readout noise can be completely explained by a combination of the contributions of gain degradation and the increase in the leakage current. Although the degradation of the gain and readout noise is explained by the above scenario, the degradation mechanism of the energy resolution is not fully understood. One possibility is the charge loss, which was analyzed in detail in our previous study [@Hagino2019]. In the D-SOI devices, a part of the signal charge generated by the incident X-ray is probably lost at the Si/SiO$_2$ interface between the sensor layer and the BOX layer. The charge loss produces a tail structure in the X-ray spectra and degrades the energy resolution. Since the amount of the charge loss must be affected by the electric field and the carrier distribution in the sensor layer, the BOX charges generated via irradiation could also affect the energy resolution. However, in the current experimental data shown in Fig. \[fig:spec\], it is difficult to evaluate the change of the tail structure. Thus, in order to investigate the radiation effect from this aspect, it is necessary to study at a much higher dose level, where the degradation of the energy resolution would be more significant.
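The readout-noise budget discussed in this section can be checked numerically. The sketch below propagates the measured $0.35\%$ gain loss through the empirical $\sigma\propto G^{-0.7}$ relation and adds the estimated $\simeq1.3\%$ shot-noise contribution; the $1.8\pm0.5\%$ comparison value is the measured noise increase quoted earlier.

```python
# Propagate the 0.35% gain degradation through sigma ~ G^-0.7
gain_loss = 0.0035
noise_from_gain = (1.0 - gain_loss) ** -0.7 - 1.0
print(f"from gain: {100 * noise_from_gain:.2f}%")  # ~0.25%, i.e. the ~0.3% in the text

# Add the shot-noise contribution from the increased leakage current
shot_noise = 0.013                                 # ~1.3% for a 1 ms integration time
total = noise_from_gain + shot_noise
print(f"total: {100 * total:.2f}%")                # within the measured 1.8 +/- 0.5%
```

The combined estimate ($\simeq1.6\%$) indeed falls inside the $1.8\pm0.5\%$ measurement, supporting the decomposition described above.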
Conclusions {#sec:conclusion} =========== We evaluated the radiation hardness of the new XRPIX with a D-SOI structure, by irradiating a 6-MeV proton beam at HIMAC. We found that the degradation in the leakage current and readout noise was improved in the D-SOI device. Even after an irradiation of $\sim5{\rm ~krad}$, the energy resolution satisfies the requirement of the FORCE mission ($<300{\rm ~eV}$). Moreover, the gain degradation could be explained by the enlargement in the size of the BNW caused by the BOX charges ($1\textrm{--}2\times10^{11}{\rm ~cm^{-2}}$). The readout noise degradation was also found to be consistent with the effect of gain degradation and the increase in leakage current. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge the valuable advice and assistance provided by the personnel of LAPIS Semiconductor Co., Ltd. This study was supported by MEXT/JSPS KAKENHI Grant-in-Aid for Scientific Research on Innovative Areas 25109002 (Y.A.), 25109004 (T.G.T., T.T., K.M., A.T., and T.K.), Grant-in-Aid for Scientific Research (B) 25287042 (T.K.), Grant-in-Aid for Young Scientists (B) 15K17648 (A.T.), Grant-in-Aid for Challenging Exploratory Research 26610047 (T.G.T.), and Grant-in-Aid for Early-Career Scientists 19K14742 (A.T.). This study was also supported by the VLSI Design and Education Center (VDEC), the University of Tokyo in collaboration with Cadence Design Systems, Inc., Mentor Graphics, Inc., and Synopsys, Inc. References {#references .unnumbered} ==========
--- abstract: 'We measure equivalent widths (EWs) – focussing on two unique features (NaI and TiO2) of low-mass stars ($\la 0.3$M$_{\odot}$) – for luminous red galaxy spectra from the Sloan Digital Sky Survey (SDSS) and X-Shooter Lens Survey (XLENS) in order to study the low-mass end of the initial mass function (IMF). We compare these EWs to those derived from simple stellar population models computed with different IMFs, ages, \[$\alpha$/Fe\], and elemental abundances. We find that models are able to simultaneously reproduce the observed NaD $\lambda$5895 and $\lambda$8190 features for lower-mass ($\sim \sigma_{*}$) early-type galaxies (ETGs) but deviate increasingly for more massive ETGs, due to strongly mismatching NaD EWs. The TiO2 $\lambda$6230 and the $\lambda$8190 features together appear to be a powerful IMF diagnostic, with age and metallicity effects orthogonal to the effect of IMF. We find that both features correlate strongly with galaxy velocity dispersion. The XLENS ETG (SDSSJ0912+0029) and an SDSS ETG (SDSSJ0041-0914) appear to require both an extreme dwarf-rich IMF and a high sodium enhancement ($\mathrm{[Na/Fe]}=+0.4$). In addition, lensing constraints on the total mass of the XLENS system within its Einstein radius limit a bottom-heavy IMF with a power-law slope to $x\le 3.0$ at the $90\%$ C.L. We conclude that NaI and TiO features, in comparison with state-of-the-art SSP models, suggest a mildly steepening IMF from Salpeter ($dn/dm\,\propto\,m^{-x}$ with $x=2.35$) to $x \approx 3.0$ for ETGs in the range $\sigma = 200\textrm{--}335$[$\mathrm{km\,s^{-1}}$]{}.' author: - 'C. Spiniello, S. C. Trager, L. V. E. Koopmans, Y. P.
Chen' title: | Evidence for a mild steepening and Bottom-heavy IMF in Massive\ Galaxies from Sodium and Titanium-Oxide Indicators --- Introduction ============ When constraining the star formation, metallicity and gas/dust content of galaxies, the initial mass function (IMF) is often assumed to be universal and equal to that of the solar neighborhood (Kroupa 2001; Chabrier 2003; Bastian, Covey & Meyer 2010). However, evidence has recently emerged that the IMF might evolve (Davé 2008; van Dokkum 2008) or depend on the stellar mass of the system (e.g. Worthey 1992; Trager et al. 2000b; Graves et al. 2009; Treu et al. 2010; Auger et al. 2010b; Napolitano 2010; van Dokkum & Conroy 2010). van Dokkum & Conroy (2010; hereafter vDC10) suggested that low-mass stars ($\leq 0.3\,M_{\odot}$) could be more prevalent in massive early-type galaxies. The increase in the mass-to-light ratio (M/L) of galaxies with galaxy mass may thus be partly due to a changing IMF rather than an increasing dark matter fraction, consistent with previous suggestions (Treu et al. 2010, Auger et al. 2011, Barnabè et al. 2011, Dutton et al. 2012, Cappellari et al. 2012). vDC10 showed that some spectral features, such as the $\lambda\lambda8183,8195$ doublet (called NaI0.82 by CvD12), depend strongly on surface gravity at fixed effective temperature, betraying the presence of faint M dwarfs in integrated light spectra. If correct, the low-mass end of the IMF can be inferred directly from red/near-IR spectra of old populations. Hence, the strength of the doublet versus another sodium feature, such as the NaD doublet (called Na0.59 by CvD12), should provide a powerful means for separating the IMF from other effects. Specifically for the purpose of determining the low-mass IMF down to $\sim 0.1M_{\odot}$ for metal-rich stellar populations with ages of 3–13.5 Gyr, Conroy & van Dokkum (2012; hereafter CvD12) presented new population synthesis models. 
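To illustrate why the slope of the low-mass IMF matters for these dwarf-sensitive features, one can compute the fraction of stellar mass locked in stars below $0.3\,M_{\odot}$ for a single power-law IMF $dn/dm\propto m^{-x}$. The sketch below is illustrative only: the $0.1\textrm{--}100\,M_{\odot}$ integration limits and the single power-law form are simplifying assumptions, not part of the CvD12 models.

```python
import math

def low_mass_fraction(x, m_lo=0.1, m_hi=100.0, m_cut=0.3):
    """Mass fraction in stars below m_cut for dn/dm ~ m^-x on [m_lo, m_hi] Msun."""
    def mass_integral(a, b):
        # integral of m * m^-x dm, with the special logarithmic case x == 2
        if x == 2.0:
            return math.log(b / a)
        p = 2.0 - x
        return (b ** p - a ** p) / p
    return mass_integral(m_lo, m_cut) / mass_integral(m_lo, m_hi)

print(low_mass_fraction(2.35))  # Salpeter: ~35% of the mass below 0.3 Msun
print(low_mass_fraction(3.0))   # bottom-heavy x = 3.0: ~67%
```

Steepening the slope from the Salpeter value $x=2.35$ to $x=3.0$ thus roughly doubles the mass fraction in $<0.3\,M_{\odot}$ dwarfs, which is why gravity-sensitive features such as NaI0.82 respond so strongly to the IMF.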
The NaD feature responds more strongly to Na-enhancement than to IMF in the CvD12 models, while the doublet is strong in stars with mass $<0.3\,M_{\odot}$ and weak or absent in all other types of stars. Unfortunately, NaI0.82 is also sensitive to age and metallicity, and NaD is influenced by any intervening interstellar medium. It is therefore necessary to test these models over a range of age and metallicity indicators, as well as against other lines caused by low-mass stars. In this letter, we focus on the NaI feature as an indicator of low-mass stars. We use NaD as an indicator of a change in sodium abundance and H$\beta$ and \[MgFe\] as indicators of age and metallicity, respectively. This allows us to assess model degeneracies and deficiencies. We propose the use of the TiO feature at $\lambda$6230 as an indicator of the presence of low-mass stars. We find that both of these features (NaI and TiO) correlate with galaxy velocity dispersion, implying a steepening of the IMF slope in ETGs with $\sigma>\sigma_{*}$. We assume $H_{\rm 0}=70 \,\mathrm{km \, s^{-1}\,Mpc^{-1}}$, $\Omega_{\rm m}=0.3$ and $\Omega_{\Lambda}=0.7$ throughout this letter. ![Galaxy (continuous lines) and model (dashed lines) spectra in the regions of the NaD (top) and NaI (bottom) features. The observed NaD EWs do not match the models for the most massive ETGs ($\ge 300$[$\mathrm{km\,s^{-1}}$]{}). NaI absorption is stronger in the XLENS system and SDSSJ0041-0914 and appears in both cases to require an IMF steeper than Salpeter, while the stacked SDSS spectrum shows a weaker feature that matches a model with a Salpeter IMF. The bottom panels show the noise spectrum of the XLENS system.[]{data-label="fig:spectra"}](f1.pdf){height="12"} The data ======== As part of the [XLENS]{}[^1] project, we obtained a UVB-VIS X-shooter spectrum of the massive and luminous early-type SLACS (Sloan Lens ACS Survey, Bolton et al.
2006) lens galaxy SDSS J0912+0029 at $z=0.1642$, with high enough signal-to-noise to perform stellar population analyses. The lens galaxy shows a surprisingly deep NaI0.82 feature (Fig.\[fig:spectra\]), making it an extremely interesting target for studying the low-mass end of the IMF in ETGs. We measure the luminosity-weighted velocity dispersion of the lens galaxy from the reduced flux-calibrated 1D UVB–VIS spectrum using the Penalized Pixel Fitting (pPXF) code of Cappellari & Emsellem (2004). We obtain $\langle \sigma_{*}\rangle(\la R_{\rm eff})=325\pm10\pm12\,{\ensuremath{\mathrm{km\,s^{-1}}}}$, in agreement with the previously published value ($\sigma \simeq 313 \pm 12\,{\ensuremath{\mathrm{km\,s^{-1}}}}$; Bolton et al. 2006). We also used the spectra of $\sim$250 galaxies with similar morphology and colors (all LRGs) from the Sloan Digital Sky Survey DR8 (SDSS; Aihara et al. 2011), in five velocity-dispersion bins spread over 200–335[$\mathrm{km\,s^{-1}}$]{}($\sim$50 galaxies per bin). We examine one system, SDSSJ0041-0914, separately because it has a NaI0.82 feature comparably deep to the XLENS system. Stellar Population Synthesis Modeling ===================================== We use the synthetic spectra of CvD12 to analyze the stellar populations of these galaxies. The models make use of two separate empirical libraries, the MILES library covering 3500–7400Å (Sánchez-Blázquez et al. 2006) and the IRTF library of cool stars covering 8100–24000Å  (Cushing et al. 2005; Rayner et al. 2009). They also incorporate synthetic spectra with the purpose of investigating changes in the overall metallicity or changes in the abundances of individual elements and to cover the gap in wavelength between the two empirical libraries. We refer to CvD12 for details. The abundance variations of single elements are implemented at fixed \[Fe/H\], which implies that the total metallicity $Z$ varies from model to model. 
We measure line-strength indices in the range 4000–8400Å, including the standard Lick indices H$\beta$, Mg$b$, Fe5270, Fe5335, NaD and a TiO index (TiO2) using the definitions of Trager et al. (1998), and the commonly-used \[MgFe\] combination[^2]. We define a modified index around the doublet $8183,8195$Å, which seems to be strongly dependent on the low-mass end of the IMF (Table 1). This index is slightly different from that used by vDC10 and CvD12, having a wider central index bandpass and slightly wider pseudo-continua. Our definition is more stable against velocity dispersion variations and more suitable for massive ETGs. We convolve all the galaxy and model spectra to an effective velocity dispersion of $\sigma=335\,{\ensuremath{\mathrm{km\,s^{-1}}}}$ (the upper limit in our sample), to correct for kinematic broadening, before measuring indices. Indices in both the observed and synthetic spectra are measured with the same definitions and method (SPINDEX2; Trager et al. 2008). We do not place our indices on the zero-point system of the Lick indices and quote them as equivalent widths (EWs) in units of Å, except for TiO2, which is given in magnitudes.\

  Index   Central bandpass (Å)    Pseudo-continua (Å)
  ------- ----------------------- -----------------------
  NaI     8168.500 – 8234.125     8150.000 – 8168.400
                                  8235.250 – 8250.000

![Index-index plots of the main absorption features. Lines and crosses are different SSP models from CvD12 with increasingly bottom-heavy IMFs (Chabrier, Salpeter with a slope of $x=2.35$, a bottom-heavy IMF with a slope of $x=3.0$ and an extremely dwarf-rich IMF with a slope of $x=3.5$). Points colored according to their velocity dispersions are individual SDSS galaxies, with index errors similar to SDSS J0041-0914. In the plots showing sodium, the XLENS system SDSSJ0912+0029 requires a very steep IMF, violating lensing constraints on its total mass (see the text for further details). *Panel (a):* H$\beta$ as a function of \[MgFe\].
The most massive ETGs ($>300$[$\mathrm{km\,s^{-1}}$]{}) best match an old stellar population (13.5 Gyr) with super-solar total metallicity. Lower-mass systems are younger. *Panel (b):* NaD as a function of NaI. Only low-mass ($<250$[$\mathrm{km\,s^{-1}}$]{}) systems match the models. More massive ETGs require a higher \[Na/Fe\] and the XLENS system and SDSSJ0041-0914 also require a very steep IMF slope. *Panel (c):* TiO2 as a function of \[MgFe\]. The most massive ETGs require an IMF slope slightly steeper than Salpeter. A Chabrier-type IMF systematically underestimates the SDSS TiO2 EWs. *Panel (d):* TiO2 as a function of NaI. The ETGs match the models using a Salpeter or slightly steeper IMF, but the XLENS system and SDSSJ0041-0914 still do not match the SSP models well.[]{data-label="fig:indices"}](f2.pdf){height="11.2cm"} Results and discussion ====================== H$\beta$ is primarily an age indicator, while a combination of Mg$b$, Fe5270, and Fe5335 yields information on the mean metallicity \[Z/H\] of the population (Worthey 1994) while minimizing the effects of abundance ratio variations (e.g., Gonz[á]{}lez 1993; Trager et al. 2000a). These indices (Panel (a), Fig. \[fig:indices\]) show good agreement between the models and the galaxy EWs for old stellar populations, with an age of $13.5 \pm 3$ Gyr for $\sigma \ge 300$[$\mathrm{km\,s^{-1}}$]{} (black points) and younger ages for lower-mass ETGs. The statistical error is deduced directly from variations in H$\beta$ [^3]. The most massive ETGs have values of $[\alpha/\mathrm{Fe}]$ between solar and super-solar ($\sim$0.2), in good agreement with the prediction that massive galaxies have significantly super-solar abundance ratios because of rapid, high-efficiency star formation (Trager et al. 2000b; Thomas et al. 2005, Spolaor et al. 2009, 2010).
Given the uncertainties in the line-strengths of the two individual galaxies (SDSSJ0912+0029 and SDSSJ0041-0914), we are unable to determine their ages and metallicities precisely, but their line strengths are similar to the mean of the highest-mass SDSS sample, with a deviation from the average EW smaller than $1\sigma$ in both age and metallicity. The NaI and NaD indices can in principle be used to constrain the IMF slope (CvD12), and this relation is shown in Figure 2b. Although the data match the models for low-dispersion systems ($\la$250[$\mathrm{km\,s^{-1}}$]{}), the models with solar \[Na/Fe\] abundance do not match the NaD strengths, and only models with $\mathrm{[Na/Fe]}=+0.3$ to $+0.4$ dex match the NaD indices for higher-mass ETGs. We suggest two possible explanations for this behavior: (i) NaD is highly contaminated by the interstellar medium (ISM) for higher-mass ETGs; for example, dust lanes provide additional absorption in this resonance line (Sparks et al. 1997). Interstellar absorption within a galaxy may alter the stellar absorption profile and therefore the calculated EW, leading to an incorrect inference of the underlying stellar population. (ii) Very massive ETGs have higher \[Na/Fe\] abundances ($>0.3$ dex) *and* slightly bottom-heavy IMFs which correlate with their stellar velocity dispersions. Therefore, if we explain the strengths of these features in giant ETGs using abundance ratios, we require an average iron abundance in excess of solar (\[Fe/H\] $\sim 0.2$), an IMF with $x=3.0$ and a high sodium abundance (\[Na/H\] $> 0.3$). In the $\alpha-$enhanced bulge of the Galaxy, Fulbright et al. (2006) find an average \[Na/Fe\]=0.2 dex, and that \[Na/Fe\] $\le 0.3$ dex in all stars. However, Lecureur et al. (2007) find that \[Na/Fe\] ratios increase sharply with metallicity.
They obtain values of \[Na/Fe\] $\sim 0.5$ for \[Fe/H\] = 0 and even higher for \[Fe/H\] $>0$, but with a scatter of 0.29 dex resulting in a range of \[Na/Fe\] from $-0.1$ to almost $1.0$. It is therefore possible for massive ellipticals to have high \[Na/Fe\]. In both cases, the models seem consistent with a Salpeter IMF at the low-dispersion end and a slightly bottom-heavy IMF for the high-dispersion end, if these effects are accounted for, but the models predict a steeper IMF slope of $x\sim3.0-3.5$ for both the XLENS galaxy SDSSJ0912+0029 and SDSSJ0041-0914. We note that a TiO feature at 8199Å could partly contaminate NaI, although this feature should not vary strongly (CvD12). To test possible contamination, we use a model with \[Ti/Fe\]=$\pm 0.3$ and calculate the NaI EW for a Chabrier IMF. We find that Ti enhancement only affects the NaI index by 1%. Overall, we conclude that the NaD EWs and their trend with stellar mass remain unexplained for systems with $\sigma \ga 250$[$\mathrm{km\,s^{-1}}$]{}. We find that SSP models predict that TiO features, such as TiO2 (shown in Figures 2c and 2d), also depend strongly on the slope of the low-mass end of the IMF. This indicator gives more support to the conclusion that the sodium strengths of the XLENS ETG, SDSSJ0912+0029, still remain somewhat difficult to explain by current stellar population models, although most SDSS systems can be matched in NaI (if not in NaD). Together, the TiO2 and NaI indices imply a bottom-heavy IMF, steepening from Salpeter to possibly $x \approx 3$ for the most massive SDSS ETGs. As in Treu et al. (2010), a bottom-light IMF such as the Chabrier IMF is inappropriate for the most massive ETGs.
[lcccc]{} IMF slope $x$ & $M/L_{B}$ & $M/L_{V}$ & $f^{*}_{\mathrm{Ein},B}$ & $f^{*}_{\mathrm{Ein},V}$\ $2.35$ & $ 10.2 \pm 3$& $ 7.2 \pm 2$ & $0.75 \pm 0.2$& $0.59 \pm 0.18$\ $3.00$ & $ 22 \pm 6$ & $16 \pm 5$ & $1.6 \pm 0.5$& $ 1.4 \pm 0.4$\ $3.50$ & $ 43 \pm 13$ & $29 \pm 9$ & $2.4 \pm 0.8$& $ 2.4 \pm 0.7$ Limits on the IMF from Strong Lensing ------------------------------------- A strong case against an extreme bottom-heavy IMF can be made using the system with the strongest NaI EW (Fig. \[fig:indices\]), the XLENS galaxy SDSSJ0912+0029. This system provides a hard upper limit on the stellar mass inside its Einstein radius, no matter the IMF model. If we assume that the SSP models are correct and that this galaxy has a high \[Na/Fe\] abundance, we infer an IMF with a power-law slope $x=3$–$3.5$ (where the IMF follows $dn/dm \propto m^{-x}$, and the Salpeter slope is $x=2.35$). To assess whether these steep IMF slopes are consistent with the upper limit on the total mass, we calculate the total luminosity and the SSP stellar M/L ratio in stars for each assumed IMF to infer the stellar mass fraction inside the Einstein radius ($R_{\rm Ein}=4.55 \pm 0.23$kpc; Koopmans et al. 2006). Changes in the IMF of stars with $M \leq 0.3\,M_{\odot}$ change the total luminosity of the lens galaxy by at most $\sim 10\%.$ Conversely, stars with masses of $0.1$–$0.3\,{M}_{\odot}$ contribute $\ga 60\%$ of the stellar mass for bottom-heavy IMFs with slopes steeper than Salpeter (see, e.g., Fig. 2 of CvD12). To determine the stellar M/L ratio, we use the isochrones at solar \[Fe/H\] and \[$\alpha$/H\] for a 13.5 Gyr population from the Dartmouth Stellar Evolution Program (DSEP), a state-of-the-art stellar evolution code (Chaboyer, Green, & Liebert 1999; Chaboyer et al. 2001). We compare three different IMFs: Salpeter ($x=2.35$), a bottom-heavy IMF ($x=3.0$) and a very bottom-heavy IMF ($x=3.5$). CvD12 use the same isochrones in their SSP for the bulk of the main sequence and red giant branch, except at $M < 0.2\,M_{\odot}$, where they use the Baraffe et al. (1998) isochrones.
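The quoted contribution of low-mass stars can be checked with a back-of-the-envelope integral. The sketch below assumes a single unbroken power law $dn/dm \propto m^{-x}$ between an assumed $0.1$ and $100\,M_{\odot}$ (the actual SSP calculation uses the DSEP isochrones and includes remnants, so the numbers are only indicative):

```python
def low_mass_fraction(x, m1=0.1, m2=0.3, m_lo=0.1, m_hi=100.0):
    """Fraction of the stellar mass in [m1, m2] M_sun for an IMF dn/dm ∝ m^{-x}.

    The mass-weighted integrand is m * m^{-x} = m^{1-x}, integrated analytically;
    the cutoffs m_lo = 0.1 and m_hi = 100 M_sun are assumptions of this sketch.
    """
    from math import log
    def integral(a, b):
        if abs(x - 2.0) < 1e-12:
            return log(b / a)
        return (b ** (2 - x) - a ** (2 - x)) / (2 - x)
    return integral(m1, m2) / integral(m_lo, m_hi)

for x in (2.35, 3.0, 3.5):
    print(x, round(low_mass_fraction(x), 2))  # roughly 0.35, 0.67 and 0.81
```

Under these assumptions, stars of $0.1$–$0.3\,M_{\odot}$ contribute about a third of the mass for Salpeter but two thirds or more once the slope is steeper, consistent with the $\ga 60\%$ quoted above.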
For each IMF we compute the quantity $$f^{*}_{\mathrm{Ein}}=M^{*}/M_{\mathrm{Ein}} = (L_{\mathrm{Ein}}/M_{\mathrm{Ein}}) \times (M_{*}/L)_{\mathrm{DSEP}},$$ where $M_{\mathrm{Ein}}$ is a robust measurement of the total mass enclosed within the physical Einstein radius \[$M_{\mathrm{Ein}} = (39.6 \pm 0.8) \times 10^{10}\,M_{\odot} \,$\], $L_{\mathrm{Ein}}$ is the luminosity enclosed within the Einstein radius, evaluated using B-spline luminosity models, as a fraction of de Vaucouleurs total model luminosity \[$L_{\mathrm{Ein}} = (4.49 \pm 0.2) \times 10^{10}\,L_{\odot} \,$, from Bolton et al. 2008\], and $(M_{*}/L)_{\mathrm{DSEP}}$ is the mass-to-light ratio from the DSEP isochrone using the appropriate IMF. The stellar M/L ratio includes the contribution from stellar remnants and gas ejected from stars at the end of their life-cycles. We list the results of this calculation in Table \[tab:m2l\]. For a Salpeter IMF, the stellar mass fraction of SDSSJ0912+0029 in the restframe $V$-band is $f^{*}_{\mathrm{Ein,Salp}}=0.59 \pm 0.15$, in agreement with previous results ($0.60\pm0.09$, Auger et al. 2009). The mass-to-light ratio calculated from the DSEP isochrone for a Salpeter IMF is $M/L_V =7.2 \pm 2\,(M/L)_{\odot}$ in the $V$ band and $M/L_B =10.2 \pm 3\,(M/L)_{\odot}$ in the $B$ band. The latter value is consistent with the upper limit of $M/L_B\leq9.08\,(M/L)_{\odot}$ derived from dynamical models of Barnabè et al. (2009) under the maximum bulge hypothesis. An IMF slope of $x=3.5$ yields $M/L_{V} =29 \pm 9\,(M/L)_{\odot}$, and $M/L_{B} =43 \pm 13\,(M/L)_{\odot}$ corresponding to $f^{*}_{\mathrm{Ein,3.5}}=2.4\pm0.8$, inconsistent with the total lensing mass within the Einstein radius at the $>95$% confidence level. An IMF of $x=3.0$ in $B$-band is also excluded at the $>90$% level, as this corresponds to a fraction $f^{*}_{\mathrm{Ein,3.0},B}=1.6\pm0.5$.
For both of the bottom-heavy IMFs in $B$-band and for the $x=3.5$ IMF in $V$-band, we obtain a stellar mass fraction within the Einstein radius in excess of unity, thereby violating the lensing constraint on the total mass of the system at the $>90$% CL. The $x=3$ model is only marginally consistent in $V$-band, but $f^{*}_{\mathrm{Ein,3.0},V}=1.4\pm0.4$ implies that there is no dark matter within the Einstein radius. Systematic uncertainties ------------------------ The uncertainty on the value of $f^{*}_{\mathrm{Ein}}$ has a number of contributions. The uncertainties in the mass and luminosity determinations from lensing are much smaller than differences in the values of $M/L$ arising from the use of different stellar population evolution models. The emerging picture is that, for a fixed IMF, it is difficult to constrain $M/L$ estimates to much higher accuracy than 0.1 dex (Gallazzi et al. 2008; Marchesini et al. 2009; Longhetti & Saracco 2009, Conroy et al. 2009, 2010). We examine mass-to-light ratios predicted for different IMFs from different stellar population models in the rest-frame $V$- and $B$-bands and compare predictions from Worthey (1994), Bruzual & Charlot (2003), Maraston (2005), and Vazdekis et al. (2010) for single stellar populations with ages 11.2–14.1 Gyr, solar ($Z=0.02$) or super-solar metallicity ($Z=0.05$). For each SSP and each IMF, we calculate an average value and a standard deviation that we associate with the inferred values of $M/L$. Changing the \[Fe/H\] abundance from $0$ to $0.22$ yields a $\sim 9\%$ uncertainty on $M/L$, while changing the age of the stellar population changes $M/L$ by $\sim20\%$ at fixed IMF. The latter is the dominant contribution to the final uncertainties. We propagate these errors into the stellar mass fractions.
The errors on the stellar mass fractions in Table \[tab:m2l\] include both the random error contribution and the systematic uncertainties due to the use of different sets of isochrones, bands, and stellar population age and metallicity uncertainties. Conclusions =========== In this letter we have studied the $\lambda$8190 and TiO $\lambda$6230 features – both indicators of low-mass ($< 0.3\,M_{\odot}$) stars in massive ETGs – as a function of each other, of age and metallicity indicators (Mg$b$, Fe, H$\beta$), of NaD, and of stellar velocity dispersion. We find the following: (1) The observed NaI-NaD trend depends strongly on the stellar velocity dispersion of ETGs and matches current state-of-the-art SSP models only for ETGs with $\sigma \la 250$[$\mathrm{km\,s^{-1}}$]{}. The most extreme NaI index strength in our sample is found in a gravitational lens system, which should have an IMF slope $x \ga 3$ based on the best current SSP models. The total enclosed mass of this system, however, excludes slopes steeper than $x=3.0$ at the $>90\%$ CL or slopes steeper than $x=3.5$ at the $>95\%$ CL. We conclude that the NaD feature is still affected by processes that are not yet understood in the more massive ETGs ($\sigma>250$[$\mathrm{km\,s^{-1}}$]{}). A full spectral comparison, in combination with lensing and dynamical constraints, is planned to further strengthen these results and assess whether NaI and NaD (in some instances) are contaminated. (2) We find that the TiO feature at $\lambda\sim 6230$Å (TiO2) is a particularly promising feature to decouple the IMF from age, metallicity, and abundance pattern of the stellar population, especially when combined with metallicity-dependent indices. We find that this feature correlates well with NaI if the two most extreme cases, discussed in the text, are excluded.
This correlation can be a crucial piece of evidence against interstellar contamination of the $\lambda$8190 sodium absorption lines, although this does not solve the problem of NaD absorption. If strong NaI features are indeed not due to ISM contamination, very massive ETGs have higher \[Na/Fe\] abundances ($>0.3$ dex) *and* slightly bottom-heavy IMFs, correlated with their stellar velocity dispersions. We also find a clear trend of an increasing IMF slope between $\sigma = 200$ and 335[$\mathrm{km\,s^{-1}}$]{}, from Salpeter ($x=2.35$) to $x\approx 3.0$, in agreement with the XLENS system, which excludes steeper IMFs at the high-mass end. Our results are the first SSP-based indications of a steepening of the low-mass end of the IMF with increasing galaxy mass [*within*]{} the class of LRG/ETGs. Our results (i) support a similar trend first found by @2010ApJ...709.1195T, (ii) extend the evidence based on SSP models that the IMF steepens from spiral to early-type galaxies (vDC10), (iii) suggest that NaI and NaD (in some instances) could be contaminated by interstellar absorption, and (iv) support a similar trend found by @2012arXiv1202.3308C based on stellar kinematics. The upper limit of $x\la 3$, based on one of the most massive ETGs in our sample, a gravitational lens, also supports our previous similar finding [@2011MNRAS.417.3000S]. Acknowledgements {#acknowledgements .unnumbered} ================ The authors thank the referee for providing constructive comments. Data were reduced using EsoRex and the XSH pipeline developed by the ESO Data Flow System Group. C.S. acknowledges support from an Ubbo Emmius Fellowship. L.V.E.K. is supported in part by an NWO-VIDI program subsidy (project number 639.042.505). The authors thank C. Conroy and P. van Dokkum for kindly providing their stellar population models before publication and for providing very useful feedback on a draft manuscript that helped to improve it. The authors thank T. Treu and M.
den Brok for useful comments on the manuscript. [50]{} Aihara, H., Allende Prieto, C., An, D., et al. 2011, , 193, 29 Auger, M. W., Treu, T., Bolton, A. S., et al. 2009, , 705, 1099 Auger, M. W., Treu, T., Gavazzi, R., et al. 2010, , 721, L163 Auger, M. W., Treu, T., Bolton, A. S., et al. 2010, , 724, 511 Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 1998, , 337, 403 Barnab[è]{}, M., Czoske, O., Koopmans, L. V. E., et al. 2009, , 399, 21 Barnab[è]{}, M., Czoske, O., Koopmans, L. V. E., Treu, T., & Bolton, A. S. 2011, , 415, 2215 Bastian, N., Covey, K. R., & Meyer, M. R. 2010, , 48, 339 Bolton, A. S., Burles, S., Koopmans, L. V. E., Treu, T., & Moustakas, L. A. 2006, , 638, 703 Bolton, A. S., Burles, S., Koopmans, L. V. E., et al. 2008, , 682, 964 Bruzual, G., & Charlot, S. 2003, , 344, 1000 Cappellari, M., & Emsellem, E. 2004, , 116, 138 Cappellari, M., McDermid, R. M., Alatalo, K., et al. 2012, arXiv:1202.3308 Chaboyer, B., Green, E. M., & Liebert, J. 1999, , 117, 1360 Chaboyer, B., Fenton, W. H., Nelan, J. E., Patnaude, D. J., & Simon, F. E. 2001, , 562, 521 Chabrier, G. 2003, , 115, 763 Conroy, C., Gunn, J. E., & White, M. 2009, , 699, 486 Conroy, C., White, M., & Gunn, J. E. 2010, , 708, 58 Conroy, C., & van Dokkum, P. 2012, , 747, 69 Cushing, M. C., Rayner, J. T., & Vacca, W. D. 2005, , 623, 1115 Dav[é]{}, R. 2008, , 385, 147 Dutton, A. A., Mendel, J. T., & Simard, L. 2012, , L412 Fulbright, J. P., Rich, R. M., & McWilliam, A. 2006, Chemical Abundances and Mixing in Stars in the Milky Way and its Satellites, 93 Gallazzi, A., Brinchmann, J., Charlot, S., & White, S. D. M. 2008, , 383, 1439 Graves, G. J., Faber, S. M., & Schiavon, R. P. 2009, , 698, 1590 Gonz[á]{}lez, J. J. 1993, Ph.D. Thesis, Koopmans, L. V. E., Treu, T., Bolton, A. S., Burles, S., & Moustakas, L. A. 2006, , 649, 599 Kroupa, P. 2001, , 322, 231 Lecureur, A., Hill, V., Zoccali, M., et al. 2007, , 465, 799 Longhetti, M., & Saracco, P. 2009, , 394, 774 Maraston, C. 
2005, , 362, 799 Marchesini, D., van Dokkum, P. G., F[ö]{}rster Schreiber, N. M., et al. 2009, , 701, 1765 Napolitano, N. R., Romanowsky, A. J., & Tortora, C. 2010, , 405, 2351 Rayner, J. T., Cushing, M. C., & Vacca, W. D. 2009, , 185, 289 Sanchez-Blazquez, P., Peletier, R. F., Jimenez-Vicente, J., et al. 2007, VizieR Online Data Catalog, 837, 10703 Sparks, W. B., Carollo, C. M., & Macchetto, F. 1997, , 486, 253 Spiniello, C., Koopmans, L. V. E., Trager, S. C., Czoske, O., & Treu, T. 2011, , 417, 3000 Spolaor, M., Hau, G. K. T., Forbes, D. A., & Couch, W. J. 2010, , 408, 254 Spolaor, M., Proctor, R. N., Forbes, D. A., & Couch, W. J. 2009, , 691, L138 Thomas, D., Maraston, C., Bender, R., & Mendes de Oliveira, C. 2005, , 621, 673 Trager, S. C., Worthey, G., Faber, S. M., Burstein, D., & Gonzalez, J. J. 1998, , 116, 1 Trager, S. C., Faber, S. M., Worthey, G., & Gonz[á]{}lez, J. J. 2000, , 119, 1645 Trager, S. C., Faber, S. M., Worthey, G., & Gonz[á]{}lez, J. J. 2000, , 120, 165 Trager, S. C., Faber, S. M., & Dressler, A. 2008, , 386, 715 Treu, T., Auger, M. W., Koopmans, L. V. E., et al. 2010, , 709, 1195 van Dokkum, P. G. 2008, , 674, 29 van Dokkum, P. G., & Conroy, C. 2010, , 468, 940 Vazdekis, A., S[á]{}nchez-Bl[á]{}zquez, P., Falc[ó]{}n-Barroso, J., et al. 2010, , 404, 1639 Worthey, G. 1992, The Stellar Populations of Galaxies, 149, 507 Worthey, G. 1994, , 95, 107 [^1]: The X-Shooter Lens Survey, Spiniello et al. (2011) [^2]: $\mathrm{[MgFe]} = \sqrt{(\mathrm{Fe5270} + \mathrm{Fe5335})/2 \times \mathrm{Mg}b}$, González (1993) [^3]: For stellar populations with ages $>10$ Gyr, an uncertainty of 0.1 Å in H$\beta$ corresponds to 1 Gyr uncertainty in the age (cf. Worthey 1994).
--- abstract: 'This note deals with certain properties of convex functions. We provide results on the convexity of the set of minima of these functions, the behaviour of their subgradient set under restriction, and optimization of these functions over an affine subspace.' author: - 'Miel Sharf and Daniel Zelazo [^1]' bibliography: - 'Appendix.bib' title: On Certain Properties of Convex Functions --- Introduction ============ This paper deals with certain properties of convex functions, most of them well-known but generally unmentioned in the literature. This work employs subgradient calculus, calculating the subgradient set after restricting to an affine subspace, or optimizing on a “moving" affine subspace, and also deals with the collection of minima of a convex function. The Lemmas ========== This paper contains the proofs of the three following lemmas: \[Lemma1\] Let $f:\mathbb{R}^n \rightarrow \mathbb{R}$ be a convex function, and let $S:\mathbb{R}^n \rightarrow\mathbb{R}^d$ be some linear operator. Fix some $\zeta \in {\mathrm{Im}}(S)$ and define a map $g:\{y:\: Sy=\zeta\}\to \mathbb{R}$ by $g(x)=f(x)$. Then 1. $g$ is a convex function on the affine subspace $\{y:\: Sy=\zeta\}$, 2. the subdifferential of $g$ at $x$ is given by $$\partial g(x) = {\mathrm{Proj}}_{\ker(S)} (\partial f(x)),$$ where ${\mathrm{Proj}}_W$ is the orthogonal projection on the subspace $W$. \[Lemma2\] Let $f:\mathbb{R}^d \rightarrow \mathbb{R}$ be a convex function, and let $S:\mathbb{R}^n\rightarrow\mathbb{R}^d$ be some linear operator. Define a map $h:{\mathrm{Im}}(S^T)\to \mathbb{R}$ by $$h(x) = \min_{r: S^Tr=x} f(r),$$ assuming that the minimum is always achieved. Then 1. $h$ is a convex function, 2. if $f$ is strictly convex, then $h$ is strictly convex. \[Lemma3\] Let $C\subseteq \mathbb{R}^n$ be a convex set and let $f:C\rightarrow\mathbb{R}$ be convex. Suppose that $f$ achieves its minimum $m$ in $C$, and let $M=\{x:\: f(x)=m\}$ be the set of $f$’s minima. Then $M$ is convex.
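Before turning to the proofs, a quick numerical illustration of Lemma \[Lemma2\] may be helpful. For the strictly convex choice $f(r)=\|r\|^2$ (an example of ours, not from the lemma), the minimum over the fiber $\{r:\, S^Tr=x\}$ is attained at the least-norm solution $(S^T)^{+}x$, so $h$ can be evaluated in closed form and its convexity checked directly:

```python
import numpy as np

# Numerical illustration of Lemma 2 with f(r) = ||r||^2 (strictly convex):
# over the fiber {r : S^T r = x}, f is minimized by the least-norm solution
# pinv(S^T) x, hence h(x) = ||pinv(S^T) x||^2, a convex quadratic in x.
rng = np.random.default_rng(2)
n, d = 3, 5                          # S : R^n -> R^d, i.e. a d x n matrix
S = rng.standard_normal((d, n))
P = np.linalg.pinv(S.T)              # least-norm solver for S^T r = x

def h(x):
    return np.sum((P @ x) ** 2)

# S^T has full row rank here, so Im(S^T) = R^n and any x, y qualify
x, y = rng.standard_normal(n), rng.standard_normal(n)
for t in (0.25, 0.5, 0.75):
    lhs = h(t * x + (1 - t) * y)
    rhs = t * h(x) + (1 - t) * h(y)
    print(lhs <= rhs)                # the convexity inequality of Lemma 2
```

The check is of course only a sanity test for one choice of $f$; the lemma itself is proved below for arbitrary convex $f$.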
Proofs {#proofs .unnumbered} ====== We start by proving Lemma \[Lemma2\]: We denote $V = {\mathrm{Im}}(S^T)$ for simplicity. We take some $x,y\in V$ and $t\in [0,1]$. Our goal is to show that $$h(tx+(1-t)y) \le th(x) + (1-t)h(y).$$ Pick $r_x,r_y\in \mathbb{R}^d$ such that $h(x)=f(r_x)$ and $h(y)=f(r_y)$ (these exist by the assumption that the minimum is always achieved). Then on one hand, we have $S^T(tr_x+(1-t)r_y) = tx+(1-t)y$ by linearity, so $h(tx+(1-t)y) \le f(tr_x+(1-t)r_y)$. On the other hand, by convexity: $$\label{Lemma2ConvexityInequality} f(tr_x+(1-t)r_y) \le tf(r_x) + (1-t)f(r_y) = th(x) + (1-t)h(y)$$ so we obtain the desired inequality by chaining the two inequalities. As for strict convexity, we should note that if $x\neq y$ then $r_x\neq r_y$, so inequality (\[Lemma2ConvexityInequality\]) becomes strict. This completes the proof of the lemma. We now prove Lemma \[Lemma3\]: Let $x,y\in M$ and let $t\in[0,1]$. We need to show that $tx+(1-t)y\in M$. Indeed, because $f$ is convex, $$f(tx+(1-t)y) \le tf(x) + (1-t)f(y) = tm+(1-t)m = m$$ but on the other hand, $f(tx+(1-t)y)$ cannot be smaller than $m$, as $m$ is the minimum of $f$. Thus $f(tx+(1-t)y) = m$ and thus $tx+(1-t)y\in M$. Lastly, we prove Lemma \[Lemma1\], which is the “toughest” of the three: We denote $X=\{y:\: Sy=\zeta\}$, $V=\ker{S}$ and $U={\mathrm{Im}}(S^T)$. We know that ${\mathrm{Im}}(S^T)=\ker(S)^\perp$, so we can identify $\mathbb{R}^n$ as a direct sum of $U$ and $V$. Thus we get a function $F:U\times V\rightarrow\mathbb{R}$ defined by $F(u,v)=f(u+v)$. In [@Rockafeller], one shows that if $\chi:\mathbb{R}\rightarrow\mathbb{R}$ is convex, then $\partial\chi(t) = [\chi^{\prime}_-(t),\chi^{\prime}_+(t)]$, where $\chi^{\prime}_{\pm}$ are the one-sided derivatives.
Furthermore, we know (again by [@Rockafeller]) that if $\rho:\mathbb{R}^n\rightarrow\mathbb{R}$ is convex, and we fix $v_0,v_1\in \mathbb{R}^n$ and define $\chi(t)= \rho(v_0+tv_1)$, then $\chi$’s one-sided derivatives are given via: $$\begin{aligned} \chi^{\prime}_+(t) &= \max_{\alpha\in\partial\rho(v_0+tv_1)} \alpha^Tv_1 \\ \nonumber \chi^{\prime}_-(t) &= \min_{\alpha\in\partial\rho(v_0+tv_1)} \alpha^Tv_1 \nonumber\end{aligned}$$ These facts, together with the fact that $\partial\rho(v_0+tv_1)$ is convex, imply that $\partial\chi(t) = v_1^T\cdot\partial\rho(v_0+tv_1)$. Now, we can finally begin our proof. By assumption, there is some $y\in \mathbb{R}^n$ such that $Sy=\zeta$. We can decompose $y$ as $u_0+v_0$ for some $u_0\in {\mathrm{Im}}(S^T)$ and $v_0\in\ker(S)$. The set $X$ is equal to $u_0+V$. Thus, the map $g$ can be described as $g(v)=F(u_0,v)$. Take some $v_0,v_1\in V$. Restricting $g$ to the line $\{v_0+tv_1: t\in\mathbb{R}\}$ is identical to restricting $f$ to the line $\{(u_0,v_0+tv_1): t\in\mathbb{R}\}$. Thus they yield the same subdifferential sets at $t=0$. By the above, we get that: $$v_1^T \cdot\partial g(v_0) = v_1^T \cdot \partial F(u_0,v_0) = v_1^T \cdot {\mathrm{Proj}}_V(\partial F(u_0,v_0))$$ meaning that the sets $\partial g(v_0)$ and ${\mathrm{Proj}}_V(\partial F(u_0,v_0))$ look the same when hit by a linear functional on $V$. However, both of these sets are convex and closed (see [@Rockafeller]). Thus, the separating hyperplane theorem (see [@Conway]) implies that they are equal. Reading off what $V$ and $F$ are, we get that for any $x\in X$, $$\partial g(x) = {\mathrm{Proj}}_{\ker(S)} (\partial f(x))$$ which completes the proof. [^1]: M. Sharf and D. Zelazo are with the Faculty of Aerospace Engineering, Israel Institute of Technology, Haifa, Israel. [msharf@tx.technion.ac.il, dzelazo@technion.ac.il]{}
--- author: - 'Harbir Antil[^1]' - 'Zichao (Wendy) Di[^2]' - Ratna Khatri title: 'Bilevel Optimization, Deep Learning and Fractional Laplacian Regularization with Applications in Tomography[^3]' --- [^1]: Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, USA. [^2]: Mathematics and Computer Science Division, Argonne National Laboratory, IL, USA. [^3]: Submitted to the editors DATE.
--- abstract: 'In this paper a family of fixed point algorithms, generalizing the [Petviashvili ]{}method, is considered. A previous work studied the convergence of the methods. Presented here is a second part of the analysis, concerning the introduction of some acceleration techniques into the iterative procedures. The purpose of the research is two-fold: one is improving the performance of the methods in case of convergence and the second one is widening their application when generating traveling waves in nonlinear dispersive wave equations, transforming some divergent cases into convergent ones. Two families of acceleration techniques are considered: the vector extrapolation methods and the Anderson acceleration methods. A comparative study through several numerical experiments is carried out.' address: - 'Department of Applied Mathematics, University of Valladolid, Paseo del Cauce 59, 47011, Valladolid, Spain.' - ' IMUVA, Institute of Mathematics of University of Valladolid; Spain. Email: joralv@eii.uva.es' - 'Department of Applied Mathematics, University of Valladolid, Paseo de Belen 15, 47011-Valladolid, Spain.' - ' IMUVA, Institute of Mathematics of University of Valladolid; Spain. Email: angel@mac.uva.es ' author: - 'J. Álvarez' - 'A. Durán' title: 'On Petviashvili type methods for traveling wave computations: Acceleration techniques' --- Petviashvili type methods, traveling wave generation, iterative methods for nonlinear systems, orbital convergence, acceleration techniques, vector extrapolation methods, Anderson acceleration MSC2010: 65H10, 65M99, 35C99, 35C07, 76B25 Introduction {#sec1} ============ In a previous paper [@alvarezd], a family of fixed-point algorithms for the numerical approximation of nonlinear systems of the form $$\begin{aligned} Lu=N(u),\quad u\in \mathbb{R}^{m}, \quad m>1,\label{mm1}\end{aligned}$$ was introduced.
In (\[mm1\]), $L$ is a nonsingular $m\times m$ real matrix and $N:\mathbb{R}^{m}\rightarrow \mathbb{R}^{m}$ is a homogeneous function of degree $p$, $|p|>1$ (this means that $N(\lambda u)=\lambda^{p}N(u)$). Among other applications, systems of this form are very typical in the numerical generation of traveling waves and ground states in water wave problems and nonlinear optics. For the numerical approximation to solutions of (\[mm1\]), the use of the classical fixed-point algorithm is not suitable. This is due to the fact that if $u^{*}$ is a solution and $S=L^{-1}N^{\prime}(u^{*})$ stands for the iteration matrix at $u^{*}$, then, since $N$ is homogeneous of degree $p$, Euler's identity gives $N^{\prime}(u^{*})u^{*}=pN(u^{*})$, and therefore $$\begin{aligned} Su^{*}=L^{-1}N^{\prime}(u^{*})u^{*}=pL^{-1}N(u^{*})=pu^{*},\end{aligned}$$ that is, $p$ is an eigenvalue of $S$ with magnitude above one. This makes the iteration not convergent in general. As an alternative and based on the [Petviashvili ]{}method, [@petviashvili; @pelinovskys; @lakobay; @lakobay2], the following fixed-point algorithms were considered in [@alvarezd]: $$\begin{aligned} Lu_{n+1}=s(u_{n})N(u_{n}), \quad n=0,1,\ldots,\label{mm2}\end{aligned}$$ from $u_{0}\neq 0$ and where $s:\mathbb{R}^{m}\rightarrow \mathbb{R}$ is a $C^{1}$ function satisfying the following properties: - The set of fixed points of the iteration operator $$\begin{aligned} F(x)=s(x)L^{-1}N(x),\label{iterop}\end{aligned}$$ coincides with the set of solutions of (\[mm1\]). This means that: (a) if $u^{*}$ is a solution of (\[mm1\]) then $s(u^{*})=1$; (b) conversely, if the sequence $\{u_{n}\}_{n}$, generated by (\[mm2\]), converges to some $y$, then $s(y)=1$ (and, consequently, $y$ is a solution of (\[mm1\])). - $s$ is homogeneous with degree $q$ such that $|p+q|<1$. The function $s$ is called the stabilizing factor of the method, inheriting the nomenclature of the [Petviashvili ]{}method.
Actually, formula (\[mm2\]) generalizes the [Petviashvili ]{}scheme, which corresponds to the choice $$\begin{aligned} s(x)=\left(\frac{\langle Lx,x\rangle}{\langle N(x),x\rangle}\right)^{\gamma},\quad q=\gamma(1-p).\label{mm3c}\end{aligned}$$ The first part of the work, carried out in [@alvarezd] (see also [@alvarezd2015]), analyzed the convergence of (\[mm2\]). The main conclusion was that, compared to the classical fixed-point algorithm, the stabilizing factor acts like a filter of the spectrum of the matrix $S$ in the sense that: - The eigenvalue $\lambda=p$ of $S$ is transformed to the eigenvalue $\lambda=p+q$ of the iteration matrix $F^{\prime}(u^{*})$ of (\[iterop\]). - The rest of the spectrum of $F^{\prime}(u^{*})$ is contained in the spectrum of $S$. Thus, the convergence of (\[mm2\]) depends on the part of the spectrum of $S$ different from $p$. From these conclusions, several results of convergence can be derived, see [@alvarezd] for details. The motivation of this paper is two-fold. First, several numerical experiments in the literature show that the [Petviashvili ]{}type algorithms are sometimes computationally slower than other alternatives. In order to continue to benefit from the easy implementation (one of the advantages of the methods), the algorithms should improve their performance with the inclusion of some acceleration technique. A further motivation comes from the known mechanism of some extrapolation methods, [@Sidi2003], to transform divergent into convergent cases. The application of this property to these [Petviashvili ]{}type methods may extend their use to compute traveling waves under more demanding conditions, for example in two dimensions and/or in the case of highly oscillatory waves. The literature on acceleration techniques is very rich, with many different families and strategies, [@Sidi2003; @BrezinskiR1991].
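As a concrete illustration of (\[mm2\]) with the choice (\[mm3c\]), the following sketch computes the $\mathrm{sech}^2$ soliton of a stationary KdV-type equation $cu-u_{xx}=u^{2}$ (so $p=2$; taking $\gamma=2$ gives $q=-2$ and $p+q=0$) by the [Petviashvili ]{}iteration in Fourier space. The test problem and discretization parameters are ours, chosen only to make the sketch self-contained:

```python
import numpy as np

# Petviashvili iteration for c*u - u_xx = u^2 (p = 2), whose soliton is
# u(x) = (3c/2) sech^2(sqrt(c) x / 2). In Fourier space L is diagonal,
# L_hat = c + k^2, and N(u) = u^2 is evaluated in physical space.
npts, Lx, c = 512, 50.0, 1.0
x = np.linspace(-Lx / 2, Lx / 2, npts, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(npts, d=Lx / npts)
L_hat = c + k**2
gamma = 2.0                          # gives p + q = 0 for p = 2

u = np.exp(-x**2)                    # rough positive initial guess
for _ in range(50):
    u_hat = np.fft.fft(u)
    N_hat = np.fft.fft(u**2)
    # stabilizing factor s = <Lu, u> / <N(u), u>, computed via Parseval
    # (the normalization constant of the FFT cancels in the ratio)
    s = np.real(np.vdot(u_hat, L_hat * u_hat) / np.vdot(u_hat, N_hat))
    u = np.real(np.fft.ifft(s**gamma * N_hat / L_hat))

exact = 1.5 * c / np.cosh(np.sqrt(c) * x / 2) ** 2
print(np.max(np.abs(u - exact)))     # small: the iterate matches the soliton
```

With $s\equiv 1$ the same loop is the classical fixed-point iteration, which diverges in accordance with the eigenvalue $p=2$ discussed above; the stabilizing factor is what restores convergence.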
This paper will be focused on two types of procedures: the vector extrapolation methods, [@BrezinskiR1991; @smithfs; @jbilous; @brezinski2; @sidifs] and the Anderson mixing, [@anderson; @walkern; @ni; @fangs]. We think that the first one is the most widely studied group; in particular, known convergence results for some of these methods will serve to justify several examples of transformation from divergence to convergence when generating traveling waves iteratively. The second family accelerates the convergence by introducing the strategy of minimization of the residual in some norms at each step. They have proved efficient in, for example, electronic structure computations, [@anderson; @Pulay1982] (see also [@ni; @walkern] and references therein) and, to our knowledge, this is the first time they are applied to the numerical generation of traveling waves. The main purpose of this paper is then to explore, by numerical means, the application of these two acceleration methods to the generation of traveling waves through several problems of interest and from the [Petviashvili ]{}type methods (\[mm2\]) and their extended versions derived in [@alvarezd2014b]. With the case studies presented here we have tried to cover different situations in which the waves are hard to compute, as an attempt to give some guidelines for application. In this sense, the paper provides several conclusions to be emphasized: - The use of acceleration techniques is highly recommended here since it improves the performance in general and makes it possible to extend the application of the methods to computationally harder situations, with special emphasis on two-dimensional simulations and highly oscillatory wave generation. - By comparing the two families of acceleration techniques considered in this study, the vector extrapolation methods are in general more competitive for these problems than the Anderson acceleration methods.
Among the vector extrapolation methods, the polynomial methods provide a better performance in general (some exceptions can be seen in the experiments below). - The main drawback of the Anderson acceleration methods concerns the numerical treatment of the associated minimization problem, since most of the difficulties come from ill-conditioning. This might be improved by including suitable preconditioning techniques (here the methods were implemented in a standard way, [@walkern; @ni; @fangs]). However, it is remarkable that when the Anderson acceleration methods work, their performance is in general comparable to that of some vector extrapolation methods. The structure of the paper is as follows. Section \[se2\] is devoted to a description of the two families of acceleration techniques considered in this study. This also includes some comments on the implementation and convergence results. The application of both techniques to the methods (\[mm2\]) is studied in Section \[se3\] through a plethora of numerical experiments involving the computation of different types of traveling waves: ground states, classical and generalized solitary waves, as well as periodic traveling waves. The numerical study will be focused on the two main motivations of the paper: the improvement of efficiency and the extension of the methods to computationally harder problems, including cases where the iteration is initially not convergent. Finally, Section \[se4\] completes the computational study with some illustrations of the application of the acceleration to the extended versions of the algorithms (\[mm2\]), treated in [@alvarezd2014b] and suitable when the nonlinearity in (\[mm1\]) contains several homogeneous terms with different degrees. Some concluding remarks are in Section \[se5\].
Acceleration techniques {#se2} ======================= Besides the local character of their convergence, fixed point algorithms have, in some cases and compared to other alternatives, the additional disadvantage of slow performance. In what follows, several techniques of acceleration will be considered and applied to the methods (\[mm2\]), with the aim of improving their efficiency. Furthermore, as in the case of the classical algorithm, [@smithfs], some cases of divergence will be transformed into convergent iterations. This section introduces two families of acceleration techniques: the vector extrapolation methods (VEM from now on) and the Anderson acceleration methods (AAM). We will include a description of the schemes (including some convergence results) and some comments on implementation. Vector extrapolation methods ---------------------------- The first group of acceleration techniques consists of vector extrapolation methods (VEM). For a more detailed analysis and implementation of the methods see [@CabayJ1976; @Eddy1979; @Mesina1977; @smithfs; @jbilous; @brezinski2; @BrezinskiR1991; @Sidi2003] and references therein. Here we will describe the general features of the procedures and their application to (\[mm2\]). Two families of VEM are typically emphasized in the literature. The first one covers the so-called polynomial methods; they include, as the most widely cited, the minimal polynomial extrapolation (MPE), the reduced rank extrapolation (RRE) and the modified minimal polynomial extrapolation (MMPE) methods, [@CabayJ1976; @Eddy1979; @Mesina1977; @smithfs; @jbilous; @sidifs; @sidi]. The second family consists of the so-called $\epsilon$-algorithms; typical examples are the scalar and vector $\epsilon$-algorithms and the topological $\epsilon$-algorithm, [@brezinski1; @brezinski2; @jbilous; @tan].
All the methods share, of course, the idea of introducing the extrapolation as a procedure to transform the original sequence $\{u_{n}\}$ of the involved iterative process by some strategy. The polynomial methods are usually described in terms of the transformation ($k\leq m$) $$\begin{aligned} &&T_{k}:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{m}\label{fsec211}\\ &&u_{n}\longmapsto t_{n,k}=u_{n}-\Delta U_{n,k}\left(V_{n,k}^{*}\Delta^{2}U_{n,k}\right)^{+}V_{n,k}^{*}\Delta u_{n},\nonumber\end{aligned}$$ where - $\Delta u_{n}=u_{n+1}-u_{n}, \Delta^{2}u_{n}=\Delta u_{n+1}-\Delta u_{n}$. - $\Delta^{i}U_{n,k}$ ($i=1,2$) denotes the $m\times k$ matrix of columns $\Delta^{i}u_{n},\ldots,\Delta^{i}u_{n+k-1}$. - $V_{n,k}$ stands for the $m\times k$ matrix of some columns $v_{1}^{(n)},\ldots,v_{k}^{(n)}$ with $V_{n,k}^{*}$ as the adjoint matrix of $V_{n,k}$ (conjugate transpose). In (\[fsec211\]), $A^{+}$ stands for the Moore-Penrose generalized inverse of $A$, given by $A^{+}=(A^{*}A)^{-1}A^{*}$ when $A$ has full column rank, [@Demmel; @Meyer; @GolubV]. Different choices of the vectors $v_{j}^{(n)}, j=1,\ldots,k$ lead to the most widely used polynomial methods: 1. Minimal polynomial extrapolation (MPE): $v_{j}^{(n)}=\Delta u_{n+j-1}, j=1,\ldots,k$. 2. Reduced rank extrapolation (RRE): $v_{j}^{(n)}=\Delta^{2} u_{n+j-1}, j=1,\ldots,k$. 3. Modified minimal polynomial extrapolation (MMPE): $v_{j}^{(n)}=v_{j}, j=1,\ldots,k$, for arbitrary, fixed, linearly independent vectors $v_{1},\ldots v_{k}\in \mathbb{R}^{m}$. The formulation of the VEM may follow an alternative approach, [@sidifs; @smithfs].
The transformation (\[fsec211\]) can be computed in the form $$\begin{aligned} t_{n,k}=\sum_{j=0}^{k}\gamma_{j}u_{n+j},\label{fsec214}\quad \sum_{j=0}^{k}\gamma_{j}=1,\end{aligned}$$ where the coefficients $\gamma_{j}$ are obtained from the resolution (in some sense) of overdetermined, inconsistent systems $$\begin{aligned} \sum_{i=0}^{k-1}d_{i}w_{n+i}=\widetilde{w}_{n},\label{fsec215}\end{aligned}$$ for some vectors $w_{j}, \widetilde{w}_{j}\in\mathbb{R}^{m}$. Different methods emerge by combining different choices of the vectors $w_{j}, \widetilde{w}_{j}\in\mathbb{R}^{m}$ and of the norm in which the residual vector $\sum_{i=0}^{k-1}d_{i}w_{n+i}-\widetilde{w}_{n}$ is minimized. Thus, for example, assuming $k<m$, we have: - RRE is obtained by writing (\[fsec214\]) in the form $$\begin{aligned} t_{n,k}=u_{n}-\sum_{j=0}^{k-1}\beta_{j}\Delta u_{n+j},\end{aligned}$$ where the $\beta_{j}$ solve (\[fsec215\]) with $w_{j}=\Delta^{2}u_{j}, \widetilde{w}_{j}=\Delta u_{j}$ and the Euclidean norm with equal weights is used. - MPE is obtained by using (\[fsec214\]) with $$\begin{aligned} \gamma_{j}=\frac{c_{j}}{\sum_{i=0}^{k}c_{i}},\quad 0\leq j\leq k,\end{aligned}$$ where $c_{k}=1$ and the $c_{j}, 0\leq j\leq k-1$ solve (\[fsec215\]) with $w_{j}=\Delta u_{j}, \widetilde{w}_{j}=-\Delta u_{j+k}$ in the sense of minimization with the Euclidean norm with equal weights. - MMPE is obtained from (\[fsec214\]) but where instead of (\[fsec215\]) a system of the form $$\begin{aligned} \sum_{i=0}^{k-1}d_{i}Q_{j}(w_{n+i})=Q_{j}(\widetilde{w}_{n}),\quad j=1,\ldots,k,\label{fsec216}\end{aligned}$$ is used. In (\[fsec216\]), $Q_{j}(y)=\langle e_{j},y\rangle=y_{j}$, where $y=(y_{1},\ldots,y_{m})^{T}$. These formulations can be unified by a representation with determinants, [@smithfs; @sidifs; @sidi; @SidiB1988].
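Before turning to that unified representation, the two least-squares characterizations can be sketched in a few lines. The following Python fragment (illustrative data and a toy sequence, not a reference implementation) applies MPE and RRE with $k=1$ to a sequence with a single geometric mode, $u_{n}=u+w\lambda^{n}$; in this case both transformations recover the limit $u$ exactly.

```python
# A minimal sketch (illustrative data, not a reference implementation):
# MPE and RRE with k = 1 on the single-mode sequence u_n = u + w*lam^n.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, w, lam = [1.0, 2.0], [1.0, -1.0], 0.5
seq = [[ui + wi * lam ** n for ui, wi in zip(u, w)] for n in range(4)]
d = [[b - a for a, b in zip(seq[i], seq[i + 1])] for i in range(3)]   # Delta u_n
d2 = [[b - a for a, b in zip(d[i], d[i + 1])] for i in range(2)]      # Delta^2 u_n

# MPE, k = 1: c_0 solves the 1-D least-squares problem  c_0*Du_0 = -Du_1
c0 = -dot(d[0], d[1]) / dot(d[0], d[0])
g0, g1 = c0 / (c0 + 1.0), 1.0 / (c0 + 1.0)     # gamma_j = c_j / sum_i c_i
t_mpe = [g0 * a + g1 * b for a, b in zip(seq[0], seq[1])]

# RRE, k = 1: beta_0 solves the 1-D least-squares problem  beta_0*D2u_0 = Du_0
b0 = dot(d2[0], d[0]) / dot(d2[0], d2[0])
t_rre = [a - b0 * da for a, da in zip(seq[0], d[0])]

print(t_mpe, t_rre)   # both recover the limit u = [1.0, 2.0] exactly
```

For a sequence with more modes the $k=1$ extrapolation is no longer exact, and larger $k$ leads to the small least-squares problems described above.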
This representation writes the extrapolation steps $t_{n,k}$ in the form $$\begin{aligned} t_{n,k}=\frac{D(u_{n},u_{n+1},\ldots,u_{n+k})}{D(1,1,\ldots,1)},\label{fsec217}\end{aligned}$$ with $$\begin{aligned} D(\sigma_{0},\ldots,\sigma_{k})=\left|\begin{matrix} \sigma_{0}&\sigma_{1}&\cdots&\cdots&\sigma_{k}\\ u_{0,0}&u_{0,1}&\cdots&\cdots&u_{0,k}\\ \vdots&\vdots&\cdots&\cdots&\vdots\\ \vdots&\vdots&\cdots&\cdots&\vdots\\ u_{k-1,0}&u_{k-1,1}&\cdots&\cdots&u_{k-1,k} \end{matrix}\right|,\label{fsec218}\end{aligned}$$ where the $u_{i,j}$ are scalars that depend on the extrapolation method and where the expansion of (\[fsec218\]) is in the sense $$\begin{aligned} D(\sigma_{0},\ldots,\sigma_{k})=\sum_{i=0}^{k}\sigma_{i}N_{i},\label{fsec219}\end{aligned}$$ with $N_{i}$ the cofactor of $\sigma_{i}$ in the first row. Thus, for the numerator in (\[fsec217\]), formula (\[fsec219\]) yields a vector, while for the denominator in (\[fsec217\]) it yields a scalar. (See [@BrezinskiR2003] for the interpretation in terms of the Schur complement of a matrix.) The three previously mentioned polynomial methods correspond to the following choices of $u_{i,j}$: - MPE: $u_{i,j}=\langle \Delta u_{n+i},\Delta u_{n+j}\rangle$. - RRE: $u_{i,j}=\langle \Delta^{2} u_{n+i},\Delta u_{n+j}\rangle$. - MMPE: $u_{i,j}=\langle e_{i+1},\Delta u_{n+j}\rangle$, where $e_{1},\ldots,e_{k}$ are linearly independent vectors in $\mathbb{R}^{m}$. A second family of VEM consists of the so-called $\epsilon$-algorithms. A description of them may start from the scalar $\epsilon$-algorithm of Wynn, [@Wynn1962; @Wynn1966]. This scalar extrapolation method can be derived from the representation (\[fsec217\]), (\[fsec218\]) in the scalar case with $u_{i,j}=\Delta u_{n+i+j}, i=0,\ldots,k-1; j=0,\ldots,k$.
The corresponding ratio of determinants $$\begin{aligned} t_{n,k}=e_{k}(u_{n})=\frac{D(u_{n},u_{n+1},\ldots,u_{n+k})}{D(1,1,\ldots,1)},\label{fsec2111}\end{aligned}$$ is called the classical $e$- (or Shanks-Schmidt, $SS$) transform, [@Schmidt1941; @Shanks1955; @Wynn1956]. This ratio can be evaluated recursively for increasing $k$ and $n$ without the computation of determinants or Schur complements. The corresponding formulation is $$\begin{aligned} &&\epsilon_{-1}^{(n)}=0,\quad \epsilon_{0}^{(n)}=u_{n},\quad n=0,1,2,\ldots,\label{fsec2112}\\ &&\epsilon_{k+1}^{(n)}=\epsilon_{k-1}^{(n+1)}+(\epsilon_{k}^{(n+1)}- \epsilon_{k}^{(n)})^{-1},\quad k,n=0,1,2,\ldots,\label{fsec2113}\end{aligned}$$ where $\epsilon_{2k}^{(n)}:=e_{k}(u_{n}), \epsilon_{2k+1}^{(n)}:=(e_{k}(\Delta u_{n}))^{-1}$, and the recursion works along the diagonals where $n+k$ is constant. Thus, from (\[fsec2111\]), formulas (\[fsec2112\]), (\[fsec2113\]) compute each entry of a triangular array in terms of the previous entries. The extension of the scalar $\epsilon$-algorithm to the vectorial case was carried out by Brezinski, [@Brezinski1980], and Wynn, [@Wynn1964; @Gekeler1972], by using different definitions of the inverse of a vector, see [@smithfs]. Wynn suggested considering the transpose of the Moore-Penrose generalized inverse of a vector, $$\begin{aligned} w^{-1}=\frac{w}{||w||^{2}},\label{fsec2114}\end{aligned}$$ leading to the vector $\epsilon$-algorithm (VEA), whose formulas are of the form (\[fsec2112\]), (\[fsec2113\]) where the scalars are replaced by vectors and (\[fsec2113\]) makes use of (\[fsec2114\]). This implies that the $e$-transform (\[fsec2111\]) is understood in the above described vectorial sense. This is called the generalized Shanks-Schmidt (GSS) transform, [@brezinski1].
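As a concrete illustration of the scalar recursion (\[fsec2112\]), (\[fsec2113\]), the following Python sketch (series and number of terms chosen only for illustration) accelerates the partial sums of the alternating harmonic series, whose limit is $\log 2$:

```python
# Wynn's scalar epsilon-algorithm on the partial sums of the
# alternating harmonic series 1 - 1/2 + 1/3 - ... = log(2).
# Illustrative choice of series and number of terms.
import math

N = 12                                   # number of partial sums
u, s = [], 0.0
for n in range(1, N + 1):
    s += (-1) ** (n + 1) / n
    u.append(s)

prev = [0.0] * (N + 1)                   # column eps_{-1}: zeros
curr = u[:]                              # column eps_0: the sequence itself
for k in range(N - 2):                   # build columns eps_1, ..., eps_{N-2}
    nxt = [prev[n + 1] + 1.0 / (curr[n + 1] - curr[n])
           for n in range(len(curr) - 1)]
    prev, curr = curr, nxt

accelerated = curr[0]                    # eps_10^{(0)} = e_5(u_0), an even column
print(abs(accelerated - math.log(2.0)))  # far below the raw error, about 0.04
```

Only the even columns $\epsilon_{2k}^{(n)}$ carry approximations to the limit; the odd ones are auxiliary quantities of the recursion.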
On the other hand, Brezinski defines the inverse of a pair of vectors $(v,w)$ such that $\langle v,w\rangle\neq 0$ as the pair of vectors $(w^{-1}, v^{-1})$ where $$\begin{aligned} w^{-1}=\frac{v}{\langle w,v\rangle},\quad v^{-1}=\frac{w}{\langle w,v\rangle}.\end{aligned}$$ Thus, $v^{-1}$ is called the inverse of $v$ with respect to $w$ and vice versa. This definition leads to the so-called topological $\epsilon$-algorithm (TEA), when an arbitrary vector $y$ is fixed and the inverses of $\Delta\epsilon_{2k}^{(n)}$ and $\Delta\epsilon_{2k+1}^{(n)}$ are considered with respect to $y$, that is $$\begin{aligned} \left(\Delta\epsilon_{2k}^{(n)}\right)^{-1}=\frac{y}{\langle y,\Delta\epsilon_{2k}^{(n)}\rangle},\quad \left(\Delta\epsilon_{2k+1}^{(n)}\right)^{-1}=\frac{y}{\langle y,\Delta\epsilon_{2k+1}^{(n)}\rangle}.\end{aligned}$$ The recursive formulas are $$\begin{aligned} &&\epsilon_{-1}^{(n)}=0,\quad \epsilon_{0}^{(n)}=u_{n},\quad n=0,1,2,\ldots,\nonumber\\ &&\epsilon_{2k+1}^{(n)}=\epsilon_{2k-1}^{(n+1)}+ \left(\Delta\epsilon_{2k}^{(n)}\right)^{-1},\quad k,n=0,1,2,\ldots,\label{fsec2115}\\ &&\epsilon_{2k+2}^{(n)}=\epsilon_{2k}^{(n+1)}+\frac{\Delta\epsilon_{2k}^{(n)}}{\langle \Delta\epsilon_{2k+1}^{(n)},\Delta\epsilon_{2k}^{(n)}\rangle},\quad k,n=0,1,2,\ldots\nonumber\end{aligned}$$ Brezinski proved, [@brezinski1], the connection with the GSS-transform, showing that $$\begin{aligned} \epsilon_{2k}^{(n)}=e_{k}(u_{n}),\quad \epsilon_{2k+1}^{(n)}=(e_{k}(\Delta u_{n}))^{-1}=\frac{y}{\langle y,e_{k}(\Delta u_{n})\rangle}.\end{aligned}$$ For an efficient implementation of (\[fsec2115\]) see [@tan]. Thus TEA corresponds to taking $u_{i,j}=Q(u_{n+i+j})=\langle y,u_{n+i+j}\rangle$ in (\[fsec217\]), (\[fsec218\]). The way the VEM work can be described as follows, see [@sidifs; @smithfs; @Sidi2003] for details.
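Before turning to that mechanism, one application of the recursion (\[fsec2115\]) with $k=0$ can be sketched as follows (illustrative data; the auxiliary vector $y$ is an arbitrary admissible choice). For a single-mode sequence the entry $\epsilon_{2}^{(0)}$ already recovers the limit.

```python
# Sketch: the first TEA columns (k = 0 in the recursion) on the
# single-mode vector sequence u_n = u + w*lam^n (illustrative data).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, w, lam = [2.0, -1.0, 0.5], [1.0, 3.0, -2.0], 0.6
y = [1.0, 0.0, 1.0]                     # the fixed auxiliary vector of TEA
seq = [[ui + wi * lam ** n for ui, wi in zip(u, w)] for n in range(3)]
d = [[b - a for a, b in zip(seq[i], seq[i + 1])] for i in range(2)]

# eps_1^{(n)} = y / <y, Delta u_n>   (inverse with respect to y)
e1 = [[yi / dot(y, d[n]) for yi in y] for n in range(2)]
de1 = [b - a for a, b in zip(e1[0], e1[1])]

# eps_2^{(0)} = eps_0^{(1)} + Delta eps_0^{(0)} / <Delta eps_1^{(0)}, Delta eps_0^{(0)}>
den = dot(de1, d[0])
e2 = [seq[1][i] + d[0][i] / den for i in range(3)]

print(e2)   # recovers the limit u = [2.0, -1.0, 0.5] up to rounding
```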
One starts by assuming an asymptotic expression for the sequence $u_{n}$ of the form $$\begin{aligned} u_{n}\equiv u+\sum_{j=1}^{\infty} w_{j}\lambda_{j}^{n},\quad n\rightarrow\infty,\label{fsec212}\end{aligned}$$ where $u\in\mathbb{R}^{m}$ and $\lambda_{j}\in\mathbb{C}$, ordered such that $|\lambda_{j}|\geq |\lambda_{j+1}|, \lambda_{j}\neq 0,1, \lambda_{i}\neq \lambda_{j}$ if $i\neq j$, with only a finite number of $\lambda_{j}$ having the same modulus. The expansion (\[fsec212\]) can be generalized by considering, instead of constant vectors $w_{j}$, polynomials $P_{j}(n)$ in $n$ with vector coefficients of the form $$\begin{aligned} P_{j}(n)=\sum_{l=0}^{p_{j}}v_{jl}\begin{pmatrix}n\\l\end{pmatrix}, \label{fsec212b}\end{aligned}$$ with $\{v_{j0},\ldots,v_{jp_{j}}\}$ linearly independent in $\mathbb{R}^{m}$ and where if $|\lambda_{j}|= |\lambda_{j+1}|$ then $p_{j}\geq p_{j+1}$, [@SidiB1988]. For simplicity, the description below will make use of (\[fsec212\]). The asymptotic expansion (\[fsec212\]) is considered in a general vector space (finite or infinite dimensional) where the iteration is defined. It is understood in the sense that any truncation differs from $u_{n}$ by less than some power of the next $\lambda$. This means that for any positive integer $N$ there are $K>0$ and a positive integer $n_{0}$ that only depend on $N$ such that for every $n\geq n_{0}$ $$\begin{aligned} ||u_{n}-u-\sum_{j=1}^{N-1} w_{j}\lambda_{j}^{n}||\leq K|\lambda_{N}|^{n}.\label{fsec213}\end{aligned}$$ In particular, the case $N=1$ allows one to identify $u$ as limit or anti-limit of the sequence $u_{n}$, according to the size of $\lambda_{1}$. (That is, if $|\lambda_{1}|<1$ then $\lim_{n\rightarrow\infty}u_{n}$ exists and equals $u$. If $|\lambda_{1}|>1$ then $\lim_{n\rightarrow\infty}u_{n}$ does not exist and $u$ is called the anti-limit of the sequence $u_{n}$.)
Under these conditions, several convergence results for MPE, RRE, MMPE and TEA are obtained in the literature, [@sidifs; @smithfs] and references therein. For these methods, one can find an extrapolation step $\kappa$ such that $$\begin{aligned} ||t_{n,\kappa}-u||=O(|\lambda_{\kappa+1}|^{n}),\quad n\rightarrow\infty.\label{fsec213b}\end{aligned}$$ The estimate (\[fsec213b\]) may explain the convergent behaviour of the extrapolation in some cases. If the $\lambda$’s are identified as the eigenvalues of the linearization operator of the iteration at the limit (or anti-limit) $u$, then the extrapolation has the effect of translating the behaviour of the iteration to an eigenvalue $\lambda_{\kappa+1}$ that may lie inside the unit disk, even if the previous ones lie outside it. Hence, $u$ may be an anti-limit for the original iteration while the extrapolation converges to it. These results are extended to the defective linear case with more general polynomials (\[fsec212b\]) in [@SidiB1988]. The integer $\kappa$ is related to the concept of minimal polynomial $P(\lambda)$ of a matrix $A$ with respect to a vector $v$, [@JbilouS1991; @jbilous; @smithfs]; this is the unique monic polynomial of least degree such that $$P(A)v=0.$$ Thus, in the case of a linear iteration with matrix $A$, $\kappa$ is taken to be the degree of the minimal polynomial of $A$ with respect to the first iterate $u_{0}$. In the case of a nonlinear system written in fixed point form $$\begin{aligned} x=\mathcal{F}(x),\label{fsec2110}\end{aligned}$$ $\kappa$ is theoretically defined as the degree of the minimal polynomial of $A={\mathcal{F}}^{\prime}(u^{*})$ with respect to $u_{0}$, where $u^{*}$ is a solution of (\[fsec2110\]). In contrast with the linear case, there is no way to determine $\kappa$ in advance for the nonlinear case. This forces one to consider several strategies for the choice and the corresponding implementation, see the discussion in [@smithfs] and the comments here below.
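As a small illustration of this anti-limit behaviour, the following Python sketch (with an illustrative $2\times 2$ matrix) applies MPE with $k=2$ to a divergent linear iteration; since $k$ equals the degree of the minimal polynomial of the iteration matrix with respect to $u_{0}-u^{*}$, the extrapolation reproduces the fixed point exactly:

```python
# Sketch: MPE applied to the divergent linear iteration u_{n+1} = A u_n + b
# (illustrative 2x2 data; the spectral radius of A is 1.1 > 1).  With k = 2,
# the degree of the minimal polynomial of A, the extrapolation recovers the
# anti-limit u* = (I - A)^{-1} b from u_0, ..., u_3.

A = [[1.1, 0.3], [0.0, 0.4]]
b = [1.0, 1.0]

def step(x):
    return [A[i][0] * x[0] + A[i][1] * x[1] + b[i] for i in range(2)]

seq = [[0.0, 0.0]]
for _ in range(3):
    seq.append(step(seq[-1]))                 # u_0, ..., u_3 (diverging)

d = [[y - x for x, y in zip(seq[i], seq[i + 1])] for i in range(3)]

# Solve c0*Du_0 + c1*Du_1 = -Du_2 (a square 2x2 system; Cramer's rule)
det = d[0][0] * d[1][1] - d[1][0] * d[0][1]
c0 = (-d[2][0] * d[1][1] + d[1][0] * d[2][1]) / det
c1 = (-d[0][0] * d[2][1] + d[2][0] * d[0][1]) / det
c = [c0, c1, 1.0]
t = [sum(c[j] * seq[j][i] for j in range(3)) / sum(c) for i in range(2)]

print(t)   # the anti-limit, approximately [-15.0, 1.6667]
```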
We also mention that in the linear case, the extrapolation methods MPE and RRE are mathematically equivalent to the method of Arnoldi, [@Saad1981], and the GMRES, [@SaadS1986], respectively, see [@Sidi1988], while the MMPE is mathematically equivalent to the Hessenberg method, [@sidifs], and TEA to the method of Lanczos, [@Lanczos1952], see [@Sidi1988; @jbilous]. Efficient and stable implementations of the RRE, MPE and MMPE methods by using QR and LU factorizations can be seen in [@Sidi1991; @JbilousS1999]. For the case of TEA, see [@tan], and [@BrezinskiR1974; @sidi] for VEA. The implementation is usually carried out in a cycling mode. A cycle of the iteration is performed by the following steps: consider a method (\[mm2\]) with a stabilizing factor $s$ satisfying (P1), (P2). Given $u_{0}\neq 0$ and a width of extrapolation $mw\geq 1$, for $l=0,1,\ldots$, the advance $l\mapsto l+1$ is: - Set $t_{0}=u_{l}$ and compute $mw$ steps of the fixed-point algorithm: $$\begin{aligned} Lt_{n+1}=s(t_{n})N(t_{n}),\quad n=0,\ldots,mw-1.\label{fsec2116}\end{aligned}$$ - Compute the extrapolation step (\[fsec217\]) from $t_{0},\ldots,t_{mw}$ with any of the methods described above. - Set $u_{l+1}$ equal to the extrapolation step computed in (B) and go to step (A). The cycle (A)-(B)-(C) is repeated until the error (residual or between two consecutive iterations) is below a prefixed tolerance, a maximum number of iterations is attained or the discrepancy between the stabilizing factor at the iterations and one is below a prefixed tolerance. The width of extrapolation $mw$ depends on the choice of the technique: $mw=\kappa+1$ for MPE, RRE or MMPE and $mw=2\kappa$ for VEA or TEA, [@smithfs]. Since $\kappa$ is generally unknown, some strategy for the implementation must be adopted. In practice, as discussed in [@smithfs], the methods are implemented with some (small) values of $mw$ and the value with the best performance is taken.
It may also be different for each cycle, although quadratic convergence is not expected if $\kappa$ is too small. This choice of $\kappa$ will be computationally studied in some examples in Section \[se3\]. The hypotheses for the expansion (\[fsec212\]) include the conditions $\lambda_{j}\neq 1 \;\; \forall j$. In many problems for traveling wave generation, $\lambda=1$ appears as eigenvalue of the iterative technique (of fixed point type), although under special circumstances that allow one to extend the convergence results in some sense. This special situation is related to the presence of symmetries in the equations for traveling waves. In order to extend the results of convergence to this case, one has to consider the orbits under the symmetry group of the equations and interpret the convergence in the orbital sense, that is, as convergence of the orbits. The description of this orbital convergence can be seen in [@alvarezd]. Finally, local quadratic convergence is proved in [@smithfs] (see also [@leferrand; @jbilous; @JbilouS1991; @Van1994]) for the four methods and VEA for a general nonlinear system (\[fsec2110\]) and under the following hypotheses on $\mathcal{F}$: - The Jacobian matrix ${\mathcal{F}}^{\prime}(u^{*})$ does not have $\lambda=1$ as eigenvalue. - $\kappa$ is the degree of the minimal polynomial of ${\mathcal{F}}^{\prime}(u^{*})$ with respect to $u_{0}-u^{*}$. - The algorithm is implemented in the cycling mode where $\kappa$ is chosen in the $i$-th cycle as the degree of the minimal polynomial of ${\mathcal{F}}^{\prime}(u^{*})$ with respect to $t_{i-1}-u^{*}$. This result can be extended to the case where ${\mathcal{F}}$ admits a $\nu$-parameter ($\nu\geq 1$) group of symmetries and, consequently, $\lambda=1$ is an eigenvalue of ${\mathcal{F}}^{\prime}(u^{*})$, by using the reduced system for the orbits of the group and in the orbital sense, [@alvarezd]. This would lead to quadratic orbital convergence but linear local convergence.
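As a minimal illustration of the cycling mode and of this local quadratic convergence, consider the scalar toy problem $x=\cos x$ (chosen only for illustration, not one of the traveling wave computations below). With MPE and $k=1$, so that $mw=2$, the extrapolation step reduces to Aitken's $\Delta^{2}$ formula and the cycle is Steffensen's method:

```python
# Sketch of the cycling mode (A)-(B)-(C) with MPE and k = 1 (so mw = 2)
# on the illustrative scalar fixed-point problem x = cos(x).  For k = 1
# the extrapolation step reduces to Aitken's Delta^2 formula, and the
# resulting cycle is Steffensen's method, which converges quadratically.
import math

x = 1.0
for cycle in range(6):
    t0 = x
    t1 = math.cos(t0)                 # (A): mw = 2 fixed-point steps ...
    if abs(t1 - t0) < 1e-12:          # ... unless already converged
        break
    t2 = math.cos(t1)
    d0, d1 = t1 - t0, t2 - t1
    x = t0 - d0 * d0 / (d1 - d0)      # (B)-(C): extrapolate and restart

print(x, abs(math.cos(x) - x))        # x near the fixed point 0.739085...
```

A handful of cycles suffices here, while the plain fixed-point iteration converges only linearly.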
Anderson acceleration methods ----------------------------- A second family of acceleration techniques considered here is the so-called Anderson family or Anderson mixing, [@anderson]. It is widely used in electronic structure computations and only recently has it been analyzed in a more general context, [@walkern; @ni; @yangmlw]. (To our knowledge, this is the first time that AAM are applied to accelerate traveling wave computations.) The main idea of the approach is to combine the iteration with a minimization problem for the residual at each step. For linear problems, this technique is essentially equivalent to the GMRES method, [@SaadS1986; @Demmel; @GolubV]. The stages for an iteration step are as follows: Given $u_{0}\neq 0, nw\geq 1$, set $Lu_{1}=s(u_{0})N(u_{0})$. For $k=1,2,\ldots$ - Set $n_{k}=\min \{nw,k\}$ - Set $F_{k}=(f_{k-n_{k}},\ldots ,f_{k})$ where $f_{i}=Lu_{i}-s(u_{i})N(u_{i})$ - Determine $\alpha^{(k)}=(\alpha_{0}^{(k)},\ldots,\alpha_{n_{k}}^{(k)})$ that solves $$\begin{aligned} \min_{\alpha=(\alpha_{0},\ldots,\alpha_{n_{k}})}||F_{k}\alpha||,\quad \sum_{i=0}^{n_{k}}\alpha_{i}=1\label{minimize}\end{aligned}$$ - Set $$Lu_{k+1}=\sum_{i=0}^{n_{k}}\alpha_{i}^{(k)}s(u_{k-n_{k}+i})N(u_{k-n_{k}+i})$$ The resolution of the optimization problem (\[minimize\]) is the source of the additional computational work of the acceleration. One way to reduce this extra effort is the so-called multisecant updating, [@fangs; @Eyert1996]. (This also clarifies the connection with quasi-Newton methods.) This technique consists of writing the problem in the equivalent form $$\begin{aligned} \min_{\gamma=(\gamma_{0},\ldots,\gamma_{n_{k}-1})}||f_{k}-{\mathcal F}_{k}\gamma||,\quad {\mathcal F}_{k}=(\Delta f_{k-n_{k}},\ldots ,\Delta f_{k-1}),\quad \Delta f_{i}=f_{i+1}-f_{i},\label{minip}\end{aligned}$$ which admits a more direct resolution, and determining the acceleration from it.
The general step becomes: - Set $n_{k}=\min \{nw,k\}$ - Determine $\gamma^{(k)}=(\gamma_{0}^{(k)},\ldots,\gamma_{n_{k}-1}^{(k)})$ by solving (\[minip\]). - Set $$\alpha_{0}^{(k)}=\gamma_{0}^{(k)},\quad \alpha_{i}^{(k)}=\gamma_{i}^{(k)}-\gamma_{i-1}^{(k)},\quad 1\leq i\leq n_{k}-1,\quad \alpha_{n_{k}}^{(k)}=1-\gamma_{n_{k}-1}^{(k)}$$ - Set $$Lu_{k+1}=\sum_{i=0}^{n_{k}}\alpha_{i}^{(k)}s(u_{k-n_{k}+i})N(u_{k-n_{k}+i}).$$ As mentioned in [@walkern], if ${\mathcal F}_{k}$ has full rank, the solution of the minimization problem can be written as $\gamma^{(k)}=({\mathcal F}_{k}^{T}{\mathcal F}_{k})^{-1}{\mathcal F}_{k}^{T}f_{k}$ and the Anderson acceleration has the alternative form $$\begin{aligned} Lu_{k+1}&=&Lu_{k}-G_{k}f_{k},\nonumber\\ G_{k}&=&-I+({\mathcal H}_{k}+{\mathcal F}_{k})({\mathcal F}_{k}^{T}{\mathcal F}_{k})^{-1}{\mathcal F}_{k}^{T},\quad {\mathcal H}_{k}=(\Delta u_{k-n_{k}},\ldots,\Delta u_{k-1}),\nonumber\\ && \Delta u_{i}=u_{i+1}-u_{i}.\label{aaalternative}\end{aligned}$$ (Note that $G_{k}$ can be viewed as an approximate inverse of the Jacobian of $Lx-N(x)$). The formulation (\[aaalternative\]) motivates the generalization of the Anderson mixing, [@fangs]. This is performed by replacing $({\mathcal F}_{k}^{T}{\mathcal F}_{k})^{-1}{\mathcal F}_{k}^{T}$ in (\[aaalternative\]) by ${\mathcal V}_{k}^{T}$, for some ${\mathcal V}_{k}\in\mathbb{R}^{n\times m}$ satisfying $$\begin{aligned} {\mathcal V}_{k}^{T}{\mathcal F}_{k}=I,\end{aligned}$$ so that (\[aaalternative\]) becomes $$\begin{aligned} Lu_{k+1}&=&Lu_{k}-\widetilde{G}_{k}f_{k},\nonumber\\ \widetilde{G}_{k}&=&-I+({\mathcal H}_{k}+{\mathcal F}_{k}){\mathcal V}_{k}^{T},\quad {\mathcal H}_{k}=(\Delta u_{k-n_{k}},\ldots,\Delta u_{k-1}).\label{aaalternative2}\end{aligned}$$ The resulting methods are collected in the so-called Anderson’s family, [@walkern; @fangs].
Two particular members are emphasized: the Type-I method (denoted by AA-I from now on), which corresponds to ${\mathcal V}_{k}=({\mathcal H}_{k}^{T}{\mathcal F}_{k})^{-1}{\mathcal H}_{k}$ in (\[aaalternative2\]), and the Type-II method (or AA-II), which is the original Anderson mixing (\[aaalternative\]). To our knowledge, the available convergence results can be seen in [@walkern; @PotraE2013; @TothK2015]. In [@walkern] the authors identify, for linear problems and in some sense, some Anderson methods with the GMRES method and the Arnoldi (FOM) method; some convergence results can be derived from this identification. In [@PotraE2013] the equivalence with GMRES for linear problems is completely characterized. Finally, [@TothK2015] gives some proofs of convergence of the Anderson acceleration when applied to contractive mappings: $q$-linear convergence of the residual for linear problems under certain conditions when $nw=1$, and local $r$-linear convergence in the nonlinear case. (These types of convergence are defined in the paper.) On the other hand, as observed in [@walkern], the implementation of AAM should be carried out by attending to three main points: a convenient formulation of the minimization problem (\[minimize\]), a numerical method for its efficient resolution and, finally, the parameter $nw$, which plays a similar role to that of the extrapolation width $mw$ in the VEM. In our computations below, we have followed the treatment described in [@walkern]. This is based on the use of the unconstrained form (\[minip\]) and its numerical resolution with $QR$ decomposition. For other alternatives in both problems, see the discussion in [@walkern; @fangs] and the references cited there. (According to our results below, the use of alternative preconditioning techniques might be advisable in some cases.)
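For illustration only, the following sketch applies the Anderson acceleration with window $nw=1$ to the scalar toy problem $x=\cos x$, taking $L$ as the identity and $s\equiv 1$ (assumptions made here for simplicity), so that the residual is $f(x)=\cos x - x$ and the minimization (\[minip\]) is one-dimensional with a closed-form solution:

```python
# Sketch of Anderson acceleration with window nw = 1 on the scalar
# fixed-point problem x = cos(x).  Illustrative setting: L is the identity
# and s = 1, so the residual is f(x) = cos(x) - x and the least-squares
# problem is one-dimensional.
import math

g = math.cos
x_prev, x = 1.0, g(1.0)
f_prev, f = g(x_prev) - x_prev, g(x) - x
for k in range(15):
    if abs(f) < 1e-13:
        break
    df = f - f_prev                      # Delta f
    gamma = f * df / (df * df)           # minimizes |f - gamma*Delta f|
    x_next = g(x) - gamma * (g(x) - g(x_prev))
    x_prev, f_prev = x, f
    x = x_next
    f = g(x) - x

print(x, abs(f))   # x near the fixed point 0.739085...
```

With $nw=1$ this reduces to a secant-type update; larger windows lead to the small least-squares problems solved via $QR$ in our actual computations.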
As far as the choice of $nw$ is concerned, a similar strategy to that of $mw$ will be used, since our experiments, in line with [@walkern], suggest that $nw$ (as $mw$) strongly depends on the problem under study and that large values are not recommended. Finally, the codes retain the definition $n_{k}=\min\{nw,k\}$, since other alternatives, [@yangmlw], did not improve the results in a relevant way. Numerical comparisons {#se3} ===================== Presented here is a comparative study on the use of VEM and AAM as acceleration techniques for the [Petviashvili ]{}type methods (\[mm2\]) in traveling wave computations. The comparison is organized according to two main points: the type of traveling wave to be generated and the elements of each family of methods to be used for the generation. The first point includes the following case studies: 1. Classical solitary waves, generalized solitary waves and periodic traveling waves of the four-parameter Boussinesq system, [@Bona_Chen_Saut_1; @Bona_Chen_Saut_2]. 2. Localized ground state solutions of NLS type equations, [@Yang2012; @yang2]. 3. Highly oscillatory solitary waves of the one- and two-dimensional Benjamin equation, [@ben0; @ben1; @ben2; @kim; @kima1; @kima2]. This variety of waves is intended to cover different computational difficulties, with the aim of establishing conclusions as general as possible. As for the second point, each family of techniques has been represented by the following methods: - MPE and RRE, standing for polynomial extrapolation methods. - VEA and TEA, standing for $\epsilon$-algorithms. - AA-I and AA-II, standing for the AAM. For simplicity, the acceleration will be applied to the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=p/(p-1)$.
Due to the similar behaviour of the methods of the family (\[mm2\]), illustrated in [@alvarezd], the conclusions from the corresponding results can reasonably be extended to the case when the [Petviashvili ]{}method is substituted by any of (\[mm2\]). In all the cases considered, the traveling wave profiles appear as solutions of systems of ordinary differential equations. Their discretization to generate approximations to the profiles was carried out in a common and standard way. The corresponding periodic boundary value problem (on a sufficiently long interval) was discretized by using Fourier collocation techniques, [@Boyd; @Canutohqz]; the discretization leads to a nonlinear system of algebraic equations for the approximate values of the profile at the collocation points or for the discrete Fourier coefficients of the approximation. This system is iteratively solved with the classical Petviashvili method (\[mm2\]), (\[mm3c\]) along with the selected acceleration technique. This will be described for each equation considered below. Several stopping criteria for the iterations are implemented: - A maximum number of iterations.
- The iteration stops when one of the following quantities is below a prefixed, small tolerance $TOL$: - The difference in Euclidean norm between two consecutive iterations $$\begin{aligned} \label{fsec31} E_{n}=||u_{n+1}-u_{n}||,\quad n=0,1,\ldots\end{aligned}$$ - The residual error (also in Euclidean norm) $$\begin{aligned} \label{fsec32} RES_{n}=||Lu_{n}-N(u_{n})||,\quad n=0,1,\ldots\end{aligned}$$ - The discrepancy between the stabilizing factor and (in case of convergence) its limit value one $$\begin{aligned} \label{fsec33} SFE_{n}=|s(u_{n})-1|,\quad n=0,1,\ldots\end{aligned}$$ The numerical experiments that form the comparative study are of different types: - For several values of $\kappa$ (in the case of VEM) and $nw$ (in the case of AAM) we have computed the number of iterations required by each method to achieve a residual error below the prefixed tolerance. This allows us to compare the performance of the methods across the two families, among different techniques within the same family and, indeed, with the [Petviashvili ]{}method without acceleration. - Some eigenvalues of the iteration matrices for the classical fixed point algorithm and the [Petviashvili ]{}method have been computed (with the corresponding standard MATLAB function) in order to explain the behaviour of the latter, [@alvarezd], and how the acceleration eventually changes it. - The form of the approximate profiles and some experiments to check their accuracy are also displayed.
Traveling wave solutions of Boussinesq systems {#se31} ---------------------------------------------- In this first example we study the numerical generation of traveling wave solutions of the four-parameter family of Boussinesq systems $$\begin{aligned} \eta_{t}+u_{x}+(\eta u)_{x}+au_{xxx}-b\eta_{xxt}&=&0,\label{fsec311}\\ u_{t}+\eta_{x}+uu_{x}+c\eta_{xxx}-du_{xxt}&=&0,\nonumber\end{aligned}$$ where $\eta=\eta(x,t), u=u(x,t), x\in\mathbb{R}, t\geq 0$ and the four parameters $a, b, c, d$ satisfy $$\begin{aligned} a+b=\frac{1}{2}(\theta^{2}-\frac{1}{3}),\quad c+d=\frac{1}{2}(1-\theta^{2}),\label{fsec312}\end{aligned}$$ for some constant $\theta^{2}\in [0,1]$, [@Boussinesq; @Bona_Chen_Saut_1; @Bona_Chen_Saut_2]. System (\[fsec311\]) appears as one of the alternatives to model the bidirectional propagation of the irrotational free surface flow of an incompressible, inviscid fluid in a uniform horizontal channel under the effects of gravity, when the surface tension and cross-channel variations of the fluid are assumed to be negligible. If $h_{0}$ denotes the undisturbed water depth, then $\eta(x,t)$ stands for the deviation of the free surface from $h_{0}$ at the point $x$ and time $t$, while $u(x,t)$ is the horizontal velocity of the fluid at $x$ and at the height $y=\theta h_{0}$ (where $y=0$ corresponds to the channel bottom) at time $t$. For the derivation of (\[fsec311\]) from the two-dimensional Euler equations and the mathematical theory see [@Bona_Chen_Saut_1; @Bona_Chen_Saut_2]. For the modification of (\[fsec312\]) to include the influence of surface tension see [@DaripaD2003; @ChenNS2009; @ChenNS2011]. The Boussinesq system (\[fsec311\]) admits different types of traveling wave solutions. First, being an approximation to the corresponding two-dimensional Euler equations in the theory of surface waves, it is expected to have classical solitary wave solutions.
They are solutions of the initial value problem of (\[fsec311\]) of smooth traveling wave form $\eta=\eta(x-c_{s}t), u=u(x-c_{s}t)$, with some speed $c_{s}>0$ and decaying to zero as $X=x-c_{s}t\rightarrow\pm\infty$. After substitution into (\[fsec311\]) and one integration, the profiles $u=u(X), \eta=\eta(X)$ must satisfy the ode system $$\begin{aligned} \label{fsec313} \begin{pmatrix} c_{s}(1-b\partial_{XX})&-(1+a\partial_{XX}) \\ -(1+c\partial_{XX})&c_{s}(1-d\partial_{XX})\\ \end{pmatrix}\begin{pmatrix} \eta \\ u\\ \end{pmatrix}=\begin{pmatrix} u\eta \\ \frac{u^{2}}{2}\\ \end{pmatrix}.\end{aligned}$$ The problems of existence, asymptotic decay and stability of solutions of (\[fsec313\]) have been analyzed in many references and for particular values of $a, b, c, d$, see [@DougalisM2008] and references therein. Furthermore, in the same reference, linearly well-posed systems (\[fsec313\]) are studied as first-order ode systems and, based on normal form theory, a discussion of the values of the parameters leading to Boussinesq systems admitting solitary wave solutions with speed $c_{s}>1$ is established. According to it, two classes of systems can be distinguished. The first one admits classical (in the sense above defined) solitary wave solutions. This group contains the Bona-Smith system ($a=0, b=d=(3\theta^{2}-1)/6, c=(2-3\theta^{2})/3, 2/3<\theta^{2}<1$), [@BonaS1976], or the BBM-BBM system ($a=c=0, b=d=1/6$), [@BonaC1988]. (The classical Boussinesq system, which corresponds to $a=b=c=0, d=1/3$, is also in this group, although it is out of the general discussion of [@DougalisM2008] and has been studied separately, [@pegow].) A second class of Boussinesq systems admits generalized solitary wave solutions, that is, traveling wave profiles which are homoclinic not to zero at infinity but to small-amplitude periodic waves, [@Lombardi2000]. The KdV-KdV system ($a=c>0, b=d=0$), [@BonaDM2007; @BonaDM2008], is an example of this second group.
Finally, the existence of periodic traveling wave solutions of (\[fsec311\]) is studied in [@Chen_Chen_Nguyen] by applying topological degree theory for positive operators to the corresponding periodic initial value problem posed on an interval $(-l,l)$, and some cnoidal wave solutions of the BBM-BBM system are computed. A smooth periodic traveling wave solution $\eta=\eta(x-c_{s}t), u=u(x-c_{s}t)$ with some speed $c_{s}>0$ must satisfy the ode system $$\begin{aligned} \label{fsec313b} \begin{pmatrix} c_{s}(1-b\partial_{xx})&-(1+a\partial_{xx}) \\ -(1+c\partial_{xx})&c_{s}(1-d\partial_{xx})\\ \end{pmatrix}\begin{pmatrix} \eta \\ u\\ \end{pmatrix}=\begin{pmatrix} u\eta \\ \frac{u^{2}}{2}\\ \end{pmatrix}+\begin{pmatrix} K_{1} \\ K_{2}\\ \end{pmatrix},\end{aligned}$$ for some real constants $K_{1}, K_{2}$, which are related to the period parameter $l$. The resolution of (\[fsec313b\]) involves a modified system for which these constants of integration are set to zero, [@Chen_Chen_Nguyen]. This is briefly described as follows. One first searches for constant solutions $\eta=C_{1}, u=C_{2}$ of (\[fsec313b\]).
This leads to the system $$\begin{aligned} \begin{pmatrix} c_{s}&-1\\ -1&c_{s}\\ \end{pmatrix}\begin{pmatrix} C_{1}\\ C_{2}\\ \end{pmatrix}=\begin{pmatrix} C_{1}C_{2} \\ \frac{C_{2}^{2}}{2}\\ \end{pmatrix}+\begin{pmatrix} K_{1} \\ K_{2}\\ \end{pmatrix},\end{aligned}$$ which can be solved as a cubic equation for $C_{2}$: $$\begin{aligned} &&\frac{C_{2}^{3}}{2}-\frac{3}{2}c_{s}C_{2}^{2}+(c_{s}^{2}-1+K_{2})C_{2}-K_{1}-c_{s}K_{2}=0,\label{fsec313c}\\ &&C_{1}=c_{s}C_{2}-C_{2}^{2}/2-K_{2}.\label{fsec313c1}\end{aligned}$$ Once $(C_{1},C_{2})$ is obtained (indeed there may be more than one solution) the differences $\widetilde{\eta}=\eta-C_{1}, \widetilde{u}=u-C_{2}$ must satisfy $$\begin{aligned} \label{fsec313d} \begin{pmatrix} c_{s}(1-b\partial_{xx})-C_{2}&-(1+a\partial_{xx}) -C_{1}\\ -(1+c\partial_{xx})&c_{s}(1-d\partial_{xx})-C_{2}\\ \end{pmatrix}\begin{pmatrix} \widetilde{\eta} \\ \widetilde{u}\\ \end{pmatrix}=\begin{pmatrix} \widetilde{u}\widetilde{\eta}\\ \frac{\widetilde{u}^{2}}{2}\\ \end{pmatrix}.\end{aligned}$$ This strategy will be considered in the numerical generation of the profiles in (\[fsec313b\]): the system (\[fsec313d\]) will be discretized to compute approximations to the variables $\widetilde{\eta}, \widetilde{u}$ and, from them, to the variables $\eta=\widetilde{\eta}+C_{1}, u=\widetilde{u}+C_{2}$.
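Since (\[fsec313c1\]) expresses $C_{1}$ in terms of $C_{2}$, the constant states reduce to the real roots of a cubic, which can be computed with a standard polynomial solver. A minimal sketch (the function name is ours; the cubic coefficients are obtained by substituting (\[fsec313c1\]) into the first equation of the constant system):

```python
import numpy as np

def constant_states(cs, K1, K2):
    """Constant solutions (C1, C2) of the traveling-wave system:
    cs*C1 - C2 = C1*C2 + K1,  -C1 + cs*C2 = C2**2/2 + K2.
    C1 is eliminated via C1 = cs*C2 - C2**2/2 - K2, leaving a cubic in C2."""
    coeffs = [0.5, -1.5 * cs, cs**2 - 1.0 + K2, -(K1 + cs * K2)]
    sols = []
    for C2 in np.roots(coeffs):
        if abs(C2.imag) < 1e-12:            # keep real roots only
            C2 = C2.real
            sols.append((cs * C2 - C2**2 / 2.0 - K2, C2))
    return sols
```

Each returned pair satisfies both equations of the constant-solution system to roundoff, and a cubic always has at least one real root, so the list is nonempty.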
In order to generate numerically classical and generalized solitary wave solutions of (\[fsec311\]) the corresponding periodic boundary value problem of (\[fsec313\]) on a long enough interval $(-l,l)$ is discretized with a Fourier collocation method leading to a discrete system of the form $$\begin{aligned} \label{fsec314} \underbrace{ \begin{pmatrix} c_{s}(I_{m}-bD^{2})&-(I_{m}+aD^{2}) \\ -(I_{m}+cD^{2})&c_{s}(I_{m}-dD^{2})\\ \end{pmatrix}}_{L}\begin{pmatrix} \eta_{h} \\ u_{h}\\ \end{pmatrix} =\underbrace{\begin{pmatrix} u_{h}.\eta_{h} \\ \frac{u_{h}.^{2}}{2}\\ \end{pmatrix} }_{N(\eta_{h},u_{h})},\end{aligned}$$ where $\eta_{h}, u_{h}\in \mathbb{R}^{m}$ are approximations $\eta_{h,j}\approx \eta(x_{j}), u_{h,j}\approx u(x_{j})$ to the values of a solution of (\[fsec313\]) at the grid points $x_{j}=-l+jh, h=2l/m, j=0,\ldots,m-1$, $D$ is the pseudospectral differentiation matrix, [@Boyd; @Canutohqz], $I_{m}$ is the $m\times m$ identity matrix and the nonlinear term $N$, which is homogeneous of degree $p=2$, involves Hadamard products. In the case of periodic traveling waves and as was mentioned above, system (\[fsec314\]) will be substituted in the implementation by $$\begin{aligned} \label{fsec314b}\begin{pmatrix} c_{s}(I_{m}-bD^{2})-C_{2}I_{m}&-(I_{m}+aD^{2})-C_{1}I_{m} \\ -(I_{m}+cD^{2})&c_{s}(I_{m}-dD^{2})-C_{2}I_{m}\\ \end{pmatrix}\begin{pmatrix} \widetilde{\eta}_{h} \\ \widetilde{u}_{h}\\ \end{pmatrix} =\begin{pmatrix} \widetilde{u}_{h}.\widetilde{\eta}_{h} \\ \frac{\widetilde{u}_{h}.^{2}}{2}\\ \end{pmatrix} \end{aligned}$$ for the approximations $\widetilde{\eta}_{h}, \widetilde{u}_{h}$ to the $\widetilde{\eta}, \widetilde{u}$ variables at the grid points and where $C_{1}, C_{2}$ are previously known from the resolution of (\[fsec313c\]), (\[fsec313c1\]). Then $\eta_{h}=\widetilde{\eta}_{h}+C_{1}, u_{h}=\widetilde{u}_{h}+C_{2}$.
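The assembly of (\[fsec314\]) can be sketched as follows; building $D$ column-by-column through the FFT is one standard choice (the helper names are ours, not from a reference implementation):

```python
import numpy as np

def fourier_diff_matrix(m, l):
    """Pseudospectral differentiation matrix on x_j = -l + j*h, h = 2l/m."""
    ik = 1j * np.pi / l * np.fft.fftfreq(m, d=1.0 / m)  # i*k, k = p*pi/l
    ik[m // 2] = 0.0                 # zero the unmatched Nyquist mode (m even)
    return np.real(np.fft.ifft(ik[:, None] * np.fft.fft(np.eye(m), axis=0), axis=0))

def boussinesq_L(m, l, cs, a, b, c, d):
    """2m x 2m block matrix L of the collocation system (fsec314)."""
    D = fourier_diff_matrix(m, l)
    D2, I = D @ D, np.eye(m)
    return np.block([[cs * (I - b * D2), -(I + a * D2)],
                     [-(I + c * D2), cs * (I - d * D2)]])
```

As a quick sanity check, $D$ differentiates $\sin(\pi x/l)$ exactly on the grid, and $L$ applied to a constant $\eta$ reproduces the derivative-free terms only.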
The methods (\[mm2\]) along with the corresponding acceleration technique are then applied to the discrete systems (\[fsec314\]) and (\[fsec314b\]). The implementation is performed in the Fourier space; for example (\[fsec314\]) becomes $$\begin{aligned} && \begin{pmatrix} c_{s}(1+b\left(\frac{p\pi}{l}\right)^{2})&-(1-a\left(\frac{p\pi}{l}\right)^{2}) \\ -(1-c\left(\frac{p\pi}{l}\right)^{2})&c_{s}(1+d\left(\frac{p\pi}{l}\right)^{2})\\ \end{pmatrix}\begin{pmatrix} \left(\widehat{\eta_{h}}\right)_{p} \\ \left(\widehat{u_{h}}\right)_{p}\\ \end{pmatrix} =\begin{pmatrix} \left(\widehat{u_{h}.\eta_{h}}\right)_{p}\\ \frac{1}{2}\left(\widehat{u_{h}.u_{h}}\right)_{p}\\ \end{pmatrix},\\ &&-\frac{m}{2}\leq p\leq \frac{m}{2}.\end{aligned}$$ Thus the $2m\times 2m$ system (\[fsec314\]) is divided into $m$ blocks of $2\times 2$ systems for the corresponding $p$-th discrete Fourier coefficients $\left(\widehat{\eta_{h}}\right)_{p}, \left(\widehat{u_{h}}\right)_{p}, -m/2\leq p\leq m/2.$ (For simplicity, we assume that $m=2^{s}$ for some $s>1$.) 
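A sketch of this mode-by-mode inversion (our naming; the only ingredient is the closed-form solve of a $2\times 2$ system per Fourier coefficient):

```python
import numpy as np

def solve_linear_part(N1, N2, l, cs, a, b, c, d):
    """Solve L(eta, u) = (N1, N2) by 2x2 systems per Fourier mode, with
    symbol entries cs*(1 + b*k^2), -(1 - a*k^2), etc., where k = p*pi/l."""
    m = len(N1)
    k2 = (np.pi / l * np.fft.fftfreq(m, d=1.0 / m)) ** 2
    a11, a12 = cs * (1.0 + b * k2), -(1.0 - a * k2)
    a21, a22 = -(1.0 - c * k2), cs * (1.0 + d * k2)
    det = a11 * a22 - a12 * a21          # Delta(p) of the text
    f1, f2 = np.fft.fft(N1), np.fft.fft(N2)
    eta = np.real(np.fft.ifft((a22 * f1 - a12 * f2) / det))
    u = np.real(np.fft.ifft((a11 * f2 - a21 * f1) / det))
    return eta, u
```

The round trip "apply the symbol forward, then solve" recovers the original grid functions to machine precision, which exercises the $2\times 2$ inversion for every mode.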
Alternatively, (\[fsec314\]) can be written in the form $$\begin{aligned} \label{fsec315} \begin{pmatrix} \eta_{h} \\ u_{h}\\ \end{pmatrix}=T_{h}\begin{pmatrix} \eta_{h} \\ u_{h}\\ \end{pmatrix}=\begin{pmatrix} D_{h}\ast (u_{h}.\eta_{h})+A_{h}\ast \frac{u_{h}.^{2}}{2}\\ C_{h}\ast (u_{h}.\eta_{h})+B_{h}\ast \frac{u_{h}.^{2}}{2}\\ \end{pmatrix},\end{aligned}$$ where $\ast$ denotes periodic convolution and the vectors $A_{h}, B_{h}, C_{h}, D_{h}$ have discrete Fourier coefficients $$\begin{aligned} &&(\widehat{A_{h}})_{p}=\frac{1-a\left(\frac{p\pi}{l}\right)^{2}}{\Delta(p)},\quad (\widehat{B_{h}})_{p}=\frac{c_{s}(1+b\left(\frac{p\pi}{l}\right)^{2})}{\Delta(p)},\\ &&(\widehat{C_{h}})_{p}=\frac{1-c\left(\frac{p\pi}{l}\right)^{2}}{\Delta(p)},\quad (\widehat{D_{h}})_{p}=\frac{c_{s}(1+d\left(\frac{p\pi}{l}\right)^{2})}{\Delta(p)},\\ &&\Delta(p)=c_{s}^{2}(1+b\left(\frac{p\pi}{l}\right)^{2})(1+d\left(\frac{p\pi}{l}\right)^{2})-(1-a\left(\frac{p\pi}{l}\right)^{2})(1-c\left(\frac{p\pi}{l}\right)^{2}),\\ &&-\frac{m}{2}\leq p\leq \frac{m}{2}.\end{aligned}$$ In order to explain the behaviour of the iteration, the size of the eigenvalues of the iteration matrix will be relevant in the numerical study. In this case, the corresponding iteration matrix of the classical fixed point iteration at a solution $u^{*}=(\eta_{h}^{*},u_{h}^{*})$ has the form $$\begin{aligned} \label{fsec316} S=L^{-1}\begin{pmatrix} {\rm diag}(u_{h}^{*})&{\rm diag}(\eta_{h}^{*}) \\ 0&{\rm diag}(u_{h}^{*})\\ \end{pmatrix},\end{aligned}$$ (where ${\rm diag}(v)$ stands for the diagonal matrix with diagonal entries given by the components of $v\in\mathbb{R}^{m}$). Some information on the spectrum of $S$ is known. We already know that $\lambda=2$ is an eigenvalue, corresponding to the degree of homogeneity of the nonlinear part, with $u^{*}=(\eta_{h}^{*},u_{h}^{*})$ as an eigenvector.
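This eigenvalue follows from Euler's identity for the degree-two homogeneous nonlinearity, $N^{\prime}(v)v=2N(v)$, which gives $Su^{*}=L^{-1}N^{\prime}(u^{*})u^{*}=2L^{-1}N(u^{*})=2u^{*}$ at a fixed point. A small self-contained check of the identity (our code, with random data in place of an actual computed wave):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 16
eta, u = rng.standard_normal(m), rng.standard_normal(m)

def N(eta, u):
    """Nonlinear part of (fsec314): Hadamard products (u.eta, u.^2/2)."""
    return np.concatenate([u * eta, u**2 / 2.0])

# Jacobian N'(eta, u) in the block form appearing in (fsec316)
Nprime = np.block([[np.diag(u), np.diag(eta)],
                   [np.zeros((m, m)), np.diag(u)]])
```

With these definitions, `Nprime @ v` equals `2 * N(eta, u)` for `v = (eta, u)`, so at a solution ($Lu^{*}=N(u^{*})$) the vector $u^{*}$ is an eigenvector of $S$ with eigenvalue $2$.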
Also, the application of $D$ to (\[fsec314\]) (written below for the classical Boussinesq case $a=b=c=0$, $d=1/3$) leads to $$\begin{aligned} &&c_{s}D\eta_{h}^{*}-Du_{h}^{*}=D(\eta_{h}^{*}.u_{h}^{*})=u_{h}^{*}.D\eta_{h}^{*}+\eta_{h}^{*}.Du_{h}^{*},\\ &&-D\eta_{h}^{*}+c_{s}\left(I-\frac{1}{3}D^{2}\right)Du_{h}^{*}=D\left(\frac{u_{h}^{*}.^{2}}{2}\right)=u_{h}^{*}.Du_{h}^{*},\end{aligned}$$ which means that $\lambda=1$ is an eigenvalue of $S$ and $(D\eta_{h}^{*}, Du_{h}^{*})^{T}$ is an associated eigenvector. This corresponds to the translational invariance of (\[fsec311\]). Three particular systems of (\[fsec311\]) will be taken to illustrate the numerical generation of traveling waves. The first one is the classical Boussinesq system ($a=b=c=0, d=1/3$), [@Boussinesq; @Bona_Chen_Saut_1; @Bona_Chen_Saut_2] $$\begin{aligned} \eta_{t}+u_{x}+(\eta u)_{x}&=&0,\nonumber\\ u_{t}+\eta_{x}+uu_{x}-\frac{1}{3}u_{xxt}&=&0,\label{fsec317}\end{aligned}$$ which is known to have classical solitary wave solutions, [@pegow]. The second one is the so-called KdV-KdV system ($a=c=1/6, b=d=0$) $$\begin{aligned} \eta_{t}+u_{x}+(\eta u)_{x}+\frac{1}{6}u_{xxx}&=&0,\nonumber\\ u_{t}+\eta_{x}+uu_{x}+\frac{1}{6}\eta_{xxx}&=&0,\label{fsec318}\end{aligned}$$ that admits generalized solitary wave solutions, [@BonaDM2007; @BonaDM2008]. Finally, in order to illustrate the numerical generation of periodic traveling waves, the BBM-BBM system ($a=c=0, b=d=1/6$), $$\begin{aligned} \eta_{t}+u_{x}+(\eta u)_{x}-\frac{1}{6}\eta_{xxt}&=&0,\nonumber\\ u_{t}+\eta_{x}+uu_{x}-\frac{1}{6}u_{xxt}&=&0,\label{fsec319}\end{aligned}$$ will be taken, [@Chen_Chen_Nguyen].

### Numerical generation of classical solitary waves of (\[fsec317\])

In the case of system (\[fsec317\]) a first experiment of comparison of the acceleration techniques has been made by taking $c_{s}=1.3$ and a hyperbolic secant profile as initial iteration with $l=64$ and $m=1024$ collocation points.
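The whole experiment can be sketched in a few dozen lines (our code, with a coarser grid than the $m=1024$ of the text, an illustrative initial amplitude and width, and a fixed number of iterations instead of a residual-based stopping criterion; convergence from this particular guess is an assumption):

```python
import numpy as np

def petviashvili_cb(cs=1.3, l=64.0, m=512, gamma=2.0, niter=200):
    """Petviashvili iteration for solitary waves of the classical
    Boussinesq system (a=b=c=0, d=1/3), implemented in Fourier space."""
    x = -l + 2.0 * l / m * np.arange(m)
    k2 = (np.pi / l * np.fft.fftfreq(m, d=1.0 / m)) ** 2
    a11, a12, a21 = cs, -1.0, -1.0
    a22 = cs * (1.0 + k2 / 3.0)
    det = a11 * a22 - a12 * a21          # cs^2 (1 + k^2/3) - 1 > 0 for cs > 1
    eta = 0.5 / np.cosh(0.3 * x) ** 2    # hyperbolic secant-type initial iteration
    u = eta.copy()
    for _ in range(niter):
        e, w = np.fft.fft(eta), np.fft.fft(u)
        f1, f2 = np.fft.fft(u * eta), np.fft.fft(u * u / 2.0)
        # stabilizing factor s(v) = <Lv, v> / <N(v), v>, evaluated via Parseval
        num = (np.vdot(a11 * e + a12 * w, e) + np.vdot(a21 * e + a22 * w, w)).real
        den = (np.vdot(f1, e) + np.vdot(f2, w)).real
        s = num / den
        eta = np.real(np.fft.ifft(s**gamma * (a22 * f1 - a12 * f2) / det))
        u = np.real(np.fft.ifft(s**gamma * (a11 * f2 - a21 * f1) / det))
    res1 = cs * eta - u - u * eta        # residual of the first (derivative-free) equation
    return eta, u, s, np.linalg.norm(res1, np.inf)
```

At convergence the stabilizing factor tends to one and the residual of the discrete system falls to roundoff, as reported in the text for the $m=1024$ run.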
The Petviashvili method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ was first run, generating approximate $\eta$ and $u$ profiles as shown in Figures \[figuresw0\](a) and (b), while Figures \[figuresw0\](c) and (d) show the corresponding phase portraits of the approximate profiles in (a) and (b). (They show the classical character of the solitary waves, represented as homoclinic to zero orbits with exponential decay, [@pegow].) The accuracy of the iteration is checked in Figure \[figuresw1\]. Figure \[figuresw1\](a) illustrates the convergence of the sequence $s_{n}=s(\eta_{n},u_{n})$ of stabilizing factors, computed with the corresponding formula (\[mm3c\]) and the optimal value $\gamma=2$. The discrepancy (\[fsec33\]) is below the tolerance $TOL=10^{-13}$ in $n=62$ iterations, while the first residual error below $TOL$ is $9.092489E-14$ at $n=76$. (This also happens in the rest of the experiments: when the procedure is convergent, the error $|1-s_{n}|$ achieves the tolerance before the residual error; therefore, the control on the residual is the more demanding test and will be adopted as the main criterion to stop the iteration.) Convergence is also confirmed by Table \[tav1b\], which shows the six largest magnitude eigenvalues of the iteration matrix (\[fsec316\]) (first column) and of the iteration matrix of the [Petviashvili ]{}procedure (second column), both evaluated at the last computed iterate $(\eta_{f},u_{f})$. The first column reveals the dominant eigenvalues $\lambda_{1}=2, \lambda_{2}=1$, both simple, while the remaining eigenvalues lie below one. The filtering effect of the [Petviashvili ]{}method, [@alvarezd], is observed in the second column; the dominant eigenvalue is filtered to zero (recall that $\gamma=2$) and the rest of the spectrum is preserved. Since $\lambda_{2}=1$ corresponds to the translational symmetry of (\[fsec311\]), this guarantees the local convergence of the method (also in the orbital sense mentioned above).
  Iteration matrix $S(\eta_{f},u_{f})$   Iteration matrix $F^{\prime}(\eta_{f},u_{f})$
  -------------------------------------- -----------------------------------------------
  $1.999999E+00$                         $9.999999E-01$
  $9.999999E-01$                         $6.763242E-01$
  $6.763242E-01$                         $5.411229E-01$
  $5.411229E-01$                         $4.820667E-01$
  $4.820667E-01$                         $4.567337E-01$
  $4.567337E-01$                         $4.465122E-01$

  : Classical solitary wave generation of (\[fsec317\]). Six largest magnitude eigenvalues of the approximated iteration matrix $S=L^{-1}N^{\prime}(\eta_{f},u_{f})$ (first column) and of the iteration matrix $F^{\prime}(\eta_{f},u_{f})$, generated by the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$, both evaluated at the last computed iterate $(\eta_{f},u_{f})$.[]{data-label="tav1b"}

The improvement of the performance of the [Petviashvili ]{}method with several acceleration techniques is now computationally analyzed. A first point to study is the choice of the parameters $\kappa$ (for the VEM) and $nw$ (for the AAM). Table \[tav2b\] shows, for values of $\kappa$ between one and ten, the number of iterations required by MPE, RRE, VEA and TEA to achieve a residual error below $TOL=10^{-13}$. (The residual error corresponding to the last iteration is shown in parentheses for each computation.)
From these results, the following comments can be made:

  $\kappa$   MPE($\kappa$)               RRE($\kappa$)               VEA($\kappa$)               TEA($\kappa$)
  ---------- --------------------------- --------------------------- --------------------------- ---------------------------
  $1$        $269$ ($8.9136E-14$)        $99$ ($9.1312E-14$)         $631$ ($9.7920E-14$)        $408$ ($7.3356E-14$)
  $2$        $64$ ($7.4794E-14$)         $48$ ($8.3903E-14$)         $43$ ($7.7841E-14$)         $43$ ($8.1979E-14$)
  $3$        $43$ ($7.5001E-14$)         $43$ ($8.0682E-14$)         $38$ ($7.7395E-14$)         $42$ ($9.6255E-14$)
  $4$        $33$ ($7.9601E-14$)         $33$ ($8.2823E-14$)         $33$ ($7.7824E-14$)         $37$ ($8.0527E-14$)
  $5$        $28$ ($8.5285E-14$)         $28$ ($9.8557E-14$)         $31$ ($8.7444E-14$)         $39$ ($9.3163E-14$)
  $6$        $26$ ($8.3589E-14$)         $26$ ($7.7189E-14$)         $29$ ($7.3281E-14$)         ${\bf 35}$ ($7.9462E-14$)
  $7$        $27$ ($8.6403E-14$)         $27$ ($8.1151E-14$)         $33$ ($7.2215E-14$)         $35$ ($8.4479E-14$)
  $8$        $27$ ($9.7379E-14$)         $25$ ($8.0842E-14$)         $29$ ($7.3955E-14$)         $37$ ($7.4142E-14$)
  $9$        ${\bf 23}$ ($9.5276E-14$)   ${\bf 24}$ ($8.3798E-14$)   $30$ ($8.7013E-14$)         $41$ ($7.2068E-14$)
  $10$       $25$ ($7.6433E-14$)         $25$ ($8.3980E-14$)         ${\bf 27}$ ($9.3658E-14$)   $35$ ($8.5795E-14$)

  : Classical solitary wave generation of (\[fsec317\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=76$ iterations with a residual error $9.0925E-14$.[]{data-label="tav2b"}

1. For $\kappa\geq 2$, all the methods improve the performance of the [Petviashvili ]{}method without acceleration (cf. Figure \[figuresw1\](b)). The reduction in the number of iterations varies in a range $50-70\%$.

2. In general, polynomial methods (MPE and RRE, which essentially behave in an equivalent way) are more efficient than $\epsilon$-algorithms (with VEA slightly better than TEA).
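A minimal sketch of one MPE extrapolation cycle, tested on a hypothetical linear fixed-point problem (for a linear iteration in $\mathbb{R}^{n}$, a cycle with $\kappa=n$ recovers the fixed point exactly, which makes the routine easy to verify; the function name and the model problem are ours):

```python
import numpy as np

def mpe_extrapolate(X):
    """Minimal polynomial extrapolation from iterates X = [x_0, ..., x_{k+1}]
    (as columns): solve U c ~ -u_k in least squares for the first differences
    u_j = x_{j+1} - x_j, set c_k = 1, normalize, and average the iterates."""
    U = np.diff(X, axis=1)                              # u_0, ..., u_k
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    c = np.append(c, 1.0)                               # c_k = 1
    gamma = c / c.sum()                                 # affine weights
    return X[:, :-1] @ gamma                            # sum_j gamma_j x_j

# Usage sketch: accelerate x_{n+1} = A x_n + b (hypothetical linear model)
rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((6, 6)) / 3.0
b = rng.standard_normal(6)
xs = [np.zeros(6)]
for _ in range(7):                                      # kappa + 1 = 7 steps
    xs.append(A @ xs[-1] + b)
x_mpe = mpe_extrapolate(np.column_stack(xs))
```

One such cycle uses $\kappa+1$ fixed-point evaluations, which is the cycle width $mw=\kappa+1$ quoted in the comparison with the $\epsilon$-algorithms.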
In the best cases, the improvement is about $70\%$ in the case of MPE and RRE, about $65\%$ with respect to VEA and about $54\%$ in the case of TEA. (However, one has to take into account that the cycle in the case of polynomial methods is $mw=\kappa+1$ and in the case of $\epsilon$-algorithms is $mw=2\kappa$; cf. Figure \[figuresw2\].) In the case of the AAM, the corresponding results are in Table \[tav3b\]. Now, the role of the parameter $\kappa$ (or $mw$) is played by $nw$.

  $nw$   AA-I($nw$)            AA-II($nw$)
  ------ --------------------- ---------------------
  $1$    $38$ ($8.4014E-14$)   $35$ ($4.8504E-14$)
  $2$    $28$ ($4.9835E-14$)   $26$ ($5.6978E-14$)
  $3$    $28$ ($5.5678E-14$)   $25$ ($6.2897E-14$)
  $4$    $27$ ($3.8773E-14$)   $22$ ($1.4624E-14$)
  $5$    $22$ ($4.4004E-14$)   $20$ ($6.5530E-14$)
  $6$    $21$ ($7.9925E-14$)   $21$ ($2.4615E-14$)
  $7$    $21$ ($2.3111E-14$)   $20$ ($5.3227E-14$)
  $8$    $20$ ($8.0666E-14$)   $20$ ($2.7701E-14$)
  $9$    $20$ ($4.5873E-14$)   $19$ ($9.6556E-14$)
  $10$   $20$ ($2.7208E-14$)   $19$ ($6.8255E-14$)

  : Classical solitary wave generation of (\[fsec317\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=76$ iterations with a residual error $9.0925E-14$.[]{data-label="tav3b"}

The results show that the performance of the methods is essentially the same. The best results are obtained with $nw=8$ in the case of AA-I and $nw=9$ for the AA-II. On the other hand, as mentioned in [@ni; @walkern], the value of $nw$ cannot be too large, because of ill-conditioning. In this example, this was observed for AA-I when $nw=9, 10$. (The corresponding results in Table \[tav3b\] were obtained by using standard preconditioning.) Finally, compared to the [Petviashvili ]{}method without acceleration, the reduction in the number of iterations is in the range of $50-80\%$.
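A sketch of Anderson acceleration of type II with window $nw$ (our implementation of the standard scheme, written for a generic fixed-point map $g$; the constrained least-squares problem for the weights is reduced to an unconstrained one in the usual way):

```python
import numpy as np

def anderson_type2(g, x0, nw=3, niter=50):
    """Anderson acceleration (type II) for x = g(x): the new iterate is
    sum_i alpha_i g(x_i) over the last nw+1 iterates, where the alpha_i
    (summing to one) minimize || sum_i alpha_i f_i ||, f_i = g(x_i) - x_i."""
    X, G = [x0], [g(x0)]
    for _ in range(niter):
        F = np.column_stack([gi - xi for xi, gi in zip(X, G)])  # residuals f_i
        if F.shape[1] == 1:
            alpha = np.array([1.0])
        else:
            # write alpha = (1 - sum(theta), theta) to remove the constraint
            dF = F[:, 1:] - F[:, :1]
            theta, *_ = np.linalg.lstsq(dF, -F[:, 0], rcond=None)
            alpha = np.concatenate([[1.0 - theta.sum()], theta])
        x_new = np.column_stack(G) @ alpha
        X.append(x_new)
        G.append(g(x_new))
        X, G = X[-(nw + 1):], G[-(nw + 1):]                     # keep the window
    return X[-1]
```

On a contractive linear model problem the window version converges at least as fast as the plain iteration; the growth of `nw` enlarges the least-squares problem, which is where the ill-conditioning mentioned above can enter.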
Since the implementation of the methods is different, a comparison between VEM and AAM should take into account several efficiency indicators. In our example, we have measured the performance by computing the residual error as a function of the number of iterations (i.e. comparing the best results of Tables \[tav2b\] and \[tav3b\]) and as a function of the computational time. The comparison of the methods in terms of the number of iterations is illustrated in Figure \[figuresw2\](a). This shows, in semilogarithmic scale, the residual error as a function of the number of iterations for the [Petviashvili ]{}method without acceleration (solid line) and accelerated with the six selected techniques, implemented with the values of $\kappa$ and $nw$ that, according to Tables \[tav2b\] and \[tav3b\], lead to the best number of iterations. For this example, AA-I(8) and AA-II(9) give, for a tolerance of $TOL=10^{-13}$ in the residual error, a slightly smaller number of iterations than the (mostly equivalent) RRE(9) and MPE(9). The initially worse performance of VEA(10) and TEA(6) is corrected after the first cycle. For example, in the case of VEA(10), after this first cycle, Figure \[figuresw2\](a) shows that the reduction in the residual error is the fastest. A second comment concerns the computational efficiency. Figure \[figuresw2\](b) shows (again in semi-log scale) the residual error as a function of the CPU time in seconds for the four VEM. According to this, VEA(10) is the most efficient, followed by MPE(9), TEA(6) and RRE(9). The comparison in CPU time of this last one (the worst one among the VEM in efficiency) with AA-I(8) and AA-II(9) is shown in Figure \[figuresw2\](c) and reveals the poor performance in computational time as the main drawback of the AAM for this case. (See the formulation and implementation described in Section \[se2\] for a possible explanation.)
### Numerical generation of generalized solitary waves of (\[fsec318\])

Here we show the results concerning the generation of approximate generalized solitary waves of the KdV-KdV system (\[fsec318\]). In this case we have considered a speed $c_{s}=1.3$ and a Gaussian-type profile as initial guess for $\eta$ and $u$, with $l=64$ and $m=1024$ Fourier collocation points. The approximate $\eta$ and $u$ profiles generated by the [Petviashvili ]{}method (without acceleration) are displayed in Figures \[figuresw3\](a) and (b) respectively (observe the oscillatory ripples to the left and right of the main pulse), and the performance of the method (measured in terms of the convergence of the stabilizing factor and the behaviour of the residual error as a function of the number of iterations) is shown in Figures \[figuresw3\](c) and (d) respectively. The method achieves a residual error of $1.150546E-12$ in $n=47$ iterations and $7.859422E-14$ in $n=52$ iterations. In this case, the corresponding phase portraits in Figures \[figuresw3a\](a) and (b) show the generalized character of the waves, with orbits that are homoclinic to small amplitude periodic oscillations at infinity. The last computed iterate, corresponding to this residual error, is used to evaluate the iteration matrices $S(\eta_{f},u_{f})$ and $F^{\prime}(\eta_{f},u_{f})$ of the classical fixed point and [Petviashvili ]{}method, respectively. The associated six largest magnitude eigenvalues are shown in Table \[tav4b\]. (The generalized character of the computed solitary wave is also reflected in the presence of conjugate complex eigenvalues in the linearization matrix at the wave, cf. Table \[tav1b\].)
  Iteration matrix $S(\eta_{f},u_{f})$   Iteration matrix $F^{\prime}(\eta_{f},u_{f})$
  -------------------------------------- -----------------------------------------------
  $1.999999E+00$                         $1.000000E+00$
  $1.000000E+00$                         $5.625613E-01$
  $5.625613E-01$                         $-3.525656E-01$
  $-3.525656E-01$                        $-3.521308E-01$
  $-3.521308E-01$                        $3.069304E-01+i 6.906434E-02$
  $3.069304E-01+i 6.906434E-02$          $3.069304E-01-i 6.906434E-02$

  : Generalized solitary wave generation of (\[fsec318\]). Six largest magnitude eigenvalues of the approximated iteration matrix $S=L^{-1}N^{\prime}(\eta_{f},u_{f})$ (first column) and of the iteration matrix $F^{\prime}(\eta_{f},u_{f})$, generated by the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$, both evaluated at the last computed iterate $(\eta_{f},u_{f})$.[]{data-label="tav4b"}

The performance of the acceleration techniques is first checked in Tables \[tav5b\] and \[tav6b\], the analogues of Tables \[tav2b\] and \[tav3b\], respectively, for the generalized solitary wave generation. The conclusions are the same as those of the generation of approximate classical solitary wave profiles for system (\[fsec317\]): in terms of the number of iterations, AAM give the best performance and, amongst the VEM, the extrapolation methods MPE and RRE are (in this case slightly) more efficient than the vector $\epsilon$-algorithms VEA and TEA, see Figure \[figuresw4\](a). The ranking is the opposite when the residual error is measured in terms of the computational time. Figure \[figuresw4\](b) shows that VEA is the fastest and MPE the slowest. Even so, MPE is still much faster than any of the Anderson algorithms, as shown in Figure \[figuresw4\](c).
$\kappa$ MPE($\kappa$) RRE($\kappa$) VEA($\kappa$) TEA($\kappa$) ---------- ---------------- ---------------- ---------------- ---------------- $1$ $93$ $64$ $283$ $67$ ($8.2739E-14$) ($9.4330E-14$) ($6.7393E-14$) ($6.5919E-14$) $2$ $42$ $43$ $81$ $49$ ($9.3934E-14$) ($8.6837E-14$) ($6.1237E-14$) ($8.7121E-14$) $3$ $37$ $37$ $38$ $37$ ($9.8740E-14$) ($5.6549E-14$) ($6.7020E-14$) ($7.5320E-14$) $4$ $33$ $32$ $39$ $30$ ($7.5531E-14$) ($6.9363E-14$) ($3.6501E-14$) ($4.2863E-14$) $5$ $28$ $28$ $32$ $30$ ($6.6301E-14$) ($7.0496E-14$) ($5.9777E-14$) ($8.5244E-14$) $6$ $28$ $28$ $29$ $28$ ($7.8419E-14$) ($7.4939E-14$) ($3.2861E-14$) ($7.7335E-14$) $7$ $27$ $27$ $32$ $32$ ($3.0803E-14$) ($3.2406E-14$) ($6.5803E-14$) ($4.9439E-14$) $8$ $23$ $23$ $28$ $29$ ($9.3872E-14$) ($8.8601E-14$) ($8.0707E-14$) ($9.1792E-14$) $9$ $23$ $23$ $26$ $27$ ($3.0380E-14$) ($3.1758E-14$) ($6.4506E-14$) ($8.0379E-14$) $10$ $24$ $24$ $24$ $28$ ($5.8031E-14$) ($5.0834E-14$) ($7.1291E-14$) ($5.8663E-14$) : Generalized solitary wave generation of (\[fsec318\]) . Number of iterations required by MPE, RRE, VEA and TEA as function of $\kappa$ to achieve a residual error below $TOL=10^{-13}$. The residual error at the last computed iterate is in parenthesis. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=52$ iterations with a residual error $7.8594E-14$.[]{data-label="tav5b"} $nw$ AA-I($nw$) AA-II($nw$) ------ -------------------- -------------------- $1$ $28$($7.6888E-14$) $30$($3.6918E-14$) $2$ $21$($3.2119E-14$) $21$($3.4864E-14$) $3$ $19$($4.9065E-14$) $20$($4.6961E-14$) $4$ $19$($2.5310E-14$) $18$($1.9550E-14$) $5$ $18$($5.9489E-14$) $18$($4.2896E-14$) $6$ $18$($2.0285E-14$) $18$($1.1674E-14$) $7$ $17$($5.1055E-14$) $17$($5.5586E-14$) $8$ $17$($2.8378E-14$) $17$($1.9837E-14$) $9$ $17$($7.2231E-14$) $16$($7.1040E-14$) $10$ $17$($2.8512E-14$) $16$($5.7507E-14$) : Generalized solitary wave generation of (\[fsec318\]) . 
Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=52$ iterations with a residual error $7.8594E-14$.[]{data-label="tav6b"}

### Numerical generation of periodic traveling waves of (\[fsec319\])

The numerical generation of periodic traveling wave solutions of the BBM-BBM system (\[fsec319\]) completes the study of traveling wave generation for Boussinesq systems (\[fsec311\]). Here the initial data are similar to those of the previous cases, although now $l=16$ is taken. Once system (\[fsec313c\]) is solved, the application of the [Petviashvili ]{}type method to (\[fsec314b\]) generates, for $K_{1}=0.75, K_{2}=1$ (taken as an example), the computed profiles shown in Figure \[Fig\_ptw1\](a)-(b). The periodic behaviour is also observed in the corresponding phase plots, shown in Figure \[Fig\_ptw1\](c), (d), while the performance is illustrated in Figure \[Fig\_ptw2\], which corresponds to the behaviour of the residual error as a function of the number of iterations. The method attains a residual error of $9.335366E-12$ in $n=572$ iterations, showing the need for some acceleration technique.

![Numerical generation of periodic traveling waves of (\[fsec319\]). Approximate profiles for $K_{1}=0.75, K_{2}=1$. Residual error (\[fsec32\]) vs number of iterations.[]{data-label="Fig_ptw2"}](bbmbbm_ptw3.eps){width="9.6cm"}

This slow performance is explained by the corresponding table of eigenvalues of the linearization operators, Table \[tavptw1\] in this case.
  Iteration matrix $S(\eta_{f},u_{f})$   Iteration matrix $F^{\prime}(\eta_{f},u_{f})$
  -------------------------------------- -----------------------------------------------
  $2.000000E+00$                         $1.000000E+00$
  $1.000000E+00$                         $-9.545242E-01$
  $-9.545242E-01$                        $-5.353103E-01-6.459204E-01i$
  $-5.353103E-01-6.459204E-01i$          $-5.353103E-01-6.459204E-01i$
  $-5.353103E-01-6.459204E-01i$          $-5.353103E-01+6.459204E-01i$
  $-5.353103E-01+6.459204E-01i$          $-5.353103E-01+6.459204E-01i$

  : Periodic traveling wave generation of (\[fsec319\]) with $K_{1}=0.75, K_{2}=1$. Six largest magnitude eigenvalues of the approximated iteration matrix $S=L^{-1}N^{\prime}(\eta_{f},u_{f})$ (first column) and of the iteration matrix $F^{\prime}(\eta_{f},u_{f})$, generated by the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$, both evaluated at the last computed iterate $(\eta_{f},u_{f})$.[]{data-label="tavptw1"}

We observe that, besides the eigenvalue one (associated with the translational invariance), the next largest magnitude eigenvalue is close to one. (As in the generalized solitary wave case, conjugate complex eigenvalues, here with algebraic multiplicity above one, appear in the spectrum of the linearization matrices.) We now evaluate the application of VEM, taking this example as illustration. The standard performance comparison is given in Table \[tavptw2\]. In this case the tolerance for the residual error was set to $TOL=10^{-11}$.
$\kappa$ MPE($\kappa$) RRE($\kappa$) VEA($\kappa$) TEA($\kappa$) ---------- ---------------- ---------------- ---------------- ---------------- $1$ $118$ $278$ $88$ ($8.9607E-12$) ($7.1073E-12$) ($4.9649E-12$) $2$ $70$ $81$ $81$ $81$ ($9.2990E-12$) ($6.8950E-12$) ($6.8161E-12$) ($6.8798E-12$) $3$ $54$ $53$ $78$ $64$ ($9.4403E-12$) ($7.2397E-12$) ($4.2025E-12$) ($9.4892E-12$) $4$ $47$ $52$ $55$ $55$ ($7.0369E-12$) ($5.3720E-12$) ($4.0063E-12$) ($7.4587E-12$) $5$ $55$ $46$ $48$ $103$ ($4.0773E-12$) ($8.2635E-12$) ($9.7571E-12$) ($6.2560E-12$) $6$ $46$ $44$ $53$ $79$ ($9.9878E-12$) ($5.3034E-12$) ($5.7135E-12$) ($5.4689E-12$) $7$ $45$ $49$ $51$ $73$ ($7.4645E-12$) ($3.6062E-12$) ($9.3703E-12$) ($8.3118E-12$) $8$ $45$ $46$ $53$ $69$ ($9.1401E-12$) ($4.3655E-12$) ($9.0789E-12$) ($4.4378E-12$) $9$ $41$ $41$ $58$ $77$ ($9.9555E-12$) ($4.7100E-12$) ($9.8141E-12$) ($8.4583E-12$) $10$ $45$ $45$ $64$ $64$ ($3.9218E-12$) ($4.4096E-12$) ($6.2339E-12$) ($5.0514E-12$)

  : Periodic traveling wave generation of (\[fsec319\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-11}$. The residual error at the last computed iterate is in parentheses. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=572$ iterations with a residual error $9.3354E-12$.[]{data-label="tavptw2"}

Some conclusions from it are the following:

1. Better performance of polynomial methods compared to $\epsilon$-algorithms.

2. MPE and RRE are virtually equivalent, especially when $\kappa$ grows. There are more differences between VEA and TEA, but they decrease when $\kappa$ grows.

3. For polynomial methods, the best results are obtained for large $\kappa$ (around $\kappa=9$) while for $\epsilon$-algorithms, it is better to take small $\kappa$ (around $\kappa=4, 5$). This implies a similar length of each cycle (width of extrapolation).
We now analyze the results corresponding to AAM by using Table \[tavptw3\], which evaluates the performance of AA-I and AA-II for the same example.

  $nw$   AA-I($nw$)            AA-II($nw$)
  ------ --------------------- ---------------------
  $1$    $57$ ($2.2757E-12$)   $66$ ($8.3451E-12$)
  $2$    $84$ ($4.7115E-12$)   $40$ ($6.4724E-12$)
  $3$    $81$ ($6.2384E-12$)   $39$ ($3.1769E-12$)
  $4$    $81$ ($1.4968E-12$)   $36$ ($3.2197E-12$)
  $5$    $48$ ($9.8634E-12$)   $36$ ($4.5059E-12$)
  $6$    $48$ ($3.2036E-12$)   $49$ ($3.7796E-12$)
  $7$    Ill-conditioned       $38$ ($2.4939E-12$)
  $8$    Ill-conditioned       $36$ ($7.2839E-12$)
  $9$    Ill-conditioned       $35$ ($8.3908E-12$)
  $10$   Ill-conditioned       $37$ ($2.6179E-12$)

  : Periodic traveling wave generation of (\[fsec319\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-11}$. The residual error at the last computed iterate is in parentheses. Without acceleration, the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]) with $\gamma=2$ requires $n=572$ iterations with a residual error $9.3354E-12$.[]{data-label="tavptw3"}

Some conclusions from Table \[tavptw3\]:

1. As in some previous cases, the AAM (particularly AA-II) behave better than any VEM when measuring the performance in terms of the number of iterations. However, the polynomial methods MPE and RRE are more efficient in terms of the computational time, see Figure \[Fig\_ptw3\].

2. The best results of AA-II are obtained with large values of $nw$. The method does not appear to be affected by ill-conditioning, contrary to AA-I, which becomes useless for $nw\geq 7$.

Example 2. Localized ground state generation
--------------------------------------------

A second group of experiments illustrates the generation of localized ground states in nonlinear Schrödinger (NLS) type models with potentials.
In particular, the equation $$\begin{aligned} \label{doub_well11} iu_{t}+\partial_{xx} u+V(x)u+|u|^{2}u=0, \end{aligned}$$ with potential $V(x)=6{\rm sech}^{2}(x)$, is considered as an example, [@lakobay; @yang2]. A localized ground state solution of (\[doub\_well11\]) has the form $u(x,t)=e^{i\mu t}U(x)$, where $\mu\in \mathbb{R}$ and $U(x)$ is assumed to be real and localized ($U\rightarrow 0,\; |x|\rightarrow\infty$). Substitution into (\[doub\_well11\]) leads to $$\begin{aligned} \label{doub_well12} U^{\prime\prime}(x)+V(x)U(x)-\mu U(x)+|U(x)|^{2}U(x)=0.\end{aligned}$$ A discretization of (\[doub\_well12\]) based on a Fourier collocation method on a sufficiently long interval $(-l,l)$ requires in this case the resolution of a system of the form (\[mm1\]) for the approximations $U_{h}$ of $U$ at the grid points $x_{j}=-l+jh, h=2l/m, j=0,\ldots,m-1$, with $$\begin{aligned} L=D^{2}+{\rm diag}(V)-\mu I_{m},\quad N(U_{h})=-U_{h}.^{3},\label{s311}\end{aligned}$$ where $D$ is the pseudospectral differentiation matrix, ${\rm diag}(V)$ is the diagonal matrix with elements $V_{j}=V(x_{j}), j=0,\ldots,m-1$ and $I_{m}$ is the $m\times m$ identity matrix. The nonlinearity $N$ is homogeneous with degree three, where, as usual, the dot stands for the Hadamard product. The discussion below is focused on the ground state numerical generation for several values of $\mu$, which provide different challenges to the iteration. For each considered value of $\mu$, the performance of both families of acceleration techniques has been checked. The first results concern the numerical generation of an asymmetric solution of (\[doub\_well12\]) for $\mu=1.3$ (Figure \[Fig321\](a)). Figure \[Fig322\] compares the performance of the acceleration techniques in terms of the number of iterations required to reduce the residual error (\[fsec32\]) below $TOL=10^{-12}$ and as function of the extrapolation width parameters $\kappa$ and $nw$. 
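The resulting iteration can be sketched as follows (the grid parameters and the odd initial guess are our choices; for the degree $p=3$ the exponent $\gamma=p/(p-1)=3/2$ is used; a sketch, not the reference implementation):

```python
import numpy as np

def nls_ground_state(mu=1.3, l=16.0, m=256, niter=300):
    """Petviashvili iteration for (doub_well12): L U = N(U) with
    L = D^2 + diag(V) - mu*I, N(U) = -U.^3, V(x) = 6 sech^2(x)."""
    h = 2.0 * l / m
    x = -l + h * np.arange(m)
    k = np.pi / l * np.fft.fftfreq(m, d=1.0 / m)
    F = np.fft.fft(np.eye(m), axis=0)
    D2 = np.real(np.fft.ifft(-(k**2)[:, None] * F, axis=0))  # dense spectral D^2
    L = D2 + np.diag(6.0 / np.cosh(x) ** 2) - mu * np.eye(m)
    U = np.tanh(x) / np.cosh(x)        # odd initial guess (our assumption)
    for _ in range(niter):
        NU = -U**3
        s = (L @ U) @ U / (NU @ U)     # stabilizing factor s = <LU,U>/<N(U),U>
        U = s**1.5 * np.linalg.solve(L, NU)
    res = np.linalg.norm(L @ U + U**3, np.inf)
    return x, U, s, res
```

For $\mu=1.3$ the text reports that the plain iteration converges (the spectral radius of the linearization is below one), so the stabilizing factor tends to one and the residual of the discrete system falls to roundoff.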
In the case of VEM, Figure \[Fig322\](a), all the techniques considered are comparable and the differences are not very large; MPE with $\kappa=7$ gives the minimum number of iterations. (The values $\kappa=8, 9$ also lead to the same number of iterations, but the computational effort in CPU time is higher.) The AAM, Figure \[Fig322\](b), are competitive with VEM for small values of $nw$ ($nw=1,2$). As $nw$ grows, the number of iterations increases (contrary to the behaviour of VEM with respect to $\kappa$) and the computation of the coefficients in the minimization problem becomes ill-conditioned. The comparison between the most efficient method of each family (MPE(7) and AA-II(2), respectively) is displayed in Figure \[Fig323\]. It shows the residual error as a function of the number of iterations (a) and the CPU time (b) for the [Petviashvili ]{}method without acceleration (solid line) and accelerated with MPE(7) and AA-II(2). In both figures, the improvement in performance with respect to the [Petviashvili ]{}method provided by the two acceleration techniques is observed, with the best results corresponding to MPE (and, in general, to VEM over AAM).

   $\mu=1.3$       $\mu=1.3$                   $\mu=3.3$       $\mu=3.3$
  --------------- --------------------------- --------------- ---------------------------
   eigs$(S)$       eigs$(F^{\prime}(u^{*}))$   eigs$(S)$       eigs$(F^{\prime}(u^{*}))$
   2.999999E+00    2.886842E-01                -6.328271E+00   -6.328271E+00
   2.886842E-01    -1.858331E-01               3.000000E+00    8.594730E-01
   -1.858331E-01   1.419117E-01                8.594730E-01    5.552068E-01
   1.419117E-01    7.522396E-02                5.552068E-01    2.978699E-01
   7.522396E-02    5.527593E-02                2.978699E-01    2.360730E-01
   5.527593E-02    3.629863E-02                2.360730E-01    1.552434E-01

: Numerical generation of asymmetric profile of (\[doub\_well12\]) with $\mu=1.3$ and $\mu=3.3$.
Six largest magnitude eigenvalues of the approximated iteration matrix of the classical fixed-point method $S=L^{-1}N^{\prime}(U_{f})$ (left) and of the [Petviashvili ]{}method, evaluated at the last computed iterate $U_{f}$ obtained with MPE($7$).[]{data-label="tav1"}

Table \[tav1\] confirms the convergence of the [Petviashvili ]{}method. It displays the six largest magnitude eigenvalues of the corresponding iteration matrix of the classical fixed-point algorithm $S=L^{-1}N^{\prime}(U_{f})$, and of the [Petviashvili ]{}method (\[iterop\]), (\[mm3c\]), for two values of $\mu$. Since an analytical expression for the exact profile is not known, the matrices have been evaluated at the last computed iterate given by MPE(7). In the case of $S$ (first column), the dominant eigenvalue corresponds to the degree of homogeneity $p=3$, with the rest of the eigenvalues below one. The filtering action of the stabilizing factor is observed in the second column. The degree $p=3$ has been substituted by zero (the optimal $q=\gamma (1-p)=-p$ has been taken) and the rest of the spectrum is preserved. This implies that for $\mu=1.3$ the spectral radius of $F^{\prime}(u^{*})$ is below one (second column), and this leads to the (local) convergence of the method. For other values of $\mu$, some differences are observed. When $\mu=3.3$ the numerical generation of an asymmetric solution of (\[doub\_well12\]) (see Figure \[Fig321\](b)) with the [Petviashvili ]{}method without acceleration is not possible in general. Table \[tav1\] (third column) shows the presence of an additional eigenvalue with magnitude above one in the iteration matrix $S$ of the classical fixed-point algorithm. Being part of the spectrum other than the degree of homogeneity $p=3$, this eigenvalue also appears in the spectrum of the iteration matrix of the [Petviashvili ]{}method (fourth column), thus making the convergence fail. Here the use of the acceleration techniques corrects this behaviour, leading to convergence.
(Both iteration matrices are in fact evaluated at the approximate profile displayed in Figure \[Fig321\](b).) In this case (see Figure \[Fig324\](a)) MPE and RRE have virtually the same performance, while the $\epsilon$-algorithms start to lose efficiency. (In this case, TEA does not always work in a reliable way and is not competitive against the other VEM.) As far as the AAM are concerned, Figure \[Fig324\](b), both improve the performance in a similar, relevant way. They are comparable with VEM in number of iterations (Figure \[Fig325\](a)) and behave better when measuring the computational time (Figure \[Fig325\](b)). The case $\mu=6.3$ is worth considering for several reasons. The first one is the generation of the asymmetric profile, Figure \[Fig321\](c), which in general is not possible with the [Petviashvili ]{}method without acceleration. The situation is similar to that of the previous case $\mu=3.3$ and is shown in Table \[tav2\] (first and second columns). In this case, the best results of the acceleration are given by MPE and AA-I (Figure \[Fig326\]). The loss of performance of the $\epsilon$-algorithms and the improvement of AAM, observed in the previous experiments, are confirmed here and in the experiments for $\mu=8.3$ (Figures \[Fig328\] and \[Fig329\]). The comparison between MPE and AA-I, see Figures \[Fig327\](a), (b), reveals, in the authors’ opinion, a similar performance.
   $\mu=6.3$      $\mu=6.3$                   $\mu=8.3$      $\mu=8.3$
  -------------- --------------------------- -------------- ---------------------------
   eigs$(S)$      eigs$(F^{\prime}(u^{*}))$   eigs$(S)$      eigs$(F^{\prime}(u^{*}))$
   5.095370E+00   5.096207E+00                3.962824E+00   3.962824E+00
   3.000000E+00   9.672929E-01                2.999999E+00   9.807797E-01
   9.672929E-01   7.506018E-01                9.807797E-01   8.081404E-01
   7.506018E-01   4.078905E-01                8.081404E-01   4.459845E-01
   4.078905E-01   3.472429E-01                4.459845E-01   4.030040E-01
   3.472429E-01   2.032986E-01                4.030040E-01   1.929797E-01

: Numerical generation of asymmetric profile of (\[doub\_well12\]) with $\mu=6.3$ and $\mu=8.3$. Six largest magnitude eigenvalues of the approximated iteration matrix of the classical fixed-point method $S=L^{-1}N^{\prime}(U_{f})$ (left) and of the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]), evaluated at the last computed iterate $U_{f}$ obtained with MPE($10$).[]{data-label="tav2"}

The second question with regard to the case $\mu=6.3$ concerns the behaviour of the [Petviashvili ]{}method without acceleration. In this case, the method is convergent, but to a symmetric localized wave, see Figure \[Fig3210\].

   eigs$(S)$      eigs$(F^{\prime}(u^{*}))$
  -------------- ---------------------------
   3.000000E+00   6.098684E-01
   6.098684E-01   2.696853E-01
   2.696853E-01   1.518421E-01
   1.518421E-01   1.039553E-01
   1.039553E-01   6.046185E-02
   6.046185E-02   5.492737E-02

: Numerical generation of symmetric profile of (\[doub\_well12\]) with $\mu=6.3$. Six largest magnitude eigenvalues of the approximated iteration matrix of the classical fixed-point method $S=L^{-1}N^{\prime}(U_{f})$ (left) and of the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]), evaluated at the last computed iterate $U_{f}$ obtained with the [Petviashvili ]{}method (\[mm2\]), (\[mm3c\]).[]{data-label="tav3"}

This can be explained by the first two columns of Table \[tav2\] and by Table \[tav3\]. Note that, as mentioned before, for the asymmetric solution the [Petviashvili ]{}method cannot be convergent.
However, according to the information provided by Table \[tav3\], it is locally convergent to the symmetric solution. (In this case, the spectral radius of $F^{\prime}(u^{*})$ is below one.) This profile can indeed be approximated by using acceleration techniques (and with the corresponding computational saving), but starting from a different initial iteration. Finally, the case $\mu=8.3$ is also analyzed, see Figure \[Fig321\](d). The main reason to emphasize this case is that it confirms the conclusions obtained from the experiments with the previous values of $\mu$:

-   Among the VEM, the polynomial methods give a better performance, while the $\epsilon$-algorithms become less efficient as $\mu$ increases. As observed in Figure \[Fig321\], the larger $\mu$, the larger and narrower the asymmetric profile is. The computation becomes harder, as can be seen by comparing the iterations required by the methods in Figures \[Fig322\], \[Fig324\], \[Fig326\] and \[Fig328\]. One can also note the increase in the magnitude of the eigenvalues of the corresponding iteration matrices of the [Petviashvili ]{}method in Tables \[tav1\] and \[tav2\]. Therefore, under more demanding conditions, the polynomial methods give a better answer than the $\epsilon$-algorithms.

-   Contrary to the $\epsilon$-algorithms, whose performance gets worse as $\mu$ increases, the AAM improve their behaviour to the point of being comparable with the polynomial methods (cf. the periodic traveling wave generation in Section \[se3\]). Furthermore, this is obtained with small values of the parameter $nw$, thus avoiding ill-conditioned problems.

![Numerical generation of symmetric ground state of (\[doub\_well12\]) with $\mu=6.3$. Approximate profile with [Petviashvili ]{}method without acceleration.[]{data-label="Fig3210"}](ejemplo325.eps){width="8cm"}

Example 3.
Solitary wave solutions of the Benjamin equation
-----------------------------------------------------------

An additional application of acceleration techniques concerns the oscillatory character of the wave to be numerically generated. This property has been shown to affect the performance of the iteration, with eventual loss of convergence in some cases, [@DougalisDM2012; @DougalisDM2015]. Presented here is the use of acceleration as an alternative to overcome this difficulty; this will be illustrated with the numerical generation of solitary waves in one- and two-dimensional versions of the Benjamin equation.

### One-dimensional Benjamin equation

A first example of the situation described above is given by the solitary wave solutions of the Benjamin equation, [@ben0] $$\label{E11}
u_t+\alpha u_x+\beta u u_x-\gamma \mathcal{H}u_{xx}-\delta u_{xxx}=0,$$ where $u=u(x,t), x\in \mathbb{R}, t\geq 0$, $\alpha,\beta,\gamma,\delta$ are positive constants, and $\mathcal{H}$ denotes the Hilbert transform, defined on the real line as $$\begin{aligned}
\mathcal{H}f(x):=\frac{1}{\pi}p.v.\int_{-\infty}^{\infty}\frac{f(y)}{x-y}\,dy,\label{hilb}\end{aligned}$$ or through its Fourier transform as $$\widehat{\mathcal{H}f}(k)=-{\rm i}{\rm sign}(k)\widehat{f}(k), \quad k\in\mathbb{R}.$$ Equation (\[E11\]) is a model for the propagation of internal waves along the interface of a two-layer fluid system in which gravity and surface tension effects are not negligible. It includes, as limiting cases, the Benjamin-Ono equation ($\delta=0$, negligible surface tension) and the KdV equation ($\gamma=0$, the limit of a model with a very thin upper fluid).
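Since $\mathcal{H}$ acts as the Fourier multiplier $-{\rm i}\,{\rm sign}(k)$, on a periodic grid it can be applied exactly through the FFT. A minimal sketch (not the paper's code), checked on the classical identity $\mathcal{H}[\cos]=\sin$:

```python
import numpy as np

def hilbert_transform(f):
    """Apply the (periodic) Hilbert transform through its Fourier
    multiplier -i*sign(k); only the sign of the wavenumber matters."""
    k = np.fft.fftfreq(len(f))
    return np.real(np.fft.ifft(-1j*np.sign(k)*np.fft.fft(f)))

x = 2*np.pi*np.arange(256)/256
err = np.max(np.abs(hilbert_transform(np.cos(x)) - np.sin(x)))
```

Note that ${\rm sign}(0)=0$, so the mean of $f$ is annihilated, as it should be.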
Solitary-wave solutions of (\[E11\]) with speed $c_{s}>0$ are determined by profiles $u(x,t)=\varphi(x-c_{s}t)$ such that $\varphi$ and its derivatives tend to zero as $X=x-c_{s}t$ approaches $\pm\infty$; they satisfy $$\begin{aligned}
(\alpha-c_{s})\varphi +\frac{\beta}{2}\varphi^{2}-\gamma \mathcal{H}\varphi^{\prime}-\delta \varphi^{\prime\prime}=0,\label{E15}\end{aligned}$$ where ${}^{\prime}=d/dX$. Albert et al., [@AlbertBR1999], established a complete theory of existence and orbital stability of solitary waves of (\[E11\]) for small $\gamma$, while Benjamin, [@ben1], derived the oscillating behaviour of the waves, with the number of oscillations increasing as $\gamma$ approaches $\gamma^{\ast}=2\sqrt{\delta(\alpha-c_{s})}$, along with the asymptotic decay like $1/X^{2}$ as $|X|\rightarrow\infty$. Except in the limiting cases, solitary wave solutions are not analytically known. A standard way to generate solitary wave profiles numerically consists of considering (\[E15\]) in Fourier space, $$\begin{aligned}
\label{Be3}
(-c_{s}+\alpha-\gamma |k|+\delta k^{2})\widehat{\varphi} +\frac{\beta}{2}\widehat{\varphi^{2}}=0,\quad k\in\mathbb{R},\end{aligned}$$ (where $\widehat{\varphi}(k)$ is the Fourier transform of $\varphi$), discretizing (\[Be3\]) with periodic boundary conditions on a sufficiently long interval $(-l,l)$, and using the discrete Fourier transform (DFT): $$\begin{aligned}
\label{E41}
(-c_{s}+\alpha-\gamma |k|+\delta k^{2})\widehat{\varphi^{N}}_{k} +\frac{\beta}{2}\left(\widehat{\varphi^{N}\ast \varphi^{N}}\right)_{k}=0,\end{aligned}$$ for $k=-\frac{N}{2},\ldots,\frac{N}{2}-1$, where $\varphi^{N}$ is a trigonometric polynomial of degree $N$ which approximates $\varphi$ and $\widehat{\varphi^{N}}_{k}$ denotes its $k^{\rm th}$ Fourier coefficient.
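In the KdV limit $\gamma=0$ the profile is known in closed form, which makes it a convenient check for the basic (unaccelerated) fixed-point iteration behind (\[E41\]). The sketch below uses the Petviashvili stabilizing factor in its standard inner-product form (which may differ in details from (\[mm2\]), (\[mm3c\])) for the degree-2 nonlinearity; the grid parameters and initial guess are illustrative assumptions:

```python
import numpy as np

# KdV limit of (E15): (alpha - c_s)*phi + (beta/2)*phi^2 - delta*phi'' = 0
alpha, beta, delta, cs = 1.0, 1.0, 1.0, 0.75
Np, l = 1024, 100.0                          # illustrative grid, not the paper's
x = -l + 2*l*np.arange(Np)/Np
k = 2*np.pi*np.fft.fftfreq(Np, d=2*l/Np)
Lhat = alpha - cs + delta*k**2               # Fourier symbol of the linear part

def Nfun(u):                                 # homogeneous nonlinearity, degree p = 2
    return -(beta/2)*u**2

u = -0.5*np.exp(-0.05*x**2)                  # rough initial guess
gam = 2.0                                    # standard exponent p/(p-1) for p = 2
for n in range(300):
    Nu = Nfun(u)
    Lu = np.real(np.fft.ifft(Lhat*np.fft.fft(u)))
    m = np.sum(Lu*u)/np.sum(Nu*u)            # stabilizing factor (m -> 1)
    u_new = np.real(np.fft.ifft(m**gam*np.fft.fft(Nu)/Lhat))
    done = np.max(np.abs(u_new - u)) < 1e-13
    u = u_new
    if done:
        break

# exact profile: A*sech^2(k0*X), A = -3(alpha-cs)/beta, k0 = sqrt((alpha-cs)/(4*delta))
A, k0 = -3*(alpha - cs)/beta, np.sqrt((alpha - cs)/(4*delta))
err = np.max(np.abs(u - A/np.cosh(k0*x)**2))
```

At convergence the stabilizing factor tends to one, and the computed profile matches the exact depression wave up to the iteration tolerance.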
Then (\[E41\]) is numerically solved by incremental continuation with respect to $\gamma$ from $\gamma=0$ (which corresponds to the KdV equation, for which solitary wave profiles are analytically known) and a nonlinear iterative solver for each value along the homotopic path in $\gamma$. For a more detailed description of the incremental continuation method and the performance of several nonlinear iterative solvers see [@AlbertBR1999; @DougalisDM2012]. The experiments performed there reveal that the oscillatory behaviour of the wave increases the difficulty of its computation, even using numerical continuation. Our aim here is to give a computational alternative, based on the use of acceleration techniques. To this end, we fix a speed $c_{s}=0.75$ and the parameters $\alpha=\beta=\delta=1$, and generate numerically a solitary wave solution of (\[E15\]) by combining the [Petviashvili ]{}method, representing the family of iterative methods (\[mm2\]), with the acceleration techniques considered in previous examples. We will take four values of $\gamma$, namely $0.9, 0.99, 0.999, 0.9999$, which, for the considered values of the parameters, are close to $\gamma^{*}$ (equal to $1$ in our example) and correspond to a more and more oscillatory profile (with smaller and smaller amplitude, see Figures \[Ben1\](a)-(d); the computational window is $[-512,512]$ with $N=4096$ collocation points), and for which the [Petviashvili ]{}method with numerical continuation requires a long computation to converge or directly does not work. In all the experiments the initial iteration is the (analytically known) solitary wave profile corresponding to $\gamma=0$ (KdV equation). As in the previous examples, we first estimate the performance of the acceleration techniques by comparing the number of iterations required by each of them to achieve a residual error (\[fsec32\]) less than a tolerance $TOL=10^{-13}$.
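For reference, the basic MPE extrapolation step used throughout these comparisons can be sketched in a few lines. It is checked here on a toy linear fixed-point iteration (not on (\[E41\])), for which extrapolation over a full window reproduces the fixed point exactly:

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation from iterates xs = [x_0, ..., x_{k+1}]
    (a minimal sketch of the standard algorithm)."""
    X = np.column_stack(xs)
    U = np.diff(X, axis=1)                       # u_j = x_{j+1} - x_j
    kk = U.shape[1] - 1
    c, *_ = np.linalg.lstsq(U[:, :kk], -U[:, kk], rcond=None)
    c = np.append(c, 1.0)
    gamma = c/c.sum()                            # requires c.sum() != 0
    return X[:, :kk + 1] @ gamma                 # extrapolated approximation

# toy check: x_{n+1} = A x_n + b with spectral radius of A below one
A = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.3],
              [0.0, 0.2, 0.6]])
b = np.array([1.0, 2.0, 0.5])
xs = [np.zeros(3)]
for _ in range(4):
    xs.append(A @ xs[-1] + b)
err = np.linalg.norm(mpe(xs) - np.linalg.solve(np.eye(3) - A, b))
```

In the accelerated schemes this extrapolation is applied cyclically, restarting the fixed-point iteration from the extrapolated vector after each cycle.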
   $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
   $2$        $35$             $35$             $33$             $36$
              ($7.0418E-14$)   ($5.3048E-14$)   ($7.3270E-14$)   ($7.5208E-14$)
   $3$        $26$             $28$             $28$             $25$
              ($8.5843E-14$)   ($3.7474E-14$)   ($4.3193E-14$)   ($3.5514E-14$)
   $4$        $25$             $27$             $23$             $21$
              ($8.7162E-14$)   ($4.0422E-14$)   ($2.6305E-14$)   ($2.7078E-14$)
   $5$        $22$             $22$             $25$             $25$
              ($2.3778E-14$)   ($2.4540E-14$)   ($2.4551E-14$)   ($2.6612E-14$)
   $6$        $19$             $19$             $24$             $24$
              ($6.8569E-14$)   ($6.3344E-14$)   ($8.5839E-14$)   ($8.8986E-14$)
   $7$        $19$             $19$             $21$             $24$
              ($4.8718E-15$)   ($4.4709E-15$)   ($5.1694E-14$)   ($7.2978E-14$)
   $8$        $21$             $21$             $19$             $19$
              ($2.4289E-15$)   ($2.3872E-15$)   ($1.8080E-14$)   ($6.5383E-14$)
   $9$        $23$             $23$             $21$             $21$
              ($2.2138E-15$)   ($2.5529E-15$)   ($3.5794E-15$)   ($3.3103E-15$)
   $10$       $25$             $25$             $23$             $23$
              ($2.3084E-15$)   ($2.2764E-15$)   ($4.3858E-15$)   ($3.3545E-15$)

: Solitary wave generation of (\[E15\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses; $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.9$.[]{data-label="tav7b"}

For the case of the VEM and the four values of $\gamma$ considered, this information is given in Tables \[tav7b\]-\[tav10b\]. All the methods achieve convergence in the four cases. (The [Petviashvili ]{}method with continuation is not able to converge for the last two values of $\gamma$, and for the first two values the number of iterations required is prohibitive: for example, just going from $\gamma=0.98$ to $\gamma=0.99$ the method requires $266$ iterations to reach a residual error of size $9.0634E-14$; the continuation process from the initial $\gamma=0$, where our computations start, requires a total number of iterations of about $4470$.)
As expected, the effort of the VEM in terms of the number of iterations increases with $\gamma$, that is, with the oscillating character of the profile, see Figure \[Ben2\](a).

   $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
   $2$        $77$             $67$             $147$            $93$
              ($4.3301E-14$)   ($5.8218E-14$)   ($1.8199E-14$)   ($8.9274E-14$)
   $3$        $56$             $53$             $47$             $51$
              ($2.9502E-14$)   ($4.4310E-14$)   ($2.1373E-14$)   ($6.5537E-14$)
   $4$        $43$             $49$             $48$             $41$
              ($4.7982E-14$)   ($1.5692E-14$)   ($7.4855E-14$)   ($2.4388E-14$)
   $5$        $38$             $36$             $39$             $37$
              ($3.8410E-14$)   ($2.5110E-14$)   ($4.0062E-14$)   ($3.4320E-14$)
   $6$        $33$             $33$             $43$             $43$
              ($3.6079E-14$)   ($2.3031E-14$)   ($1.6073E-14$)   ($1.6545E-14$)
   $7$        $30$             $30$             $35$             $49$
              ($6.3053E-14$)   ($3.2262E-14$)   ($3.3824E-14$)   ($1.8129E-15$)
   $8$        $38$             $31$             $40$             $37$
              ($9.5443E-14$)   ($6.3679E-14$)   ($3.2121E-14$)   ($1.4194E-14$)
   $9$        $34$             $34$             $41$             $41$
              ($1.8749E-15$)   ($2.1169E-15$)   ($4.1825E-15$)   ($1.8746E-15$)
   $10$       $37$             $37$             $45$             $45$
              ($2.2503E-15$)   ($1.7597E-15$)   ($4.7245E-15$)   ($1.8872E-15$)

: Solitary wave generation of (\[E15\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses; $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.99$.[]{data-label="tav8b"}

Among them, and except in some particular cases (for example, when TEA is applied with $\gamma=0.9999$ and $\kappa=6$), the polynomial methods are more efficient than the $\epsilon$-algorithms as $\gamma$ increases, although the difference is smaller than that obtained in the examples of Section \[se31\].
It is remarkable that in the case of a solitary wave profile with a small number of oscillations (for example, when $\gamma=0.9$) the performance of the methods is virtually the same: after one or two cycles, the improvement provided by the acceleration technique is good enough that the next cycle need not be completed in order to achieve the tolerance for the residual error. This is particularly noticeable in the case of the $\epsilon$-algorithms, whose cycle is longer ($2\kappa$ against $\kappa+1$ for the polynomial methods).

   $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
   $3$        $63$             $60$             $56$             $65$
              ($2.2431E-14$)   ($3.9572E-14$)   ($9.8315E-14$)   ($6.6026E-14$)
   $4$        $79$             $49$             $77$             $70$
              ($4.9749E-14$)   ($4.1392E-14$)   ($6.6180E-14$)   ($9.3318E-14$)
   $5$        $43$             $43$             $54$             $101$
              ($5.8270E-14$)   ($1.0725E-14$)   ($5.9093E-14$)   ($4.4323E-14$)
   $6$        $45$             $45$             $71$             $71$
              ($9.4355E-14$)   ($3.5010E-14$)   ($6.8677E-14$)   ($1.9108E-14$)
   $7$        $37$             $37$             $65$             $65$
              ($8.3020E-14$)   ($6.3139E-14$)   ($1.1800E-14$)   ($2.2238E-15$)
   $8$        $51$             $41$             $73$             $73$
              ($4.8702E-15$)   ($1.3712E-14$)   ($8.5215E-15$)   ($3.7373E-15$)
   $9$        $45$             $45$             $81$             $81$
              ($2.8855E-15$)   ($2.7603E-15$)   ($3.4268E-15$)   ($2.6722E-15$)
   $10$       $51$             $51$             $111$            $69$
              ($3.3110E-14$)   ($3.2354E-14$)   ($2.0694E-15$)   ($4.0083E-14$)

: Solitary wave generation of (\[E15\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$.
The residual error at the last computed iterate is in parentheses; $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.999$.[]{data-label="tav9b"}

   $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
   $3$        $60$             $56$             $65$             $49$
              ($4.0201E-14$)   ($2.3990E-14$)   ($1.2911E-14$)   ($1.4934E-14$)
   $4$        $55$             $61$             $203$            $51$
              ($2.4890E-14$)   ($4.0300E-14$)   ($6.8061E-14$)   ($1.4970E-14$)
   $5$        $45$             $42$             $52$             $73$
              ($1.4175E-14$)   ($6.6916E-14$)   ($5.7187E-14$)   ($1.2493E-15$)
   $6$        $43$             $41$             $101$            $43$
              ($3.6348E-14$)   ($1.5143E-14$)   ($7.7548E-14$)   ($4.9096E-14$)
   $7$        $46$             $46$             $65$             $65$
              ($4.7457E-15$)   ($2.3306E-15$)   ($9.9131E-14$)   ($8.3133E-16$)
   $8$        $51$             $51$             $109$            $73$
              ($1.9157E-15$)   ($7.6397E-16$)   ($3.0772E-15$)   ($1.3583E-15$)
   $9$        $56$             $56$             $101$            $81$
              ($6.7580E-16$)   ($7.1896E-16$)   ($7.8767E-16$)   ($2.4959E-15$)
   $10$       $61$             $61$             $199$            $89$
              ($7.8194E-16$)   ($9.0591E-16$)   ($7.7246E-16$)   ($6.4547E-16$)

: Solitary wave generation of (\[E15\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses; $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.9999$.[]{data-label="tav10b"}

The results from Tables \[tav7b\]-\[tav10b\] are in contrast with those from Tables \[tav11b\]-\[tav14b\], which correspond to the AAM. The main conclusion here is that these methods are strongly affected by the oscillating character of the profiles and, compared to VEM, do not seem to be advisable for this sort of computation, at least without a suitable choice of preconditioning. (This was already suggested by the previous experiments concerning the generalized solitary waves of some Boussinesq systems, see Section \[se3\].)
Tables \[tav11b\]-\[tav14b\] show that, as $\gamma$ increases, ill-conditioning of the corresponding least-squares problem is observed even for moderate values of $nw$ in the case of AA-I (affecting the stability of the method, which either fails to converge or requires a great effort in terms of the number of iterations), while AA-II is not so affected.

   $nw$   AA-I($nw$)           AA-II($nw$)
  ------ -------------------- --------------------
   $1$    $20$($5.5496E-14$)   $25$($4.1169E-14$)
   $2$    $17$($1.9530E-15$)   $21$($1.2546E-14$)
   $3$    $15$($4.4608E-15$)   $20$($4.8161E-14$)
   $4$    $15$($1.3401E-14$)   $20$($8.3292E-14$)
   $5$    $14$($5.1185E-14$)   $21$($1.4961E-14$)
   $6$    $14$($1.5211E-14$)   $21$($1.4958E-14$)

: Solitary wave generation of (\[E15\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.9$.[]{data-label="tav11b"}

However, when the AAM work, they exhibit a competitive performance, as shown in Figure \[Ben2\](b) when compared with Figure \[Ben2\](a). (Our implementation follows that described in [@walkern], which uses the unconstrained form of the least-squares problem and was also suggested by other authors, [@fangs]. For the numerical resolution we have also tried other alternatives, based on QR decomposition with pivoting, [@walkern], and on the SVD, [@fangs]. The results of Tables \[tav11b\]-\[tav14b\] correspond to the first implementation; the second one overcomes ill-conditioning in some more cases of AA-I, but at the cost of a significant increase in the number of iterations.)
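A minimal sketch of Anderson acceleration in the unconstrained least-squares form mentioned above (window $nw$) may be useful for reference; the scalar fixed-point problem used as a check is a toy illustration, not the wave computation:

```python
import numpy as np

def anderson(g, x0, nw=3, tol=1e-12, maxit=200):
    """Anderson acceleration (type II) in unconstrained least-squares form
    with window nw; a minimal sketch, not the paper's implementation."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                                  # histories of iterates and residuals
    for it in range(maxit):
        gx = g(x)
        f = gx - x                                 # residual f_k = g(x_k) - x_k
        if np.linalg.norm(f) < tol:
            return x, it
        X.append(x.copy()); F.append(f)
        mk = min(nw, len(F) - 1)
        if mk == 0:
            x = gx                                 # plain fixed-point step
        else:
            dF = np.column_stack([F[-j] - F[-j-1] for j in range(1, mk + 1)])
            # X[-j] + F[-j] = g(x) at the corresponding iterate
            dG = np.column_stack([(X[-j] + F[-j]) - (X[-j-1] + F[-j-1])
                                  for j in range(1, mk + 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma                    # accelerated step
    return x, maxit

# toy check: the scalar fixed point x = cos(x)
sol, its = anderson(np.cos, np.array([1.0]), nw=2)
```

The least-squares step is the source of the ill-conditioning discussed above: as the window grows, the columns of the residual-difference matrix become nearly linearly dependent.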
   $nw$   AA-I($nw$)           AA-II($nw$)
  ------ -------------------- --------------------
   $1$    $58$($8.4209E-14$)   $38$($6.6459E-14$)
   $2$    $44$($3.1097E-14$)   $44$($5.6598E-14$)
   $3$    $38$($1.0963E-14$)   $36$($9.8103E-14$)
   $4$    $53$($2.9810E-14$)   $47$($3.1570E-14$)
   $5$                         $57$($4.4383E-14$)
   $6$                         $47$($5.3103E-14$)
   $7$                         $53$($7.2250E-14$)
   $8$                         $64$($3.6054E-14$)

: Solitary wave generation of (\[E15\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.99$.[]{data-label="tav12b"}

   $nw$   AA-I($nw$)            AA-II($nw$)
  ------ --------------------- --------------------
   $1$    $63$($8.2392E-14$)    $77$($9.2229E-14$)
   $2$    $57$($9.5654E-14$)    $83$($8.6540E-14$)
   $3$    $225$($4.7416E-14$)   $49$($7.4595E-14$)
   $4$    $33$($2.0366E-14$)    $91$($6.5926E-14$)
   $5$                          $64$($1.1616E-14$)
   $6$                          $70$($5.9306E-14$)

: Solitary wave generation of (\[E15\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses. $c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.999$.[]{data-label="tav13b"}

   $nw$   AA-I($nw$)            AA-II($nw$)
  ------ --------------------- ---------------------
   $1$    $800$($2.7029E-14$)   $84$($9.3995E-15$)
   $2$    $61$($3.1858E-14$)    $193$($3.5965E-14$)
   $3$    $99$($2.2611E-14$)    $98$($1.3640E-14$)
   $4$    $144$($8.0198E-14$)   $82$($4.5365E-14$)
   $5$                          $94$($1.8114E-14$)
   $6$                          $66$($8.1966E-14$)

: Solitary wave generation of (\[E15\]). Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error (\[fsec32\]) below $TOL=10^{-13}$. The residual error at the last computed iterate is in parentheses.
$c_{s}=0.75$, $\alpha=\beta=\delta=1$, $\gamma=0.9999$.[]{data-label="tav14b"}

### Lump solitary waves of the 2D Benjamin equation

To conclude this example, we study the performance of the acceleration techniques when generating numerically lump solitary wave solutions of the 2D Benjamin equation, [@kim; @kima1; @kima2], $$\begin{aligned}
\label{ben2d0}
\left(\eta_{t}+ \alpha\eta\eta_{x}-\beta \mathcal{H}(\eta_{xx})+\delta\eta_{xxx} \right)_{x}-\eta_{zz}=0,\end{aligned}$$ where $\alpha, \beta, \delta\geq 0$ and $\mathcal{H}$ is the Hilbert transform (\[hilb\]) with respect to $x$. In (\[ben2d0\]), as in the one-dimensional case, $\eta=\eta(x,z,t)$ stands for the deviation of the interface between two ideal fluids, with a bounded upper layer and a heavier lower layer of infinite depth, in the presence of interfacial tension. The two-dimensional version incorporates weak transverse variations. For the experiments below we will consider a normalized version of (\[ben2d0\]), [@kim], $$\begin{aligned}
\label{ben2d1}
\left(\eta_{t}+ (\eta^{2})_{x}-2\Gamma \mathcal{H}(\eta_{xx})+\eta_{xxx} \right)_{x}-\eta_{zz}=0,\end{aligned}$$ where $\Gamma\geq 0$. (The case $\Gamma=0$ corresponds to the KP-I equation, [@kadomtsevp].) For localized solutions, the zero total mass condition $$\begin{aligned}
\int_{-\infty}^{\infty} \eta(x,z,t)dx=0,\label{ben2d2}\end{aligned}$$ is also assumed. Lump solitary wave solutions of (\[ben2d1\]), (\[ben2d2\]) are solutions of the form $\eta(x,z,t)=\eta(X,Z), X=x-c_{s}t, Z=z$, for some $c_{s}>0$. Substitution into (\[ben2d1\]) leads to $$\begin{aligned}
\label{lumpsw}
\left(-c_{s}\eta+\eta^{2}-2\Gamma \mathcal{H}(\eta_{X})+\eta_{XX}\right)_{XX}-\eta_{ZZ}=0.\end{aligned}$$ As shown in [@kima2], the value $\Gamma=1$ marks a bifurcation point for the type of lump solutions of (\[ben2d1\]), between lumps of KP-I type and of wavepacket type. This implies in particular that, as $\Gamma<1$ approaches one, the oscillations of the lump wave increase.
The numerical procedure used in [@kim; @kima2] to generate approximate lump waves combines numerical continuation in $\Gamma$, pseudospectral approximation of (\[lumpsw\]) (where the constraint (\[ben2d2\]) is imposed) and Newton’s method for the resolution of the corresponding system of equations in each step of the $\Gamma$-homotopic path. The use of the [Petviashvili ]{}methods (instead of Newton’s) was suggested in [@alvarezd]. (For the use of the [Petviashvili ]{}method in the generation of two-dimensional solitary waves see e.g. [@AbramyanS1985; @VoronovichSS1998].) As in the one-dimensional case, the computation of approximate lump profiles comes up against two main difficulties: the use of numerical continuation and the oscillating behaviour of the lump. These problems can be overcome with the use of acceleration techniques, especially VEM. In order to illustrate this we will take $c_{s}=1$ and generate approximate lump solitary waves for $\Gamma=0.99, 0.999, 0.9999$. As described in [@alvarezd], the periodic problem for (\[ben2d1\]) on a square $[-L_{x},L_{x}]\times [-L_{z},L_{z}]$ is discretized by using a Fourier collocation method, generating approximations $(\eta_{h})_{i,j}$ to the lump profile $\eta(x_{i},z_{j})$ at the collocation points $x_{i}=-L_{x}+ih_{x}, z_{j}=-L_{z}+jh_{z}, h_{x}=2L_{x}/N_{x}, h_{z}=2L_{z}/N_{z}, i=1,\ldots, N_{x}, j=1,\ldots, N_{z}$. The system for the discrete Fourier coefficients of the approximation is of the form $$\begin{aligned}
\left(k_{x}^{2}(c_{s}+2\Gamma |k_{x}|+k_{x}^{2})+k_{z}^{2}\right)\widehat{\eta}_{h}(k_{x},k_{z})=k_{x}^{2}\left(\widehat{\eta_{h}^{2}}\right)(k_{x},k_{z}),\label{ben2d3}\end{aligned}$$ for $k_{x}=-N_{x}/2,\ldots,N_{x}/2, k_{z}=-N_{z}/2,\ldots,N_{z}/2$, and where $\widehat{\eta}_{h}(k_{x},k_{z})$ stands for the discrete $(k_{x},k_{z})$-Fourier component of $\eta_{h}$.
The zero total mass condition (\[ben2d2\]) is imposed as $$\begin{aligned}
\widehat{\eta}_{h}(0,0)=0.\label{ben2d4}\end{aligned}$$ When (\[ben2d4\]) is included into (\[ben2d3\]), the resulting system for the remaining Fourier components is nonsingular, and it is solved iteratively, for fixed $\Gamma$, by using:

(i) The [Petviashvili ]{}method with numerical continuation from the initial iteration given by the exact profile for $\Gamma=0$, $$\begin{aligned}
\eta_{0}(x,z)=12c_{s}\frac{3+c_{s}^{2}z^{2}-c_{s}x^{2}}{(3+c_{s}x^{2}+c_{s}^{2}z^{2})^{2}}.\label{IL}\end{aligned}$$

(ii) The [Petviashvili ]{}method without numerical continuation, but accelerated with the six techniques MPE, RRE, TEA, VEA, AA-I and AA-II, and with the same initial iteration (\[IL\]).

The experiments below follow a design similar to that of the one-dimensional case. We have taken $N_{x}=N_{z}=1024$ with $L_{x}=L_{z}=256$ and a tolerance of $TOL=10^{-8}$ for the control of the iteration. As before, the number of iterations shown in the numerical results corresponds to the total count, including the iterations of each cycle. From this value, one can obtain the number of iterations exclusively due to the corresponding acceleration. We think that this way of counting the iterations makes the comparison with the results without acceleration more realistic. We also remark that the use of the same initial iteration (\[IL\]) penalizes the accelerated alternative since, in view of the form of the resulting waves, the initial profile is not close to them. This should be observed in the behaviour of the residual error with respect to the number of iterations: the main effort is at the beginning; once the error is small enough, all the techniques accelerate the convergence in a more significant way. Here three values, $\Gamma=0.99, 0.999, 0.9999$, are considered.
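The assembly of system (\[ben2d3\]) with the constraint (\[ben2d4\]), together with one classical (unstabilized, unaccelerated) fixed-point step, can be sketched as follows. The grid sizes are reduced illustrative assumptions, and the symbol is written in the form obtained by taking the Fourier transform of (\[lumpsw\]):

```python
import numpy as np

Nx = Nz = 64                         # reduced illustrative sizes (the paper uses 1024)
Lx = Lz = 32.0
cs, Gamma = 1.0, 0.99
kx = 2*np.pi*np.fft.fftfreq(Nx, d=2*Lx/Nx)
kz = 2*np.pi*np.fft.fftfreq(Nz, d=2*Lz/Nz)
KX, KZ = np.meshgrid(kx, kz, indexing="ij")

# Fourier symbol of the linear part of the lump equation
sym = KX**2*(cs + 2*Gamma*np.abs(KX) + KX**2) + KZ**2
sym[0, 0] = 1.0                      # dummy value; the (0,0) mode is zeroed below

def fixed_point_map(eta):
    """One classical fixed-point step: invert the linear part against
    k_x^2 * FFT(eta^2), imposing the zero-mass condition (ben2d4)."""
    rhs = KX**2*np.fft.fft2(eta**2)
    rhs[0, 0] = 0.0                  # zero total mass
    return np.real(np.fft.ifft2(rhs/sym))

# the profile (IL), used as initial iteration
X, Z = np.meshgrid(-Lx + 2*Lx*np.arange(Nx)/Nx,
                   -Lz + 2*Lz*np.arange(Nz)/Nz, indexing="ij")
eta0 = 12*cs*(3 + cs**2*Z**2 - cs*X**2)/(3 + cs*X**2 + cs**2*Z**2)**2
eta1 = fixed_point_map(eta0)
```

Imposing (\[ben2d4\]) on the right-hand side guarantees that every iterate has zero discrete mean, so the (0,0) entry of the symbol never enters the computation.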
The corresponding approximate waves (confirming the highly oscillatory behaviour) are computed with the acceleration procedure of (ii) and can be observed in Figure \[fexample41\]. The first procedure, (i), based on continuation with respect to $\Gamma$, is totally inefficient for the first value and does not work for the other two. The performance of the acceleration techniques is compared in Figure \[Ben2d\] and Tables \[tav15b\]-\[tav18b\].

   $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
   $2$        $38$             $41$             $63$             $49$
              ($7.5320E-09$)   ($7.7910E-09$)   ($5.8233E-09$)   ($8.1460E-09$)
   $3$        $32$             $75$             $49$             $56$
              ($6.6515E-09$)   ($3.3504E-09$)   ($9.0167E-09$)   ($9.4419E-09$)
   $4$        $32$             $37$             $33$             $38$
              ($5.0660E-09$)   ($3.8941E-09$)   ($9.4694E-09$)   ($8.9305E-09$)
   $5$        $26$             $30$             $35$             $34$
              ($4.9957E-09$)   ($8.4036E-09$)   ($1.3179E-09$)   ($1.1643E-09$)
   $6$        $29$             $30$             $30$             $27$
              ($4.5762E-09$)   ($1.4663E-09$)   ($9.7151E-09$)   ($2.8493E-09$)
   $7$        $25$             $33$             $31$             $31$
              ($2.8813E-09$)   ($5.5247E-09$)   ($1.8370E-09$)   ($6.0375E-09$)
   $8$        $21$             $29$             $35$             $35$
              ($6.8132E-09$)   ($3.7396E-09$)   ($6.7544E-09$)   ($2.3207E-09$)
   $9$        $22$             $25$             $35$             $40$
              ($7.0811E-09$)   ($6.1487E-09$)   ($7.6658E-09$)   ($5.3006E-09$)
   $10$       $23$             $25$             $39$             $43$
              ($9.6176E-10$)   ($7.8664E-09$)   ($9.0031E-09$)   ($2.7713E-09$)

: Solitary wave generation of (\[ben2d1\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error below $TOL=10^{-8}$.
The residual error (\[fsec32\]) at the last computed iterate is in parenthesis; $c_{s}=1$, $\Gamma=0.99$.[]{data-label="tav15b"} $\kappa$ MPE($\kappa$) RRE($\kappa$) VEA($\kappa$) TEA($\kappa$) ---------- ---------------- ---------------- ---------------- ---------------- $3$ $47$ $54$ $59$ $66$ ($8.2832E-09$) ($5.6681E-09$) ($9.5469E-09$) ($7.6937E-09$) $4$ $46$ $53$ $87$ $65$ ($9.8170E-09$) ($8.4899E-09$) ($7.9785E-09$) ($1.6182E-09$) $5$ $43$ $44$ $56$ $56$ ($2.7604E-09$) ($6.3987E-09$) ($7.2821E-09$) ($6.2222E-09$) $6$ $36$ $38$ $105$ $67$ ($5.3518E-09$) ($5.7851E-09$) ($1.2954E-10$) ($3.0156E-10$) $7$ $41$ $43$ $76$ $77$ ($3.7360E-09$) ($3.1947E-09$) ($1.1064E-10$) ($4.9070E-10$) $8$ $38$ $56$ $86$ $87$ ($1.9864E-09$) ($1.9460E-09$) ($4.8067E-11$) ($4.6454E-10$) $9$ $51$ $51$ $96$ $59$ ($4.6989E-12$) ($2.2425E-11$) ($2.8033E-11$) ($6.7102E-09$) $10$ $56$ $56$ $85$ $65$ ($5.5395E-10$) ($1.0988E-11$) ($2.7617E-09$) ($1.3631E-09$) : Solitary wave generation of (\[ben2d1\]) . Number of iterations required by MPE, RRE, VEA and TEA as function of $\kappa$ to achieve a residual error below $TOL=10^{-8}$. The residual error (\[fsec32\]) at the last computed iterate is in parenthesis; $c_{s}=1$, $\Gamma=0.999$.[]{data-label="tav16b"} $\kappa$ MPE($\kappa$) RRE($\kappa$) VEA($\kappa$) TEA($\kappa$) ---------- ---------------- ---------------- ---------------- ---------------- $3$ $53$ $54$ $57$ $72$ ($5.7252E-09$) ($6.4717E-09$) ($4.9757E-09$) ($8.5993E-09$) $4$ $51$ $47$ $82$ $65$ ($6.5144E-09$) ($6.5364E-09$) ($1.2292E-09$) ($7.8352E-10$) $5$ $43$ $49$ $67$ $68$ ($1.4420E-09$) ($5.7277E-09$) ($9.3104E-09$) ($3.3648E-09$) $6$ $50$ $50$ $105$ $105$ ($2.8215E-12$) ($1.3107E-10$) ($4.7779E-12$) ($3.2453E-09$) $7$ $49$ $50$ $91$ $122$ ($8.1696E-09$) ($5.7674E-09$) ($1.4031E-11$) ($6.5282E-10$) $8$ $55$ $55$ $103$ $155$ ($2.5994E-10$) ($9.0192E-09$) ($1.4896E-10$) ($2.9075E-10$) : Solitary wave generation of (\[ben2d1\]) . 
Number of iterations required by MPE, RRE, VEA and TEA as function of $\kappa$ to achieve a residual error below $TOL=10^{-8}$. The residual error (\[fsec32\]) at the last computed iterate is in parenthesis; $c_{s}=1$, $\Gamma=0.9999$.[]{data-label="tav17b"} $nw$ AA-I($nw$) AA-II($nw$) ------ -------------------- -------------------- $2$ $23$($2.5840E-09$) $25$($5.5612E-09$) $3$ $20$($2.5548E-09$) $4$ $36$($1.5755E-09$) $18$($8.3616E-09$) $5$ $18$($6.8363E-09$) $17$($6.2358E-09$) $6$ $19$($4.5057E-09$) $17$($6.4962E-09$) $7$ $19$($3.3613E-09$) $17$($3.1330E-09$) : Solitary wave generation of (\[ben2d1\]) . Number of iterations required by AA-I and AA-II as function of $nw$ to achieve a residual error below $TOL=10^{-8}$. The residual error (\[fsec32\]) at the last computed iterate is in parenthesis. $c_{s}=1$, $\Gamma=0.99$.[]{data-label="tav18b"} The comparison of the techniques in this case confirms the conclusions obtained in the one-dimensional version, namely: - The best performance is given by the polynomial methods (MPE in this case). - The $\epsilon$-algorithms, although less efficient, are also competitive (contrary to what was observed in some previous examples). - AAM only work correctly up to a moderate value of $\Gamma<1$. When $\Gamma$ approaches one they cannot get the performance of VEM or directly fail. Acceleration techniques with extended [Petviashvili ]{}type methods {#se4} =================================================================== One of the drawbacks of the [Petviashvili ]{}type methods (\[mm2\]) in traveling wave generation is their limitation to some specific problems, namely those with homogeneous nonlinearities. When the nonlinear term is not homogeneous but a combination of homogeneous functions of different degree, these methods can be extended by adapting the stabilizing function $s$ to each homogeneous part. This leads to the so-called e-[Petviashvili ]{}type methods, derived in [@alvarezd2014b]. 
In this section, in order to improve the traveling wave generation for problems with this type of nonlinearity, we apply the acceleration techniques to the e-[Petviashvili ]{}method as the initial iterative procedure. This will be illustrated with the numerical generation of localized ground state solutions of the following generalized nonlinear Schrödinger equation $$\begin{aligned}
iu_{t}+u_{xx}-V(x)u+|u|^{2}u-0.2|u|^{4}u+\nu |u|^{6}u=0,\label{gnls2}\end{aligned}$$ with $V(x)=-3.5{\rm sech}^{2}(x+1.5)-3{\rm sech}^{2}(x-1.5)$ and $\nu$ a real constant. Equation (\[gnls2\]) was studied in [@Yang2012] (see also references therein), where a bifurcation of solitary waves at $\nu= \nu_{c}\approx 0.01247946$ was analyzed. The bifurcation is of transcritical type, with two tangentially connected branches of smooth solutions. It can be characterized by using the behaviour of the power $$\begin{aligned}
P(\mu)=\int_{-\infty}^{\infty} U^{2}(x,\mu)dx,\label{power}\end{aligned}$$ as a function of $\mu$ for any localized ground state solution $u(x,t)=U(x,\mu)e^{i\mu t}, \mu\in\mathbb{R}$. The two branches are connected at some $(\mu_{0},P(\mu_{0}))\approx (3.28,14.35)$.
The numerical generation of localized ground state profiles of (\[gnls2\]) with e-[Petviashvili ]{}type methods was treated in [@alvarezd2014b], where the equation for the profiles $U(x,\mu)$, $$\begin{aligned}
-\mu U+U^{\prime\prime}-V(x)U+|U|^{2}U-0.2|U|^{4}U+\nu |U|^{6}U=0,\label{gnls2b}\end{aligned}$$ was discretized by Fourier collocation techniques, leading to the system $LU_{h}=N(U_{h})$ for the vector approximation $U_{h}$ at the grid points $x_{j}$, where $$\begin{aligned}
L&=&\mu I-D_{h}^{2}+diag(V(x_{0}),\ldots,V(x_{m-1})),\nonumber\\
N(U_{h})&=&N_{1}(U_{h})+N_{2}(U_{h})+N_{3}(U_{h})\nonumber\\
&=&\left(|U_{h}|.^{2}\right).U_{h}-0.2\left(|U_{h}|.^{4}\right).U_{h}+\nu \left(|U_{h}|.^{6}\right).U_{h}.\label{lab41}\end{aligned}$$ The nonlinearity in (\[lab41\]) contains three homogeneous terms with degrees $p_{1}=3, p_{2}=5, p_{3}=7$, and the e-[Petviashvili ]{}method $$\begin{aligned}
LU_{h}^{n+1}&=&\sum_{j=1}^{3}s_{j}(U_{h}^{n})N_{j}(U_{h}^{n}),\quad n=0,1,\ldots,\label{lab22e}\\
s_{j}(u)&=&\left(\frac{\langle Lu,u\rangle}{\langle N(u),u\rangle}\right)^{\gamma_{j}},\quad \gamma_{j}=\frac{p_{j}}{p_{j}-1},\quad j=1,2,3,\label{lab25e}\end{aligned}$$ is applied. The iteration (\[lab22e\]), (\[lab25e\]) will be the method complemented with the acceleration techniques.
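To make the structure of the iteration (\[lab22e\]), (\[lab25e\]) concrete, the following sketch applies it with Fourier collocation to a simplified stationary equation without potential (the function names, grid parameters and test problem are ours, not the setup of the paper). With a single homogeneous term the scheme reduces to the classical Petviashvili method, which can be checked against the exact soliton $\sqrt{2}\,\mathrm{sech}(x)$ of $U''-U+U^{3}=0$.

```python
import numpy as np

def e_petviashvili(mu, degrees, nonlins, N=256, L=40.0, tol=1e-10, maxit=500):
    """e-Petviashvili iteration for -mu*U + U'' + sum_j N_j(U) = 0 (periodic).

    degrees[j] is the homogeneity degree p_j of the nonlinearity nonlins[j].
    Each term gets its own stabilizing factor with exponent p_j/(p_j - 1).
    """
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    Lsym = mu + k**2                          # Fourier symbol of L = mu - d^2/dx^2
    U = np.exp(-x**2)                         # generic localized initial guess
    for _ in range(maxit):
        Ns = [f(U) for f in nonlins]          # homogeneous pieces N_j(U)
        Uh = np.fft.fft(U)
        Nh = np.fft.fft(sum(Ns))
        num = np.sum(Lsym * np.abs(Uh)**2).real   # <LU, U> computed spectrally
        den = np.sum(np.conj(Uh) * Nh).real       # <N(U), U>
        rhs = sum((num / den) ** (p / (p - 1)) * Nj
                  for p, Nj in zip(degrees, Ns))
        U = np.real(np.fft.ifft(np.fft.fft(rhs) / Lsym))   # solve L U = rhs
        # residual of the stationary equation: U'' - mu*U + N(U)
        R = np.real(np.fft.ifft(-Lsym * np.fft.fft(U))) + sum(f(U) for f in nonlins)
        if np.max(np.abs(R)) < tol:
            break
    return x, U

# Single-term sanity check: U'' - U + U^3 = 0 has the soliton sqrt(2)*sech(x)
x, U = e_petviashvili(mu=1.0, degrees=[3], nonlins=[lambda u: u**3])
```

The Parseval normalization factors cancel in the ratio $\langle Lu,u\rangle/\langle N(u),u\rangle$, so the inner products can be accumulated directly on the FFT coefficients.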
Finally, the quantity (\[power\]) has been approximated by $$\begin{aligned}
P_{h}(U_{h})=h\sum_{j} U_{h,j}^{2}.\label{power2}\end{aligned}$$

  Classical fixed point   e-[Petviashvili ]{}[ method]{} (\[lab22e\]), (\[lab25e\])
  ----------------------- -----------------------------------------------------------
  1.687048E+00            9.829607E-01
  9.834930E-01            4.740069E-01
  4.793766E-01            3.616157E-01
  3.747266E-01            2.606251E-01+1.734293E-01i
  1.979766E-01            2.606251E-01-1.734293E-01i
  1.426054E-01            1.764488E-01

  : Six largest magnitude eigenvalues of the iteration matrices of the classical fixed point algorithm and of the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]) for $\mu=3.281$ at the last computed iterate. The dominant eigenvalue in the column on the right justifies the slow performance of the method.[]{data-label="tav_epet1"}

  $\kappa$   MPE($\kappa$)    RRE($\kappa$)    VEA($\kappa$)    TEA($\kappa$)
  ---------- ---------------- ---------------- ---------------- ----------------
  $2$        $185$            $260$            $1250$           $187$
             ($5.3669E-11$)   ($9.9728E-11$)   ($9.5278E-11$)   ($5.5118E-11$)
  $3$        $135$            $135$            $209$            $155$
             ($5.1486E-11$)   ($7.4460E-11$)   ($8.6758E-11$)   ($6.5522E-11$)
  $4$        $118$            $64$             $167$            $109$
             ($6.3944E-11$)   ($7.6714E-11$)   ($8.7013E-11$)   ($5.6136E-11$)
  $5$        $64$             $69$             $75$             $78$
             ($7.0759E-11$)   ($8.7100E-11$)   ($7.1288E-11$)   ($8.7351E-11$)
  $6$        $55$             $65$             $85$             $795$
             ($8.4602E-11$)   ($4.5311E-11$)   ($7.6708E-11$)   ($6.9120E-11$)
  $7$        $53$             $58$             $80$             $91$
             ($6.6155E-11$)   ($1.1617E-11$)   ($9.8698E-11$)   ($3.4911E-11$)
  $8$        $49$             $58$             $89$             $70$
             ($5.6418E-11$)   ($9.0426E-11$)   ($9.0902E-11$)   ($6.9110E-11$)
  $9$        $52$             $62$             $82$             $78$
             ($1.3763E-11$)   ($4.1423E-11$)   ($7.4598E-11$)   ($7.1601E-11$)
  $10$       $47$             $67$             $88$             $106$
             ($6.1296E-11$)   ($4.6194E-11$)   ($6.9450E-11$)   ($4.9290E-11$)

  : Ground state generation of (\[gnls2\]). Number of iterations required by MPE, RRE, VEA and TEA as a function of $\kappa$ to achieve a residual error below $TOL=10^{-10}$. The residual error (\[fsec32\]) at the last computed iterate is in parentheses; $\mu=3.281$. For the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]) without acceleration, $n=1023$ iterations are required for a residual error of $9.9939E-11$.[]{data-label="tav_epet2"}

  $\kappa$   MPE($\kappa$)     RRE($\kappa$)     VEA($\kappa$)     TEA($\kappa$)
  ---------- ----------------- ----------------- ----------------- -----------------
  $2$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.458882E+00$
  $3$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.458882E+00$
  $4$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.458882E+00$
  $5$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.458882E+00$
  $6$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.458882E+00$
  $7$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.446162E+00$
  $8$        $14.458882E+00$   $14.446162E+00$   $14.446162E+00$   $14.446162E+00$
  $9$        $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.446162E+00$
  $10$       $14.446162E+00$   $14.446162E+00$   $14.446162E+00$   $14.446162E+00$

  : Ground state generation of (\[gnls2\]). Values of (\[power2\]) for each iteration from Table \[tav\_epet2\]; $\mu=3.281$.[]{data-label="tav_epet2b"}

The numerical illustration of this case takes $\mu=3.281$, $TOL=10^{-10}$ and a superposition of squared hyperbolic secant functions as the initial iteration. The numerical profile $U_{h}$ generated by (\[lab22e\]), (\[lab25e\]) is shown in Figure \[Figpet1\](a). The corresponding value of (\[power2\]) is $P_{h}(U_{h})=14.446162$, and the poor performance of the method is made clear in Figure \[Figpet1\](b), which displays the behaviour of the residual error (\[fsec32\]) as a function of the number of iterations and shows that the method requires $n=1023$ iterations to achieve a residual error below $TOL$. (See Table \[tav\_epet1\], first eigenvalue of the second column, to explain this slow behaviour.) The application of acceleration with the VEM to this example is displayed in Table \[tav\_epet2\] and Figure \[Figpet2\].
The following points are emphasized:

-   The acceleration leads to a great improvement with respect to the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]): in order to reach a residual error below $TOL$, the reduction in the number of iterations is above $90\%$.

-   As in the previous examples, the polynomial methods work better than the $\epsilon$-algorithms. Comparing the two polynomial methods, MPE is more efficient; its best performance requires a large value of $\kappa$ (which means a long cycle, above eight). On the other hand, the best results for the $\epsilon$-algorithms are obtained with moderate values of $\kappa$, around five. (This also happens in general in the previous examples.)

-   The value of $\mu$ considered for the experiments is close to the one corresponding to the bifurcation point, that is, close to the tangential point of the two branches of solitary wave solutions. The computation of the quantity (\[power2\]) for each acceleration, shown in Table \[tav\_epet2b\], attempts to study the behaviour of the iterations close to the bifurcation. In most of the cases, the computed value coincides with that of the profile generated by (\[lab22e\]), (\[lab25e\]) without acceleration. In the case of MPE(8) and TEA(2)-TEA(6), the value changes to $P_{h}(U_{h})=14.458882$. This suggests that in these cases the accelerated iteration converges to the profile of the upper branch, while in most of the cases (including the one without acceleration) the limit profile belongs to the lower branch [@Yang2012]. (Indeed, close to the bifurcation the form of the profiles is very similar; see Figures \[Figpet3\](a) and (b).) Note however from Table \[tav\_epet1c\] that the dominant eigenvalue of the iteration matrix of (\[lab22e\]), (\[lab25e\]) is above one. This and Table \[tav\_epet1\] may explain the convergence of this method to the profile with $P_{h}(U_{h})=14.446162$.
-   The comparison with the best choices of the VEM is illustrated in Figure \[Figpet2\], which compares the behaviour of the residual error (\[fsec32\]) as a function of the number of iterations and of the CPU time in seconds. The results reveal again the better performance of the polynomial methods once the residual error drops below $10^{-5}$.

  $nw$   AA-I($nw$)            $P$         AA-II($nw$)           $P$
  ------ --------------------- ----------- --------------------- -----------
  $1$    $78$($5.2802E-11$)    $14.4589$   $59$($5.4557E-11$)    $14.4589$
  $2$    $50$($7.8887E-11$)    $14.4462$   $51$($5.5612E-09$)    $14.4462$
  $3$    $74$($4.4814E-11$)    $14.4589$   $28$($1.7879E-11$)    $3.9918$
  $4$    $93$($9.0311E-11$)    $14.4462$   $32$($2.6548E-11$)    $3.9918$
  $5$    $49$($2.0421E-11$)    $3.9918$    $64$($1.2763E-11$)    $3.9918$
  $6$    $64$($3.3540E-11$)    $3.9918$    $80$($3.0285E-11$)    $3.9918$
  $7$    $61$($4.5136E-11$)    $14.4462$   $29$($9.1724E-11$)    $3.9918$
  $8$    $57$($4.7025E-11$)    $14.4662$   $54$($3.7450E-12$)    $9.7217$
  $9$                                      $55$($5.6993E-12$)    $3.9918$
  $10$                                     $108$($3.4559E-11$)   $3.9918$

  : Ground state generation of (\[gnls2\]) with $\mu=3.281$. Number of iterations required by AA-I and AA-II as a function of $nw$ to achieve a residual error below $TOL=10^{-10}$. The residual error (\[fsec32\]) at the last computed iterate is in parentheses.[]{data-label="tav_epet3"}

When the iteration (\[lab22e\]), (\[lab25e\]) is accelerated with the AAM, we obtain the results displayed in Table \[tav\_epet3\]:

-   The behaviour of the methods in this case looks similar to that of some previous examples as far as the general performance is concerned: they are competitive for moderate values of $nw$, with a better performance of AA-II, which is less affected by ill-conditioning.

-   In some cases the AAM converge to approximate profiles (see Figures \[Figpet3\](c) and (d)) which correspond to values of (\[power2\]) off the branches. This uncertain behaviour is the main drawback of the methods.
The spectral information for the two additional approximate profiles is given in Tables \[tav\_epet1b\] and \[tav\_epet1d\]. The results suggest the lack of preservation of (\[power2\]) through the iterative process.

  Classical fixed point   e-[Petviashvili ]{}[ method]{} (\[lab22e\]), (\[lab25e\])
  ----------------------- -----------------------------------------------------------
  1.994420E+00            4.824994E-01
  4.828929E-01            2.921950E-01-4.208258E-02i
  2.155227E-01            2.921950E-01+4.208258E-02i
  1.266822E-01            1.270267E-01
  8.385958E-02            8.457380E-02
  6.009441E-02            6.014597E-02

  : Six largest magnitude eigenvalues of the iteration matrices of the classical fixed point algorithm and of the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]) for $\mu=3.281$ at the last computed iterate for $P=3.9918$.[]{data-label="tav_epet1b"}

  Classical fixed point   e-[Petviashvili ]{}[ method]{} (\[lab22e\]), (\[lab25e\])
  ----------------------- -----------------------------------------------------------
  1.643665E+00            1.016836E+00
  1.015912E+00            4.764502E-01
  4.862159E-01            3.707178E-01
  3.756950E-01            2.883210E-01-1.334802E-01i
  2.022740E-01            2.883210E-01+1.334802E-01i
  1.417784E-01            1.748132E-01

  : Six largest magnitude eigenvalues of the iteration matrices of the classical fixed point algorithm and of the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]) for $\mu=3.281$ at the last computed iterate for $P=14.4559$.[]{data-label="tav_epet1c"}

  Classical fixed point   e-[Petviashvili ]{}[ method]{} (\[lab22e\]), (\[lab25e\])
  ----------------------- -----------------------------------------------------------
  1.919934E+00            1.418879E+00
  1.201956E+00            6.091959E-01-4.750746E-02i
  6.134143E-01            6.091959E-01+4.750746E-02i
  3.716346E-01            4.517846E-01
  2.394216E-01            2.884719E-01
  1.733253E-01            1.799915E-01

  : Six largest magnitude eigenvalues of the iteration matrices of the classical fixed point algorithm and of the e-[Petviashvili ]{}method (\[lab22e\]), (\[lab25e\]) for $\mu=3.281$ at the last computed iterate for $P=9.7217$.[]{data-label="tav_epet1d"}
Concluding remarks and future work {#se5}
==================================

In this paper we have studied numerically the use of acceleration techniques applied to fixed point algorithms of [Petviashvili ]{}type to generate traveling waves in nonlinear dispersive wave equations. The comparison has been established between vector extrapolation methods and Anderson acceleration methods for different types of traveling waves. From the plethora of numerical experiments, our main conclusions are:

-   The use of acceleration techniques improves the performance of the [Petviashvili ]{}type methods in all the cases. This improvement is observed in two main points. First, when the [Petviashvili ]{}type method is convergent, the acceleration reduces the number of iterations in a relevant way (in some cases substantially: in some one-dimensional problems, the reduction is at least $50\%$ and reaches up to $75\%$). Second, the mechanism of acceleration, especially in the case of the VEM, makes it possible to transform initially divergent sequences into convergent processes. This is particularly relevant for traveling waves with high oscillations. Furthermore, acceleration has been shown to be more efficient than other alternatives, such as numerical continuation, in some cases.

-   In general, the VEM provide better results and, among them, polynomial methods such as MPE and RRE are more efficient than $\epsilon$-algorithms like VEA and TEA, although in some convergent cases the acceleration in terms of the number of iterations is very similar among all the methods, while in computational time the $\epsilon$-algorithms work better.

-   The AAM are competitive in some cases, but they are mostly affected by ill-conditioning and a greater computational effort due to their more involved implementation.
The best results of these methods are obtained when numerically generating periodic traveling waves in some nonlinear dispersive systems and ground state profiles in NLS type equations, while their performance is poor when computing highly oscillatory traveling waves. Their application to the e-[Petviashvili ]{}type methods suggests an uncertain behaviour with respect to relevant quantities of the problem through the iteration. The main question that this comparative study has not been able to answer is, in the authors' opinion, a deeper understanding of the way acceleration techniques (especially the VEM) work on these problems. In particular, we lack conclusions about the width of extrapolation (which is related to the extrapolation step for convergence) to be used a priori, if such an a priori choice is possible at all; we have observed that it appears to depend strongly on the problem under study. This might, though, be a good starting point for future research.

Acknowledgements {#acknowledgements .unnumbered}
================

This research has been supported by project MTM2014-54710-P.

[ablowitzm]{} L. A. Abramyan, Y. A. Stepanyants, The structure of two-dimensional solitons in media with anomalous small dispersion, Sov. Phys. JETP, 61(5)(1985) 963-966. J.P. Albert, J.L. Bona, J.M. Restrepo, Solitary-wave solutions of the Benjamin equation, SIAM J. Appl. Math., 59 (1999) 2139–2161. J. Alvarez, A. Duran, Petviashvili type methods for traveling wave computations: I. Analysis of convergence, J. Comp. Appl. Math., 266 (2014) 39-51. J. Alvarez, A. Duran, An extended Petviashvili method for the numerical generation of traveling and localized waves, Commun. Nonlinear Sci. Numer. Simulat., 19(2014) 2272-2283. J. Alvarez, A. Duran, Corrigendum to ‘‘Petviashvili type methods for traveling wave computations: I. Analysis of convergence’’ \[J. Comput. Appl. Math. 266 (2014) 39–51\], J. Comp. Appl. Math., 277(2015) 215-216. D. G.
Anderson, Iterative procedures for nonlinear integral equations, J. Assoc. Comput. Mach., 12 (1965) 547-560. T. B. Benjamin, Internal waves of permanent form in fluids of great depth, J. Fluid Mech. 29 (1967) 559-592. T. B. Benjamin. A new kind of solitary wave, J. Fluid Mech., 245 (1992) 401-411. T. B. Benjamin, Solitary and periodic waves of a new kind, Phil. Trans. R. Soc. Lond. A, 354 (1996) 1775-1806. J. L. Bona and M. Chen, A Boussinesq system for two-way propagation of nonlinear dispersive waves, Physica D, 116 (1998), 191–224. J. L. Bona, M. Chen, J.-C. Saut, Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media: I. Derivation and linear theory, [J. Nonlin. Sci.]{} [12]{} (2002), 283-318. J. L. Bona, M. Chen, J.-C. Saut, Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media: II. The nonlinear theory, [Nonlinearity]{} [17]{} (2004), 925-952. J. L. Bona, V. A. Dougalis, D. E. Mitsotakis, Numerical solution of KdV-KdV systems of Boussinesq equations: I. The numerical scheme and generalized solitary waves, Math. Comp. Simul., 74(2007) 214-228. J. L. Bona, V. A. Dougalis, D. E. Mitsotakis, Numerical solution of KdV-KdV systems of Boussinesq equations: II. Generation and evolution of radiating solitary waves, Nonlinearity 21 (2008) 2825-2848. J. L. Bona and R. Smith, A model for the two-way propagation of water waves in a channel, Math. Proc. Camb. Phil. Soc. 79(1976), 167–182. J. V. Boussinesq, Théorie des ondes et des remous qui se propagent le long d’un canal rectangulaire horizontal, en communiquant au liquide contenu dans ce canal des vitesses sensiblement pareilles de la surface au fond, [J. Math. Pures Appl.]{} [17]{} (1872) 55-108. , [Chebyshev and Fourier Spectral Methods]{}, 2nd ed. Dover Publications, New York, 2000. C. Brezinski, Généralization de la transformation de Shanks, de la table de Padé et de l’epsilon-algorithm, Calcolo, 12 (1975) 317-360. C. 
Brezinski, A general extrapolation algorithm, Numer. Math., 35(1980) 175-187. C. Brezinski, Convergence acceleration during the 20th century, J. Comput. Appl. Math., 122 (2000) 1-21. C. Brezinski, A. C. Rieu, The solution of systems of equations using the vector $\epsilon$-algorithm and an application to boundary value problems, Math. Comp., 28(1974) 731-741. C. Brezinski, M. Redivo Zaglia, Extrapolation Methods, Theory and Practice, North-Holland, Amsterdam, 1991. C. Brezinski, M. Redivo Zaglia, A Schur complement approach to a general extrapolation algorithm, Linear Algebra and Appl., 368 (2003) 279-301. , [Spectral Methods in Fluid Dynamics]{}. Springer-Verlag, New York-Heidelberg-Berlin, 1988. S. Cabay, L. W. Jackson, A polynomial extrapolation method for finding limits and antilimits for vector sequences, SIAM J. Numer. Anal., 13(1976) 734-752. H. Chen, M. Chen, N. V. Nguyen, Cnoidal wave solutions to Boussinesq systems, [ Nonlinearity]{}, [ 20]{} (2007), 1443-1461. M. Chen, N. V. Nguyen, S.-M. Sun, Solitary-wave solutions to Boussinesq systems with large surface tension, Disc. Cont. Dyn. Systems, Ser. S, 2(1)(2009) 37-53. M. Chen, N. V. Nguyen, S.-M. Sun, Existence of traveling-wave solutions to Boussinesq systems, Diff. Int. Eq., 24(2011) 895-908. P. Daripa, R. K. Dash, A class of model equations for bi-directional propagation of capillary-gravity waves, Intern. J. Engrg. Sci., 41(2003) 201-218. J. Demmel, Applied Numerical Linear Algebra, SIAM Philadelphia, 1997. V. A. Dougalis, A. Duran, D. E. Mitsotakis, Numerical approximation of solitary waves of the Benjamin equation, accepted in Math. Comp. Simul., 2012. V. A. Dougalis, A. Duran, D. E. Mitsotakis, Numerical solution of the Benjamin equation, Wave Motion 52(2015) 194-215. V. A. Dougalis, D. E. Mitsotakis, Theory and Numerical Analysis of Boussinesq systems: A review, in: Effective Computational Methods in Wave Propagation, N. A. Kampanis, V. A. Dougalis, J. A. Ekaterinaris (eds.)
CRC Press 2008, 63-110. H. Fang, Y. Saad, Two classes of multisecant methods for nonlinear acceleration, Numer. Linear Algebra Appl., 16 (2009) 197-221. R. P. Eddy, Extrapolation to the limit of a vector sequence, in: P.C.C. Wang (Ed.), Information Linkage Between Applied Mathematics and Industry, Academic Press, New York, 1979, 387-396. V. Eyert, A comparative study on methods for convergence acceleration of iterative vector sequences, J. Comput. Phys., 124(1996) 271-285. E. Gekeler, On the solution of systems of equations by the epsilon algorithm of Wynn, Math. Comp., 26(1972) 427-437. G. H. Golub, Ch. F. Van Loan, Matrix Computations, J. H. U. Press, Baltimore, 1996. K. Jbilou, H. Sadok, Some results about vector extrapolation methods and related fixed point iterations, J. Comput. Appl. Math., 36(1991) 385-398. K. Jbilou, H. Sadok, LU-implementation of the modified minimal polynomial extrapolation method, IMA J. Numer. Anal., 19(1999) 549-561. K. Jbilou, H. Sadok, Vector extrapolation methods. Applications and numerical comparisons, J. Comput. Appl. Math., 122 (2000) 149-165. B. B. Kadomtsev, V. I. Petviashvili, On the stability of solitary waves in weakly dispersive media, Sov. Phys. Dokl. 15 (1970) 539-541. B. Kim, Three-dimensional solitary waves in dispersive wave systems, Doctoral dissertation, Department of Mathematics, MIT, 2006. B. Kim, T. R. Akylas, On gravity-capillary lumps, J. Fluid Mech. 540 (2005) 337-351. B. Kim, T. R. Akylas, On gravity-capillary lumps. Part 2. Two-dimensional Benjamin equation, J. Fluid Mech. 557 (2006) 237-256. , [A generalized Petviashvili method for scalar and vector Hamiltonian equations with arbitrary form of nonlinearity]{}, [J. Comput. Phys.]{} [ 226]{} (2007) 1668-1692. T.I. Lakoba, J. Yang, A mode elimination technique to improve convergence of iteration methods for finding solitary waves, J. Comp. Phys. 226 (2007) 1693-1709. C. Lanczos, Solutions of systems of linear equations by minimized iterations, J. Res. Natl.
Bur. Stand., 49(1952) 33-53. H. Le Ferrand, The quadratic convergence of the topological $\epsilon$-algorithm for systems of nonlinear equations, Numer. Algorithms, 3 (1992) 273-284. E. Lombardi, Oscillatory Integrals and Phenomena Beyond All Algebraic Orders, with Applications to Homoclinic Orbits in Reversible Systems, Lecture Notes in Mathematics, Springer-Verlag, Berlin, 2000. M. Mesina, Convergence acceleration for the iterative solution of the equations $X=AX+f$, Comput. Methods Appl. Mech. Engrg., 10(1977) 165-173. C. D. Meyer, [Matrix Analysis and Applied Linear Algebra]{}, SIAM Philadelphia, 2000. P. Ni, Anderson Acceleration of Fixed-Point Iteration with Applications to Electronic Structure Computations, Ph. D. thesis, Worcester Polytechnic Institute, Worcester, MA, 2009. R. L. Pego, M. I. Weinstein, Convective linear stability of solitary waves for Boussinesq equations, [Stud. Appl. Math.]{} [99]{}(1997) 311-375. , [ Convergence of Petviashvili’s iteration method for numerical approximation of stationary solutions of nonlinear wave equations]{}, [ SIAM J. Numer. Anal.]{} [ 42]{} (2004) 1110-1127. , [ Soviet J. Plasma Phys.]{} [ 2]{} (1976) 257-258. F. Potra, H. Engler, A characterization of the behaviour of the Anderson acceleration on linear problems, Linear Algebra and its Applications, 438(3)(2013) 1002-1011. P. Pulay, Improved SCF convergence, J. Comput. Chem., 3 (1982) 556-560. Y. Saad, Krylov subspace methods for solving large unsymmetric linear systems, Math. Comp., 37(1981) 105-126. Y. Saad, M. H. Schultz, GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Comput., 7(1986) 856-869. A. Sidi, Convergence and stability of minimal polynomial and reduced rank extrapolation algorithms, SIAM J. Numer. Anal., 23 (1986) 197-209. A. Sidi, Extrapolation vs. projection methods for linear systems of equations, J. Comp. Appl. Math., 22(1)(1988) 71-88. A.
Sidi, Efficient implementation of minimal polynomial and reduced rank extrapolation methods, J. Comput. Appl. Math., 36(1991) 305-337. A. Sidi, Practical Extrapolation Methods, Theory and Applications, Cambridge University Press, New York, 2003. A. Sidi, J. Bridger, Convergence and stability analyses for some vector extrapolation methods in the presence of defective iteration matrices, J. Comp. Appl. Math., 22(1988) 35-61. A. Sidi, W. F. Ford, D. A. Smith, Acceleration of convergence of vector sequences, SIAM J. Numer. Anal., 23 (1986) 178-196. J. R. Schmidt, On the numerical solution of linear simultaneous equations by an iterative method, Phil. Mag., Ser. 7, 32(1941) 369-383. D. Shanks, Non-linear transformations of divergent and slowly convergent sequences, J. Math. and Phys., 34(1955) 1-42. S. Skelboe, Computation of the periodic steady-state response of nonlinear networks by extrapolation methods, IEEE Trans. Circuits and Systems, 27(1980) 161-175. D. A. Smith, W. F. Ford, A. Sidi, Extrapolation methods for vector sequences, SIAM Rev., 29 (1987) 199-233. R. C. E. Tan, Implementation of the topological $\epsilon$-algorithm, SIAM J. Sci. Statist. Comput., 9 (1988) 839-848. A. Toth, C. T. Kelley, Convergence analysis for Anderson acceleration, SIAM J. Numer. Anal., 53(2015) 805-819. J. Van Iseghem, Convergence of vectorial sequences. Applications, Numer. Math., 68(1994) 549-562. V. V. Voronovich, V. I. Shrira, Y. A. Stepanyants, Two-dimensional models for nonlinear vorticity waves in shear flows, Stud. Appl. Math., 100(1998) 1-32. H. F. Walker, P. Ni, Anderson acceleration for fixed-point iterations, SIAM J. Numer. Anal., 49 (2011) 1715-1735. P. Wynn, On a device for computing the $e_{m}(S_{n})$ transformation, Mathematical Tables and Other Aids to Computation, 10(1956) 91-96. P. Wynn, Acceleration techniques in Numerical Analysis, with particular reference to problems in one independent variable, Proceedings of the IFIP Congress 1962, North-Holland, 149-156. P.
Wynn, General purpose vector epsilon algorithm ALGOL procedures, Numer. Math., 6(1964) 22-36. P. Wynn, On the convergence and stability of the epsilon algorithm, SIAM J. Numer. Anal., 3(1966) 91-122. J. Yang, [*Nonlinear Waves in Integrable and Nonintegrable Systems*]{}, SIAM, Philadelphia, 2010. J. Yang, Classification of solitary wave bifurcations in generalized nonlinear Schrödinger equation, Stud. Appl. Math., 129(2012) 133–162. C. Yang, J. C. Meza, B. Lee, L. W. Wang, KSSOLV- a MATLAB toolbox for solving the Kohn-Sham equations, ACM Trans. Math. Software, 36 (2009) 1-35.
---
abstract: 'We calculate the heavy quark evolution in heavy ion collisions and show results for the elliptic flow $v_2$ as well as the nuclear modification factor $R_{AA}$ at RHIC and LHC energies. For the calculation we implement a Langevin approach for the transport of heavy quarks in the UrQMD (hydrodynamics + Boltzmann) hybrid model. As drag and diffusion coefficients we use a Resonance approach for elastic heavy-quark scattering and assume a decoupling temperature of the charm quarks from the hot medium of $130\, {\mathrm{MeV}}$. At RHIC energies we use a coalescence approach at the decoupling temperature for the hadronization of the heavy quarks to D-mesons and B-mesons and a subsequent decay to heavy-flavor electrons using PYTHIA. At LHC we use an additional fragmentation mechanism to account for the higher transverse momenta reached at higher collision energies.'
address:
- ' $^{1}\,$Frankfurt Institute for Advanced Studies (FIAS), Ruth-Moufang-Str. 1, 60438 Frankfurt am Main, Germany '
- ' $^{2}\,$Institut für Theoretische Physik, Johann Wolfgang Goethe-Universität, Max-von-Laue-Str. 1, 60438 Frankfurt am Main, Germany '
- ' $^{3}\,$Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA '
- '$^4$ School of Physics, Institute of Science, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand'
- '$^5$ Thailand Center of Excellence in Physics (ThEP), Commission on Higher Education, Bangkok 10400, Thailand'
author:
- |
    Thomas Lang$^{1,2}$, Hendrik van Hees$^{1,2}$, Jan Steinheimer$^{3}$,\
    Yu-Peng Yan$^{4,5}$, Marcus Bleicher$^{1,2}$
title: Heavy quark transport at RHIC and LHC
---

Introduction
============

Heavy quarks are an ideal probe of the QGP. They are produced in the primordial hard collisions of the nuclear reaction and therefore probe the created medium during its entire evolution. When the system cools down they hadronize, and their decay products can finally be detected.
Therefore, heavy-quark observables provide new insights into the interaction processes within the hot and dense medium. Two of the most interesting observables are the elliptic flow, $v_2$, and the nuclear modification factor, $R_{AA}$, of open-heavy-flavor mesons and their decay products like “non-photonic” single electrons. The measured large elliptic flow, $v_2$, of open-heavy-flavor mesons and the “non-photonic single electrons or muons” from their decay underline that heavy quarks take part in the collective motion of the bulk medium, consisting of light quarks and gluons. The nuclear modification factor shows a large suppression of the open-heavy flavor particles’ spectra at high transverse momenta ($p_T$) compared to the findings in pp collisions. This also supports a high degree of thermalization of the heavy quarks with the bulk medium. In this letter we explore the medium modification of heavy-flavor $p_T$ spectra, using a hybrid model, consisting of the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) model [@Bass:1998ca; @Bleicher:1999xi] and a full (3+1)-dimensional ideal hydrodynamical model [@Rischke:1995ir; @Rischke:1995mt] to simulate the bulk medium. The heavy-quark propagation in the medium is described by a relativistic Langevin approach [@Rapp:2009my]. Similar studies have recently been performed in a thermal fireball model with a combined coalescence-fragmentation approach [@vanHees:2007me; @vanHees:2007mf; @Greco:2007sz; @vanHees:2008gj; @Rapp:2008fv; @Rapp:2008qc; @Rapp:2009my], in an ideal hydrodynamics model with a lattice-QCD EoS [@He:2012df; @He:2012xz], in a model from Kolb and Heinz [@Aichelin:2012ww], in the BAMPS model [@Uphoff:2011ad; @Uphoff:2012gb], the MARTINI model [@Young:2011ug] as well as in further studies and model comparisons [@Moore:2004tg; @Vitev:2007jj; @Gossiaux:2010yx; @Gossiaux:2011ea; @Gossiaux:2012th]. 
Description of the model ======================== The UrQMD hybrid model has been developed to combine the advantages of transport theory and (ideal) fluid dynamics [@Petersen:2008dd]. It uses initial conditions, generated by the UrQMD model [@Bass:1999tu; @Dumitru:1999sf], for a full (3+1) dimensional ideal fluid dynamical evolution, including the explicit propagation of the baryon current. After a Cooper-Frye transition back to the transport description, the freeze out of the system is treated dynamically within the UrQMD approach. The hybrid model has been successfully applied to describe particle yields and transverse dynamics from AGS to LHC energies [@Petersen:2008dd; @Steinheimer:2007iy; @Steinheimer:2009nn; @Petersen:2010cw; @Petersen:2011sb] and is therefore a reliable model for the flowing background medium. The diffusion of “heavy particles” in a medium consisting of “light particles” can be described with the help of a Fokker-Planck equation [@Svetitsky:1987gq; @GolamMustafa:1997id; @Moore:2004tg; @vanHees:2004gq; @vanHees:2005wb; @vanHees:2007me; @Gossiaux:2008jv; @He:2011yi] as an approximation of the collision term of the corresponding Boltzmann equation. It can be mapped into an equivalent stochastic Langevin equation, suitable for numerical simulations. The drag and diffusion coefficients for the heavy-quark propagation within this framework are taken from a Resonance approach [@vanHees:2004gq], where the existence of D-mesons and B-mesons in the QGP phase is assumed, as well as a $T$-Matrix approach [@vanHees:2007me] in which quark-antiquark potentials are used for the calculation of the coefficients in the QGP. The initial production of charm quarks in our approach is based on a Glauber approach. For the realization of the initial collision dynamics we use the UrQMD model.
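The Langevin update that propagates a heavy quark over one hydro time step can be sketched as follows. This is a minimal sketch, not the code used here: the constants `gamma_drag` and `kappa` are hypothetical placeholders for the Resonance-model drag and momentum-diffusion coefficients, which in the actual model depend on temperature and momentum, and the relativistic boosts with the cell velocity and $\gamma$-factor are omitted.

```python
import numpy as np

def langevin_step(p, gamma_drag, kappa, dt, rng):
    """One pre-point (Ito) Langevin update of a heavy-quark momentum.

    p          : 3-momentum in the local fluid rest frame
    gamma_drag : drag coefficient (hypothetical constant; momentum- and
                 temperature-dependent in the Resonance model)
    kappa      : momentum-diffusion coefficient (same caveat)
    dt         : time step in the rest frame
    """
    drift = -gamma_drag * p * dt                    # deterministic drag
    kick = rng.normal(0.0, np.sqrt(kappa * dt), 3)  # Gaussian random kick
    return p + drift + kick

# propagate one quark through 100 time steps (units are illustrative)
rng = np.random.default_rng(0)
p = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    p = langevin_step(p, gamma_drag=0.2, kappa=0.05, dt=0.1, rng=rng)
```

In the zero-diffusion limit the step reduces to pure exponential drag, which is a quick sanity check of the update rule.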
We perform a first UrQMD run excluding interactions between the colliding nuclei and save the nucleon-nucleon collision space-time coordinates. These coordinates are used in a second, full UrQMD run as possible production coordinates for the charm quarks. As momentum distribution for the initially produced charm quarks at $\sqrt {s_{NN}}=200\; {\mathrm{GeV}}$ we use $$\frac{1}{2\pi p_Tdp_T}=\frac{\left(A_1+p_T^2\right)^2}{\left(1+A_2\cdot p_T^2\right)^{A_3}},$$ with $A_1=0.5$, $A_2=0.1471$ and $A_3=21$ and for bottom quarks $$\frac{1}{2\pi p_Tdp_T}=\frac{1}{\left( A_1+p_T^2 \right)^{A_2}},$$ with $A_1=57.74$ and $A_2=5.04$. These distributions are taken from [@vanHees:2005wb; @vanHees:2007me]. The $p_T$ distribution for charm quarks at $2.76\,$TeV is obtained from a fit to PYTHIA calculations. $$\frac{1}{2\pi p_Tdp_T}=\frac{1}{(1+A_1\cdot \left(p_T^2\right)^{A_2})^{A_3}}$$ with the coefficients $A_1=0.136$, $A_2=\,2.055$ and $A_3=\,2.862$. Starting with these distributions as initial conditions we propagate the heavy quarks at each hydro-timestep. We use the UrQMD/hydro’s cell velocities, the cell temperature, the size of the time-step, and the $\gamma$-factor for the calculation of the momentum transfer, propagating all quarks independently. Our approach provides us only with the heavy-quark distributions. Since heavy quarks cannot be measured directly in experiments we include a hadronization mechanism for D-mesons and B-mesons, via the use of a quark-coalescence mechanism. To implement this coalescence we perform our Langevin calculation until the decoupling temperature is reached. Subsequently we add the momenta of light quarks to those of the heavy quarks. Results ======= First we performed our calculations in Au+Au collisions at $\sqrt {s_{NN}}=200\; {\mathrm{GeV}}$ in a centrality range of 20%-40%. 
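The initial charm $p_T$ distribution for $\sqrt{s_{NN}}=200\;{\mathrm{GeV}}$ quoted above can be sampled by rejection sampling. The sketch below uses the fit coefficients from the text; the cutoff `pt_max` and the 5% envelope safety margin are our own assumptions, and the overall normalization drops out of the sampling.

```python
import numpy as np

def charm_pt_weight(pt, a1=0.5, a2=0.1471, a3=21.0):
    # fit to dN/(2*pi*pT*dpT) at sqrt(s_NN) = 200 GeV (coefficients from the text)
    return (a1 + pt**2) ** 2 / (1.0 + a2 * pt**2) ** a3

def sample_charm_pt(n, pt_max=10.0, rng=None):
    """Rejection-sample n values of pT from dN/dpT ~ pT * weight(pT)."""
    rng = np.random.default_rng() if rng is None else rng
    # constant envelope: bound on pT * weight(pT), found numerically on a grid
    grid = np.linspace(1e-3, pt_max, 2000)
    bound = 1.05 * (grid * charm_pt_weight(grid)).max()
    out = []
    while len(out) < n:
        pt = rng.uniform(0.0, pt_max)
        if rng.uniform(0.0, bound) < pt * charm_pt_weight(pt):
            out.append(pt)
    return np.array(out)
```

The same scheme applies to the bottom and LHC fits by swapping in the corresponding weight function.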
To compare our results to the single-electron spectra measured by PHENIX we use PYTHIA for the decay of the heavy quarks to heavy flavor electrons and apply a rapidity cut of $|y|<0.35$. Fig. \[RHIC\] (left) shows our results for the elliptic flow $v_2$. For a decoupling temperature of $130\;{\mathrm{MeV}}$ we obtain a reasonable agreement with the experimental data except for low $p_T$ bins. Here a depletion effect can be seen. This effect is due to the radial velocity of the medium, which, once elliptic flow has developed, is larger in the $x$ than in the $y$ direction. Consequently, particles with high $v_x$ are depleted in the low-$p_T$ region, which reduces the elliptic flow there. This effect is more important for heavier particles and a larger radial flow [@Huovinen:2001cy; @Krieg:2007bc]. In Fig. \[RHIC\] (right) the nuclear modification factor $R_{AA}$ for non-photonic single electrons is depicted. ![(Color online) Elliptic flow $v_2$ (left) and nuclear modification factor $R_{AA}$ (right) of electrons from heavy quark decays in Au+Au collisions at $\sqrt {s_{NN}}=200\;{\mathrm{MeV}}$ using a coalescence mechanism. We use a rapidity cut of $|y|<0.35$. For a decoupling temperature of $130\;{\mathrm{MeV}}$ we get a reasonable agreement with the data [@Adare:2010de][]{data-label="RHIC"}](CoaflowRHIC4){width="100.00000%"} ![(Color online) Elliptic flow $v_2$ (left) and nuclear modification factor $R_{AA}$ (right) of electrons from heavy quark decays in Au+Au collisions at $\sqrt {s_{NN}}=200\;{\mathrm{GeV}}$ using a coalescence mechanism. We use a rapidity cut of $|y|<0.35$. For a decoupling temperature of $130\;{\mathrm{MeV}}$ we get a reasonable agreement with the data [@Adare:2010de][]{data-label="RHIC"}](CoaRAARHIC4){width="100.00000%"} Also here we obtain a good agreement with the data, especially when using the T-Matrix coefficients or a low decoupling temperature.
Now we performed the same calculations, but in Pb+Pb collisions at $\sqrt{s}_{NN} =2.76\; {\mathrm{TeV}}$ in a centrality range of 30%-50%. The analysis is done with a rapidity cut of $|y|<0.35$, in line with the ALICE data. Here we made use of the coalescence mechanism with a decoupling temperature of $130\,\text{MeV}$ only, since we achieved the best results using this configuration at RHIC energies. In the ALICE experiment D-mesons are measured. Therefore we do not need to perform the decay to electrons this time. Fig. \[LHC\] (left) depicts our results for the elliptic flow compared to ALICE measurements. ![(Color online) Left: Flow $v_2$ of D-mesons in Pb+Pb collisions at $\sqrt {s_{NN}}=2.76\,$TeV compared to data from the ALICE experiment. (Talk by Z. Conesa del Valle at QM 2012, data not published yet.) A rapidity cut of $|y|<0.35$ is employed. Right: $R_{AA}$ of D-mesons in Pb+Pb collisions at $\sqrt {s_{NN}}=2.76\; {\mathrm{TeV}}$ compared to experimental data from ALICE [@ALICE:2012ab]. A rapidity cut of $|y|<0.35$ is employed.[]{data-label="LHC"}](flowLHC4){width="100.00000%"} ![(Color online) Left: Flow $v_2$ of D-mesons in Pb+Pb collisions at $\sqrt {s_{NN}}=2.76\,$TeV compared to data from the ALICE experiment. (Talk by Z. Conesa del Valle at QM 2012, data not published yet.) A rapidity cut of $|y|<0.35$ is employed. Right: $R_{AA}$ of D-mesons in Pb+Pb collisions at $\sqrt {s_{NN}}=2.76\; {\mathrm{TeV}}$ compared to experimental data from ALICE [@ALICE:2012ab]. A rapidity cut of $|y|<0.35$ is employed.[]{data-label="LHC"}](RAALHC4){width="100.00000%"} Additionally, apart from the calculation using the coalescence mechanism, a calculation using a fragmentation mechanism is shown, since fragmentation might become more important at the higher $p_T$ reached at LHC. As fragmentation mechanism we used the Peterson fragmentation [@Peterson:1982ak].
$$D^H_Q(z)=\frac{N}{z[1-(1/z)-\epsilon_Q/(1-z)]^2}.$$ Here $N$ is a normalization constant, $z$ is the relative-momentum fraction obtained in the fragmentation of the charm quarks, and $\epsilon_Q=0.05$. Both $v_2$ calculations are in agreement with the ALICE data set. Using the fragmentation function a sharper rise of the elliptic flow at low $p_T$ is reached, while at medium $p_T$ the flow using the coalescence approach is stronger. At high $p_T$ both hadronization mechanisms lead to similar results. A complementary view on the drag and diffusion coefficients is provided by the nuclear suppression factor $R_{AA}$. Figure \[LHC\] (right) shows the calculated nuclear modification factor $R_{AA}$ of D-mesons at LHC. Here we compare to the two available data sets, for $D^0$ and $D^+$ mesons. In line with the experimental data the simulation is done for a more central bin of $\sigma/\sigma_{tot}=0\%$-$20\%$. In case of the coalescence approach we find a maximum of the $R_{AA}$ at about $2 \; {\mathrm{GeV}}$ followed by a sharp decline to an $R_{AA}$ of about $0.2$ at high $p_T$. The fragmentation approach leads to a different result at low $p_T$. A very sharp $R_{AA}$ drop-off from low to high $p_T$ is seen. At high $p_T$ the two approaches nearly converge. Concerning the difference of the results using the fragmentation and coalescence mechanism new $v_2$ and $R_{AA}$ measurements, especially at low $p_T$, would be very helpful to draw conclusions on the hadronization mechanism at LHC. To summarize, we presented in this letter our results on the medium modification of heavy quarks at RHIC and LHC energies using the nuclear modification factor $R_{AA}$ and the elliptic flow $v_2$ as observables. At RHIC energies we compared different sets for drag and diffusion coefficients and obtained the best agreement with experimental measurements when using a Resonance model with a decoupling temperature of $130\,\text{MeV}$.
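As a numerical check of the Peterson fragmentation function used above, one can verify that for $\epsilon_Q=0.05$ it is strongly peaked at large momentum fractions. In the sketch below the normalization $N$ is set to 1 (it drops out of the mean), and the quadrature grid is an arbitrary choice.

```python
import numpy as np

EPS_Q = 0.05  # epsilon_Q for charm quarks, value from the text

def peterson(z, eps=EPS_Q):
    # Peterson et al. fragmentation function, with N = 1
    return 1.0 / (z * (1.0 - 1.0 / z - eps / (1.0 - z)) ** 2)

# mean momentum fraction <z> by a simple Riemann sum on (0, 1)
z = np.linspace(1e-4, 1.0 - 1e-4, 200000)
dz = z[1] - z[0]
norm = (peterson(z) * dz).sum()
mean_z = (z * peterson(z) * dz).sum() / norm
```

The hard spectrum (large $\langle z\rangle$) is what makes Peterson fragmentation suitable for the high-$p_T$ region probed at LHC.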
At LHC we compared a coalescence approach and a fragmentation approach as hadronization mechanisms using the Resonance model at a decoupling temperature of $130\,\text{MeV}$. Both approaches describe the elliptic flow $v_2$ in pretty good agreement with the experimental data, while for the $R_{AA}$ a major disagreement between our models at low $p_T$ can be seen that needs to be resolved by new measurements. ACKNOWLEDGMENTS =============== We are grateful to the Center for Scientific Computing (CSC) at Frankfurt for providing computing resources. T. Lang gratefully acknowledges support from the Helmholtz Research School on Quark Matter Studies. This work is supported by the Hessian LOEWE initiative through the Helmholtz International Center for FAIR (HIC for FAIR). J. S. acknowledges a Feodor Lynen fellowship of the Alexander von Humboldt foundation. This work is supported by the Office of Nuclear Physics in the US Department of Energy’s Office of Science under Contract No. DE-AC02-05CH11231. [99]{} S. A. Bass [*et al.*]{}, Prog. Part. Nucl. Phys.  [**41**]{} (1998) 255 \[Prog. Part. Nucl. Phys.  [**41**]{} (1998) 225\] \[arXiv:nucl-th/9803035\]. M. Bleicher [*et al.*]{}, J. Phys.  [**G25**]{} (1999) 1859 \[arXiv:hep-ph/9909407\]. D. H. Rischke, S. Bernard and J. A. Maruhn, Nucl. Phys.  A [**595**]{} (1995) 346 \[arXiv:nucl-th/9504018\]. D. H. Rischke, Y. Pursun and J. A. Maruhn, Nucl. Phys.  A [**595**]{} (1995) 383 \[Erratum-ibid.  A [**596**]{} (1996) 717\] \[arXiv:nucl-th/9504021\]. R. Rapp and H. van Hees, arXiv:0903.1096 \[hep-ph\]. H. van Hees, M. Mannarelli, V. Greco and R. Rapp, Phys. Rev. Lett.  [**100**]{} (2008) 192301 \[arXiv:0709.2884 \[hep-ph\]\]. H. van Hees, V. Greco and R. Rapp, arXiv:0706.4456 \[hep-ph\]. V. Greco, H. van Hees and R. Rapp, arXiv:0709.4452 \[hep-ph\]. H. van Hees, M. Mannarelli, V. Greco and R. Rapp, Eur. Phys. J. C [**61**]{} (2009) 799 \[arXiv:0808.3710 \[hep-ph\]\]. R. Rapp, D. Cabrera, V. Greco, M. Mannarelli and H.
van Hees, arXiv:0806.3341 \[hep-ph\]. R. Rapp and H. van Hees, arXiv:0803.0901 \[hep-ph\]. M. He, R. J. Fries and R. Rapp, arXiv:1204.4442 \[nucl-th\]. M. He, R. J. Fries and R. Rapp, arXiv:1208.0256 \[nucl-th\]. J. Aichelin, P. B. Gossiaux and T. Gousset, Acta Phys. Polon.  B [**43**]{} (2012) 655 \[arXiv:1201.4192 \[nucl-th\]\]. J. Uphoff, O. Fochler, Z. Xu and C. Greiner, Phys. Rev.  C [**84**]{} (2011) 024908 \[arXiv:1104.2295 \[hep-ph\]\]. J. Uphoff, O. Fochler, Z. Xu and C. Greiner, Phys. Lett.  B [**717**]{} (2012) 430 \[arXiv:1205.4945 \[hep-ph\]\]. C. Young, B. Schenke, S. Jeon and C. Gale, Phys. Rev.  C [**86**]{} (2012) 034905 \[arXiv:1111.0647 \[nucl-th\]\]. G. D. Moore and D. Teaney, Phys. Rev.  C [**71**]{} (2005) 064904 \[arXiv:hep-ph/0412346\]. I. Vitev, A. Adil and H. van Hees, J. Phys.  [**G34**]{} (2007) S769 \[arXiv:hep-ph/0701188\]. P. B. Gossiaux, J. Aichelin, T. Gousset and V. Guiho, J. Phys.  [**G37**]{} (2010) 094019 \[arXiv:1001.4166 \[hep-ph\]\]. P. B. Gossiaux, S. Vogel, H. van Hees, J. Aichelin, R. Rapp, M. He and M. Bluhm, arXiv:1102.1114 \[hep-ph\]. P. Gossiaux, J. Aichelin and T. Gousset, Prog. Theor. Phys. Suppl.  [**193**]{} (2012) 110 \[arXiv:1201.4038 \[hep-ph\]\]. H. Petersen, J. Steinheimer, G. Burau, M. Bleicher and H. Stocker, Phys. Rev.  C [**78**]{} (2008) 044901 \[arXiv:0806.1695 \[nucl-th\]\]. S. A. Bass, A. Dumitru, M. Bleicher, L. Bravina, E. Zabrodin, H. Stoecker and W. Greiner, Phys. Rev.  C [**60**]{} (1999) 021902 \[arXiv:nucl-th/9902062\]. A. Dumitru, S. A. Bass, M. Bleicher, H. Stoecker and W. Greiner, Phys. Lett.  B [**460**]{} (1999) 411 \[arXiv:nucl-th/9901046\]. J. Steinheimer, M. Bleicher, H. Petersen, S. Schramm, H. Stocker and D. Zschiesche, Phys. Rev.  C [**77**]{} (2008) 034901 \[arXiv:0710.0332 \[nucl-th\]\]. J. Steinheimer, V. Dexheimer, H. Petersen, M. Bleicher, S. Schramm and H. Stoecker, Phys. Rev.  C [**81**]{} (2010) 044913 \[arXiv:0905.3099 \[hep-ph\]\]. H. Petersen, G. Y. Qin, S. A. Bass and B. 
Muller, Phys. Rev.  C [**82**]{} (2010) 041901 \[arXiv:1008.0625 \[nucl-th\]\]. H. Petersen, Phys. Rev.  C [**84**]{} (2011) 034912 \[arXiv:1105.1766 \[nucl-th\]\]. J. Steinheimer, S. Schramm and H. Stocker, Phys. Rev.  C [**84**]{} (2011) 045208 \[arXiv:1108.2596 \[hep-ph\]\]. B. Svetitsky, Phys. Rev. D [**37**]{} (1988) 2484. M. Golam Mustafa, D. Pal and D. Kumar Srivastava, Phys. Rev. C [**57**]{} (1998) 889 \[Erratum-ibid. C [**57**]{} (1998) 3499\] \[nucl-th/9706001\]. H. van Hees and R. Rapp, Phys. Rev. C [**71**]{} (2005) 034907 \[nucl-th/0412015\]. H. van Hees, V. Greco and R. Rapp, Phys. Rev. C [**73**]{} (2006) 034913 \[nucl-th/0508055\]. P. B. Gossiaux and J. Aichelin, Phys. Rev.  C [**78**]{} (2008) 014904 \[arXiv:0802.2525 \[hep-ph\]\]. M. He, R. J. Fries and R. Rapp, Phys. Lett.  B [**701**]{} (2011) 445 \[arXiv:1103.6279 \[nucl-th\]\]. H. van Hees, V. Greco and R. Rapp, Phys. Rev.  C [**73**]{} (2006) 034913 \[arXiv:nucl-th/0508055\]. P. Huovinen, P. F. Kolb, U. W. Heinz, P. V. Ruuskanen and S. A. Voloshin, Phys. Lett.  B [**503**]{} (2001) 58 \[arXiv:hep-ph/0101136\]. D. Krieg and M. Bleicher, Eur. Phys. J.  A [**39**]{} (2009) 1 \[arXiv:0806.0736 \[nucl-th\]\]. A. Adare [*et al.*]{} \[PHENIX Collaboration\], Phys. Rev.  C [**84**]{} (2011) 044905 \[arXiv:1005.1627 \[nucl-ex\]\]. \[ALICE Collaboration\], JHEP [**1209**]{} (2012) 112 \[arXiv:1203.2160 \[nucl-ex\]\]. C. Peterson, D. Schlatter, I. Schmitt and P. M. Zerwas, Phys. Rev. D [**27**]{} (1983) 105.
--- abstract: 'The penetration of transverse magnetic flux into a thin superconducting square film in the flux flow state is considered by numerical simulation. Due to the film self-field, the governing equations are nonlinear, and in combination with the finite viscosity of the moving vortices, this sets up a dynamical barrier for flux penetration into the sample. The corresponding magnetization loop is hysteretic, with the peak in magnetization shifted from the zero position. The magnetic field in increasing applied field is found to form a well-defined front of propagation. Numerical estimates show that the dynamical barrier should be measurable on films with low volume pinning.' author: - 'J. I. Vestg[å]{}rden' - 'Y. M. Galperin' - 'T. H. Johansen' bibliography: - 'superconductor.bib' title: Dynamical barrier for flux penetration in a superconducting film in the flux flow state --- Introduction ============ The penetration of magnetic flux into superconductors is delayed due to the presence of surface barriers, such as the Bean-Livingston barrier,[@beanlivingston64; @burlachkov91; @konczykowski91; @olsen04] surface pinning[@flippen95], and various barriers of geometric origin.[@clem73; @brandt93-epl; @mawatari03] (The review by Brandt[@brandt95-rpp] lists 7 different mechanisms.) The barriers are particularly important in thin films where the equilibrium field for existence of magnetic flux is much reduced from the bulk lower critical field $H_{c1}$ to $H_{c1}d/2w$, where $d$ is thickness and $w$ is sample width.[@zeldov94-prl] The presence of surface barriers implies that vortices will not necessarily enter the sample when it is energetically favorable for them to reside in the sample center.
Of particular importance in thin films is the geometric barrier caused by the magnetic fields piling up near the edges, which delays penetration until the external field reaches $H_{c1}\sqrt{d/w}$.[@zeldov94-prl] Numerical simulations show that in samples without volume pinning, the magnetic flux that overcomes the barrier tends to pile up in the sample center.[@brandt99] Because the barrier does not prevent vortices from leaving the sample, the magnetization loop is asymmetric, and the magnetization is irreversible.[@brandt99-2] The attention so far has mainly been paid to the static nature of barriers. Yet, dynamic effects might also give rise to barriers for flux penetration. In order to investigate whether this is the case, we consider the dynamics of a superconducting film in transverse magnetic field. We assume that the film is sufficiently wide, so that the magnetic field can be treated as a continuum, and the spatio-temporal evolution of the system can be obtained by solution of the Maxwell equations. In order to separate the dynamical barrier from other kinds of surface barriers, we disregard surface pinning, and assume that $H_{c1}=0$ and the critical current density, $j_c$, is zero. Then, the only mechanism that gives loss in the system is the finite viscosity of the moving vortices, which gives a flux flow resistivity $\rho=\rho_n|H_z|/H_{c2}$, where $H_{c2}$ is the upper critical field. The corresponding dynamical barrier towards flux penetration will thus be strongly dependent on the rate of change of the applied field. Model ===== Let us consider a thin superconducting film with thickness $d$, shaped as a square with sides $2a\gg d$. Due to the absence of pinning, $j_c=0$, and the resistivity is solely given by the conventional flux flow expression[@bardeen65] $$\rho=\rho_n|H_z|/H_{c2}, \label{rho}$$ where $\rho_n$ is the normal state resistivity, $H_{c2}$ is the upper critical field, and $H_z$ is the transverse component of the magnetic field.
The magnetic field has two contributions, the applied field and the self-field of the sample,[@vestgarden13-fftsim] $$\label{h1} H_z=H_a+\mathcal F^{-1}\left[\frac{k}{2}\mathcal F\left[g\right]\right] ,$$ where $\mathcal F$ and $\mathcal F^{-1}$ are the forward and inverse Fourier transforms, respectively, and $k=\sqrt{k_x^2+k_y^2}$ is the wave-vector. The local magnetization $g$ is defined by $\nabla\times\hat zg=\mathbf J$, where $\mathbf J$ is the sheet current. The inverse of Eq.  and a time derivative gives $$\label{dotg1} \dot g = \mathcal F^{-1}\left[\frac{2}{k}\mathcal F\left[\dot H_z-\dot H_a\right]\right] .$$ Inside the sample, Faraday's law and the material law, Eq. , give $$\dot H_z=\nabla\cdot \left(H_z\nabla g\right) \rho_n /(H_{c2}\mu_0),$$ where $H_z$ is given from Eq. . Outside the sample, $\dot H_z $ is calculated by an iterative Fourier-space/real-space hybrid method which ensures $g=0$ in the vacuum outside the sample. [@vestgarden13-fftsim] Eq.  is non-linear due to the self-field of the sample. In this respect, the situation is different from the parallel geometry, where only the constant applied field enters the expression, and the corresponding equation for the flux dynamics is linear. Let us rewrite the equations in dimensionless form, assuming that the applied field is ramped with constant rate $|\dot H_a|$. We define a time scale and sheet current scale as $$t_0\equiv \sqrt{\frac{\mu_0H_{c2}dw}{\rho_n|\dot H_a|}},\qquad J_0\equiv \sqrt{\frac{\mu_0H_{c2}dw|\dot H_a|}{\rho_n}} .$$ The dimensionless quantities are defined as $\tilde t = t/t_0$, $\tilde g = g/(J_0w)$, $\tilde H=H/J_0$, $\tilde k=wk$. Eq.  becomes $$\label{dotg2} \frac{\partial \tilde g}{\partial \tilde t} = \mathcal F^{-1}\left[\frac{2}{\tilde k}\mathcal F \left[ \frac{\partial \tilde H_z}{\partial \tilde t} - 1 \right]\right] ,$$ where $$\frac{\partial \tilde H_z}{\partial \tilde t} = \tilde \nabla \cdot \left[\tilde H_z\tilde \nabla \tilde g\right] ,$$ valid inside the sample.
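A minimal explicit-Euler sketch of this scheme in the dimensionless variables (with $\partial\tilde H_a/\partial\tilde t=1$) is given below. It is not the hybrid method cited above: instead of iterating to enforce $g=0$ in the vacuum, it crudely assumes that $\dot H_z$ simply follows the applied ramp outside the sample, which only illustrates the structure of the update.

```python
import numpy as np

def step(g, H_a, inside, dx, dt):
    """One explicit Euler step of the dimensionless flux-flow equations.

    g      : local magnetization on an n x n grid (zero in the vacuum)
    H_a    : instantaneous applied field, ramped at unit rate
    inside : boolean mask marking the sample
    """
    n = g.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, dx)
    KX, KY = np.meshgrid(kx, kx)
    k = np.hypot(KX, KY)
    # total field: H_z = H_a + F^-1[(k/2) F[g]]
    H_z = H_a + np.real(np.fft.ifft2(0.5 * k * np.fft.fft2(g)))
    # inside the sample: dH_z/dt = div(H_z grad g)
    gy, gx = np.gradient(g, dx)
    dHdt = (np.gradient(H_z * gx, dx, axis=1)
            + np.gradient(H_z * gy, dx, axis=0))
    # crude vacuum treatment (assumption): outside, H_z just follows the ramp
    dHdt = np.where(inside, dHdt, 1.0)
    # dg/dt = F^-1[(2/k) F[dH_z/dt - dH_a/dt]], with dH_a/dt = 1
    inv_k = np.where(k > 0.0, 2.0, 0.0) / np.where(k > 0.0, k, 1.0)
    dgdt = np.real(np.fft.ifft2(inv_k * np.fft.fft2(dHdt - 1.0)))
    return g + dt * dgdt
```

Starting from $g=0$, a single step already induces the shielding response that builds up the dynamical barrier.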
As long as $|\dot H_a|$ is constant, there are no free parameters in the problem. We will henceforth omit the tildes in the dimensionless quantities, when reporting the results. A total area of size $1.4\times 1.4$ is discretized on a $512\times 512$ grid. The additional vacuum at the sides of the superconductor is used to implement the boundary conditions. ![ \[fig:moment\] The $m - H_a$ magnetization loop. Even in the absence of pinning the loop is hysteretic due to the dynamical barrier. ](moment.pdf "fig:"){width="6cm"}\ Results ======= Let us now consider the evolution of the sample as it completes a magnetization cycle. The external field is driven at a constant rate $|\dot H_a|=1$ up to the maximum field $H_a=3$, starting from zero-field-cooled conditions. As the applied field is changed, shielding currents are induced in the sample, giving it a nonzero magnetic moment $m$. The magnetic moment is calculated as $m=\int g(x,y) dxdy $. Figure \[fig:moment\] shows the magnetic moment as a function of applied field. The plot contains the virgin branch and a steady state loop. As expected for a superconducting film, the main direction of the response is diamagnetic. The magnitude $|m|$ reaches a peak for $H_a=0.54$ in the virgin branch and at $H_a=0.35$ in the steady-state loop, while it decreases at higher magnetic fields. The shape of the loop is quite similar to superconductors with a field-dependent critical current,[@mcdonald96] except that the magnetization peak is shifted from $H_a=0$.[@shantsev99] In this respect the dynamical barrier is similar to other kinds of surface barriers.[@burlachkov91; @mawatari03] ![image](HJ.pdf){width="17cm"} Figure \[fig:HJ\] shows $H_z$ and $J$ magnitude and stream lines at various applied fields. The state at $H_a=0.5$ is close to the peak in magnetization in the virgin branch. The flux piles up close to the edges, and falls to zero on a well defined flux front, roughly penetrating one third of the distance to the sample center.
The current stream lines are smooth, with the highest density in the flux-penetrated region. The flux distribution has some similarity with the square in the critical state,[@brandt95] but the most striking difference is the absence of dark $d$-lines at the diagonals. At $H_a=1.1$, the flux front has reached the center of the sample. The edge of the sample is still white signifying piling up of flux there, but the flux distribution at this time is much more uniform than it was earlier, and the current density is correspondingly much lower. This is a feature caused by the short lifetime of currents in superconductors in the flux flow state. The rightmost panels show the remanent state after the field has been increased to the maximum $H_a=3$ and then back to $H_a=0$. The distributions are star-shaped, with the inner part of the sample carrying low current and containing a large amount of trapped positive flux. The flux is trapped due to a line with $H_z=0$ inside the sample, where the strong shielding currents flow with zero resistivity. The shielding from the currents at this line prevents the trapped flux from leaving the sample. Let us return to the dimensional quantities to determine how easy it is to measure the effect of the dynamic barrier. The most likely candidate materials are superconductors with low intrinsic flux pinning and low first critical field. One such material is MoGe thin films.[@kubo88] With the values $\mu_0H_{c2}=$3 T, $\rho_n=2\cdot 10^{-6}~\Omega$m, $d=50~$nm, $w=2~$mm, and driving rate $\mu_0\dot H_a=10~$T/s, we get $J_0=35~$A/m and $t_0=4.3~$ms. The characteristic current density will thus be $J_0/d=6.9\cdot 10^{8}~$A/m$^2$ and the magnetic field values will be of order $\mu_0J_0=0.043~$mT. In this case the dynamical barrier will be larger than the geometric barrier obtained in Ref. , which is of order $\mu_0H_p=\mu_0H_{c1}\sqrt{d/w}=0.01~$mT, with $\mu_0H_{c1}=2~$mT.
Experimentally it will thus be easy to distinguish the geometric barrier from the dynamical barrier due to the ramp-rate dependency of the latter. Summary ======= The penetration of magnetic flux into superconducting films can be delayed due to a dynamical barrier caused by the viscous motion of the vortices. In this work we have studied this effect on a thin film superconductor of square shape using numerical simulations. The point that makes the dynamics interesting is that in transverse geometry, the flux flow equations are non-linear due to the film self-field, contrary to parallel geometry where they are linear. In a small applied magnetic field, the flux penetrates into the sample in an orderly manner with a well-defined flux front, similar to the critical state, but with an absence of current discontinuity lines. When the applied field is changed there are fronts moving where the total magnetic field is zero, and shielding currents flow without resistivity. In particular in the remanent state, such a front will prevent magnetic flux from leaving the sample, so that the remanent state contains trapped flux. The magnetization loop is hysteretic with the magnetization peak shifted from the zero position. Numerical estimates show that the effect of the dynamical barrier should be possible to measure on thin films of materials with low volume pinning. The effect is easily distinguished from other kinds of barriers due to its dependence on the rate of change of the applied field. This work was financially supported by the Research Council of Norway.
--- author: - 'Sunil Kumar Yadav, Ulrich Reitebuch, and Konrad Polthier' bibliography: - 'extrinsic.bib' title: Mesh Denoising based on Normal Voting Tensor and Binary Optimization --- Mesh denoising is a central preprocessing tool in discrete geometry processing with many applications in computer graphics such as CAD, reverse engineering, virtual reality and medical diagnosis. The acquisition of 3D surface data takes place using 3D measurement technologies such as 3D cameras and laser scanners. During the surface measurement, noise is inevitable due to various internal and external factors; this degrades surface data quality and its usability. The main goal of any mesh denoising algorithm is to remove spurious noise and compute a high quality smooth function on the triangle mesh while preserving sharp features. In general, noise and sharp features both have high frequency components, so decoupling the sharp features from noise is still a challenging problem in mesh denoising algorithms. Traditionally, noise is removed by using a low pass filtering approach, but this operation leads to feature blurring. A variety of Laplacian-based surface smoothing algorithms are available to overcome the problem of feature blurring. Our smoothing approach uses eigenanalysis and a binary optimization of the proposed element based normal voting tensor to decouple noise and sharp features. We design an iterative denoising method that removes low and high frequency noise while preserving sharp features in smooth surfaces. Our algorithm does not produce piecewise flat areas (false features) in the denoised triangular mesh. Contributions ------------- We introduce a simple and effective mesh denoising algorithm which does not follow the classic Laplacian approach of surface smoothing. Our algorithm follows a two stage denoising process. In the first stage, we process noisy face normals. In the second stage, we update the vertex positions accordingly.
Our main contributions are as follows: - We propose a tensor-based smoothing technique with a stable and fast convergence property to remove the undesired noise from noisy surfaces. - We apply a binary optimization technique on the eigenvalues of the proposed element-based normal voting tensor (ENVT) that helps us to retain sharp features in the concerned geometry and improves the convergence rate of the algorithm. - We give a stochastic analysis of the effect of noise on the triangular mesh based on the minimum edge length of the elements in the geometry. It gives an upper bound on the noise standard deviation below which the probability of flipped element normals is small. ![image](pipeline.jpg){width="0.96\linewidth"} Related Work ============ In the last two decades, a wide variety of smoothing algorithms have been introduced to remove undesired noise while preserving sharp features in the geometry. The most common technique for noise reduction is mainly based on the Laplacian on surfaces. For a comprehensive review on mesh denoising, we refer to [@Botsch] and [@polyMeshAna]. We give a short overview of major related works in this section. Isotropic smoothing methods are the earliest smoothing algorithms. These algorithms have low complexity but suffer from severe shrinkage and feature blurring[@lapDel]. Desbrun et al.[@Desbrunlambda/u] introduced an implicit smoothing algorithm that produces stable results on irregular meshes and avoids shrinkage by volume preservation. Later, the concept of differential coordinates was introduced by Alexa[@Alexa] as a local shape descriptor of a geometry. Su et al. exploited the differential coordinates concept for mesh denoising by computing the mean of the differential coordinates at each vertex and then computing a smooth surface according to the mean differential coordinates[@diffCoordinate]. This method produces less shrinkage but is unable to preserve shallow features.
The differential coordinates framework has been extended for a variety of mesh processing algorithms by Sorkine [@CGF:CGF999]. In general, isotropic smoothing methods are prone to shrink volumes and blur features, but effective in noise removal. Anisotropic diffusion is a PDE-based denoising algorithm introduced by Perona and Malik [@peronaMalik]. The same concept was extended for surface denoising using a diffusion tensor[@Clarenz:2000:AGD:375213.375276] [@Bajaj:2003:ADS:588272.588276]. Similarly, the anisotropic diffusion of surface normals was introduced for surface smoothing by Ohtake et al.[@Ohtake][@Ohtake2]. These methods compute smooth normals by a weighted sum of the neighborhood element normals. Sharp feature identification using surface normals has been introduced, based on the angle between neighboring normals [@KobbeltFaeture]. Tasdizen et al.[@AnisoNormals] exploited the level set surface model along with the anisotropic diffusion of surface normals to produce the desired results. Later, the prescribed mean curvature-based surface evolution algorithm was introduced by Hildebrandt et al.[@aniso]. It avoids volume shrinkage and preserves features effectively during the denoising process. Several other algorithms related to anisotropic diffusion are based on the bilateral smoothing[@bilAniso], which was initially introduced by Tomasi et al. [@Tomasi:1998:BFG:938978.939190] for image smoothing. Later, the Gaussian kd-tree was introduced to accelerate the bilateral and local mean filtering[@AdamsKd]. Researchers have proposed a general framework for bilateral and mean shift filtering in any arbitrary domain[@Solomon]. These algorithms are simple and effective against noise and feature blurring. In general, anisotropic denoising methods are more robust against volume shrinkage and are better in terms of feature preservation, but the algorithm complexity is higher compared to isotropic algorithms.
The two-step denoising methods are simple and quite robust against noise. These algorithms consist of face normal smoothing and vertex position updates[@Ohtake]. Face normals are treated as signals on the dual graph of the mesh with values in the unit sphere. The Laplacian smoothing of the face normals on the sphere was introduced by Taubin[@Taubin01linearanisotropic], where the displacement of the concerned face normal is computed along a geodesic on the unit sphere. The face normal smoothing is done by rotating the face normal on the unit sphere according to the weighted average of the neighbor face normals. Different linear and non-linear weighting functions have been introduced by different algorithms for face normal smoothing. For example, Yagou et al.[@meanface] computed the mean and the median filtering of face normals to remove noise. Later, a modified Gaussian weighting function was applied to face normal smoothing in an adaptive manner to reduce feature blurring[@automaticSmoothing]. In continuation, the alpha trimming method introduced a non-linear weighting factor which approximates both the mean and the median filtering[@alphaTrimming]. Bilateral normal filtering is one of the simplest and most effective algorithms among the two-step methods[@BilNorm], where the weighting function is computed based on the normal differences (similarity measurement) and spatial distances between neighboring faces. Recently, a total variational method has also been introduced for mesh normal filtering[@peicewiseLin]. After the preprocessing of the face normals, vertex position updates are done by using the orthogonality between the corresponding edge vector and the face normal[@vertexUpdate]. The two-step denoising methods are simple to implement and produce effective results. However, on noisy surfaces, it is difficult to compute the similarity function because of the ambiguity between noise and sharp features; this may lead to unsatisfactory results.
In recent mesh denoising methods, compressed sensing techniques are employed to preserve sharp features precisely and remove noise effectively[@optSurvey]. For example, the $L^0$ mesh denoising method assumes that features are sparse on general surfaces and introduces an area-based differential operator. This method utilizes $L^0$-optimization to maximize flat regions on noisy surfaces to remove noise[@L0Mesh]. The $L^0$ method is effective against high noise but also produces piecewise flat areas on smooth surfaces. Later, the weighted $L^1$-analysis compressed sensing optimization was utilized to recover sharp features from the residual data after global Laplacian smoothing [@L1Mesh]. Recently, the ROF (Rudin, Osher and Fatemi) based algorithm has been introduced in [@ROFL1]. This method applies $L^1$-optimization to both the data fidelity and the regularization term to remove noise without volume shrinkage. In general, the compressed sensing based denoising algorithms are robust against high-intensity noise and recover not only the sharp but also the shallow features, but at the same time these algorithms produce false features (piecewise flat areas) on smooth geometries. A multistage denoising framework was applied in recent methods [@anisobil] and [@NVTsmooth], where feature identification is done by the eigenanalysis of the NVT (normal voting tensor)[@NVT],[@NVTb]. The whole geometry is divided into different clusters based on the features, and smoothing is then applied on the different clusters independently. Later, guided mesh normals were computed based on consistent normal orientation, and bilateral filtering was applied [@Guidedmesh]. Recently, Wei et al.[@binormal] exploited both vertex and face normal information for feature classification and surface smoothing. In continuation, researchers detected features on the noisy surface using quadratic optimization and then removed noise using $L^1$-optimization while preserving features[@robust16].
Multistage denoising algorithms produce effective results against different levels of noise but have a higher complexity because of the different stages. In our method, the face normal smoothing is motivated by the NVT-based algorithms. Noise and features are decoupled using the eigenanalysis of the ENVT, and noise is removed by the multiplication of the ENVT with the corresponding face normal.

Method
======

Figure \[fig:pipeline\] shows the whole pipeline of our algorithm. The face normal smoothing (the yellow blocks in Figure \[fig:pipeline\]) consists of four steps: (1) We compute the geometric neighborhood of the concerned face using a local binary scheme. (2) We define the element-based normal voting tensor within its geometric neighborhood. (3) To remove noise effectively, we apply a binary optimization to the eigenvalues of the computed tensor. (4) We multiply the modified ENVT with the corresponding face normal to suppress the noise. In the last stage (the blue block in Figure \[fig:pipeline\]), we update the vertex positions using the orthogonality between the edge vectors and the face normals. In this section, we briefly explain each stage of the proposed algorithm.

Local Binary Neighbor Selection {#locneigh}
-------------------------------

The first step of our denoising scheme is the preprocessing of the face normals using the neighboring face normals. To select the neighborhood area $\Omega$, there are three possibilities: combinatorial, geodesic and geometrical neighborhood. Each of these terms is explained in Appendix \[app:Neigh\]. The geometrical neighborhood is applied in the proposed algorithm because it depends only on the radius of the disk, irrespective of the mesh resolution, unlike the combinatorial neighborhood. The geometric neighborhood elements are weighted based on an angle threshold value $\rho$ [@LBP]. Based on $\rho$, we assign a binary value to the neighborhood elements $f_j$ w.r.t.
the central element $f_i$ using the following function: $$w_{ij} = \begin{cases} 1 &\mbox{if } \angle (\mathbf{n}_i,\mathbf{n}_j)\leq \rho \\ 0.1 & \mbox{if } \angle (\mathbf{n}_i,\mathbf{n}_j) > \rho, \end{cases} \label{equ:LBP}$$ where $\mathbf{n}_i$ and $\mathbf{n}_j$ are the face normals of the central element and the neighbor elements. By using the value of 0.1 close to a feature, we still allow the area on the other side of the feature to contribute (so the edge direction can be detected from the computed tensor), but the area on the “same” side of the feature will be dominant. Figure \[fig:neighW\] shows that the contribution of the other side of the feature helps to enhance the sharp corner (Figure \[fig:neighW\](d)). Equation \[equ:LBP\] is a discontinuous box filter which takes similar faces into consideration and avoids blurring features within the user-defined geometric neighborhood. Figure \[fig:neighC\] shows that the weighting function depends on the dihedral angle, which can be unstable initially but stabilizes after a few iterations. In further discussion, local binary neighbor refers to the weighting scheme of Equation \[equ:LBP\].

Element-based Normal Voting Tensor (ENVT) {#ENVT}
-----------------------------------------

We define an ENVT on every element of a properly oriented triangulated mesh, similar to the vertex-based voting tensor proposed in [@NVT]. The ENVT $C_i$ is a covariance matrix, defined on the face $f_i$: $$C_i=\frac{1}{\sum_{j \in \Omega_i }^{} w_{ij}} \sum_{j\in\Omega_i}^{} w_{ij} A_j \, \mathbf{n}_j \cdot \mathbf{n}_j^T,$$ where $A_j$ is the area of the corresponding neighbor element $f_j$ and $w_{ij}$ is the weighting function of Equation \[equ:LBP\]. Weighting by the corresponding element areas makes the ENVT more robust against irregular sampling. The eigenanalysis of this tensor identifies features on triangulated surfaces, similar to the methods [@NVT] and [@NVTsmooth].
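As a concrete illustration, the weighting of Equation \[equ:LBP\] and the tensor $C_i$ can be sketched in a few lines of Python; this is a minimal sketch, where the arrays `normals`, `areas` and the neighbor map are hypothetical stand-ins for the actual mesh data structures:

```python
import numpy as np

def envt(i, normals, areas, neighbors, rho=0.8):
    """Element-based normal voting tensor C_i for face f_i (sketch).

    `neighbors[i]` is assumed to hold the geometric neighborhood Omega_i
    of face i; `normals` are unit face normals, `areas` the face areas."""
    n_i = normals[i]
    C = np.zeros((3, 3))
    w_sum = 0.0
    for j in neighbors[i]:
        # local binary weight: 1 if the dihedral angle is below rho, else 0.1
        angle = np.arccos(np.clip(np.dot(n_i, normals[j]), -1.0, 1.0))
        w = 1.0 if angle <= rho else 0.1
        C += w * areas[j] * np.outer(normals[j], normals[j])
        w_sum += w
    return C / w_sum
```

For a flat patch with unit areas, all weights equal 1 and $C_i$ reduces to the outer product of the common normal, consistent with the geometric interpretation below.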
In our algorithm, the ENVT acts as a mesh denoising operator which is able to suppress the noise content of noisy surfaces while preserving sharp features. The similarity between the ENVT and the shape operator is discussed in Appendix \[app:B\]. The ENVT is a symmetric and positive semi-definite matrix, so we can represent $C_i$ using an orthonormal basis of eigenvectors $\mathbf{e}_k$ and real eigenvalues $\lambda_k$: $$C_i = \sum_{k=0}^{2}\lambda_k\mathbf{e}_k\mathbf{e}_k^T.$$ **Geometrical Interpretation:** On a noise-free triangulated mesh, a planar area has only one dominant eigenvalue, in the surface normal direction. Two dominant eigenvalues indicate edge features, where the weakest eigenvector will be along the edge direction. At a corner, all three eigenvalues are dominant. Consider a cube model where the eigenvalues of the ENVT are sorted in decreasing order ($\lambda_1\geq\lambda_2\geq\lambda_3\geq0$) and normalized; then for orthogonal features we can write: $\{\lambda_1=1, \lambda_2=\lambda_3=0\}$ (face), $\{\lambda_1=\lambda_2=\frac{\sqrt{2}}{2}, \lambda_3=0\}$ (edge) and $\{\lambda_1= \lambda_2=\lambda_3=\frac{\sqrt{3}}{3}\}$ (corner).

Eigenvalue Binary Optimization {#sec:binaryOpti}
------------------------------

Let us consider a noisy mesh, corrupted by random noise with standard deviation $\sigma_n$ bounded by the minimum edge length. On a planar area (face) of the geometry, $\lambda_1 \gg \sigma_n$, and the other two eigenvalues will be proportional to the noise intensity, $\lambda_2,\lambda_3 \propto \sigma_n$. Similarly, on an edge of the geometry, $\lambda_1, \lambda_2 \gg \sigma_n$ and $\lambda_3 \propto \sigma_n$. On a corner of the geometry, $\lambda_1,\lambda_2, \lambda_3 \gg \sigma_n$. We apply a binary optimization to remove noise effectively by setting the less dominant eigenvalues to zero and the dominant eigenvalues to one.
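The geometrical interpretation above can be checked numerically. The sketch below builds the (unweighted, equal-area) voting tensor for idealized face, edge and corner configurations of a cube and reproduces the stated normalized eigenvalue triples:

```python
import numpy as np

def signature(normals):
    """Sorted, normalized eigenvalues of the plain voting tensor
    built from a list of unit normals (equal weights and areas)."""
    C = sum(np.outer(n, n) for n in normals) / len(normals)
    lam = np.sort(np.linalg.eigvalsh(C))[::-1]   # lambda_1 >= lambda_2 >= lambda_3
    return lam / np.linalg.norm(lam)             # normalized as in the text

ex, ey, ez = np.eye(3)
face   = signature([ez, ez])       # one dominant eigenvalue
edge   = signature([ez, ey])       # two dominant eigenvalues
corner = signature([ez, ey, ex])   # three dominant eigenvalues
```

The three results are $(1,0,0)$, $(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2},0)$ and $(\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3},\frac{\sqrt{3}}{3})$, matching the face, edge and corner cases.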
Our optimization technique removes noise not only from planar areas but also along the edge direction of sharp features during the denoising process. We implement the binary optimization by introducing a scalar threshold value $\tau$ which is proportional to the noise intensity, $\tau \propto \sigma_n$, and smaller than the dominant eigenvalues. The modified eigenvalues $\tilde{\lambda}_k$ of the ENVT are obtained by the following optimization technique. There are three eigenvalues for feature classification, so our optimization method checks the following three cases:

- At corners of noisy surfaces (smooth or sharp), the smallest eigenvalue should be bigger than the threshold value, i.e. $\lambda_3\geq \tau$. Hence: $$\begin{aligned} \tilde{\lambda}_i= 1 , \quad i \in \{1,2,3\} \quad &\mbox{if} \quad \lambda_3\geq \tau. \end{aligned}$$

- At edges of a noisy geometry (smooth or sharp), the less dominant eigenvalue should be smaller than the threshold value, i.e. $\lambda_3 < \tau $ and $\lambda_2 \geq \tau $. Hence: $$\begin{aligned} \tilde{\lambda}_2=\tilde{\lambda}_1=1 \text{,} \quad{\tilde{\lambda}_3=0} \quad &\mbox {if } \quad \lambda_2\geq \tau \text{,} \quad{\lambda_3 < \tau}. \end{aligned}$$

- In the last case, we check for planar areas of the geometry. Having $\lambda_2< \tau$ and $\lambda_3< \tau$ shows that the only dominant eigenvalue is $\lambda_1$. Hence: $$\begin{aligned} \tilde{\lambda}_1=1 \text{,} \quad{\tilde{\lambda}_2=\tilde{\lambda}_3=0} \quad &\mbox{if } \quad \lambda_1\geq \tau \text{,} \quad{\lambda_3\text{,}\lambda_2 < \tau.} \end{aligned}$$

These are the three possible combinations of the eigenvalue binary optimization. The threshold $\tau$ has to be set by the user according to the noise intensity.

De-Noising using ENVT
---------------------

Our denoising method is inspired by the feature classification characteristic of the eigenvalues of the ENVT.
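The three cases of the eigenvalue binary optimization amount to a simple thresholding rule; a minimal sketch, with `lam` holding the sorted eigenvalues $\lambda_1\geq\lambda_2\geq\lambda_3$:

```python
def binarize_eigenvalues(lam, tau):
    """Binary optimization of the sorted ENVT eigenvalues (sketch).

    lam = (l1, l2, l3) with l1 >= l2 >= l3; tau is the user threshold."""
    l1, l2, l3 = lam
    if l3 >= tau:            # corner: all three eigenvalues dominant
        return (1.0, 1.0, 1.0)
    if l2 >= tau:            # edge: two dominant eigenvalues
        return (1.0, 1.0, 0.0)
    return (1.0, 0.0, 0.0)   # planar region: only lambda_1 dominant
```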
The smallest eigendirections (one at edges of the geometry, two in planar areas) represent noise. Multiplication of the ENVT with the corresponding element normal suppresses noise in the weak eigendirections. This operation also strengthens the element normal in the strongest eigendirection. A visual representation of this operation is shown and explained in Appendix \[denoise\].

### Anisotropic Face Normal Denoising {#sec:AFNP}

We recompute the ENVT by using the same eigenvectors with the modified eigenvalues: $$\label{equ:aniso} \tilde{C}_f=\sum_{k=0}^{2} \tilde{\lambda}_k \mathbf{e}_k \mathbf{e}_k^T.$$ Now, $\tilde{C}_f$ has the quantized eigenvalues according to the different features on the surface. To remove noise, we multiply the corresponding element normal with the newly computed tensor $\tilde{C}_f$. The multiplication leads to noise removal while retaining sharp features: $$\label{equ:faceP} \tilde{\mathbf{n}}_i=d\mathbf{n}_i+ \tilde{C}_f \mathbf{n}_i = d\mathbf{n}_i+ \sum_{k=0}^{2} \tilde{\lambda}_k \langle \mathbf{e}_k,\mathbf{n}_i\rangle \mathbf{e}_k,$$ where $d$ is a damping factor that controls the speed of the preprocessing of the face normals. We use $d=3$ for all experiments. The second row of Figure \[fig:iteration\] shows the face normal denoising using the tensor multiplication.

### Vertex Update {#sec:vertUpdate}

In the last stage of the denoising algorithm, we synchronize the vertex positions with the corresponding newly computed face normals. To compute the proper vertex positions, the orthogonality between edge vectors and face normals is used [@vertexUpdate].
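The anisotropic face normal update of Equations \[equ:aniso\] and \[equ:faceP\] can be sketched as follows (a minimal sketch; the eigenvector matrix and binarized eigenvalues are assumed inputs, and the result is renormalized to stay a unit normal):

```python
import numpy as np

def denoise_face_normal(n, eig_vecs, lam_mod, d=3.0):
    """One face-normal update (sketch): n_tilde = d*n + C_tilde * n,
    followed by renormalization. `eig_vecs[:, k]` is the eigenvector e_k
    and `lam_mod[k]` the binarized eigenvalue lambda_tilde_k."""
    C_tilde = sum(lam_mod[k] * np.outer(eig_vecs[:, k], eig_vecs[:, k])
                  for k in range(3))
    n_new = d * n + C_tilde @ n
    return n_new / np.linalg.norm(n_new)
```

On a planar region (only the first eigenvalue kept), the update pulls a noisy normal toward the dominant eigendirection while the damping $d$ limits the step size.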
An energy function is then defined as follows: $$\begin{aligned} & \underset{v_i}{\text{min}} & & \sum_{j = 0 }^{N_v(i)-1}\sum_{(i,j)\in \partial F_k}^{} \lVert \tilde{\mathbf{n}}_k\cdot (v_i-v_j)\rVert^2, \end{aligned} \label{equ:vertUp}$$ where $v_i$ is the vertex position, $N_v(i)$ is the number of vertices of the vertex star of $v_i$, and $\partial F_k$ is the boundary edge of the vertex star of $v_i$ shared with face $f_k$. $\tilde{\mathbf{n}}_k$ is the smooth face normal at $f_k$. Taubin [@Taubin01linearanisotropic] explained that the face normal vector can be decomposed into a normal and a tangential component, and the main problem here is to find the vertex positions which minimize the tangential error. A possible solution of Equation \[equ:vertUp\] may be a mesh with degenerate triangular faces. Like Taubin [@Taubin01linearanisotropic], to avoid the degenerate solution, we use gradient descent, which leads to the optimal vertex positions: $$\tilde{v}_i= v_i + \frac{1}{F(v_i)}\sum_{j = 0 }^{N_v(i)-1}\sum_{(i,j)\in \partial F_k}^{} \tilde{\mathbf{n}}_k\left(\tilde{\mathbf{n}}_k\cdot (v_j-v_i)\right), \label{equ:dampingFactor}$$ where $F(v_i)$ is the number of faces connected to the vertex $v_i$. We iterate the whole procedure several times; the number of iterations depends on the noise intensity. Figure \[fig:stages\] shows the effect of each stage of the face normal processing in the proposed algorithm.

Effect of Noise on the Proposed Method {#sec:noise}
--------------------------------------

Noise is inevitable during the digital data acquisition of real life objects. High-intensity noise flips edges in the geometry, which leads to inconsistent face normals. As we mentioned in Section \[ENVT\], the ENVT is defined on properly oriented surfaces with consistent face normals because the spectral decomposition of the ENVT is invariant to the face normal orientation.
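The gradient-descent vertex update described above projects each edge vector onto the smoothed face normal and moves the vertex to cancel that component. A minimal sketch under that reading, where the `ring` structure pairing neighbor vertices with incident faces is a hypothetical stand-in for the vertex-star traversal:

```python
import numpy as np

def update_vertex(v, ring, face_normals):
    """One gradient-descent vertex update (sketch).

    `ring` lists (neighbor_vertex, face_index) pairs around v; each term
    reduces the component of the edge (v - v_j) along the face normal."""
    delta = np.zeros(3)
    for v_j, k in ring:
        n_k = face_normals[k]
        delta += n_k * np.dot(n_k, v_j - v)
    return v + delta / len(ring)
```

After the step, the residual $\lvert\tilde{\mathbf{n}}_k\cdot(v_i - v_j)\rvert$ of the energy decreases, which is the descent direction of the vertex-update energy.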
In this section, we give a stochastic approximation of the relation between noise and geometry resolution to prevent edge flips in the geometry. Let us consider a smooth triangular mesh $\mathcal{M}_s$ which is corrupted by noise $\mathcal{N}$: $\mathcal{M}=\mathcal{M}_s+\mathcal{N}$. The noise $\mathcal{N}$ can be approximated by a random vector $X_n$ consisting of three independent random variables. We assume that the random vector $X_n$ follows the Gaussian distribution; this is a realistic model for noise from 3D scanning [@noiseScanner]. Let $\sigma_n$ be the standard deviation of the noise in each independent direction; then: $$\begin{aligned} P\{\lvert X_n \rvert \leq \sigma_n \} &=0.682, \\ P\{\lvert X_n \rvert\leq 2\sigma_n\}&=0.954, \\ P\{\lvert X_n \rvert\leq 3\sigma_n\}&=0.997.\end{aligned}$$ To explain the probability of normal flips, we switch to the 2D case of a polygon in $\mathbb{R}^2$. Let us consider an edge vector $l$ between two vertices $v_0$ and $v_1$ in $\mathbb{R}^2$: $l =(\vec{v_0}-\vec{v_1})$. We give a probabilistic estimation of the effect of noise on the edge $l$ w.r.t. the noise intensity (standard deviation) $\sigma_n$. Our analysis is mainly focused on the proper orientation of the edge normal. A wrong orientation of the edge normal $\mathbf{n}_l$ corresponds to an edge flip in the smooth geometry. We denote by $\Omega_1$ and $\Omega_2$ the sets of correctly oriented and wrongly oriented edge normals, respectively. The probabilistic estimation of the orientation of the edge normals based on noise intensity and edge length is given as follows: - Probability of an edge to have a correctly oriented edge normal: $$P\{\vec{\mathbf{n}_l}\in \Omega_1 \} \geq \begin{cases} 0.682 &\mbox{if } \sigma_n \leq \frac{\lvert l \rvert}{2} \\ 0.954 & \mbox{if } \sigma_n \leq \frac{\lvert l \rvert}{4}.
\end{cases}$$ - Similarly, the probability of an edge to have a wrongly oriented edge normal: $$P\{\vec{\mathbf{n}_l}\in \Omega_2 \} \leq \begin{cases} 0.318 &\mbox{if } \sigma_n \leq \frac{\lvert l \rvert}{2} \\ 0.046 & \mbox{if } \sigma_n \leq \frac{\lvert l \rvert}{4}. \end{cases}$$ Due to the presence of noise, edge flipping may occur when the vector sum of the vertex dislocations at the edge is bigger than the edge length. This is similar to the sampling theorem, where a signal can be reconstructed properly if and only if the data is sampled with a frequency bigger than twice the highest frequency of the data signal. Using the given analysis, for a given probability density function and an upper bound on the standard deviation, we can estimate the expected number of edge flips in the geometry. If a surface is affected by noise only in the normal direction, then there is no edge flip, irrespective of the probability density function of the noise. We also experimented with uniformly distributed noise, where the random variable $X_n$ follows the uniform distribution, so we can write $P\{\lvert X_n \rvert \leq \sigma_n \}= 1$. If the noise intensity is less than half of the minimum edge length in the geometry, then there will be no edge flip. \[tab:quant\]

Experiments, Results and Discussion {#exp}
===================================

We evaluated the capacity of our algorithm on various kinds of CAD (Figure \[fig:neighComp\] - \[fig:fanDisk\]) and CAGD (Figure  \[fig:tauinc\],  \[fig:juli\], \[fig:dragKd\]) models corrupted with synthetic noise, and on real scanned data (Figure  \[fig:realdata\],  \[fig:realdiff\],  \[fig:compLu\]) with different types of features. Noisy surfaces with non-uniform meshes corrupted with different kinds of noise (Gaussian, impulsive, uniform) in different random directions are also included in our experiments.
We compared our method with several state-of-the-art denoising methods; we implemented [@aniso], [@BilNorm], [@L0Mesh] and [@BilFleish] based on their published articles, and several results of [@AdamsKd], [@Guidedmesh], [@binormal] and [@robust16] were provided by their authors. **Parameters:** We discussed several parameters (geometric neighbor radius $r$, dihedral angle threshold $\rho$, eigenvalue threshold $\tau$, damping factor $d$ and iteration count $p$). Throughout the whole experimentation, we fixed $\rho=0.8$ and $d=3$. Effectively, there are only **3 parameters** to tune the results, of which $\tau$ is the most important as it depends on the noise intensity; at the same time, this parameter is not highly sensitive. We use [$\tau \in [0.3, 0.4]$]{} for synthetic data and [$\tau \in [0.05, 0.1]$]{} for real data because the real data have a smaller noise intensity compared to the synthetic data in our experiments. The neighborhood radius $r$ determines the number of elements within the geometric neighborhood region. We iterate several times ([$p \in [40, 60]$]{}) to obtain better results. In the quantitative comparison Table \[tab:quant\], the parameters are listed per method. For the [@Guidedmesh] and [@binormal] methods, we mention *Default* in the parameter column because the smooth models were provided by those authors. We follow a similar pattern for the other algorithms: $(\sigma_c, \sigma_s, p)$ for [@BilFleish], $(\sigma_s, p)$ for [@BilNorm], $(\lambda, s, p)$ for [@aniso] and $(\alpha)$ for [@L0Mesh], where $\sigma_s, \sigma_c$ are the standard deviations of the Gaussian functions in the bilateral weighting, $s$ and $\lambda$ represent the step size and the smoothing threshold, and $\alpha$ controls the amount of smoothing.
**Effect of $\tau$:** To see the effect of different values of $\tau$, we have experimented with two different models: the Box model (real data, with different levels of features and little noise) and the Cube model (with limited features and high noise). With smaller values of $\tau \in [0.01,0.05]$, there is not much change on the Cube model because of the higher noise, whereas on the Box model increasing $\tau$ removes the noise and also the shallow features. So $\tau$ is responsible for removing the noise and also for preserving features. If the feature size is smaller than the noise intensity, feature preservation is an ill-posed problem, as shown in Figure \[fig:tauinc\]. **Neighborhood Comparison:** Figure \[fig:neighComp\] shows that the geometrical neighborhood is more effective on irregular meshes compared to the combinatorial neighborhood. The geodesic neighborhood is quite similar to the geometrical neighborhood, but it is not appropriate when a model is corrupted by a high intensity of noise. **Visual Comparison:** The Block (Figure \[fig:neighComp\]), the Joint (Figure \[fig:jointSharp\]), the Cube (Figure \[fig:cube\]) and the Devil (Figure \[fig:devRock\]) models have non-uniform meshes corrupted with Gaussian noise in random directions. Figure \[fig:jointSharp\] shows that the proposed method produces a smooth model with sharp features without creating any false features (piecewise flat areas, like [@L0Mesh]), while method [@BilNorm] does not manage to remove low-frequency noise (we can see smooth ripples). Method [@Guidedmesh] produces good results, but it could not manage to retain the circular shape at the narrow cylindrical area. It also produces some false features at the non-uniform sharp corner. Method [@binormal] could not manage to retain the sharp features. We can see similar behavior for the Cube model in Figure \[fig:cube\].
The Rockerarm model (Figure \[fig:rocker\]) has a considerably non-uniform mesh; our method better retains sharp features (the screw part) compared to [@Guidedmesh], [@aniso], [@BilFleish], while removing the noise better compared to methods [@BilNorm], [@L0Mesh]. Figure \[fig:devRock\] shows the robustness of our method against volume shrinkage. The horns of the model have the minimum shrinkage compared to state-of-the-art methods. The Fandisk model contains both cylindrical and sharp feature regions and is corrupted by high-intensity Gaussian noise in random directions. Figure \[fig:fanDisk\] shows that the proposed method recovers both sharp features and umbilical regions without noise components or false features. Figure \[fig:dragKd\] shows that our method effectively removes noise (around the teeth), keeps small smooth features (on the body) and creates almost no edge flips (around the claw of the dragon) compared to method [@AdamsKd]. In Appendix \[app:mcc\], the surfaces are colored by the absolute value of the mean curvature to compare the proposed method with several state-of-the-art methods in terms of suppressing noise and keeping sharp features. For real data, we cannot see considerable differences between the proposed method and state-of-the-art denoising methods because the noise intensity is quite low and state-of-the-art methods also produce good results. Figure \[fig:realdata\] shows that our method better retains features in the right eye of the Angel model; for the Rabbit model, our result is quite similar to [@Guidedmesh] and better compared to the other methods. Figure \[fig:compLu\] shows the comparison of our method with method [@robust16] on four different models (real and synthetic data). For the Pierret and the Cube models, our method produces smoother results (in the non-feature region of the Pierret model) while preserving all necessary features (corners of the cube) compared to method [@robust16].
For the Julius model, our method produces a smoother result, but at the cost of some fine features (around the eyes), compared to method [@robust16]. For the Vase model, method [@robust16] produces results quite similar to ours. However, the shrinkage effect is bigger for method [@robust16], as shown in Table \[tab:quant\]. Figure \[fig:realdiff\] shows the robustness of the proposed algorithm against irregular meshes and holes in real data. The Gargoyle and the Eagle models have several holes and spikes, but our smoothing algorithm manages to produce a smooth surface with proper features. **Robustness against noise:** Our method is robust against different kinds of noise, as shown in Figure \[fig:diffNoise\], where the Vase model is corrupted by impulsive and uniform noise. The proposed method does not produce appropriate results above a certain level of noise, as shown in Figure \[fig:noiseLevel\]. Figure \[fig:edgeFlip\] shows that our method is robust against random edge flips. **Quantitative Comparison:** In this section, we give a quantitative comparison of our method with state-of-the-art methods. We use two error metrics: $E_v$ (the $L^2$ vertex-based error) and the MSAE (the mean square angular error). The positional error with respect to the original ground truth model is represented by $E_v$ and defined as [@vertexUpdate]: $$E_v = \sqrt{\frac{1}{3\sum_{k\in F}^{} A_k} \sum_{i\in V}^{} \sum_{j\in F_v(i)}^{} A_j {\mathrm{dist}(\tilde{v}_i, T)}^2},$$ where $F$ is the set of triangular elements and $V$ is the set of vertices. $A_k$ and $A_j$ are the corresponding element areas and $F_v(i)$ is the set of elements in the $i^{th}$ vertex ring. $\mathrm{dist}(\tilde{v}_i, T)$ is the closest $L^2$-distance between the newly computed vertex $\tilde{v}_i$ and the triangle $T$ of the reference model.
The MSAE computes the orientation error between the original model and the smooth model and is defined as: $$MSAE = E[\angle ({\mathbf{\tilde{n}, n}})],$$ where $\mathbf{\tilde{n}}$ is the newly computed face normal and $\mathbf{n}$ represents the face normal of the reference model. $E$ stands for the expectation value. The quantitative comparison in Table \[tab:quant\] shows that our method performs better for most of the models, e.g. Cube, Devil, Joint etc. For some models, like the Fandisk, our method produces numeric errors quite similar to those of state-of-the-art methods. ![Convergence plot comparison between the methods [@aniso], [@BilNorm] and our method. The error metrics MSAE and $E_v$ are computed for the Cube model and $p$ is the number of iterations. The figure shows that the proposed method has a better convergence rate compared to the methods [@aniso] and [@BilNorm]. []{data-label="fig:errGraph"}](msae.jpg){width="7cm"} **Convergence:** Our smoothing algorithm has a stable and fast convergence (as shown in Figure \[fig:errGraph\]) because of the eigenvalue binary optimization. Once the noise is removed, the multiplication with the modified ENVT no longer affects the orientation of the corresponding face normal, because the less dominant eigenvalues are zero and the dominant eigendirection is aligned with the face normal. After some iterations, once this scenario is reached, there is no further modification of the surface, as shown in Figure \[fig:iteration\], where there is no significant (visual) change after 60 iterations. Figure \[fig:errGraph\] shows that the proposed method converges with a minimum error compared to the methods [@aniso] and [@BilNorm]. We can see that after 40 iterations, our method is almost stable and does not produce significant changes. The eigenvalue binary optimization not only helps in preserving features but also improves the convergence rate of the algorithm.
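Following the printed definition, the MSAE reduces to the mean angle between corresponding unit face normals; a minimal sketch, where the normal arrays are assumed row-wise and unit length:

```python
import numpy as np

def msae(denoised_normals, reference_normals):
    """Mean angular error E[angle(n_tilde, n)] between rows of unit
    face normals (sketch of the MSAE metric as printed)."""
    cosines = np.clip(np.sum(denoised_normals * reference_normals, axis=1),
                      -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))
```

Identical normal fields give an error of zero, and the error grows with the angular deviation of the denoised normals from the reference.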
**Running Time Complexity:** The running time complexity of the proposed method is similar to most of the two-step denoising methods [@Ohtake], [@Taubin01linearanisotropic], [@alphaTrimming], [@BilNorm]. The neighborhood computation for the ENVT is done by the growing disk method. The ENVT computation has a complexity of $O(c \cdot n_f \cdot p )$, where $c$ is the number of elements within the neighborhood and $n_f$ and $p$ are the numbers of elements and iterations, respectively. The tensor multiplication procedure has a running complexity of $O(n_f)$. Similarly, the vertex update procedure has a complexity of $O(c \cdot n_v \cdot p )$, where $n_v$ is the number of vertices in the geometry. In general, $n_f>n_v$, so the overall complexity of the algorithm is $O(c \cdot n_f \cdot p )$. The number of elements in the geometric neighborhood $c$ plays an important role in the running time of the algorithm, as shown in Table \[tab:runn\]. For example, the Devil model has a smaller number of elements but a larger running time compared to the Joint model because of the different geometric neighborhood radius. The bilateral normal method [@BilNorm] uses a fixed number of neighborhood elements (depending on the valence of a vertex) for face normal smoothing and is a bit faster compared to the proposed method. However, the other recent two-step denoising methods [@Guidedmesh], [@binormal] are slower compared to our method because of their additional denoising steps.

Conclusion and Future work
==========================

In this paper, we presented a simple and effective tensor multiplication algorithm for feature-preserving mesh denoising. The concept of the element-based normal voting tensor (ENVT) has been introduced, and the eigenanalysis of this tensor leads to a decoupling of features from noise.
We have shown that the proposed method does not need any additional Laplacian-based smoothing technique to remove the noise, unlike multistage state-of-the-art methods [@anisobil], [@NVTsmooth], [@Guidedmesh], [@binormal]. Our method removes noise by multiplying the ENVT with the corresponding face normal, which reduces the complexity of the algorithm. We have introduced the concept of eigenvalue binary optimization, which not only enhances sharp features but also improves the convergence rate of the proposed algorithm. The local binary neighborhood selection helps to select similar elements in the neighborhood for computing the element-based normal voting tensor, which avoids feature blurring during the denoising process. We provide a stochastic analysis of the effect of noise on the geometry depending on the edge lengths of the triangulated mesh. On the basis of this analysis, we can provide an upper bound on the noise standard deviation, depending on the minimum edge length, to reconstruct the smooth surface from the noisy surface. The experimental results (visual and quantitative) show the capability of the proposed algorithm. Our method produces good results not only visually but also quantitatively on all kinds of data, including CAD, CAGD and real data. We have also shown the robustness of the algorithm against different kinds and levels of noise, and we discussed the wrong orientation of triangles in the presence of strong noise. In future work, we would like to solve the problem of edge flips and extend our algorithm to point set smoothing.

Neighborhood definitions {#app:Neigh}
========================

The [combinatorial neighborhood]{} is defined as the set of all elements connected with the vertices of the corresponding face: $$\Omega_i = \{f_j|{v_i}_1\in f_j \lor {v_i}_2\in f_j \lor {v_i}_3\in f_j\},$$ where the neighborhood region is denoted by $\Omega_i$ and the vertices ${v_i}_1$, ${v_i}_2$ and ${v_i}_3$ belong to the face $f_i$.
[Geometrical neighborhood]{} is defined as the set of all elements whose centroids lie within a disk of the desired radius centered at the corresponding element: $$\Omega_i = \{ {f_j}| \quad {\lvert c_j-c_i\rvert \leq r} \},$$ where $c_i$ and $c_j$ are the centroids of the central and the neighboring element and $r$ is the radius of the disk defining the geometrical neighborhood. [Geodesic neighborhood]{} is defined as the set of all elements within the shortest (geodesic) distance given by the radius $r$: $$\Omega_i = \{f_j | \quad \mathcal{D}(f_i,f_j) \leq r \},$$ where $f_i$ is the source element and $\mathcal{D}(f_i,f_j):\mathcal{M} \to \mathbb{R}$ is a geodesic distance function on a manifold surface $\mathcal{M}$. ![The basic idea behind the proposed method to remove noise in $\mathbb{R}^2$, where [$e_i$]{} and $\lambda_i$ represent the eigenvectors and eigenvalues of the proposed element-based voting tensor and [$n$]{} is the noisy normal. We rotate the noisy normal [$n$]{} towards the dominant eigendirection [$e_1$]{} by the corresponding tensor multiplication. []{data-label="fig:idea"}](idea4.jpg){width="\linewidth"} Relation between the Shape Operator and the ENVT {#app:B} ================================================ Let us consider a smooth manifold surface $\mathcal{M}$ embedded in $\mathbb{R}^3$. We assume that the surface is orientable and has a well-defined normal field $N:\mathcal{M} \to S^2$. Then we can define our proposed tensor $C_\Omega(p): \mathbb{R}^3 \to \mathbb{R}^{3\times 3}$ around a point $p$: $$C_\Omega(p) = \int_{\Omega}^{} (n\cdot n^T)d\Omega,$$ where $n\in N$ and $\Omega$ is the local neighborhood area of the point $p$. To compute the shape operator on a surface, the surface must be $C^2$ and properly oriented. The shape operator is defined on the tangent plane, $\mathcal{S}:T_p\mathcal{M} \times T_p\mathcal{M} \to \mathbb{R}$, where $T_p\mathcal{M}$ is the tangent plane, spanned by the basis vectors $\xi_1$ and $\xi_2$.
The eigenvalues of the shape operator $\mathcal{S}$ are the principal curvatures $\kappa_1$ and $\kappa_2$, and its eigenvectors form a basis of the tangent plane ($\xi_1, \xi_2$); they are called the principal curvature directions. Represented in the orthonormal basis formed by the normal and the principal curvature directions, the surface normal in a local neighborhood of the point $p$ can be approximated by: $$n(\xi_1,\xi_2)=[1, \kappa_1\xi_1, \kappa_2\xi_2].$$ Now, we can compute the covariance matrix using the above normal vector: $$n\cdot n^T = \begin{bmatrix} 1 & \kappa_1\xi_1 & \kappa_2\xi_2 \\ \kappa_1\xi_1 & \kappa_1^2\xi_1^2 & \kappa_1\xi_1\kappa_2\xi_2 \\ \kappa_2\xi_2 & \kappa_1\xi_1\kappa_2\xi_2 &\kappa_2^2\xi_2^2 \end{bmatrix}.$$ For a small symmetric neighborhood $\Omega$, the integrals of the off-diagonal components vanish: $$\int_{\Omega}^{} n\cdot n^T d\Omega = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \kappa_1^2\xi_1^2 & 0 \\ 0 & 0 &\kappa_2^2\xi_2^2 \end{bmatrix}.$$ Under this approximation, the shape operator is contained in the covariance matrix as the lower right $2\times2$ sub-matrix. Therefore the second and third eigenvalues of the ENVT approximate the squares of the principal curvatures $\kappa_1$ and $\kappa_2$. A Visual Representation of the mesh denoising {#denoise} ============================================= Traditionally, face normal smoothing is done by rotating the concerned face normal along a geodesic on the unit sphere, whereas our method aligns the face normals by projection. We project noisy face normals towards the smooth normals by multiplying the ENVT with the corresponding face normal. A demonstration of the ENVT multiplication with a noisy face normal in $\mathbb{R}^2$ is shown in Figure \[fig:idea\]. Since $\lambda_1 > \lambda_2$, the multiplication strengthens the face normal component in the dominant direction $\mathbf{e}_1$ and suppresses noise in the $\mathbf{e}_2$ direction.
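In $\mathbb{R}^3$ this projection can be sketched numerically as follows (a minimal illustration; the function name, the uniform area weights and the binarization threshold are our assumptions, not the authors' parameters):

```python
import numpy as np

def envt_denoise_normal(noisy_n, neighbor_normals, areas, thresh=0.3):
    """Project a noisy face normal with a binarized normal voting tensor."""
    # Element-based normal voting tensor: weighted sum of n n^T.
    C = sum(a * np.outer(n, n) for n, a in zip(neighbor_normals, areas))
    w, V = np.linalg.eigh(C)            # eigh returns ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]          # reorder to descending
    # Eigenvalue binary optimization: keep dominant directions, drop noise.
    w_bin = (w / w[0] > thresh).astype(float)
    C_bin = V @ np.diag(w_bin) @ V.T
    out = C_bin @ noisy_n               # tensor multiplication step
    return out / np.linalg.norm(out)    # renormalize the projected normal
```

For a nearly flat patch with noisy normals scattered around $+z$, the dominant eigendirection is close to $z$, the two weak eigenvalues are binarized to zero, and the returned normal snaps to the dominant direction.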
The whole procedure consists of the following steps: - A noisy face normal $\mathbf{n}$ is decomposed according to the eigenbasis ($\mathbf{e}_1$ and $\mathbf{e}_2$) of the element-based normal voting tensor, Figure \[fig:idea\] (left). - Then, the modified eigenvalues ($\lambda_1$ and $ \lambda_2$) are multiplied onto the corresponding eigendirections, suppressing the component along the weak eigendirection, Figure \[fig:idea\] (middle). - Finally, the new element normal $\mathbf{n}^p$ is obtained by normalizing $C_i\cdot \mathbf{n}$, Figure \[fig:idea\] (right). Mean curvature coloring {#app:mcc} ======================= Mean curvature coloring can be used as a tool to show the capability of a denoising algorithm. In our experiments, we use the cotangent mean curvature operator, computed at each vertex of the geometry [@aniso]. The coloring is done using the absolute value of the mean curvature vector. Figure \[fig:devRockmc\] shows that our method is able to remove noise quite similarly to the methods [@BilNorm] and [@Guidedmesh], while it produces less shrinkage and better retains sharp features compared to the other methods. Similarly, for the Julius model (Figure \[fig:julimc\]), the proposed method produces a smoother surface compared to the methods [@Guidedmesh] and [@robust16] and retains sharp features similarly to the Bilateral normal method [@BilNorm], as shown in Figure \[fig:julimc\]. The Cube model has a non-uniform mesh, and its mean curvature coloring indicates that our method performs better than the other state-of-the-art methods in terms of noise removal and retention of sharp features. Figure \[fig:realdatamc\] shows that the proposed method is also effective for real data (data obtained from 3D scanners). Comparison with Robust Implicit Moving Least Squares (RIMLS) ============================================================ The RIMLS algorithm is a feature-preserving point set surface reconstruction method [@rimls].
It follows the moving least squares (MLS) method, combined with robust statistics, to produce a high-fidelity surface in the presence of noise. Figure \[fig:rimls\] shows a comparison of the proposed method with the RIMLS method.
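The cotangent mean-curvature magnitude used for the colorings in Appendix \[app:mcc\] can be sketched as follows (a generic discrete construction for interior vertices, with the vertex area approximated as one third of the incident triangle area; an assumption for illustration, not the authors' exact operator):

```python
import numpy as np

def mean_curvature_magnitude(verts, faces, i):
    """|H| at interior vertex i via the cotangent formula:
    K(v_i) = (1/(2*A_i)) * sum_j (cot a_ij + cot b_ij) (v_i - v_j),
    where |K| = 2|H|; A_i is one third of the incident triangle area."""
    def cot(a, b):
        return np.dot(a, b) / np.linalg.norm(np.cross(a, b))

    K = np.zeros(3)
    area = 0.0
    for f in faces:
        if i not in f:
            continue
        p = list(f).index(i)
        j, k = f[(p + 1) % 3], f[(p + 2) % 3]
        vi, vj, vk = verts[i], verts[j], verts[k]
        K += cot(vi - vk, vj - vk) * (vi - vj)   # angle at k opposes edge (i, j)
        K += cot(vi - vj, vk - vj) * (vi - vk)   # angle at j opposes edge (i, k)
        area += 0.5 * np.linalg.norm(np.cross(vj - vi, vk - vi))
    A_i = area / 3.0
    return np.linalg.norm(K) / (2.0 * A_i) / 2.0
```

As a sanity check, the cotangent operator annihilates linear functions, so the magnitude is zero (up to round-off) at an interior vertex of a flat triangulated grid.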
--- abstract: | A distributed internal RC structure is observed in electrical measurements of supercapacitors. The distribution is caused by the hierarchical porous structure of the electrodes. In the present work an inverse relaxation technique is proposed for the characterization of the internal RC structure. The technique makes it possible to determine the ratio of “easy” and “hard” to access capacitances. The method consists in shorting a supercapacitor (initially charged to a potential $U_0$) for a short (lower than the internal $RC$) time $\tau$, then switching to the open circuit regime and measuring the initial rebound $U_1$ and the long–time asymptotic $U_2$. The main result is the possibility to find the ratio of “easy” and “hard” to access capacitances as $\eta=(U_0-U_2)/(U_2-U_1)$. This ratio is an immanent characteristic of a supercapacitor; it is stable over several orders of magnitude of $\tau$. The theory is implemented numerically and tested on a number of model structures. The models are then compared to temporal and impedance measurements of commercial supercapacitors from several manufacturers. The approach can be effectively applied to the characterization of supercapacitors and other relaxation-type systems with a porous internal structure. address: 'Ioffe Institute, St. Petersburg, 194021' author: - Mikhail Evgenievich - Vladislav Gennadievich bibliography: - 'echem.bib' - 'LD.bib' date: 'July 7, 2019' title: On The Inverse Relaxation Approach To Supercapacitors Characterization --- \[intro\]Introduction ===================== A distributed internal RC structure is observed in electrical measurements of supercapacitors. The distribution is caused by the hierarchical porous structure of the electrodes. The two most commonly used technologies for manufacturing the carbon structures for supercapacitor electrodes are carbide-derived carbon (CDC) and activated carbon.
[Carbide–derived carbon (CDC)](https://en.wikipedia.org/wiki/Carbide-derived_carbon) materials are derived from carbide precursors[@oschatz2017carbide]. The initial crystal structure of the carbide is the primary factor affecting the CDC porosity. [Activated carbon](https://en.wikipedia.org/wiki/Activated_carbon) is typically derived from charcoal or biochar[@abioye2015recent]. Its structure is inherited from the starting material and has a surface area in excess of $2,000m^2/g$ [@mangun2001surface]. See [@borenstein2017carbon] for a review of carbon materials used in supercapacitor electrodes. All the technologies used for supercapacitor manufacturing lead to a complex, “self–assembled” type of internal structure. In applications, the most interesting aspect is not the internal structure of a device per se, but its manifestation in the electrical properties. While Li–ion systems are the most effective in energy storage applications[@du2003comparative], supercapacitors are the most effective in high–power applications. For Li–ion batteries two characteristics are typically provided by manufacturers: specific energy and specific power. For supercapacitors another two characteristics are typically provided by manufacturers: capacitance and internal resistance. Standard methods of characterization create a substantial uncertainty, because a supercapacitor’s characteristics change during the discharge process. Knowledge about the internal $RC$ distribution due to the porous structure of the electrodes is missing from standard characterizations; it can be obtained from impedance-type measurements, but that is a low–current, linear technique. In this paper a supercapacitor intrinsic characteristic $\eta$ is introduced (\[Cratio\]); the characteristic is obtained from temporal electric measurements only.
The ratio (\[Cratio\]) can be viewed as a manifestation of the interplay between the “deep pores” and “shallow pores” of the electrodes’ internal structure, exhibited in the supercapacitor’s electric properties. \[TheModel\]Inverse Relaxation Model And $(U_0-U_2)/(U_2-U_1)$ Characteristic ============================================================================= ![\[RCscheme\] Supercapacitor hierarchical structure and the simplistic two–$RC$ model. ](inv_capacitors){width="12cm"} ![\[Utshorting\] A $U(t)$ dependence. 1. Initial shorting for $\tau\ll RC$. 2. Immediate rise from $0$ to $U_1$. 3. In the open circuit regime a slow final rise from $U_1$ to $U_2$ due to internal charge redistribution. ](Utshorting){width="7cm"} In terms of the electric properties, the internal porous structure of a supercapacitor’s electrodes can be conveniently considered as electric capacitance of two kinds: an “easy” and a “hard” to access one, Fig. \[RCscheme\]. The actual distribution of the internal $RC$ can take different forms, but this separation is the simplest practical way to characterize an internal structure; it is sufficient for most applications and rather objective, since it characterizes the discharge as a whole. When a supercapacitor is in the stationary state, all the potentials in Fig. \[RCscheme\] are equal to the one on the electrodes and there is no internal current. When a supercapacitor is in a non–stationary state, internal charge redistribution takes place; it can be directly observed through the dynamics of the electrode potential. Consider the measurement technique: the system is charged to some initial potential $U_0$, then it is short–circuited for a short time (lower than the supercapacitor’s internal $RC$) to create a non–stationary state; after that it is switched to the open circuit regime and $U(t)$ is recorded to observe the internal relaxation. The $U(t)$ dependence is: - From the initial potential $U_0$ to zero (shorting to create a non–stationary state).
- After switching to the open circuit regime the potential jumps to $U_1$. There is a similar current–interruption technique used in fuel cell measurements [@larminie2003fuel], page 64, with the immediate voltage rise $V_r=IR_i$. - A slow final rise to $U_2$, Fig. \[Utshorting\]. The $U(t)$ relaxation from $U_1$ to $U_2$ may be of a single or multiple exponent type; this depends on the supercapacitor’s internal structure. For the two–$RC$ model a purely linear dependence is observed, i.e. single exponent relaxation, Fig. \[Ut\]. For the three–$RC$ supercapacitor model there are two exponents in the $U(t)$ evolution; one can clearly observe the deviation from a linear dependence in Fig. \[Ut3\] below. ![\[Ut\] The $U_2-U(t)$ evolution in a two–$RC$ model system ($R1=2\Omega$, $C1=5F$, $R2=8\Omega$, $C2=2F$). One can see a purely linear dependence ($U_2-U(t)$ is in log scale) for the two–$RC$ model (single exponent). ](inv_tworc){width="6cm"} Before we consider a more realistic model, let us demonstrate how the ratio of easy and hard to access capacitances can be found with the inverse relaxation technique for the *two*–$RC$ model. In this case the separation into “easy” and “hard” to access capacitance is trivial: $C1$ is easy to access, $C2$ is hard to access. In the two–$RC$ model the internal charge redistribution between $C1$ and $C2$ is: $$\begin{aligned} \Delta Q_{C1}&=&\Delta Q_{C2} \label{chargedisbalance}\\ C1\cdot(U_2-U_1)&=&C2\cdot(U_0-U_2) \\ \eta=\frac{C1}{C2}&=&\frac{U_0-U_2}{U_2-U_1} \label{Cratio}\end{aligned}$$ ![\[EtaOnTau\] The dependence of $\eta$ on the shorting time $\tau$ for two two–$RC$ models: $R1=2\Omega$, $C1=5F$, $R2=8\Omega$, $C2=2F$ (circles) and $R1=4\Omega$, $C1=5F$, $R2=4\Omega$, $C2=2F$ (triangles); both models have $\eta=2.5$. The $\eta$ is constant over two orders of magnitude of the shorting time. ](rcinv_theoretical){width="8cm"} Importantly, the ratio (\[Cratio\]) of “easy” and “hard” capacitances depends neither on the shorting time nor on the specific values of $R1$ and $R2$.
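The shorting/rebound measurement on the two–$RC$ ladder of Fig. \[RCscheme\] can be sketched numerically (a forward-Euler illustration of our own, not the paper's ngspice model, using the values quoted above, $R1=2\Omega$, $C1=5F$, $R2=8\Omega$, $C2=2F$, so that $C1/C2=2.5$):

```python
def simulate_eta(R1=2.0, C1=5.0, R2=8.0, C2=2.0, U0=1.0, tau=0.1, dt=1e-4):
    """Short a charged two-RC ladder for time tau, reopen the circuit,
    and return eta = (U0 - U2)/(U2 - U1) as in Eq. (Cratio)."""
    V1, V2 = U0, U0                 # capacitor potentials in the charged state

    # Phase 1: terminal held at 0 for tau << R*C (forward Euler).
    for _ in range(int(round(tau / dt))):
        dV1 = (-V1 / R1 + (V2 - V1) / R2) / C1
        dV2 = ((V1 - V2) / R2) / C2
        V1 += dV1 * dt
        V2 += dV2 * dt

    U1 = V1                         # open-circuit rebound: no current in R1, so U = V1

    # Phase 2: open circuit; charge redistributes through R2 until
    # V1 = V2 = U2, fixed by charge conservation.
    U2 = (C1 * V1 + C2 * V2) / (C1 + C2)

    return (U0 - U2) / (U2 - U1)
```

For $\tau$ between $10^{-2}$ and $10^{-1}$ this returns values close to $2.5$, and the estimate drifts only as $\tau$ becomes comparable to the internal $RC$, consistent with Fig. \[EtaOnTau\].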
In the two-capacitor model the values $U_0$, $U_1$, and $U_2$ can be obtained analytically, but we present a numerical solution with the goal of studying a more complex model later on. In Fig. \[EtaOnTau\] the dependence of $\eta$ on $\tau$ is presented for two different two–$RC$ capacitor models with the same $\eta$. One can clearly see that the ratio (\[Cratio\]) does not differ from the exact value $C1/C2=2.5$ when the shorting time $\tau$ changes over a range of two orders of magnitude. A deviation from the constant arises only when the shorting time $\tau$ becomes comparable to the supercapacitor’s internal $RC$. For a small $\tau$ (relative to the internal $RC$) the charge redistribution inside the supercapacitor makes $\eta$ independent of $\tau$. When one starts to increase $\tau$, the initial charge redistribution becomes more prolonged and a deviation of $\eta$ from a constant can be observed. The independence of $\eta$ of $\tau$ allows us to consider the ratio (\[Cratio\]) as an **immanent** characteristic of the system. This makes inverse relaxation a well-suited tool for the characterization of supercapacitors and other relaxation-type systems with a porous structure. ![\[Ut3\] The $U_2-U(t)$ evolution in a three–$RC$ model system ($R1=1\Omega$, $C1=5F$, $R2=1\Omega$, $C2=20F$, $R3=20\Omega$, $C3=80F$). A deviation from a linear dependence ($U_2-U(t)$ is in log scale) is clearly observed. ](inv_threerc){width="6cm"} The two-capacitor inverse relaxation model is a trivial one; it provides a single exponent behaviour of $U(t)$. In a system with a number of porous branches the behavior is more complex. In Fig. \[Ut3\] a three–$RC$ supercapacitor model is presented. There are two exponents in the $U(t)$ evolution; one can clearly observe the deviation from a linear dependence. However, the ratio (\[Cratio\]) is stable in both cases. In Table \[tableRC3\] the three-capacitor model for the system in Fig. \[Ut3\] is presented.
One can clearly see the stability of the ratio (\[Cratio\]) with $\tau$ changing over two orders of magnitude: for $0.01\le\tau\le 1$ the $\eta$ is almost constant, and only for $\tau>1$, when it becomes comparable to the internal $RC$, do we start observing a deviation: $\eta$ starts to increase. [l|l|n[3]{}[5]{}|n[3]{}[5]{}|n[3]{}[5]{}]{} [$\tau$]{} & [$U_0$]{} & [$U_1$]{} & [$U_2$]{} & [$\eta=\frac{U_0-U_2}{U_2-U_1}$]{}\ 0.01 & 1 & 0.998023737 & 0.999723488 & 0.162677945181313\ 0.025 & 1 & 0.995074045 & 0.999309733 & 0.162964552629928\ 0.05 & 1 & 0.990196827 & 0.998622828 & 0.163443132750619\ 0.075 & 1 & 0.985367841 & 0.997939252 & 0.163923365483796\ 0.1 & 1 & 0.980586574 & 0.997258972 & 0.164405144358957\ 0.25 & 1 & 0.95287339 & 0.993244705 & 0.167329080065882\ 0.5 & 1 & 0.91018254 & 0.986797172 & 0.172327761099211\ 0.75 & 1 & 0.871482454 & 0.980628508 & 0.177482293588002\ 1 & 1 & 0.836372962 & 0.974712712 & 0.182791193420546\ 2.5 & 1 & 0.683309612 & 0.943368587 & 0.217763732245734\ 5 & 1 & 0.55472864 & 0.900865674 & 0.286401962986717\ 7.5 & 1 & 0.492362933 & 0.864462898 & 0.36424916621532\ 10 & 1 & 0.454000053 & 0.831302791 & 0.447113662345063\ Regardless of the specific model used for the internal $RC$ structure, the simulation confirms that the ratio (\[Cratio\]) is stable for $\tau$ varying over orders of magnitude. This leads us to conclude that the ratio (\[Cratio\]) is *more general* than the specific model used. The ratio $\eta$ is an intrinsic property of the system; it is the characteristic that separates easy– and hard–to–access capacitances. ![\[EtaOnTauExperim\] The dependence of $\eta$ on the shorting time $\tau$ for two commercial supercapacitors, [`AVX-SCCS20B505PRBLE`]{} (circles) and [`Eaton-HV1020-2R7505-R`]{} (triangles); a stable $\eta$ is observed.
](FigEtaTRemporal){width="8cm"} \[sc\]The Experimental Measurement Of Supercapacitors ===================================================== The simple theoretical model of the previous section shows the independence of the capacitance ratio $\eta$ of the shorting time $\tau$. We have tested the ratio $\eta$ of several commercial supercapacitors; the results below are presented for two models: [[`AVX-SCCS20B505PRBLE`]{}](http://datasheets.avx.com/AVX-SCC-LE.pdf) and [[`Eaton-HV1020-2R7505-R`]{}](https://www.eaton.com/content/dam/eaton/products/electronic-components/resources/data-sheet/eaton-hv-supercapacitors-cylindrical-cells-data-sheet.pdf), both $5F$ with $2.7V$ max. The measurements are not actually complex: only three potentials $U_0$, $U_1$, and $U_2$ have to be measured, and no measurement of exponentially small values is required: $U_0-U_1$ and $U_2-U_1$ are not small for the actual $\tau$ values used in the experiment. The potentials $U_0$, $U_1$, and the asymptotic $U_2$ are measured directly. This makes the approach suitable for measurements in the high current regime without involving a nonlinear impedance concept[@kompan2010nonlinear]. The results for the two commercial supercapacitors [`AVX-SCCS20B505PRBLE`]{} and [`Eaton-HV1020-2R7505-R`]{} are presented in Fig. \[EtaOnTauExperim\]. One can clearly observe a stable $\eta$. The $U(t)$ relaxation from $U_1$ to $U_2$ carries additional information about the distribution of the internal $RC$; a deviation from a linear behaviour is a sign of a developed porous structure. In Fig. \[experimentatime\] the $U_2-U(t)$ relaxation is presented (in $\log$ scale) for the [`AVX-SCCS20B505PRBLE`]{} and [`Eaton-HV1020-2R7505-R`]{} models. ![\[experimentatime\] The $U_2-U(t)$ (in $\log$ scale) for [`AVX-SCCS20B505PRBLE`]{} (red) and [`Eaton-HV1020-2R7505-R`]{} (green). A deviation from a linear dependence is clearly observed. The “noise” observed at large $t$ is due to the exponentially small value of $U_2-U(t)$. ](UUtime){width="11cm"} The Fig.
\[experimentatime\] illustrates that the inverse relaxation is typically **not** of a single–exponent type: the relaxation at small time is faster than at large time. An extreme example of such behavior is presented in Fig. \[Ut3\] for a model system. The deviation of $\log(U_2-U(t))$ from a linear law is related to the distribution of the internal $RC$. This deviation from a linear dependence is a second source of information about the supercapacitor internal structure. In contrast with the $\eta$ measurement, this measurement requires measuring the exponentially small value $U_2-U(t)$; for this reason it is typically more susceptible to measurement errors. However, when done right, it is an important source of information about the supercapacitor’s internal porous structure[^1]. \[impedance\]Impedance characteristics of the supercapacitors ------------------------------------------------------------- The biggest advantage of impedance spectroscopy is that it can capture a wide range (many orders of magnitude) of frequencies. The disadvantages of the technique are: measurement equipment complexity, impedance interpretation difficulty, and a typically low-current linear regime, which makes non–linear effects problematic to study[@kompan2010nonlinear]. A common impedance analysis method is the [Nyquist plot](https://en.wikipedia.org/wiki/Nyquist_stability_criterion#Nyquist_plot). In Fig. \[FigImpedance\] the Nyquist plot $Z^{\prime}Z^{\prime\prime}$ is presented along with a ZView fit by the two–$RC$ model for the [`AVX-SCCS20B505PRBLE`]{} and [`Eaton-HV1020-2R7505-R`]{} supercapacitors. The impedance measurements have been performed in the frequency range $10^{-3}\div 10^{5}$Hz. In this range the Nyquist plot has a complex behavior caused by the complex internal structure of the device. In supercapacitor applications the frequencies of practical interest are the ones below $30\div 50$Hz. For simple models (such as in Fig.
\[RCscheme\]) it would be rather naive to try to fit many orders of frequency range with a simple scheme of several $RC$ chains. For these reasons we limit the frequency range to $10^{-3}\div 30$Hz. A simple one–$RC$ model has a vertical asymptotic behavior at low frequencies. Two $RC$ chains give some slope at low frequencies, as observed in Fig. \[FigImpedance\]. In ZView, a model with two CPE (constant phase element) components (one with a small exponent, the second one close to $1$, i.e. almost a capacitance) allows one to obtain a very good fit of the impedance curve in the entire $10^{-3}\div 10^{5}$Hz frequency range. A CPE by itself can be modeled as a sequence of $RC$ elements[@valsa2013rc]; thus the value and exponent of a CPE describe the supercapacitor’s internal structure. However, the limited range of practically interesting frequencies, along with interpretation difficulties, makes this approach not very appealing. [8cm]{} ![\[FigImpedance\] The $\eta=C_1/C_2$ ratio obtained from the impedance curve. Impedance curves are presented for two commercial supercapacitors with the potential offsets $DC=0V$ and $DC=2.5V$. The curves are then fitted with the two-chain $RC$ model of Fig. \[RCscheme\] using the [ZView](https://www.ameteksi.com/products/software/zview-software) program; the values ($R_1$, $C_1$, $R_2$, $C_2$) are obtained from the fitting. The frequency range has been chosen as $10^{-3}\div 30$Hz, and the impedance was measured at a very small $5mV$ AC amplitude. ](FigImpedanceBlue "fig:"){width="8cm"} [8cm]{} ![\[FigImpedance\] The $\eta=C_1/C_2$ ratio obtained from the impedance curve. Impedance curves are presented for two commercial supercapacitors with the potential offsets $DC=0V$ and $DC=2.5V$. The curves are then fitted with the two-chain $RC$ model of Fig. \[RCscheme\] using the [ZView](https://www.ameteksi.com/products/software/zview-software) program; the values ($R_1$, $C_1$, $R_2$, $C_2$) are obtained from the fitting.
The frequency range has been chosen as $10^{-3}\div 30$Hz, and the impedance was measured at a very small $5mV$ AC amplitude. ](FigImpedanceGreen "fig:"){width="8cm"} A very important feature of a supercapacitor, not observed in a regular capacitor, is that the impedance curve depends strongly on the DC potential applied. When a DC potential is applied to the supercapacitor, the impedance characteristics change quite substantially. When the DC potential changes from $0V$ to $2.5V$, the impedance curve shifts to the right (the supercapacitor’s internal resistance increases) and the Nyquist plot changes substantially. The $\eta$ typically decreases as the DC potential increases: $\eta$ changes from $2.82$ to $1.06$ for [`AVX-SCCS20B505PRBLE`]{} and from $7.76$ to $1.95$ for [`Eaton-HV1020-2R7505-R`]{}; the same behavior was observed in the other supercapacitors we measured. The dependence of the capacitance on the applied potential is a known effect. It can be caused by density of states changes[@kompan2015ultimate], double layer structure changes[@kornyshev2007double; @bagotsky2015electrochemical; @zhan2017computational], or redox–active electrolyte processes[@dai2016voltage; @ban2013charging] of both reversible (Faradaic capacitance) and irreversible (electrochemical decomposition) types. The impedance measurement of $\eta$, especially for the $0V$ DC offset, is similar to the one obtained from the inverse relaxation technique. Importantly, the inverse relaxation regime is close to the “natural” fast discharge regime of supercapacitor deployment, and the measurement technique is much simpler than the impedance technique. \[Discussion\]Discussion ======================== In this work a novel intrinsic characteristic of a supercapacitor is proposed: the ratio of “easy” and “hard” to access capacitances.
Besides the simplicity of the technique (no fitting is required), the most important feature of the inverse relaxation approach is that there is no measurement of exponentially small values. In a wide range of shorting times $\tau$ the differences $U_0-U_2$ and $U_2-U_1$ are not small, which provides the stability of the characteristic. Numerical modelling along with experimental measurements shows the stability of this characteristic over several orders of magnitude of the shorting time. A remarkable feature of (\[Cratio\]) is that it does not involve any exponentially small value to measure, while, at the same time, all the measurements are performed not in the frequency domain but in the time domain[^2]. The following factors can introduce measurement errors into the inverse relaxation technique: - Inadequate shorting time $\tau$. The situation of a too long $\tau$ has been discussed above. A too short $\tau$ can also be an issue, because typical equipment in electrochemical time–domain measurements can seldom handle intervals shorter than $10^{-2}$sec. However, the stability of $\eta$ over a range of several orders of magnitude of $\tau$ always allows one to obtain a proper setup. - Internal leakage. Supercapacitors, while having a very high specific capacitance, also have a substantial internal leakage. This effect can be modeled by adding a resistor connecting the internal circuit and the ground potential in Fig. \[RCscheme\]. Given an infinite time, the $U_2$ potential would be zero because of internal self–discharge. In practice, however, internal self–discharge is typically not an issue, because of the large difference between a supercapacitor’s internal $RC$ and the self–discharge $RC$. For quality supercapacitors this ratio is less than $10^{-5}$. - Non–linear behavior. Supercapacitors often exhibit non–linear behavior[@kompan2010nonlinear], especially for potentials close to the maximal $U$.
The non–linearity typically manifests itself in an increased leakage and a dependence of the capacitance on the applied potential. The non–linearity effects, however, are typically more important in impedance measurements than in inverse relaxation measurements, see Section \[impedance\] above. Modeling the supercapacitor internal structure in electronic circuit software is a common field of study[@johansson2008comparison; @logerais2015modeling; @pean2016multi]. In [@fletcher2017modelling] the voltage rebound effect (shorting and then switching to open circuit) was also modeled. However, only in our work is the ratio $\eta$ of “easy” and “hard” to access capacitances introduced. Similar pulse–response characteristics of Li–ion batteries have been studied in [@barai2018study], with an emphasis on the time–scale. A very important feature of $\eta$ is that it does not depend on the short–circuiting time $\tau$ over a range of several orders of magnitude. In addition to the software modeling, an experimental study has been performed to prove the adequacy of the model used. In contrast with an impedance study, short–circuiting and subsequent observation of the inverse relaxation can be performed at high current; this is not the small-signal approach typical of impedance spectroscopy. \[psiX\]Software Modeling ========================= The systems have been modeled in the [Ngspice circuit simulator](http://ngspice.sourceforge.net/). The circuit has been created in the gschem program of the [gEDA](http://www.geda-project.org/) project. To run the simulator, download[@RCsimulator] the file [RCcircuit.zip](http://www.ioffe.ru/LNEPS/malyshkin/RCcircuit.zip) and decompress it. To test the simulator execute ngspice Farades_y_with_variables.sch.autogen.net.cir Because the original gschem+ngspice combination does not have a convenient parameterisation, a perl script `run_auto.pl` has been developed.
To run the simulator with the $\tau=2.5$ parameter execute: perl -w run_auto.pl Farades_y_with_variables.sch 2.5 This script takes `Farades_y_with_variables.sch` with $\tau=2.5$ as input, modifies it, and then runs ngspice. The result is saved to `n0_output.txt`. [^1]: There is a much more advanced Radon–Nikodym technique[@2016arXiv161107386V] that can be applied to obtain the relaxation rate distribution as a matrix spectrum for relaxation-type data such as in Fig. \[experimentatime\]. The distribution of the eigenvalues (using the Lebesgue quadrature[@ArxivMalyshkinLebesgue] weight as the eigenvalue weight) is an estimator of the distribution of relaxation rates observed in the measurement; the Radon–Nikodym approach is much less sensitive to measurement errors compared to an inverse Laplace transform type of analysis. [^2]: The biggest advantage of impedance spectroscopy is that the impedance function is a ratio of two polynomials, thus it can be measured/interpolated/approximated with a high degree of accuracy for measurements in a wide range (over $9$ orders of magnitude, typically $10^{-3}\div10^{6}$Hz) of frequency responses. However, in the time domain, where exponentially small values need to be measured, a much smaller range of time–scales is accessible (less than $2$ orders of magnitude, often just a single order); hence in standard mathematical techniques, such as an inverse Laplace transform, any type of noise/discretization/measurement error/window effect has a huge impact on exponentially small Laplace transform contributions[@2016arXiv161107386V].
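For comparison with the fits of Section \[impedance\], the impedance of the two–$RC$ ladder of Fig. \[RCscheme\] can also be generated directly (a sketch with the model values used earlier, $R1=2\Omega$, $C1=5F$, $R2=8\Omega$, $C2=2F$; an illustration only, a real fit would use ZView as described above):

```python
import numpy as np

def ladder_impedance(f, R1=2.0, C1=5.0, R2=8.0, C2=2.0):
    """Z(f) of the two-RC ladder: R1 in series with C1, with C1 in
    parallel with the deeper branch (R2 in series with C2)."""
    w = 2.0 * np.pi * f
    Z2 = R2 + 1.0 / (1j * w * C2)          # hard-to-access (deep) branch
    return R1 + 1.0 / (1j * w * C1 + 1.0 / Z2)

# Nyquist curve Z' vs -Z'' over the measured range 1e-3 .. 1e5 Hz.
freqs = np.logspace(-3, 5, 200)
Z = ladder_impedance(freqs)
```

At high frequency the curve terminates at $Z\approx R1$ on the real axis, while at low frequency it approaches a near-vertical capacitive line whose slope corresponds to the total capacitance $C1+C2$.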
Introduction {#intro} ============ Weak gravitational lensing has emerged as an important probe of cosmology ([@wittman00; @vW00; @bacon00; @kwl00; @maoli01; @rhodes01; @h02; @jarvis; @pen03]; see the review by [@bs01]). Much of the current discussion on potential dark energy constraints from weak lensing has focused on the use of the shear/convergence power spectrum, or equivalent measures, as a function of source redshift ([@hu02; @aba03; @bv03]; but see [@hui99; @vw01; @bb01; @huterer02; @munshi03; @alex03; @tj03] for dark energy constraints from the skewness or bispectrum). In these types of investigations, information about dark energy (both its abundance and equation of state) is encoded in the combination of geometrical distances and fluctuation growth rate that determines the observed lensing power spectrum. In this paper, we would like to pose and answer the question: is it possible to separate out the information coming purely from geometry, i.e. irrespective of the details of the mass power spectrum and its growth? Such an exercise is useful because a method to do so allows us to derive dark energy constraints without making assumptions about the underlying large scale structure model (Cold Dark Matter, Gaussian initial conditions, etc.). Comparing lensing constraints obtained via such a geometrical method against lensing constraints that carry more theoretical baggage provides an important consistency check. Moreover, a geometrical method allows us to make use of lensing measurements on small scales, scales which are often ignored in conventional methods because of worries about the ability to predict the nonlinear power spectrum accurately. Our discussion is organized as follows. In §\[scaling\], we point out an interesting scaling of lensing signals (i.e. shear-shear and galaxy-shear power spectra) with the source distance.
Such a scaling can be used to obtain essentially an estimate of angular diameter distance (or more precisely, combinations of angular diameter distances) as a function of source redshift, without making any assumptions about the mass/galaxy power spectrum. We contrast this scaling with a different interesting scaling investigated by Jain & Taylor (2003), especially in terms of the demand on photometric redshift accuracy. The scaling we focus on can be applied to both galaxy-shear and shear-shear data, whereas the scaling of Jain & Taylor applies only to galaxy-shear. To understand what kind of constraints one could obtain about dark energy from our geometrical method, we perform a Fisher matrix analysis in §\[fisher\]. The conclusions are summarized in Figs. \[OmegaVw\_20\] and \[Omegaww\_prime\_20\]. The geometrical method above is a very conservative one: it makes absolutely no assumptions about the underlying large scale structure and its evolution. In §\[2ws\] we investigate a method at the other end of the spectrum: it assumes the shape of the mass/galaxy power spectrum is known. It differs from more conventional methods (such as the lensing tomography of Hu 2002 or Abazajian & Dodelson 2003) only in that the geometrical information and growth rate information are separated to provide a consistency check. In practice, there is of course a whole continuum of methods to obtain dark energy constraints from lensing data, varying from the most conservative (like the geometrical method emphasized here) to ones that make strong large scale structure assumptions. We conclude in §\[discuss\]. A word on the history of this project is in order. When we started, our initial focus was on the shear-shear correlation. Since then, an elegant paper by Jain and Taylor (2003, [@jt03] hereafter) appeared that addressed similar issues, but using the galaxy-shear correlation (see also interesting developments in Bernstein & Jain 2003 \[[@bj03]\]).
We therefore decided to include both in our discussion here. While our results are in qualitative agreement, we find quantitative differences. In particular, we find dark energy constraints that are weaker than those of [@jt03]. As we will explain in detail later, it is not [*a priori*]{} obvious whose constraints should be stronger. This is because we focus on a source-scaling of lensing signals that is different from JT03. Our scaling is less demanding on the photometric redshift accuracy and can be applied to both galaxy-shear and shear-shear correlation data, but introduces more free parameters. [*However*]{}, even if we employ exactly the same scaling adopted by [@jt03], we find statistical errors that are larger than those of JT03, the origin of which is discussed in detail in Appendix A. Related issues are discussed by Song & Knox (2003) and Hu & Jain (2003). A word on terminology. [@jt03] used the term cross-correlation tomography to describe their method. This term can also be used to refer to the technique of cross-correlating shear/galaxy-density fields from different redshifts in general (e.g. [@tw03]). What we would like to focus on, as in the case of [@jt03], is the use of the cross-correlation technique to extract cosmological constraints that are purely geometrical in origin. To avoid confusion, we will generally not use the term cross-correlation tomography. We will simply refer to our approach as a geometrical method. To distinguish the source-scaling we exploit from the one used by [@jt03], we refer to ours as the offset-linear scaling (as opposed to the linear scaling adopted by [@jt03]). The difference between these two scalings will be explained in the next section. In the bulk of this paper, the term shear is loosely used to refer to its scalar part (i.e. the convergence). For simplicity, most of our expressions focus on correlations involving the convergence, and they assume a flat universe.
Expressions for the more directly observable components of shear and for a non-flat universe are given in Appendix B. A Useful Scaling of Lensing Signals with Source Distance {#scaling} ======================================================== Here we consider several redshift distributions of galaxies, some of which are considered foreground populations (labeled by $f$) and others background populations (labeled by $b$). The idea is that the background galaxies lie behind the foreground galaxies. One measures the lensing shear field of the different background galaxy populations and correlates it with either the shear field or the surface number density fluctuations of the foreground galaxies. By determining how these correlations scale with the redshift distribution of the background galaxies, we hope to learn about the cosmology in a way that is independent of assumptions about inhomogeneities in the universe and depends only on the overall geometry. It will be important that the foreground galaxies are indeed in front of the background galaxies, and hence that the two populations do not overlap in redshift. [^1] The redshift distribution of the different populations can be measured precisely using spectroscopy or estimated approximately using multi-color photometry. With spectroscopic redshifts it is a simple matter to ensure that the foreground distribution and the background distribution overlap very little, while with photometric redshifts this requires more care. In §\[fisher\] below we will show how a small contamination of background galaxies in front of foreground galaxies affects the results. In this paper we find it more useful to express everything in terms of comoving distance from the observer, $\chi$, rather than redshift $z$. Of course, observationally one measures $z$ and can only infer exact values of $\chi$ once one [*assumes*]{} a cosmology, which then determines the function $\chi(z)$.
The idea is to find cosmological parameters that give distances which best match the observed lensing correlations. Of crucial importance will be the $z$-distributions of galaxy populations, $dN(z)/dz$, but below we will use the distance distribution $$\label{DistanceDistribution} W(\chi(z))\equiv{{dN(z)\over dz}\over {d\chi(z)\over dz}\,\int_0^\infty dz'\,{dN(z')\over dz'}}$$ so that $\int d\chi\,W(\chi)=1$. We will add an $f$ or $b$ subscript to $W$ for foreground or background populations, respectively. Other cosmological quantities we will use are the scale factor $a(\chi)$ defined by $a(\chi(z))=1/(1+z)$, the Hubble parameter $H(\chi)=-c a'(\chi)/a(\chi)^2$, and the present density of matter (dark + baryonic) in units of the critical density, $\Omega_{\rmm0}$. We define $H_0\equiv H(0)$; $c$ is the speed of light. For simplicity we assume a flat universe in the bulk of the paper. All expressions, in particular the scaling of interest, can be generalized to a non-flat universe as discussed in Appendix B. Also, the expressions in the bulk of the paper are given in Fourier space. The real space counterparts are discussed in Appendix B as well. We are interested in two kinds of correlations: one correlates the background shear ($\gamma$) field with the foreground galaxy density field, and the other correlates it with a foreground $\gamma$ field. The first is usually referred to as [*galaxy-galaxy lensing*]{} and the second is known as [*shear-shear correlation*]{}. In both cases the shear that is correlated is only the scalar (a.k.a. G-mode or E-mode) component of the shear pattern [^2] (see [@stebbins96]). Unless otherwise stated, we will use $\gamma$ to refer to this scalar part: the convergence. Using a Limber approximation for small angles (large $\ell$) the resulting angular cross power spectra, $P_{\rmg\gamma}(\ell)$ and $P_{\gamma\gamma}(\ell)$, can be written as (Blandford et al.
1991, Miralda-Escude 1991, [@kaiser92], Jain & Seljak 1997) $$\begin{aligned} \label{Pgs} P_{\rmg\gamma}(\ell;f,b)={3\Omega_{\rmm0} H_0^2 \over 2 c^2} \int {d\chi_\rmf\over a(\chi_\rmf)} W_f(\chi_\rmf) \int d\chi_\rmb W_b(\chi_\rmb) \nonumber \\ \times{\chi_\rmb-\chi_\rmf\over\chi_\rmb\chi_\rmf}\, P_{\rmg\delta}(\frac{\ell}{\chi_\rmf},\chi_\rmf)\,\Theta(\chi_\rmb-\chi_\rmf)\end{aligned}$$ and $$\begin{aligned} \label{Pss} &&\hskip-10pt P_{\gamma\gamma}(\ell;f,b)=\left({3\Omega_{\rmm0}H_0^2\over 2 c^2}\right)^2 \\ \nonumber && \times \int d\chi_\rmf W_f(\chi_\rmf) \int d\chi_\rmb W_b(\chi_\rmb) \\ \nonumber &&\hskip-15pt\times\int {d\chi \over a(\chi)^2} {\chi_\rmb - \chi \over \chi_\rmb} {\chi_\rmf - \chi \over \chi_\rmf} P_{\delta\delta}(\frac{\ell}{\chi},\chi) \Theta(\chi_\rmb-\chi)\,\Theta(\chi_\rmf-\chi) .\end{aligned}$$ Here $\Theta(\chi_\rmb-\chi)$ is the Heaviside step function, which is unity if $\chi<\chi_\rmb$ and zero otherwise. Also $P_{g\delta} (k,\chi)$ and $P_{\delta\delta}(k,\chi)$ are respectively the 3-d galaxy-mass power spectrum and 3-d mass power spectrum, both evaluated at 3-d wavenumber $k$ and at a time corresponding to distance $\chi$, and $\ell$ is the angular wavenumber. As always with the Limber approximation there is a one-to-one correspondence between the 3-d wavenumber and the angular wavenumber at a given distance $\chi$: $k\leftrightarrow\frac{\ell}{\chi}$. Offset-Linear Scaling --------------------- The key step for the purpose of this paper is to note that if the foreground distribution $W_f$ and the background distribution $W_b$ overlap very little, then it is an excellent approximation to make the substitution $$\label{GalGalApprox} W_f(\chi_\rmf)\,W_b(\chi_\rmb)\,\Theta(\chi_\rmb-\chi_\rmf)\rightarrow W_f(\chi_\rmf)\,W_b(\chi_\rmb)$$ in eq.
\[\[Pgs\]\], or $$\begin{aligned} \label{ShearApprox} &&W_f(\chi_\rmf)\,W_b(\chi_\rmb)\,\Theta(\chi_\rmb-\chi)\, \Theta(\chi_\rmf-\chi) \\ \nonumber &&\hskip35pt\rightarrow W_f(\chi_\rmf)\,W_b(\chi_\rmb)\,\Theta(\chi_\rmf-\chi) .\end{aligned}$$ in eq. \[\[Pss\]\]. Under this approximation the angular power spectra exhibit an [*offset-linear scaling*]{}: $$\begin{aligned} \label{approxscaling} P_{\rmg\gamma} (\ell;f,b)\approx F(\ell;f)+G(\ell;f)/\chieff(b) \\ \nonumber P_{\gamma\gamma}(\ell;f,b)\approx A(\ell;f)+B(\ell;f)/\chieff(b)\end{aligned}$$ where $$\begin{aligned} \label{chieff} {1\over \chieff (b)}\equiv\int d\chi_\rmb W_b(\chi_\rmb)\,{1\over \chi_\rmb} \end{aligned}$$ and $$\begin{aligned} \label{FGAB} && F(\ell;f)\equiv{3\Omega_{\rmm0}H_0^2\over2c^2} \int {d\chi_\rmf\over a(\chi_\rmf)} \,{W_f(\chi_{\rmf}) \over \chi_\rmf} P_{g\delta}(\frac{\ell}{\chi_\rmf}, \chi_{\rmf}) \\ \nonumber && G(\ell; f) \equiv - {3\Omega_{\rmm0} H_0^2 \over 2 c^2} \int{d\chi_\rmf\over a(\chi_\rmf)}\,W_f(\chi_\rmf)\, P_{g\delta}(\frac{\ell}{\chi_\rmf}, \chi_{\rmf}) \\ \nonumber && A(\ell;f)\equiv\left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int {d\chi_\rmf}\,W_f(\chi_\rmf)\\ &&\nonumber \hskip75pt\times\int_0^{\chi_\rmf}{d\chi \over a(\chi)^2}\, {\chi_\rmf-\chi\over\chi_\rmf}\,P_{\delta\delta}(\frac{\ell}{\chi}, \chi) \\ \nonumber && B(\ell;f)\equiv-\left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int {d\chi_\rmf}\,W_f(\chi_\rmf) \\ \nonumber &&\hskip70pt\times\int_0^{\chi_\rmf}{d\chi\over a(\chi)^2}\, {\chi_\rmf - \chi \over \chi_\rmf}\,\chi\,P_{\delta\delta}(\frac{\ell}{\chi}, \chi)\end{aligned}$$ This is the scaling we wish to exploit: for a fixed foreground population, $W_f$, as one varies the background redshift distribution $W_b$, the lensing power spectra $P_{\rmg\gamma}$ and $P_{\gamma\gamma}$ scale in a definite manner, namely linearly through the factor $1/\chieff(b)$ but with an offset given by $F$ or $A$ (hence the name [*offset-linear scaling*]{}).
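The practical content of eq. \[\[approxscaling\]\] is that, for a fixed foreground bin, measurements against several backgrounds with known $1/\chieff(b)$ determine the offset and slope by a straight-line fit. A minimal numerical sketch (all amplitudes and distances below are illustrative placeholders, not taken from any survey):

```python
import numpy as np

# Offset-linear scaling: P(l; f, b) ~ A(l; f) + B(l; f) / chi_eff(b).
# Mock measurements of P against four background bins with known 1/chi_eff
# recover A (offset) and B (slope) by a straight-line fit.
inv_chi_eff = np.array([1/2000.0, 1/3000.0, 1/4000.0, 1/5000.0])  # Mpc^-1
A_true, B_true = 3.2e-9, -4.1e-6        # hypothetical amplitudes at one l
P_obs = A_true + B_true * inv_chi_eff   # noiseless mocks for clarity

B_fit, A_fit = np.polyfit(inv_chi_eff, P_obs, 1)  # returns (slope, offset)
```

With noisy data the same fit would be weighted by the measurement covariance; in the full analysis the amplitudes at each $\ell$ are treated as nuisance parameters and marginalized over.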
This should be contrasted with the linear scaling described below. Moreover, this factor $1/\chieff(b)$ is purely geometrical. It is the inverse source distance averaged over the background redshift distribution (eq. \[\[chieff\]\]). It is important to emphasize that eq. \[\[approxscaling\]\] holds even if $W_f$ and $W_b$ are broad distributions – the only requirement is that they have little overlap. We will discuss what requirement this places on the photometric redshift accuracy in §\[fisher\]. Such a scaling is very useful in confirming the lensing hypothesis for the observed correlation: intrinsic alignment is not expected to produce this kind of scaling. This fact can be exploited to weed out contamination of the observed signals from intrinsic alignment, which will be further explored in a future paper. A more ambitious goal is to use this scaling to effectively measure the angular diameter distance as a function of redshift (more precisely, measure $\chieff(b)$ as a function of distribution $W_b$), and use this to constrain cosmological parameters, especially those pertaining to dark energy, in a way independent of assumptions about the large scale structure of galaxies and mass. This is the topic of §\[fisher\]. Comparison with Linear Scaling {#complinear} ------------------------------ At this point, it is useful to compare the scaling displayed in eq. \[\[approxscaling\]\] with the scaling used by [@jt03]. Unlike offset-linear scaling, the [@jt03] scaling can only be applied to $P_{\rmg\gamma}$, to galaxy-galaxy lensing. [@jt03] assumed $W_f$ is well approximated by a delta function at a distance, $\hat{\chi}_\rmf$, in which case $G(\ell;f)=-\hat{\chi}_\rmf\,F(\ell;f)$ and $P_{\rmg\gamma}$ follows a scaling that is even simpler than in eq. \[\[approxscaling\]\] (although eq.
\[\[approxscaling\]\] still holds) with no offset: $$\begin{aligned} P_{\rmg\gamma} (\ell; f,b)\approx F(\ell;f)\,\left(1-\frac{\hat{\chi}_\rmf}{\chieff(b)}\right) \label{JTscaling}\end{aligned}$$ Note that all of the uncertainties associated with large scale structure enter through the prefactor $F(\ell;f)$. Here the background distribution, $W_b$, does not have to be well approximated by a $\delta$-function; only the foreground distribution, $W_f$, does. One also requires that $W_b$ not extend significantly into the foreground, just as with the offset-linear scaling. For a fixed foreground redshift, varying the background distribution produces a definite [*linear scaling*]{} (with no offset) of $P_{\rmg\gamma}$ with the geometrical factor $1-\hat{\chi}_\rmf/\chieff(b)$. [@jt03] proposed that one can examine the ratio of $P_{\rmg\gamma}$’s measured using two different background distributions ($W_b$ and $W_{b'}$) but the same foreground: [^3] $$\begin{aligned} \label{Pratio} {P_{\rmg\gamma} (\ell;f,b) \over P_{\rmg\gamma} (\ell;f,b')} \approx{\hat{\chi}_\rmf^{-1}-\chieff(b )^{-1} \over\hat{\chi}_\rmf^{-1}-\chieff(b')^{-1}}.\end{aligned}$$ One can infer values for cosmological parameters with this equation by measuring the left-hand-side and then finding the parameters for which the right-hand-side yields the same values. The Foreground Width Systematic {#deltafunc} ------------------------------- In practice the foreground galaxies will not have zero uncertainty in distance, and unless one has spectroscopic redshifts for the foreground galaxies (McKay et al. 2001, Sheldon et al. 2003), $W_f$ will have some non-negligible spread. Such a spread implies the ratio of the observed $P_{\rmg\gamma}$’s will differ from the idealized limit of eq. \[\[Pratio\]\], which can lead to systematic errors in estimates of cosmological parameters such as the dark energy equation of state, $w$, if one uses the linear scaling, but not if one uses the offset-linear scaling.
If the foreground distribution $W_f$ is not a delta function, eq.s \[\[JTscaling\],\[Pratio\]\] should be replaced by: $$\begin{aligned} P_{\rmg\gamma} (\ell; f,b)\approx F(\ell;f)\,\left(1-\frac{\tilde{\chi}_\rmf}{\chieff(b)}\right) \label{JTscalingOffset}\end{aligned}$$ $$\begin{aligned} \label{PratioOffset} {P_{\rmg\gamma} (\ell;f,b) \over P_{\rmg\gamma} (\ell;f,b')} \approx{\tilde{\chi}_f(\ell)^{-1}-\chieff(b )^{-1} \over\tilde{\chi}_f(\ell)^{-1}-\chieff(b')^{-1}}\end{aligned}$$ where $\tilde{\chi}_f(\ell)\equiv-G(\ell;f)/F(\ell;f)$. While eq.s \[\[Pratio\],\[PratioOffset\]\] are similar in form, the right-hand-side of the latter is $\ell$-dependent and depends on non-measured and non-geometrical quantities like the galaxy-mass power spectrum $P_{\rmg\delta}(k,\chi)$. To bring the non-geometrical character of eq. \[\[PratioOffset\]\] into better focus, let us perform an expansion of $\tilde{\chi}_f$ around the mean distance $\bar{\chi}_f \equiv \int d\chi_\rmf W_f(\chi_\rmf)\,\chi_\rmf$ [*i.e.*]{} $\tilde{\chi}_f=\bar{\chi}_f+\Delta\chi_{f(2)}+\Delta\chi_{f(3)}+\ldots$ where $\Delta\chi_{f(n)}$ is order $n$ in the width of $W_f$ (the first-order term is zero). The lowest order correction is $$\begin{aligned} \label{tildechi} && \Delta\chi_{f(2)}(\ell)=-\frac{\sigma_\chi^2}{\bar{\chi}_f} \times \, \\ \nonumber && \left(1-{a_f\,\bar{\chi}_f\,H_f\over c}+n_f(\ell) +{2\,a_f H_f\bar{\chi}_f \,\Upsilon_f(\ell)\over c} \right)\end{aligned}$$ where $\sigma_\chi$ is the width of $W_f$ [*i.e.*]{} $\sigma_\chi^2 \equiv \int d\chi_\rmf W_f(\chi_\rmf) (\chi_\rmf - \bar{\chi}_f)^2$, $a_f\equiv a(\bar{\chi}_f)$, $H_f\equiv H(\bar{\chi}_f)$ is the Hubble parameter at the time corresponding to $\bar{\chi}_f$, $$\label{SpectralIndex} n_f(\ell)\equiv \left.\frac{d{\rm ln}P_{\rmg\delta}(k,\bar{\chi}_f)}{d{\rm ln}k} \right|_{k=\frac{\ell}{\bar{\chi}_f}}$$ is the spectral index evaluated at that foreground redshift, and $$\label{GrowthExponent} \Upsilon_f(\ell)\equiv {1\over 2}\left.
{d{\rm ln}P_{\rmg\delta}(\frac{\ell}{\bar{\chi}_f},\chi)\over d{\rm ln}a(\chi)} \right|_{\chi=\bar{\chi}_f}$$ tells us about the growth of correlations with time. The terms $n_f (\ell)$ and $\Upsilon_f (\ell)$ in $\Delta\chi_{f(2)}(\ell)$ clearly depend on a non-geometrical quantity, namely the 3-d galaxy-mass power spectrum $P_{g\delta} (k,\bar\chi_f)$. One can imagine improving upon the JT03 procedure by accounting for corrections due to such terms when fitting the ratio of lensing correlations for dark energy parameters (eq. \[\[PratioOffset\]\]). This somewhat compromises the original goal of isolating the purely geometrical information. A more serious problem is that a quantity like $\Upsilon_f (\ell)$, which is the growth rate of the galaxy-mass correlation, is fundamentally rather uncertain because of the uncertain relation between galaxy and mass. Conservatively, this leads to an order $\sigma_\chi^2/\bar\chi_f$ uncertainty in any estimate of the correction $\Delta\chi_{f(2)} (\ell)$. In other words, as long as the foreground distribution $W_f$ has a finite width, the ratio of correlations considered by JT03 does not give eq. (\[Pratio\]), but instead gives eq. (\[PratioOffset\]), where $\tilde{\chi}_f = \bar\chi_f + O(\sigma_\chi^2/\bar\chi_f)$ and the correction $O(\sigma_\chi^2/\bar\chi_f)$ is uncertain. Attempts to make use of the JT03 linear scaling to infer dark energy constraints are therefore subject to a systematic error that depends on the width of $W_f$. Following JT03, consider using a foreground distribution by taking a photometric redshift bin centered at, for instance, $z_p = 0.3$, with a top-hat width of $\Delta z_p = 0.1$. To obtain the actual distribution $W_f$ of [*true*]{} redshifts, one has to convolve such a top-hat photometric redshift bin with the photometric redshift error distribution, which we model as a Gaussian of dispersion $\sigma_z$ (this is described more fully in §\[fisher\]). We find that the JT03 method (eq.
\[\[PratioOffset\]\]) is susceptible to a systematic error of $\sim 30 \%$, $5\%$ or $1\%$ on the dark energy equation of state $w$, for $\sigma_z = 0.05$, $0.02$ or $0.01$, respectively. The JT03 linear scaling is therefore quite demanding on the photometric redshift accuracy if one would like to keep the systematic error below, say, $1\%$. Unless spectroscopic redshifts are available, we think it is more productive to make use of the offset-linear scaling, which makes no assumptions about the width of $W_f$ and can be applied to both pure lensing data and galaxy-galaxy lensing. The Ratio of Power Spectrum Differences --------------------------------------- For a zero-width foreground galaxy distribution, linear scaling means that the ratio of power spectra is a purely geometrical expression (eq. \[\[Pratio\]\]), while more generally, with offset-linear scaling, it is the ratio of differences of power spectra that is purely geometrical: $$\begin{aligned} \label{Pdiffratio} {P(\ell;f,b) -P(\ell; f,b') \over P(\ell;f,b'')-P(\ell; f,b''')} = {\chieff(b) ^{-1} -\chieff(b')^{-1} \over\chieff(b'')^{-1}-\chieff(b''')^{-1}}\end{aligned}$$ where here $P$ can be either $P_{\gamma\gamma}$ or $P_{\rmg\gamma}$. Here we illustrate the general case of four background populations $b,b',b'',b'''$; but the expression still gives a non-trivial result for three populations, say if $b=b''$. Unlike eq. \[\[Pratio\]\], this expression makes no assumptions about the width of $W_f$ being small. It does not depend on the mass power spectrum or its growth, but depends only on the background redshift distributions and cosmological parameters of interest, such as the equation of state and abundance of dark energy.
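The cancellation behind eq. \[\[Pdiffratio\]\] is easy to verify numerically: under offset-linear scaling the foreground-dependent amplitudes drop out of the ratio of differences, leaving only the geometrical factors. A small sketch (the amplitudes and distances are made-up values for illustration only):

```python
import numpy as np

def P(inv_chi_eff, A=3.2e-9, B=-4.1e-6):
    """Offset-linear model P(l; f, b) = A(l; f) + B(l; f)/chi_eff(b);
    A and B are arbitrary illustrative amplitudes."""
    return A + B * inv_chi_eff

# Inverse effective distances of four background bins (illustrative, Mpc^-1)
b, b1, b2, b3 = 1/2000.0, 1/3000.0, 1/2500.0, 1/5000.0

lhs = (P(b) - P(b1)) / (P(b2) - P(b3))  # ratio of power-spectrum differences
rhs = (b - b1) / (b2 - b3)              # purely geometrical prediction
```

The offset $A$ cancels in each difference and the amplitude $B$ cancels in the ratio, so `lhs` equals `rhs` whatever the (unknown) power-spectrum amplitudes are.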
The Redshift Tail Systematic {#ztailsystematic} ---------------------------- Another systematic effect, which is common to [*both*]{} linear and offset-linear scaling, comes from the approximation in eq.s \[\[GalGalApprox\],\[ShearApprox\]\] that the foreground populations are completely in front of the background populations. If this is not true, then eq. \[\[approxscaling\]\] is not exact; the exact expression is $$\begin{aligned} \label{fullscaling} P_{\rmg\gamma} (\ell; f,b)=F(\ell;f)+G(\ell;f)/\chieff(b)+I(\ell;f,b) \\ \nonumber P_{\gamma\gamma}(\ell; f,b)=A(\ell;f)+B(\ell;f)/\chieff(b)+D(\ell;f,b)\end{aligned}$$ where the additional terms are given by $$\begin{aligned} && I(\ell; f,b) \equiv{3\Omega_{\rmm0} H_0^2 \over 2 c^2} \int_0^\infty {d\chi_\rmf\over a(\chi_\rmf)} W_f (\chi_\rmf) \\ \nonumber && \hskip50pt \times\int_0^{\chi_\rmf} d \chi_\rmb W_b (\chi_\rmb)\, {\chi_\rmf-\chi_\rmb\over \chi_\rmb \chi_\rmf}\, P_{g\delta}(\frac{\ell}{\chi_\rmf}, \chi_\rmf) \\ \nonumber && D(\ell;f,b)\equiv\left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int_0^\infty d\chi_\rmf\,W_f (\chi_\rmf) \\ \nonumber &&\hskip-15pt \times\int_0^{\chi_\rmf} d\chi_\rmb\,W_b (\chi_\rmb) \int_{\chi_\rmb}^{\chi_\rmf}{d\chi \over a(\chi)^2}\, {\chi-\chi_\rmb\over \chi_\rmb} {\chi_\rmf-\chi\over\chi_\rmf}\, P_{\delta\delta} (\frac{\ell}{\chi}, \chi)\ ,\end{aligned}$$ which are both positive (at least so long as $P_{\rmg\delta}>0$). The ratio of power spectrum differences is given by eq. \[\[Pdiffratio\]\] only to the extent that the additional terms $I$ or $D$ are negligible. Note that $I$ and $D$ are non-zero only when the foreground distribution $W_f (\chi_f)$ and background distribution $W_b (\chi_b)$ have non-vanishing overlap, [*i.e.*]{} some of the galaxies identified as foreground are actually behind the galaxies identified as background ($\chi_f > \chi_b$).
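The size of the $I$ and $D$ terms is controlled by the weight of configurations in which a galaxy labeled foreground is actually behind one labeled background, $\int d\chi_\rmf\,W_f(\chi_\rmf)\int_0^{\chi_\rmf} d\chi_\rmb\,W_b(\chi_\rmb)$. A quick numerical estimate of this overlap weight for two Gaussian bins (the widths and distances are illustrative, not fitted to any survey):

```python
import numpy as np

def overlap(chi_f_mean, chi_b_mean, sigma=150.0):
    """Weight sourcing the I and D terms: fraction of (foreground,
    background) pairs with chi_f > chi_b, for Gaussian W_f and W_b of
    width sigma (distances in Mpc, all values illustrative)."""
    chi = np.linspace(0.0, 8000.0, 4000)
    Wf = np.exp(-0.5 * ((chi - chi_f_mean) / sigma) ** 2)
    Wb = np.exp(-0.5 * ((chi - chi_b_mean) / sigma) ** 2)
    Wf /= np.trapz(Wf, chi)
    Wb /= np.trapz(Wb, chi)
    # cumulative distribution of W_b evaluated at each chi_f
    cumWb = np.concatenate(
        ([0.0], np.cumsum(0.5 * (Wb[1:] + Wb[:-1]) * np.diff(chi))))
    return np.trapz(Wf * cumWb, chi)

near = overlap(1500.0, 1800.0)  # adjacent bins: non-negligible overlap
far = overlap(1500.0, 3000.0)   # well-separated bins: negligible overlap
```

Because the tail falls off rapidly, increasing the separation between foreground and background bins suppresses the overlap, and hence $I$ and $D$, very quickly.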
This systematic effect differs from the foreground width systematic discussed earlier in that it depends on the tail of the distribution of redshift uncertainties: one can reduce the overlap, and hence the systematic, by selecting the foreground and background populations in a way that further separates them in redshift. Since the tail of the redshift distribution is likely to fall off rapidly, increasing the separation can greatly decrease the amount of overlap, and hence the size of $I$ and $D$, and therefore the inaccuracy of eq. \[\[Pdiffratio\]\]. In contrast the foreground width systematic, which only affects the linear scaling, is not decreased by further separating the foreground and background populations. We quantify how large a systematic error this effect will have on our analysis in §\[fisher\]. A Fisher Matrix Analysis Exploiting the Source Scaling {#fisher} ====================================================== Here, we would like to find out the dark energy constraints one can in principle obtain from the offset-linear scaling described in eq. \[\[approxscaling\]\], using purely geometric quantities like eq. \[\[Pdiffratio\]\]. Given several redshift bins, one can imagine there are many ways, or at least many combinations like eq. \[\[Pdiffratio\]\], to obtain dark energy constraints. Given a set of $P_{\rmg\gamma}(\ell;f,b)$’s and $P_{\gamma\gamma}(\ell;f,b)$ ’s for a whole range of $f,b$, the best way is probably to fit them using the offset-linear scaling of eq. \[\[approxscaling\]\], and marginalize over $A$,$B$,$F$ and $G$. To estimate the statistical errors, we will assume the mass and galaxy density fields are well approximated by Gaussian random fields. On large scales, the near Gaussianity of cosmological inhomogeneities is quite well established.
Even on small scales where the 3-d mass and galaxy distributions are far from Gaussian, the projected galaxy and mass surface densities (which give the shear) are much more Gaussian since they are a projection of many 3-d structures ([@szh], [@white], [@cooray]). The expected non-Gaussianity will lead to a small underestimate of errorbars but does not lead to a bias. To predict the uncertainties in cosmological parameters we use a Fisher matrix calculation. For a zero-mean Gaussian distribution, the Fisher matrix element for parameters $p_\alpha$ and $p_\beta$ is given by ([@max]) $$\begin{aligned} F_{\alpha\beta}={1\over2}\, {\rm Tr}\left[\bfC^{-1}\cdot{\partial\bfC\over\partial p_\alpha}\cdot \bfC^{-1}\cdot{\partial\bfC\over\partial p_\beta }\right]\end{aligned}$$ where $\bfC$ is the correlation matrix of the data vector $\bfd$, $\bfC\equiv\langle \bfd^{\rm T}\bfd\rangle$. [^4] Here the elements of $\bfd$ might consist of local (in angle) shear and galaxy surface density estimators; however, it is more convenient to make linear combinations which, for each distance bin (foreground or background), are mode amplitudes for approximate eigenmodes of the angular Laplace operator, with approximate eigenvalue $-\ell\,(\ell+1)$ (for shear we want only the scalar (E-) eigenmodes). Discrete sampling by galaxies and incomplete sky coverage will prevent one from constructing exact eigenmodes in practice, but to estimate the errors it is a good approximation[^5] to assume that such modes exist, that the different angular modes are uncorrelated, and that the number of modes with wavenumber $\ell$ is $(2\ell+1)\,f_{\rm sky}$ where $f_{\rm sky}$ is the fraction of the sky one has surveyed.
Since the modes are uncorrelated and the modes for the same angular wavenumber, $\ell$, should have the same correlations, the correlation matrix $\bfC$ is block diagonal, and we may rewrite the Fisher matrix element as $$\begin{aligned} F_{\alpha\beta}=f_{\rm sky}\,\sum_\ell (2\ell+1)\,F_{\ell,\alpha\beta}\end{aligned}$$ where $$\begin{aligned} F_{\ell,\alpha\beta}={1\over2} {\rm Tr}\left[\bfC^{-1}_\ell\cdot{\partial\bfC_\ell\over\partial p_\alpha}\cdot \bfC^{-1}_\ell\cdot{\partial\bfC_\ell\over\partial p_\beta } \right]\ .\end{aligned}$$ If there are $n_{\rm bin}$ redshift bins, each $\bfC_\ell$ block can be divided into $n_{\rm bin}\times n_{\rm bin}$ sub-blocks as follows $$\label{BIGC} \bfC_\ell\equiv\left( \begin{array}{ccc} \bfC_{\ell,1,1} & \cdots & \bfC_{\ell,1,n_{\rm bin}} \\ \vdots & \ddots & \vdots \\ \bfC_{\ell,n_{\rm bin},1}& \cdots & \bfC_{\ell,n_{\rm bin},n_{\rm bin}} \end{array} \right)\ ,$$ one for each ordered pair of distance bins, $(i,j)$. The sub-blocks are $2\times2$ matrices given by $$\bfC_{\ell,i,j}\equiv\left(\begin{array}{cc} P_{\rm gg}(\ell;i,j)+\delta_{ij}{1\over\bar n^\rmg_i} & P_{\rmg\gamma}(\ell; i,j) \\ P_{\rmg\gamma}(\ell; j,i) & P_{\gamma\gamma}(\ell;i,j)+\delta_{ij}{\sigma_{\gamma,i}^2\over\bar n^\rmg_i} \end{array} \right)$$ where $P_{gg} (\ell;i,j)$ is the cross power spectrum at wavenumber $\ell$ of galaxies in redshift bin $i$ and bin $j$, $\bar n^\rmg_i$ is the surface density of galaxies in redshift bin $i$ which tells us the [*shot noise*]{}, and $\sigma_{\gamma,i}^2$ is the intrinsic noise of the shear from one galaxy in redshift bin $i$ which tells us the [*shape noise*]{}. Since $P_{\rm gg}(\ell;i,j)=P_{\rm gg}(\ell;j,i)$ and $P_{\gamma\gamma}(\ell;i,j)=P_{\gamma\gamma}(\ell;j,i)$ we see that $\bfC_{\ell,i,j}=\bfC_{\ell,j,i}^{\rm T}$ and that $\bfC_\ell$ is symmetric.
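The per-$\ell$ trace formula is straightforward to implement once $\bfC_\ell$ and its parameter derivatives are in hand. A schematic implementation, checked here on a toy one-parameter block rather than the full lensing covariance:

```python
import numpy as np

def fisher_block(C, dCs):
    """One l-block of the Fisher matrix:
    F_{ab} = (1/2) Tr[C^{-1} dC_a C^{-1} dC_b],
    where dCs is the list of derivatives dC/dp_a."""
    Cinv = np.linalg.inv(C)
    n = len(dCs)
    F = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            F[a, b] = 0.5 * np.trace(Cinv @ dCs[a] @ Cinv @ dCs[b])
    return F

# Toy check: C = p * I (2x2) with a single parameter p, so dC/dp = I
# and the exact answer is F = (1/2) Tr[(1/p)I (1/p)I] = 1/p^2.
p = 2.0
F_block = fisher_block(p * np.eye(2), [np.eye(2)])
```

The full matrix is then the weighted sum of such blocks, $F_{\alpha\beta}=f_{\rm sky}\sum_\ell(2\ell+1)\,F_{\ell,\alpha\beta}$.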
If the redshift bins are reasonably large, it will be a good approximation to ignore galaxy correlations between bins; we thus assume $P_{\rm gg}(\ell;i,j)=\delta_{ij}P_{\rm gg}(\ell;i)$. We suppose that the galaxies in each population have measured [*photometric*]{} redshifts, $z_{\rm p}$, which estimate the true redshift. We assume that the distribution of $z_{\rm p}$ from all galaxies in all bins is $$\begin{aligned} \label{dNdz} {d N_\rmp(z_\rmp)\over dz_\rmp}\propto z_\rmp^2\,e^{-\left({z_\rmp\over 1.0}\right)^{1.5}}\ .\end{aligned}$$ We divide this total population into $n_{\rm bin}$ top-hat bins in $z_\rmp$-space, such that bin $i$ contains all galaxies with $(i-1)\,\Delta z_\rmp\le z_\rmp<i\,\Delta z_\rmp$. We suppose the photometric redshifts are unbiased estimators of the true redshift with errors distributed like a Gaussian with variance $\sigma_z^2$ so that the distribution of true redshifts in bin $i$ is $$\begin{aligned} \label{Witrue} {d N_i(z)\over dz}\propto \int_{(i-1)\,\Delta z_{\rm p}}^{i\,\Delta z_{\rm p}}dz_\rmp\, {d N_\rmp(z_\rmp)\over dz_\rmp}\, e^{-{1\over 2}\left({z-z_\rmp\over\sigma_z}\right)^2}\end{aligned}$$ which then is related to $W_i(\chi)$ through eq. \[\[DistanceDistribution\]\] and from that we can compute, for a given set of cosmological parameters, the effective distance to each bin, $\chieff(i)$, from eq. \[\[chieff\]\]. Note that we have not yet defined foreground and background bins or exploited offset-linear scaling. To do so we define foreground/background pairs by the requirement that bin $j$ is a background bin to bin $i$ if $j\ge i+\Delta_{\rm bin}$. If $b=j$ is a background bin to foreground bin $f=i$ then $P_{\rmg\gamma}(\ell;f,b)$ and $P_{\gamma\gamma}(\ell;f,b)$ are given by eq. \[\[approxscaling\]\] while $P_{\rmg\gamma}(\ell;b,f)=0$ and $P_{\gamma\gamma}(\ell;b,f)=P_{\gamma\gamma}(\ell;f,b)$.
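The chain from the photometric-redshift bins to $\chieff(i)$ (eq.s \[\[dNdz\],\[Witrue\]\], \[\[DistanceDistribution\]\] and \[\[chieff\]\]) can be sketched numerically as follows; the flat background cosmology and all parameter values here are illustrative placeholders, not the fiducial model of the paper:

```python
import numpy as np

c_km_s, H0, Om = 299792.458, 70.0, 0.3       # illustrative flat background

z = np.linspace(1e-4, 4.0, 2001)
Hz = H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))       # H(z) in km/s/Mpc
dchi_dz = c_km_s / Hz                                 # comoving Mpc per unit z
chi = np.concatenate(
    ([0.0], np.cumsum(0.5 * (dchi_dz[1:] + dchi_dz[:-1]) * np.diff(z))))

def W_bin(i, dz_p=0.15, sigma_z=0.05):
    """Distance distribution W_i(chi) of photo-z bin i: a top-hat in z_p
    weighted by eq. [dNdz], convolved with a Gaussian error of width
    sigma_z (eq. [Witrue]), then converted to a normalized distance
    distribution (eq. [DistanceDistribution])."""
    zp = np.linspace((i - 1) * dz_p, i * dz_p, 200)
    dNp = zp**2 * np.exp(-((zp / 1.0) ** 1.5))
    kern = np.exp(-0.5 * ((z[:, None] - zp[None, :]) / sigma_z) ** 2)
    dNi_dz = np.trapz(dNp[None, :] * kern, zp, axis=1)
    W = dNi_dz / dchi_dz
    return W / np.trapz(W, chi)

def chi_eff(W):
    """Effective distance: 1/chi_eff = int dchi W(chi)/chi (eq. [chieff])."""
    return 1.0 / np.trapz(W / np.maximum(chi, 1e-6), chi)

W3 = W_bin(3)          # third photo-z bin, z_p in [0.30, 0.45)
chi3 = chi_eff(W3)     # in Mpc for this toy background
```

The same $W_i$ grids feed the overlap integrals of the redshift-tail systematic, so one numerical pipeline serves both the signal model and the systematics estimates.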
One minimally requires $\Delta_{\rm bin}=1$; however, in this case one is subject to systematics from the redshift tail to the extent that the redshift distributions between adjacent bins overlap. Increasing $\Delta_{\rm bin}$ decreases any systematic effect from redshift tails; however, it also leads to larger statistical errors because more information is thrown away. We will discuss below the choice of $\Delta_{\rm bin}$ and redshift binning. The correlation matrix depends on several functions having to do with the power spectrum: there are the foreground functions $A(\ell;f)$, $B(\ell;f)$, $F(\ell;f)$, and $G(\ell;f)$; and then there are the lensing power spectra $P_{\rmg\gamma}(\ell;i,j)$ and $P_{\gamma\gamma}(\ell;i,j)$ where neither $\{i,j\}$ nor $\{j,i\}$ forms a foreground-background pair (as defined above via $\Delta_{\rm bin}$); finally there are the galaxy angular power spectra $P_{gg} (\ell; i,j)$. We are not interested in these functions themselves; we only want constraints on dark energy properties that are independent of their values. So we assume their values are not known [*a priori*]{}, and for each $\ell$ we take their values to be unknown [*nuisance parameter*]{}s, each corresponding to one component of $p_\alpha$. So the number of unknown parameters will be very large, but since dependence on each of these uninteresting parameters is confined to a single block, $\bfC_\ell$, the computation of the Fisher matrix remains tractable. In addition to the nuisance parameters, the correlation matrix depends on the background bin distances: $\chieff(b)$ (eq. \[\[chieff\]\]). These will depend on interesting cosmological parameters through the function $\chi(z)$ and eq. \[\[DistanceDistribution\]\]. The cosmological parameters we are actually interested in are: $w$, the equation of state of dark energy; $w' \equiv dw/dz$; and $\Omega_{\rm de}$, the dark energy density today in units of the critical density.
We assume a flat universe here, so the matter density is given by $\Omega_{\rmm0}=1-\Omega_{\rm de}$. To remove the nuisance parameters we can marginalize over their values. This can be done by inverting the full Fisher matrix, $F_{\alpha\beta}$, and then restricting the inverse to the interesting cosmological parameters, which we denote by $\tilde\alpha, \tilde\beta$:[^6] $$\begin{aligned} \widetilde E_{\tilde\alpha \tilde\beta} \equiv (F^{-1})_{\tilde \alpha \tilde \beta}\end{aligned}$$ According to the Cramer-Rao inequality, the minimum possible error ellipses in parameter space (for unbiased estimators) have principal axes in the directions of the eigenvectors of $\widetilde{E}_{\tilde\alpha\tilde\beta}$, with size given by the square root of the corresponding eigenvalues of $\widetilde{E}_{\tilde\alpha\tilde\beta}$. Maximum likelihood parameter estimators (MLEs) will approach this accuracy where the errorbars are small enough. For the problem at hand we expect these minimum errors to be close to what can be obtained in practice. To obtain predictions for how accurately one can constrain cosmological parameters using offset-linear scaling, we take $f_{\rm sky}=0.1$ (a $4000\,(^\circ)^2$ survey), $\sigma_{\gamma,i}^2=0.3^2/2$ (shape noise), and $\sum_{i=1}^{n_{\rm bin}}\bar{n}^\rmg_i=100/(')^2$. For the fiducial cosmological and structure formation model, we use $w = -1$, $\Omega_{\rm de} = 0.7$, a scale invariant primordial mass power spectrum with a linear amplitude of $\sigma_8 = 0.9$, and for the galaxy and galaxy-mass power spectra, we employ the halo model (Sheth & Jain 1997, Ma & Fry 2000, Seljak 2000, Scoccimarro et al. 2001, Guzik & Seljak 2001). To distribute galaxies inside halos, we use the occupation function given by Kravtsov et al. (2003), with a galaxy (subhalo) masscut at each redshift that matches the redshift distribution given in eq. \[\[dNdz\]\] with a total integrated number density of $\sum_{i=1}^{n_{\rm bin}}\bar{n}^\rmg_i=100/(')^2$. 
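The marginalization step above is generic: invert the full Fisher matrix and keep only the block of interesting parameters. A minimal numerical sketch (not the actual pipeline used for the figures):

```python
import numpy as np

def marginalized_errors(F, keep):
    """Marginalize over nuisance parameters: invert the full Fisher matrix
    F and restrict the inverse to the indices in `keep`.  The square roots
    of the diagonal are the 1-sigma marginalized (Cramer-Rao) errors."""
    E = np.linalg.inv(F)[np.ix_(keep, keep)]
    return np.sqrt(np.diag(E))
```

Marginalized errors are never smaller than the unmarginalized ones, $1/\sqrt{F_{\tilde\alpha\tilde\alpha}}$, which is why adding nuisance parameters can only degrade the dark energy constraints.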
We have experimented with using only the more massive halos as foreground following JT03, but found it did not lead to an improvement in statistical errors. Carrying out the Fisher matrix calculation as outlined above, we obtain the dark energy constraints shown in Fig. \[OmegaVw\_20\] and Fig. \[Omegaww\_prime\_20\], which show, respectively, the constraints on $\Omega_{\rm de}$ and $w$ when $w'=0$, and the constraints on $w$ and $w'$ when $\Omega_{\rm de}$ is assumed to be known to $3\%$. The symbol $w'$ denotes $dw/dz$. A common alternative parametrization of the evolution of $w$ has $w_a = 2w'$ at $z=1$ (Linder 2002). The solid, dashed and dotted contours give 1 $\sigma$ errors that correspond to a photometric redshift accuracy of $\sigma_z = 0.01, 0.02$ and $0.05$ respectively. For each photometric redshift accuracy, we choose a redshift binning that keeps the redshift tail systematic error (§\[ztailsystematic\]) at a sub-percent level while minimizing the statistical error. (Keeping the systematic error on dark energy parameters at a sub-percent level is probably more stringent than is necessary, given the size of the statistical error, as it turns out.) For a photometric redshift error of $\sigma_z = 0.05$, we choose $(\Delta z_\rmp, n_{\rm bin}, \Delta_{\rm bin}) = (0.15, 20, 2)$; for $\sigma_z = 0.02$, we choose $(0.1, 30, 2)$; and for $\sigma_z = 0.01$, we consider $(0.15, 20, 1)$. The values of $\sigma_z$ considered should span a reasonable range of what can be achieved with photometric redshifts. The redshift bins stretch out to $z\sim3$, which encompasses the redshift range of most normal galaxies. Sampling photometric redshift space more finely by decreasing $\Delta z_\rmp$ while increasing $n_{\rm bin}$ accordingly gives negligible improvement in errorbars, mainly because of the increased importance of shot-noise when the bin size is made small (see [@hu99]). 
Note that because we marginalize over all parameters that are determined by the mass, galaxy and galaxy-mass power spectra, it is sensible for us to use information from all scales (from the fundamental mode to $\ell \sim 10^5$): there is no need to stay away from nonlinear scales because of worries about how well one can predict the mass and galaxy power spectra. Of course, at sufficiently high $\ell$’s, shape-noise dominates and not much information is gained from the very high $\ell$ modes. Figs. \[OmegaVw\_20\] & \[Omegaww\_prime\_20\] show that our geometrical method employing offset-linear scaling yields weaker dark energy constraints than conventional weak lensing methods, which make more assumptions about the structure formation model ($w$ generally constrained at the few percent level for a survey comparable to the above; see [*e.g.*]{} Hu 2002, Abazajian & Dodelson 2003). This is of course not surprising, since the offset-linear scaling method throws away non-geometrical information that is utilized in conventional methods. However, the geometrical constraints are still sufficiently tight to provide an interesting consistency check: dark energy constraints obtained using the two different methods should agree; disagreement would point to flaws in the structure formation model assumed, or to systematic errors in the data. Note that our constraints are a bit weaker than those obtained by [@jt03] and [@bj03]. One might think this could be due to the fact that we use the offset-linear scaling rather than the Jain-Taylor linear scaling – the former involves more parameters than the latter (compare eq. \[\[approxscaling\]\] and \[\[JTscaling\]\]). On the other hand, the offset-linear scaling allows the use of both shear-shear and galaxy-shear correlations, while the Jain-Taylor linear scaling can be applied only to galaxy-shear. So, it is not [*a priori*]{} obvious how our constraints should compare with those of JT03 and BJ03. 
In Appendix A, we will discuss what happens if we carry out parameter estimation using the linear scaling. We find dark energy constraints that are weaker than those obtained by [@jt03] and [@bj03] even in that case. The reasons are discussed in Appendix A. Geometry as a Consistency Check {#2ws} =============================== Our procedure, described in the last section, making use of the offset-linear scaling of eq. \[\[approxscaling\]\], is very conservative: we marginalize over all possible 3-d mass, galaxy, and galaxy-mass power spectra in order to extract the pure geometrical information. In truth, we do know a fair amount about these power spectra, especially from non-lensing observations. The conventional approach is to assume the 3-d mass power spectrum is well constrained from other observations (such as the microwave background), and fit for dark energy constraints from shear-shear correlations, which depend on dark energy parameters through both geometrical distances and the growth rate of the mass power spectrum (which of course implicitly assumes a structure formation model such as Cold-Dark-Matter; see [*e.g.*]{} Hu 2002, Abazajian & Dodelson 2003). In other words, unlike the offset-linear scaling method, which introduces a whole set of nuisance parameters in addition to dark energy parameters, the conventional approach has [*only*]{} the dark energy parameters as free parameters. A simple alternative, which is less conservative than the offset-linear scaling method but allows a consistency test that the conventional approach does not offer, is as follows. Follow the conventional approach, but split the dark energy parameters into two kinds: those that enter the growth factor, and those that enter the geometrical distances, and fit for these separately. 
With such parameter-splitting (Stebbins 2003), one does not expect and will not obtain better constraints compared to the conventional approach, where equivalence between these two sets of parameters is enforced. The rationale for parameter-splitting is to check for consistency: if we could verify that the values of $w$, for example, obtained separately from geometry and from growth (let us call them $w({\rm geometry})$ and $w({\rm growth})$) are consistent with each other, this would increase our confidence in the values obtained; if they disagree, the discrepancy would help isolate what was going wrong, say systematic errors ([@h03]), or contamination of shear by intrinsic alignments ([@lp00; @cm00; @hrh00; @bthd00; @ckb01; @mwk01; @p01a; @vv02; @maller01; @vB02; @hz02]), or incorrect assumptions about the mass power spectrum. As an illustration, in Fig. \[split\], we show such a consistency test via parameter-splitting. We adopt the same fiducial model as in Fig. \[OmegaVw\_20\], and estimate the constraints on $w({\rm geometry})$ and $w({\rm growth})$ from both the shear-shear power spectrum $P_{\gamma\gamma}$ and the galaxy-shear power spectrum $P_{g\gamma}$. To fit the galaxy-shear data, we assume the galaxies are linearly biased with respect to the mass, and we marginalize over an independent galaxy-bias for each redshift bin ($n_{\rm bin} = 20$). We limit ourselves to information from $\ell < 200$, for two reasons: the galaxy-bias is probably not linear on smaller scales, and the nonlinear mass power spectrum might not be accurately predicted, even though we assume here that the linear mass power spectrum is well constrained from other observations. Fig. \[split\] shows that such a consistency test can yield constraints that are interesting precision-wise. It is also interesting that using both $P_{\gamma\gamma}$ and $P_{g\gamma}$ gives significantly better constraints than using just one of them. 
Discussion {#discuss} ========== In this paper, we have introduced a special scaling, which we call the offset-linear scaling: imagine one has a foreground population of galaxies from which one forms either a galaxy-density field or a shear field; when one cross-correlates this field with the shear measured from some background population, the cross-correlation signal ($P_{\rmg\gamma}$ or $P_{\gamma\gamma}$) scales with the redshift of the background population in a way that is specific to lensing. This is the content of eq. \[\[approxscaling\]\]. Such a scaling can be exploited to extract purely geometrical information from a lensing survey. Effectively, one can measure angular diameter distances (or more accurately, combinations thereof; eq. \[\[Pdiffratio\]\]) from a lensing experiment without making any assumptions about the shape or growth of the mass/galaxy power spectrum. The idea is to measure the galaxy-shear and shear-shear power spectra, $P_{\rmg\gamma}$ and $P_{\gamma\gamma}$, for a variety of foreground and background redshift bins. Given a sufficient number of bins, one can fit for all the quantities $A$, $B$, $F$, $G$ and $\chieff$ in eq. \[\[approxscaling\]\]. One can then obtain dark energy constraints from $\chieff$ alone, which is a purely geometrical quantity, essentially an angular diameter distance weighted in a particular way (eq. \[\[chieff\]\]). Such an approach has certain virtues. The obvious one is that the resulting constraints are free of assumptions about one’s structure formation model (typically a Cold Dark Matter model with a nearly scale invariant primordial power spectrum). Because of this, one can also make use of information on smaller scales than what one would otherwise feel uncomfortable using, either because of non-linearity in the case of $P_{\gamma\gamma}$, or because of galaxy-biasing in the case of $P_{\rmg\gamma}$. The level of constraints from this method employing the offset-linear scaling is shown in Fig. 
\[OmegaVw\_20\] and Fig. \[Omegaww\_prime\_20\]. The constraints are weaker than conventional methods such as lensing tomography (Hu 2002, Abazajian & Dodelson 2003). This is not surprising, since the offset-linear scaling method isolates and uses only the geometrical information, whereas conventional methods make use of information from both growth and geometry and assume the mass power spectrum is well constrained from other methods. Nonetheless, the constraints are sufficiently interesting to make our geometrical method a useful consistency check on assumptions behind conventional methods ([*e.g.*]{} the Cold Dark Matter structure formation model). Comparing against [@jt03] and [@bj03], who used a similar geometrical approach but a different scaling (which we call the linear scaling; eq. \[\[JTscaling\]\]), it appears our constraints are weaker than theirs. We believe the reason is largely that the statistical errors have been underestimated by [@jt03] and [@bj03]. This is discussed in detail in Appendix A. A useful feature of the offset-linear scaling is that it is not as demanding on the photometric redshift accuracy as the linear scaling (see §\[deltafunc\], \[ztailsystematic\]). Another useful feature: the offset-linear scaling can be applied to both shear-shear and galaxy-shear correlations, whereas the linear scaling can be applied only to the latter (see §\[complinear\]). In §\[2ws\], we introduce the idea of parameter splitting. In fitting dark energy parameters to the observed lensing power spectra (as done in the conventional approach), one can artificially split them up into those that control the growth factor and those that control the geometrical distances. Consistency between the two sets would be a good check for the presence of systematic errors, intrinsic alignments, or incorrect assumptions about the nature of the mass fluctuations. This consistency test is less conservative than the one using the offset-linear scaling. 
In a sense, the techniques outlined in §\[scaling\] and §\[2ws\] represent two extremes of a whole spectrum of ways to separate geometrical information from growth information: from making no assumptions about the mass (and galaxy-mass) power spectrum to assuming that it is known to high precision. There are likely techniques that are intermediate in this spectrum that might also prove useful. We are grateful to Gary Bernstein, Bhuvnesh Jain, and especially Wayne Hu for useful discussions. Support for this work is provided in part by the DOE and its Outstanding Junior Investigator Program, by NASA grant NAG5-10842 and NSF grant AST-0098437. LH is grateful to the organizers of the superstring cosmology program at the KITP, where part of this work was done. Abazajian, K. & Dodelson, S. 2003, PRL, 91, 041301 Bacon, D.J., Refregier, A.R. & Ellis, R.S. 2000, MNRAS, 318, 625 Bartelmann, M. & Schneider, P. 2001, Phys. Rep., 340, 291 Bernstein, G., Jain, B. 2003, ApJ, in press, astro-ph 0309332 \[BJ03\] Benabed, K. & Bernardeau, F. 2001, Phys. Rev. D64, 083501 Benabed, K. & van Waerbeke, L. 2003, Blandford, R.D., Saust A.B., Brainerd, T.G. & Villumsen, J.V. 1991, MNRAS, 251, 600 Brainerd, T.G., Blandford, R.D. & Smail, I. 1996, ApJ, 466, 623 Brown, M.L., Taylor, A.N., Hambly, N.C., & Dye, S. 2000, submitted to MNRAS, astro-ph 0009499 Catelan, P., Kamionkowski, M. & Blandford, R.D. 2001, MNRAS, 320, 7 Crittenden, R.G., Natarajan, P., Pen, U., & Theuns, T. 2001, ApJ, 559, 552 Croft, R.A.C. & Metzler, C.A. 2000, ApJ, 545, 561 Cooray, A. & Hu, W. 2001, ApJ, 554, 56 Fischer, P. [*et al.*]{} 2000, AJ, 120, 1198 Guzik, J., Seljak, U. 2001, MNRAS, 321, 439 Heavens, A., Refregier, A., & Heymans, C. 2000, MNRAS, 319, 649 Hoekstra, H. 2003, submitted to MNRAS, astro-ph 0306097 Hoekstra, H., Yee, H., & Gladders, M. 2002, ApJ, 577, 595 Hu, W. 1999, ApJL, 522, 21 Hu, W. 2002, Phys. Rev. D66, 3515 Hu, W., Jain, B. 2003, preprint Hui, L. 
1999, ApJL, 519, 9 Hui, L. & Gaztanaga, E. 1999, ApJ, 519, 622 Hui, L. & Zhang, J. 2002, submitted to ApJ, astro-ph 0205512 Huterer, D. 2002, Phys. Rev. D65, 3001 Jain, B. & Seljak, U. 1997, ApJ, 484, 560 Jain, B. & Taylor, A. 2003, submitted to PRL, astro-ph 0306046 \[JT03\] Jarvis, M., Bernstein, G.M., Fischer, P., Smith, D., Jain, B., Tyson, J.A., Wittman, D. 2003, AJ, 125, 1014 Kaiser, N. 1992, ApJ, 388, 272 Kaiser, N., Wilson, G. & Luppino, G. 2000, astro-ph 0003338 Kravtsov, A. V., Berlind, A. A., Wechsler, R. H., Klypin, A. A., Gottloeber, S., Allgood, B., Primack, J. R. 2003, submitted to ApJ, astro-ph 0308519 Lee, J., & Pen, U. 2000, ApJL, 532, 5 Linder, E. V. 2002, astro-ph 0210217 Ma, C.-P., Fry, J. N. 2000, ApJL, 531, 87 Mackey, J., White, M. & Kamionkowski, M. 2002, MNRAS, 332, 788 Maoli, R., van Waerbeke, L., Mellier, Y., Schneider, P., Jain, B., Bernardeau, F., Erhen, T. & Fort, B. 2001, A & A, 368, 766 Maller, A. H., Dekel, A. & Somerville, R. S. 2002, MNRAS, 329, 423 McKay, T. et al. 2001, submitted to ApJ, astro-ph 0108013 Miralda-Escude, J. 1991, ApJ, 380, 1 Munshi, D., Wang, Y. 2003, ApJ, 583, 566 Pen, U.-L., Lu, T., van Waerbeke, L., Mellier, Y. 2003, submitted to MNRAS, astro-ph 0304512 Porciani, C., Dekel, A. & Hoffman, Y. 2002, MNRAS, 332, 325 Refregier, A., Massey, R., Rhodes, J., Ellis, R., Albert, J., Bacon, D., Bernstein, G., McKay, T., Perlmutter, S. 2003, submitted to ApJ, astro-ph 0304419 Rhodes, J., Refregier, A. & Groth, E. J. 2001, ApJL, 552, 85 Scoccimarro, R., Sheth, R., Hui, L., Jain, B. 2001, ApJ, 546, 20 Scoccimarro, R., Zaldarriaga, M., Hui, L. 1999, ApJ, 527, 1 Seljak, U. 2000, MNRAS, 318, 203 Sheldon, E., Johnston, D. E., Frieman, J. A., Scranton, R., McKay, T. A., Connolly, A. J., Budavari, T., Zehavi, I., Bahcall, N., Brinkmann, J., Fukugita, M. 2003, submitted to AJ, astro-ph 0312036 Sheth, R. K., Jain, B. 1997, MNRAS, 285, 231 Song, Y.-S., Knox, L. 2003, submitted to PRD, astro-ph 0312175 Stebbins, A. 
1996, astro-ph/9609149 Stebbins, A. 2003, in preparation Szapudi, I., Szalay, A. 1998, ApJL, 494, 41 Takada, M., Jain, B. 2003, submitted to MNRAS, astro-ph 0310125 Takada, M., White, M. 2003, submitted to ApJL, astro-ph 0311104 Tegmark, M. 1997, Phys. Rev. D55, 5895 van den Bosch, F.C., Abel, T., Croft, R.A.C., Hernquist, L. & White, S.D.M. 2002, ApJ, 576, 21 van Waerbeke, L., Mellier, Y., Erben, T. et al. 2000, A& A, 358, 30 van Waerbeke, L., Takashi, H., Scoccimarro, R., Colombi, S., Bernardeau, F. 2001, MNRAS, 322, 918 Vitvitska, M. et al. 2002, ApJ, 581, 799 White, M. & Hu, W. 2000, ApJ, 537, 1 Wittman, D.M., Tyson, J.A., Kirkman, D., Dell’Antonio, I. & Bernstein, G. 2000, Nature, 405, 143 Appendix A – Comparison with JT03 {#appendixA .unnumbered} ================================= Our aim in this Appendix is to discuss our differences from Jain & Taylor (2003) \[[@jt03]\] and, to a lesser extent, Bernstein & Jain (2003) \[BJ03\]. We all share the common goal of isolating geometrical constraints on dark energy from lensing data. JT03/BJ03 focused on the use of the linear scaling (eq. \[\[JTscaling\]\]) while we focus on the offset-linear scaling (eq. \[\[approxscaling\]\]). The linear scaling introduces fewer nuisance parameters but can only be applied to galaxy-shear, not shear-shear, correlation data. It is therefore not [*a priori*]{} obvious whose constraints should be stronger. The most direct comparison can be made between the solid contour of our Fig. \[OmegaVw\_20\] and the smallest contour in Fig. 1 of JT03[^7]. Our constraints appear to be weaker by about a factor of 3 compared to JT03. What is puzzling is that even when we adopt exactly the JT03 linear scaling and redo our calculation, the constraints are still weaker than those of JT03 by a factor of at least 3. 
(The discrepancy depends on exactly how the JT03 scaling is implemented, particularly on the choice of redshift bins; the choice of bins in JT03 seems to lead to a discrepancy larger than a factor of 3, see below.) This translates into at least an order of magnitude difference in the variance. This is not a small discrepancy, particularly when we use exactly the JT03 method. In this Appendix, we will focus on this discrepancy with JT03, but will also briefly comment on the treatment of BJ03 (who obtained similar constraints as JT03). We believe the statistical errors quoted in JT03 have been underestimated. There appear to be several different reasons, the first two of which were pointed out to us by Wayne Hu (see Hu & Jain 2003). First, JT03 adopted a singular isothermal spherical profile for the cluster halos that they considered. More realistic profiles such as NFW produce a smaller lensing signal. Second, it appears that, profile aside, the lensing signal itself is overestimated. Third, which is the aspect we would like to focus on, we believe not all sources of statistical errors were taken into account by JT03. Hu & Jain (2003) also independently reached the same conclusions. To recapitulate, [@jt03] proposed to examine the ratio of the galaxy-shear correlation at two different redshifts. For simplicity, we will consider the ratio of the galaxy-convergence correlation instead, which can of course be obtained from the galaxy-tangential-shear correlation: [^8] $$\begin{aligned} \label{Ri} R^{i} = P_{\rmg\kappa}^{i,1} / P_{\rmg\kappa}^{i,2}\end{aligned}$$ where $i$ specifies some foreground population, and $1$ and $2$ refer to convergence from 2 different background redshift bins. We use the symbol $P_{\rmg\kappa}$ loosely to refer to either the galaxy-convergence correlation function or the galaxy-convergence power spectrum. Which is which should be clear from the context (the actual power spectrum will usually carry the argument $\ell$). 
[^9] The statistical error on dark energy parameters clearly comes from the statistical error on $R^{i}$, which in turn is determined by the statistical error of the $P_{\rmg \kappa}$ correlations. Before launching into a detailed calculation, it is helpful to indicate roughly where we disagree with JT03 (and also BJ03). Think of $P_{\rmg\kappa}$ as $\sim \langle \delta_g \kappa \rangle$. Its variance under the Gaussian random approximation (relaxing the Gaussian assumption would only increase the error) should be $\langle \delta_g \kappa \delta_g \kappa \rangle - \langle \delta_g \kappa \rangle \langle \delta_g \kappa \rangle \sim \langle \kappa \kappa \rangle \langle \delta_g \delta_g \rangle + \langle \delta_g \kappa \rangle \langle \delta_g \kappa \rangle$. As we will argue, what JT03 appeared to have considered is only the part of the variance that comes from the product of shape-noise in $\langle \kappa \kappa \rangle$ and shot-noise in $\langle \delta_g \delta_g \rangle$, [*i.e.*]{} $\sigma_\kappa^2 / (\bar n^B \bar n^g)$, where $\sigma_\kappa^2$ is the shape-noise of each background galaxy, $\bar n^B$ is the number density of background galaxies, and $\bar n^g$ is the number density of foreground galaxies. In other words, JT03 appeared to have ignored sampling variance terms. Not only do these terms increase the variance of the measured $P_{\rm g\kappa}$ (and $R^i$), they also introduce correlations in the errors between $R^i$’s measured from different foreground bins, which were also absent in JT03. Let us now derive the errorbar on $R^i$ in detail. The estimator for $P_{\rmg\kappa}$ can be written as $$\begin{aligned} \hat P_{\rmg\kappa} = \sum_{\alpha\beta} \delta^g_\alpha \kappa_\beta \tilde W_{\alpha\beta} \label{Pgshat}\end{aligned}$$ The picture to have in mind is the survey divided into pixels, with $\delta^g_\alpha$ the galaxy overdensity in pixel $\alpha$ and $\kappa_\beta$ the convergence in pixel $\beta$. 
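The Gaussian variance count invoked above, ${\rm Var}(\delta_g\kappa) = \langle\delta_g\delta_g\rangle\langle\kappa\kappa\rangle + \langle\delta_g\kappa\rangle^2$, is a direct consequence of Wick's theorem and is easy to verify numerically; a toy Monte-Carlo sketch (function name and inputs are ours, purely illustrative):

```python
import numpy as np

def var_product_mc(vx, vy, c, n=200_000, seed=0):
    """Monte-Carlo variance of x*y for zero-mean jointly Gaussian (x, y)
    with variances vx, vy and covariance c.  Wick's theorem predicts
    Var(x*y) = vx*vy + c**2, i.e. a 'sampling variance' piece vx*vy on
    top of the squared signal c**2."""
    rng = np.random.default_rng(seed)
    x, y = rng.multivariate_normal([0.0, 0.0], [[vx, c], [c, vy]], size=n).T
    return np.var(x * y)
```

The point of the identity is that the variance of a cross-correlation never reduces to noise terms alone: the auto-power ("sampling variance") pieces are always present.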
The symbol $\tilde W_{\alpha\beta}$ can stand for many different things. For example, if one is interested in the real space correlation function at separation $\Delta \theta$, $\tilde W_{\alpha\beta}$ should be equal to zero when the separation between $\alpha$ and $\beta$ differs from $\Delta \theta$, or else equal to $1/N$, where $N$ is the total number of pairs of pixels at that separation. If one is interested in the power spectrum at wavenumber $\ell$, $\tilde W_{\alpha\beta} = (A_T/ N_{\rm pix}^2) {\,\rm exp} (-i \ell \cdot \Delta \theta_{\alpha\beta})$, where $N_{\rm pix}$ is the total number of pixels, and $A_T$ is the total survey area. [^10] [@jt03] considered a particular $\tilde W_{\alpha\beta}$ that corresponds to averaging the galaxy-convergence correlation over some aperture. We will keep $\tilde W_{\alpha\beta}$ general for now. A word about the estimator $\hat P_{g \kappa}$ is in order. It might appear very different from the way one usually thinks of galaxy-galaxy lensing. The usual approach is to sit on a foreground galaxy, measure the background tangential shear averaged around a circle, then average over all foreground galaxies (Brainerd, Blandford & Smail 1996, Fischer et al. 2000, McKay et al. 2001). This is equivalent to measuring $\sum_{\alpha\beta} (n^g_\alpha/\bar n^g) \gamma^t_\beta \tilde W_{\alpha\beta}$, where $\gamma^t$ is the background tangential shear, $n^g_\alpha$ is equal to unity if pixel $\alpha$ has a foreground galaxy and vanishes otherwise, and $\bar n^g$ is its average over the survey. It is easy to see that such an estimator on average is equivalent to $\sum_{\alpha\beta} \delta^g_\alpha \gamma^t_\beta \tilde W_{\alpha\beta}$, where $\delta^g_\alpha = n^g_\alpha / \bar n^g - 1$. The only difference between this and the estimator in eq. \[\[Pgshat\]\] is the replacement of $\gamma^t$ by $\kappa$. This is merely for the sake of simplifying our following expressions. 
Finally, note that using $\delta^g$ in place of $n^g/\bar n^g$ is generally a good idea because it reduces the variance of the estimator (Szapudi & Szalay 1998). The estimator for $R^i$ is given by $$\begin{aligned} \hat R^i = \hat P_{\rmg\kappa}^{i,1} / \hat P_{\rmg\kappa}^{i,2} \label{Rhat}\end{aligned}$$ We caution here that the above estimator is unbiased only to the lowest order in fluctuations, but we will ignore such complications here ([@hg99]). Eq.s \[\[Pgshat\],\[Rhat\]\] imply the following expression for the fractional variance of the ratio $R^i$: $$\begin{aligned} \label{VR} V (i) \equiv \langle (\delta \hat R^i)^2 \rangle / (R^i)^2 = V^1 (i) + V^2 (i) - 2 V^{1,2} (i)\end{aligned}$$ where $V^1$ is the fractional variance of $\hat P_{\rmg\kappa}^{i,1}$, $V^2$ is the corresponding quantity for $\hat P_{\rmg\kappa}^{i,2}$, and $V^{1,2}$ is the cross-variance between them, and they are given by (approximating fluctuations as Gaussian random): $$\begin{aligned} \label{VRs} && V^1 (i) = [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,1} (\ell) J(\ell)]^{-2} \times \\ \nonumber && \int {d^2 \ell \over (2\pi)^2} {|J_\ell |^2 \over A_T} [ P_{\rmg\kappa}^{i,1} (\ell)^2 + (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) (P_{\kappa\kappa}^{1,1} (\ell) + {\sigma_\kappa^2 \over \bar n^B_{1}} )] \\ \nonumber && V^2 (i) = [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,2} (\ell) J(\ell)]^{-2} \times \\ \nonumber && \int {d^2 \ell \over (2\pi)^2} {|J_\ell |^2 \over A_T} [ P_{\rmg\kappa}^{i,2} (\ell)^2 + (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) (P_{\kappa\kappa}^{2,2} (\ell) + {\sigma_\kappa^2 \over \bar n^B_{2}} )] \\ \nonumber && V^{1,2} (i) = [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,1} (\ell) J(\ell)]^{-1} \times \\ \nonumber && \quad \quad \quad \quad [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,2} (\ell) J(\ell)]^{-1} \times \\ \nonumber && \int {d^2 \ell \over (2\pi)^2} {|J_\ell |^2 \over A_T} [P_{\rmg\kappa}^{i,1} (\ell) P_{\rmg\kappa}^{i,2} 
(\ell) + (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) P_{\kappa\kappa}^{1,2} (\ell)]\end{aligned}$$ Here, $P_{\rmg\kappa}^{i,1}$ is the power spectrum between galaxies in foreground bin $i$ and convergence in the background bin $1$ (there are only 2 background bins in JT03), $P_{gg}^{i,i}$ is the power spectrum of foreground galaxies with themselves in bin $i$, and so on. The symbol $\sigma_\kappa^2$ represents the variance in convergence due to the intrinsic noise of each galaxy, and $\bar n^g_i$ is the galaxy density in foreground bin $i$, $\bar n^B_{1}$ is the density of galaxies in background bin $1$, and so on. The total survey area is $A_T$. The quantity $J(\ell)$ is the Fourier transform of the estimator kernel $\tilde W_{\alpha\beta}$: $$\begin{aligned} \label{Jell} J(\ell) \equiv \left[\sum_{\Delta\theta_{\alpha\beta}} \tilde W_{\alpha\beta}\right]^{-1} \sum_{\Delta\theta_{\alpha\beta}} \tilde W_{\alpha\beta} {\,\rm exp} [-i \ell \cdot \Delta\theta_{\alpha\beta}] \end{aligned}$$ Among all the terms for $V (i)$, which correspond to those considered by [@jt03]? They are the sum of terms in $V^1 (i)$ and $V^2 (i)$ that consist of the product of shape-noise and shot-noise (we will refer to these loosely as shot-noise terms): $$\begin{aligned} && V^{\rm \cite{jt03}} (i) \equiv \\ \nonumber && [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,1} (\ell) J(\ell)]^{-2} {\sigma_\kappa^2 \over A_T \bar n^g_i \bar n_1^B} \int {d^2 \ell \over (2\pi)^2} {|J (\ell) |^2} \\ \nonumber && + [\int {d^2 \ell \over (2\pi)^2} P_{\rmg\kappa}^{i,2} (\ell) J(\ell)]^{-2} {\sigma_\kappa^2 \over A_T \bar n^g_i \bar n_2^B} \int {d^2 \ell \over (2\pi)^2} {|J (\ell) |^2}\end{aligned}$$ To see that this does correspond to what JT03 considered, note that [@jt03] focused on the measurement of the real-space galaxy-convergence correlation smoothed within some aperture (of, say, area $A_p$). This corresponds to a choice of $J(\ell)$ (or $\tilde W_{\alpha\beta}$ in eq. 
\[\[Pgshat\]\]) such that $(2\pi)^{-2} \int d^2\ell |J (\ell) |^2 \sim 1/A_p$. Therefore, the above expression reduces to $$\begin{aligned} && V^{\rm JT03} (i) = {\sigma_\kappa^2 \over A_T A_P \bar n^g_i \bar n_1^B [P^{i,1}_{g\kappa}(A_P)]^2} \\ \nonumber && \quad \quad \quad \quad + {\sigma_\kappa^2 \over A_T A_P \bar n^g_i \bar n_2^B [P^{i,2}_{g\kappa}(A_P)]^2}\end{aligned}$$ where we have abused the notation a little bit to denote the real-space galaxy-convergence correlation smoothed in an aperture of area $A_P$ by $P^{i,1}_{g\kappa} (A_P)$. The above can be compared directly with equation 12 of JT03. The dictionary for translating our symbols to theirs is as follows: $\sigma_\kappa^2 \rightarrow \sigma_\epsilon^2/2$, $\bar n_1^B \rightarrow n_1$, $\bar n_2^B \rightarrow n_2$, $[P^{i,1}_{g\kappa}(A_P)]^2 \rightarrow \langle \gamma \rangle^2_{\ell 1}$, $[P^{i,2}_{g\kappa}(A_P)]^2 \rightarrow \langle \gamma \rangle^2_{\ell 2}$, $A_T \rightarrow A$, and $A_P \bar n^g_i \rightarrow f_\ell$. The last item requires a little explanation. JT03 defined $f_\ell$ to be the fraction of the survey that is covered by the apertures centered on foreground objects. This is equal to $A_P \times (\bar n^g_i A_T) / A_T$, where $A_T$ is the total survey area. With this, the correspondence with the expression of [@jt03] is manifest. The expressions for statistical errors are actually simpler in Fourier space: suppose that instead of measuring the galaxy-convergence correlation smoothed in some aperture, one measures the galaxy-convergence power spectrum at wavenumber $\ell$. One can obtain the ratio $R^i$ for each $\ell$, and then combine all these estimates of $R^i$ in a minimum variance manner. Note that while this is different from the procedure of [@jt03], the procedure here will likely produce smaller errorbars on $R^i$, since it makes use of all the information contained in the modes instead of focusing on fluctuations at particular scales. 
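The minimum-variance combination mentioned above is the usual inverse-variance weighting of independent estimates; a short sketch (function name ours, assuming the per-$\ell$ estimates are independent):

```python
import numpy as np

def combine_min_variance(R_ell, V_ell):
    """Inverse-variance combination of independent per-ell estimates of R.
    Returns the combined estimate and its variance 1/sum(1/V_ell)."""
    w = 1.0 / np.asarray(V_ell, dtype=float)
    R = np.sum(w * np.asarray(R_ell, dtype=float)) / np.sum(w)
    return R, 1.0 / np.sum(w)
```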
Let us focus on a particular wavenumber (or band) $\ell$ for the moment. Eq.s \[\[VR\],\[VRs\]\] reduce to something quite simple: $$\begin{aligned} \label{Viell} V_\ell (i) = [P_{\rmg\kappa}^{i,1} (\ell)]^{-2} (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) (P_{\kappa\kappa}^{1,1} (\ell) + {\sigma_\kappa^2 \over \bar n^B_1}) \\ \nonumber + [P_{\rmg\kappa}^{i,2} (\ell)]^{-2} (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) (P_{\kappa\kappa}^{2,2} (\ell) + {\sigma_\kappa^2 \over \bar n^B_2}) \\ \nonumber - 2 [P_{\rmg\kappa}^{i,1} (\ell)]^{-1} [P_{\rmg\kappa}^{i,2} (\ell)]^{-1} (P_{gg}^{i,i} (\ell) + {1\over \bar n^g_i}) P_{\kappa\kappa}^{1,2} (\ell)\end{aligned}$$ where we have used $J (\ell) = |J (\ell)|^2 = (2\pi)^2 \delta^2 (\ell - \ell') / A_T$, and we have introduced the subscript $\ell$ on $V$ to emphasize that this is the variance of $R^i$ from Fourier bin $\ell$. The Fourier analog of the approximation made by JT03 would be to retain only the following terms in the variance: $$\begin{aligned} V_\ell^{\rm JT03} (i) = [P_{\rmg\kappa}^{i,1} (\ell)]^{-2} {1\over \bar n^g_i}{\sigma_\kappa^2 \over \bar n^B_1} + [P_{\rmg\kappa}^{i,2} (\ell)]^{-2} {1\over \bar n^g_i} {\sigma_\kappa^2 \over \bar n^B_2}\end{aligned}$$ This misses a number of terms compared to $V_\ell (i)$ in eq. \[\[Viell\]\]. Each of the terms ignored by JT03 is of order unity. They can be thought of as sampling variance terms. While there is some partial cancellation among them, they do not cancel exactly and should be retained. [@jt03] considered the constraint on dark energy from the ratio $R^i$ for $i$ ranging over 10 different foreground redshift bins, from $z = 0$ to $z = 1$, each with $\Delta z = 0.1$. 
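Eq. \[\[Viell\]\] and its shot-noise-only truncation can be evaluated side by side. The sketch below uses illustrative dimensionless toy numbers, not survey values; it is meant only to show that the sampling variance terms dominate when the spectra themselves are not negligible compared to the noise:

```python
def var_ratio_ell(Pg1, Pg2, Pgg, Pkk11, Pkk22, Pkk12, ng, nB1, nB2, sig2):
    """Fractional variance of R = Pg1/Pg2 at one multipole (eq. [Viell],
    Gaussian approximation), together with the shot-noise-only piece
    retained in the JT03-style approximation."""
    a = Pgg + 1.0 / ng  # galaxy auto-power plus shot noise
    V_full = (a * (Pkk11 + sig2 / nB1) / Pg1**2
              + a * (Pkk22 + sig2 / nB2) / Pg2**2
              - 2.0 * a * Pkk12 / (Pg1 * Pg2))
    V_jt03 = (sig2 / (ng * nB1)) / Pg1**2 + (sig2 / (ng * nB2)) / Pg2**2
    return V_full, V_jt03
```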
In addition to the diagonal variance considered above, there will in general be covariance between $R$ measured from foreground bin $i$ and foreground bin $j$, which was not considered by [@jt03]: $$\begin{aligned} \label{Vijell} && V_\ell (i,j) \equiv \langle \delta \hat R^i \delta \hat R^j \rangle / (R^i R^j) \\ \nonumber && = \delta_{ij} V_\ell (i) + [P_{\rmg\kappa}^{i,i1} (\ell)]^{-1} [P_{\rmg\kappa}^{j,j1} (\ell)]^{-1} P_{\rmg\kappa}^{i,j1} P_{\rmg\kappa}^{j,i1} \\ \nonumber && + [P_{\rmg\kappa}^{i,i2} (\ell)]^{-1} [P_{\rmg\kappa}^{j,j2} (\ell)]^{-1} P_{\rmg\kappa}^{i,j2} P_{\rmg\kappa}^{j,i2} \\ \nonumber && - [P_{\rmg\kappa}^{i,i1} (\ell)]^{-1} [P_{\rmg\kappa}^{j,j2} (\ell)]^{-1} P_{\rmg\kappa}^{i,j2} P_{\rmg\kappa}^{j,i1} \\ \nonumber && - [P_{\rmg\kappa}^{i,i2} (\ell)]^{-1} [P_{\rmg\kappa}^{j,j1} (\ell)]^{-1} P_{\rmg\kappa}^{i,j1} P_{\rmg\kappa}^{j,i2}\end{aligned}$$ Note the somewhat clumsy notation: instead of specifying the 2 background bins by just $1$ and $2$, we now have to specify them by $i1$ and $i2$, which refer to the 2 background bins that correspond to the $i$-th foreground bin, and similarly for $j1$ and $j2$. The covariance $V_\ell (i,j)$, when $i \ne j$, is non-vanishing – the positive and negative terms present do not exactly cancel each other, and generically result in something of the same order as each of these terms, with perhaps some mild suppression. Making use of $V_\ell (i,j)$ one can then work out the dark energy constraints from the linear scaling of JT03. Adopting the survey specifications and redshift-binning according to JT03, we find constraints that are shown in Fig. \[jtconstraints\]. This can be compared against the smallest contour in Fig. 1 of JT03. Hu & Jain (2003) independently reached conclusions similar to those in Fig. \[jtconstraints\]. In summary, it appears JT03 ignored certain contributions to the variance (and covariance) of the ratio $R^i$. They are primarily sampling variance terms. 
These are automatically taken into account in our Fisher matrix analysis in §\[fisher\], which actually does not require an explicit computation of all these variance terms. This should be contrasted with the Fisher matrix calculation of BJ03: while we start with the galaxy-density and shear fields as input Gaussian random data and compute constraints on parameters which enter into the correlation matrix (eq. \[\[BIGC\]\]), BJ03 started with the quadratic estimates of lensing power spectra themselves as Gaussian distributed input data. The latter approach requires explicit computation of the variance and covariance of these quadratic estimates, and care should be taken to include all contributions. It appears some of these contributions were not included in the analysis of BJ03. We have not, however, performed an analysis replicating the details of BJ03. Appendix B – Non-flat Universe, Shear and Real Space Correlations {#appendixB .unnumbered} ================================================================= Our goal in this Appendix is to state our main results in this paper for the more general case of a non-flat universe, for shear instead of convergence, and in real as well as Fourier space. Some of the expressions have appeared in the literature. They are given here for completeness. Let us start with what is most commonly measured in galaxy-galaxy lensing experiments, and relate it to the galaxy-convergence power spectrum $P_{\rmg\kappa} (\ell)$ given in eq. \[\[Pgs\]\] ([@kaiser92]): $$\begin{aligned} \label{xiggamma} \xi_{g\gamma^+} (\theta) = - \int {\ell d \ell \over 2\pi} P_{\rmg\kappa} (\ell) J_2 (\ell \theta)\end{aligned}$$ where $J_2$ is the second order Bessel function, [^11] and $\xi_{g\gamma^+} (\theta)$ is the cross-correlation between galaxies and tangential shear at separation $\theta$, a quantity that is most commonly discussed in galaxy-galaxy lensing measurements. 
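The transform in eq. \[\[xiggamma\]\] can be evaluated numerically using the integral representation of $J_n$ quoted in the footnote; the sketch below uses a toy top-hat band power as input (an illustrative choice, not a physical spectrum), for which $\xi_{g\gamma^+}$ is well approximated by the band-center value:

```python
import math

def bessel_j(n, y, steps=1000):
    # J_n(y) = (1/2 pi) \int_{-pi}^{pi} d(eta) cos(y sin(eta) - n eta)
    # (the integral representation quoted in footnote 11); periodic trapezoid rule
    h = 2.0 * math.pi / steps
    s = sum(math.cos(y * math.sin(-math.pi + i * h) - n * (-math.pi + i * h))
            for i in range(steps))
    return s * h / (2.0 * math.pi)

def xi_g_gamma(theta, P, lmin, lmax, steps=200):
    """xi_{g gamma+}(theta) = - \int l dl/(2 pi) P_{g kappa}(l) J_2(l theta),
    integrated over the band [lmin, lmax] by the midpoint rule."""
    h = (lmax - lmin) / steps
    total = 0.0
    for i in range(steps):
        l = lmin + (i + 0.5) * h
        total += l * P(l) * bessel_j(2, l * theta) * h
    return -total / (2.0 * math.pi)

# toy top-hat band power, P = 1 on [99.5, 100.5]: expect about -100 J_2(100 theta)/(2 pi)
xi = xi_g_gamma(0.01, lambda l: 1.0, 99.5, 100.5)
print(xi)
```

The narrow-band check against $-100\,J_2(1)/(2\pi)$ confirms the sign convention: at small separations the tangential-shear correlation induced by a positive $P_{\rmg\kappa}$ band is negative in this convention.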
Alternatively, in a fixed coordinate system where $\gamma^1$ and $\gamma^2$ are the two components of shear, the 2 different galaxy-shear power spectra $P_{g\gamma^1} (\ell)$ and $P_{g\gamma^2} (\ell)$ are related to the galaxy-convergence power spectrum $P_{\rmg\kappa}$ by: $$\begin{aligned} \label{Pggamma} P_{g\gamma^1} (\ell) = {\,\rm cos} (2\phi_\ell) P_{\rmg\kappa} (\ell) \, , \, P_{g\gamma^2} (\ell) = {\,\rm sin} (2\phi_\ell) P_{\rmg\kappa} (\ell)\end{aligned}$$ where $\phi_\ell$ specifies the orientation of the wavevector: $\ell {\,\rm cos}\phi_\ell$ is the x-component while $\ell {\,\rm sin}\phi_\ell$ is the y-component. Similarly, the two quantities that are commonly considered in actual shear-shear correlation measurements are related to the convergence power spectrum $P_{\kappa\kappa} (\ell)$ of eq. \[\[Pss\]\] by ([@kaiser92]): $$\begin{aligned} \label{xigammagamma} \xi_{\gamma^+\gamma^+} (\theta) = {1\over 2} \int {\ell d\ell \over 2\pi} P_{\kappa\kappa} (\ell) [J_0 (\ell\theta) + J_4 (\ell\theta)] \\ \nonumber \xi_{\gamma^\times \gamma^\times} (\theta) = {1\over 2} \int {\ell d\ell \over 2\pi} P_{\kappa\kappa} (\ell) [J_0 (\ell\theta) - J_4 (\ell\theta)]\end{aligned}$$ where $\gamma^+$ and $\gamma^\times$ are the tangential and ortho-tangential (or radial) shear defined with respect to separation between two points of interest. Alternatively, the two different shear-shear power spectra in a fixed coordinate system are related to the convergence power spectrum by $$\begin{aligned} \label{Pgammagamma} P_{\gamma^1\gamma^1} (\ell) = {\,\rm cos}^2 (2 \phi_\ell) P_{\kappa\kappa} (\ell) \\ \nonumber P_{\gamma^2\gamma^2} (\ell) = {\, \rm sin}^2 (2\phi_\ell) P_{\kappa\kappa} (\ell)\end{aligned}$$ The main results of this paper derive from writing $P_{\rmg\kappa} (\ell; f,b)$ and $P_{\kappa\kappa} (\ell; f,b)$, which are the galaxy-convergence and convergence-convergence power spectra between a foreground bin $f$ and background bin $b$, in the form of eq. 
\[\[fullscaling\]\], and noticing some of the terms are small, which leads to the offset-linear scaling of eq. \[\[approxscaling\]\]. Let us first give the expressions for each term in eq. \[\[fullscaling\]\] (and eq. \[\[approxscaling\]\]) in the case of a non-flat universe. Then, we will discuss how similar expressions hold for shear measurements, and in real space. The non-flat space analogs of eq.s \[\[chieff\],\[FGAB\]\] are $$\begin{aligned} \label{FGHnonflat} && F(\ell; f)\equiv{3\Omega_{\rmm0}H_0^2 \over 2 c^2}\int{d\chi\over a}W_f(\chi) {1\over r(\chi)} \\ \nonumber && \quad \quad \quad \quad \quad \quad {\, \rm cs} (\chi) P_{g\delta} ({\ell \over r(\chi)}) \\ \nonumber && G(\ell; f) \equiv - {3\Omega_{\rmm0} H_0^2 \over 2 c^2} \int {d\chi\over a} W_f (\chi) {1 \over r(\chi)} \\ \nonumber && \quad \quad \quad \quad \quad \quad {\, \rm si} (\chi) P_{g\delta} ({\ell \over r(\chi)}) \\ \nonumber && I(\ell; f,b) \equiv - {3\Omega_{\rmm0} H_0^2 \over 2 c^2} \int {d\chi\over a} W_f (\chi) \int d \chi' W_b (\chi') \\ \nonumber && \quad \quad \quad \quad {r(\chi' - \chi) \over r(\chi') r(\chi)} P_{g\delta} ({\ell \over r(\chi)}) \Theta(\chi-\chi')\end{aligned}$$ $$\begin{aligned} \label{chieffnonflat} {1\over \chieff (b)} \equiv \int d\chi' W_b(\chi') {1\over {\rm ta} (\chi')} \end{aligned}$$ $$\begin{aligned} \label{ABDnonflat} && A(\ell; f) \equiv \left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int {d\chi''} W_f (\chi'') \\ \nonumber && \quad \quad \quad \int {d\chi \over a^2} {r(\chi'' - \chi) \over r(\chi'')} {\rm cs} (\chi) P_{\delta\delta} ({\ell \over r(\chi)}) \Theta(\chi''-\chi) \\ \nonumber && B(\ell; f) \equiv - \left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int {d\chi''} W_f (\chi'') \\ \nonumber && \quad \quad \quad \int {d\chi \over a^2} {r(\chi'' - \chi) \over r(\chi'')} {\,\rm si} (\chi) P_{\delta\delta} ({\ell \over r(\chi)})\,\Theta(\chi''-\chi) \\ \nonumber && D(\ell; f,b) \equiv - \left({3\Omega_{\rmm0} H_0^2 \over 2 c^2}\right)^2 \int 
{d\chi''} W_f (\chi'') \int d \chi' W_b (\chi') \\ \nonumber && \quad \quad \int {d\chi \over a^2} {r(\chi' - \chi) \over r(\chi')} {r(\chi'' - \chi) \over r(\chi'')} P_{\delta\delta} ({\ell\over r(\chi)}) \\ \nonumber && \quad \quad \Theta(\chi-\chi')\,\Theta(\chi''-\chi)\end{aligned}$$ where $r(\chi)$ is the comoving angular diameter distance which is related to the comoving radial distance $\chi$ as follows: $r(\chi) = K^{-1/2} {\,\rm sin} K^{1/2} \chi, (-K)^{-1/2} {\,\rm sinh} (-K)^{1/2} \chi, \chi$ for a closed, open and flat universe respectively, and $K = - \Omega_k H_0^2/c^2$, where $\Omega_k$ is the curvature in units of the critical density. The quantities ${\,\rm cs} (\chi)$, ${\,\rm si} (\chi)$, and ${\,\rm ta} (\chi)$ are defined as: $$\begin{aligned} {\,\rm cs} (\chi) = {\, \rm cos} K^{1\over 2} \chi, {\,\rm si} (\chi) = {\, \rm sin} K^{1\over 2} \chi, {\, \rm ta} (\chi) = {\, \rm tan} K^{1\over 2} \chi \end{aligned}$$ if $K > 0$, $$\begin{aligned} {\,\rm cs} (\chi) = 1, {\,\rm si} (\chi) = \chi, {\,\rm ta} (\chi) = \chi\end{aligned}$$ if $K = 0$, and $$\begin{aligned} {\,\rm cs} (\chi) = {\, \rm cosh} (-K)^{1\over 2} \chi, {\,\rm si} (\chi) = {\, \rm sinh} (-K)^{1\over 2} \chi, \\ \nonumber {\,\rm ta} (\chi) = {\, \rm tanh} (-K)^{1\over 2} \chi\end{aligned}$$ if $K < 0$. As before, the offset-linear scaling (eq. \[\[approxscaling\]\]) follows from eq. \[\[fullscaling\]\] by noticing that $D$ and $I$ are small provided that $W_i$ and $W_j$ have little overlap, except that the relevant quantities $A$, $B$, $D$, $F$, $G$ and $I$ are defined as above. With the above expressions, one can in principle fit for $\Omega_k$ in addition to the dark energy parameters in carrying out the exercise of §\[fisher\]. Lastly, it is trivial to generalize the offset-linear scaling of eq. 
\[\[approxscaling\]\] to galaxy-shear and shear-shear (instead of galaxy-convergence and convergence-convergence as before) power spectra by using eq.s \[\[Pggamma\],\[Pgammagamma\]\]: simply multiply eq. \[\[approxscaling\]\] by appropriate factors of ${\, \rm sin} (2\phi_\ell)$ or ${\,\rm cos} (2\phi_\ell)$. Rewriting the scaling in real space is no more difficult: simply substitute eq. \[\[approxscaling\]\] into the expressions for $\xi_{g\gamma^+} (\theta)$, $\xi_{\gamma^+\gamma^+}$ or $\xi_{\gamma^\times \gamma^\times}$ in eq.s \[\[xiggamma\],\[xigammagamma\]\]. One can see that the scaling continues to hold for real-space analogs of $A$, $B$, etc. In particular, eq. \[\[Pdiffratio\]\] holds for any of these real space correlation functions $$\begin{aligned} {\xi_{g\gamma^+} (\theta; f,b) - \xi_{g\gamma^+} (\theta; f,b') \over \xi_{g\gamma^+} (\theta; f,b'') - \xi_{g\gamma^+} (\theta; f,b''')} = {\chieff (b) ^{-1} - \chieff (b')^{-1} \over \chieff (b'')^{-1} - \chieff (b''')^{-1}}\end{aligned}$$ where $\xi_{g\gamma^+} (\theta; f,b)$ refers to the galaxy-tangential-shear correlation between foreground redshift bin $f$ and background redshift bin $b$. [^1]: However, it is completely unimportant whether different background (or foreground) populations overlap, or even whether they contain common members. [^2]: Also known as “electric-”, or “gradient-” component. This excludes the other component, known as “curl-”, “C-”, “magnetic-”, “B-”, or pseudoscalar-component, which will have a much smaller signal and less useful information for our purposes. [^3]: We have paraphrased [@jt03] a little bit here. They considered the ratio of correlations measured in real space instead of Fourier space. [^4]: [@bj03] also performed a Fisher matrix analysis. Their starting point was different from ours, however (in addition to other differences discussed in Appendix A). 
[@bj03] started from a likelihood that treated the power spectra themselves as data which are Gaussian distributed. [^5]: This is true when most of the Fisher information comes from angular scales much smaller than the survey size and much greater than the typical inter-galaxy separation. [^6]: In order for $F_{\alpha\beta}$ to be invertible we require that $n_{\rm bin}\ge\Delta_{\rm bin}+3$; otherwise there are never the 3 background bins required to construct the ratio of power spectrum differences, eq. \[\[Pdiffratio\]\], so that one can make use of the offset-linear scaling. [^7]: The method of JT03 requires high photometric redshift accuracy, hence the $\sigma_z = 0.01$ contour of our Fig. \[OmegaVw\_20\] is the relevant one to compare against. [^8]: In previous sections of the paper, we have been loosely using the term shear $\gamma$ as equivalent to convergence. In the appendix here, to avoid confusion especially in Appendix B, we explicitly use the symbol $\kappa$ when we are discussing convergence. [^9]: [@jt03] actually considered halo-shear rather than galaxy-shear. We will continue to use the term galaxy-shear. All our expressions are equally valid for special classes of foreground ‘galaxies’ such as groups or clusters. [^10]: Strictly speaking, one is usually interested in the power spectrum at a given $|\ell |$, and so $\tilde W_{\alpha\beta}$ should involve an average over directions of $\ell$. Note also that the $\tilde W_{\alpha\beta}$ differs from the conventional one by a factor of $A_T$, but that is fine since we are only interested in fractional error. Our choice is to enforce $\sum_{\alpha\beta} \tilde W_{\alpha\beta} = 1$, which simplifies some of our expressions below. [^11]: $J_n(y) = {1\over 2 \pi} \int_{-\pi}^\pi d\eta {\,\rm cos} [y {\,\rm sin}\eta - n\eta]$
--- abstract: 'Loop quantum cosmology applies techniques derived for a background independent quantization of general relativity to cosmological situations and draws conclusions for the very early universe. Direct implications for the singularity problem as well as phenomenology in the context of inflation or bouncing universes result, which will be reviewed here. The discussion focuses on recent new results for structure formation and generalizations of the methods.' address: 'Max-Planck-Institute for Gravitational Physics, Albert-Einstein-Institute, Am Mühlenberg 1, 14476 Potsdam, Germany' author: - Martin Bojowald title: The Early Universe in Loop Quantum Cosmology --- AEI-2005-022 Introduction ============ The distinguishing feature of general relativity, in comparison to other interactions, is the fact that the metric as its basic field does not just provide a stage for other fields but is dynamical itself. In particular in cosmological situations the metric differs significantly from a static background and cannot be written as a perturbation. Thus, a faithful quantization requires a background independent formalism which then must be non-perturbative. An approach to quantum gravity which realizes this from the outset is loop quantum gravity [@Rov:Loops; @ThomasRev; @ALRev; @Rov]. Here, background independence leads to a discrete structure of geometry whose scale is a priori free (set by the Barbero–Immirzi parameter [@AshVarReell; @Immirzi]; in this paper we set the value equal to one for simplicity) but fixed to be close to the Planck scale by black hole entropy calculations [@ABCK:LoopEntro; @IHEntro; @Gamma; @Gamma2]. Thus, it is not of relevance on directly accessible scales and will only become noticeable in high curvature regimes. In particular, this is the case close to the big bang where the universe itself is small. 
Classically, the universe would emerge from or evolve into a singularity at those scales, where energy densities blow up and Einstein’s equations break down. For a long time, it has been hoped that quantum gravity will resolve this problem and provide a more complete framework which does not break down. Moreover, since this will inevitably come with modifications of the classical theory at small scales, one can expect phenomenological and potentially observable consequences in the very early universe. Even classically, it is difficult to analyze the situation in full generality, and the quantum theory is even more complicated and less understood. A common strategy in such a situation consists in introducing symmetries which can be taken as homogeneity or isotropy in the cosmological context. In contrast to earlier approaches initiated by Wheeler and DeWitt [@DeWitt; @QCReview], the theory has now been developed to such a level that the introduction of symmetries can be done at the quantum level by employing symmetric states [@SymmRed], rather than reducing the classical theory first and then quantizing. The relation to the full theory is thus known, and it is possible to ensure that special features required for a consistent background independent formulation translate to the symmetric context. It is then possible to take properties of the full theory, transfer them to symmetric models and analyze them in this simpler context. In particular, the discreteness of spatial geometry survives the reduction [@cosmoII], which is already a difference to the Wheeler–DeWitt quantization. It also implies that there are in fact modifications at small scales coming from the full theory, whose phenomenological consequences can be studied in cosmological models [@LoopCosRev]. 
Variables ========= A spatially isotropic space-time has the metric $${{\rm d}}s^2 = -{{\rm d}}t^2 +\frac{a(t)^2}{(1-kr^2)^2} {{\rm d}}r^2+a(t)^2 r^2{{\rm d}}\Omega^2$$ where $k$ can be zero or $\pm1$ and specifies the intrinsic curvature of space, while the scale factor $a(t)$ describes the expansion or contraction of space in time. It is subject to the Friedmann equation $$3(\dot{a}^2+k)a= 8\pi GH_{\rm matter}(a,\phi,p_{\phi})$$ where $G$ is the gravitational constant and $H_{\rm matter}$ the matter Hamiltonian (assumed here to be given only by a scalar $\phi$ and its momentum $p_{\phi}$). The matter Hamiltonian depends on the matter fields, but also on the scale factor since matter couples to geometry. In the case of a scalar, for instance, we have $$\label{Hmatter} H_{\rm matter}=\case{1}{2}a^{-3}p_{\phi}^2+a^3 V(\phi)$$ with the scalar potential $V(\phi)$ and the classical momentum $p_{\phi}=a^3\dot{\phi}$. Loop quantum gravity is based on Ashtekar variables, which provide a canonical formulation of general relativity in terms of a densitized triad and an SU(2) connection on space. In the isotropic context this reduces to working with the isotropic triad component $p$ with $|p|=a^2$ and the isotropic connection component $c=\frac{1}{2}(k+\dot{a})$ which are canonically conjugate: $\{c,p\}=8\pi G/3$. There is one essential difference to the metric formulation: $p$ can take both signs since it depends on the orientation of the triad. Thus, $p$ does not only determine the size of space through $|p|$, but also its orientation via ${\mathop{\rm sgn}}p$. (Another difference is that, when isotropic models are derived through homogeneous ones, a canonical formalism is not available for $k=-1$. We will thus restrict ourselves to $k=0$ and $k=1$.) 
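As a compact numerical illustration of the classical system above, one can integrate the Friedmann equation together with the matter Hamiltonian (\[Hmatter\]); this is only a sketch in illustrative units $G=1$ for the flat case $k=0$ with forward-Euler stepping, not any computation from the text. For a massless scalar ($V=0$), $p_\phi$ is conserved and the Friedmann equation gives exactly $a(t)^3 = a_0^3 + 3\sqrt{4\pi G/3}\,p_\phi\, t$, which serves as a check:

```python
import math

G = 1.0  # illustrative units

def evolve(a, phi, p_phi, V, dV, dt, steps):
    """Forward-Euler evolution of the flat (k = 0) model: adot is obtained
    from the Friedmann equation with H_matter = p_phi^2/(2 a^3) + a^3 V(phi)."""
    for _ in range(steps):
        H_m = 0.5 * p_phi**2 / a**3 + a**3 * V(phi)
        adot = math.sqrt(8.0 * math.pi * G * H_m / (3.0 * a))
        a, phi, p_phi = (a + dt * adot,
                         phi + dt * p_phi / a**3,      # phidot = p_phi / a^3
                         p_phi - dt * a**3 * dV(phi))  # p_phidot = -a^3 V'(phi)
    return a, phi, p_phi

# massless scalar (V = 0): a(t)^3 = 1 + 3 sqrt(4 pi G / 3) p_phi t exactly
a, phi, p_phi = evolve(1.0, 0.0, 1.0, lambda f: 0.0, lambda f: 0.0, 1e-4, 10000)
print(a**3)  # close to 1 + 3 sqrt(4 pi / 3)
```

The same stepping reproduces the Klein–Gordon dynamics once a potential is supplied, since $\dot\phi$ and $\dot p_\phi$ are exactly the Hamiltonian equations of motion for (\[Hmatter\]).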
Dynamics in the canonical formulation is dictated by the Hamiltonian constraint $$H=-3(4\pi G)^{-1}\left[2c(c-k)+k^2\right]\sqrt{|p|}+H_{\rm matter}(p,\phi,p_{\phi})=0$$ which indeed reduces to the Friedmann equation upon using the definition of $p$ and $c$. Moreover, the Hamiltonian constraint $H$ gives Hamiltonian equations of motion for gravitational variables, such as $\dot{c}=\{c,H\}$ resulting in the Raychaudhuri equation $$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3a^3}\left(H_{\rm matter}(a,\phi,p_{\phi})- a\frac{\partial H_{\rm matter}(a,\phi,p_{\phi})}{\partial a}\right) \,,$$ and matter equations of motion, e.g. for a scalar $$\begin{aligned} \dot{\phi} &=& \{\phi,H\}= p_{\phi}/a^3\\ \dot{p}_{\phi} &=& \{p_{\phi},H\}= -a^3 V'(\phi)\end{aligned}$$ which lead to the Klein–Gordon equation $$\ddot{\phi}+3\dot{a}a^{-1}\dot{\phi}+V'(\phi)=0\,.$$ Loop quantization ================= While a Wheeler–DeWitt quantization would start with a Schrödinger representation and work with wave functions $\psi(a)$ such that $a$ is represented as a multiplication operator and its momentum related to $\dot{a}$ by a derivative operator, the loop quantization implies an inequivalent representation [@Bohr]. Here, one usually starts in the connection representation such that states are functions of $c$, an orthonormal basis of which is given by $$\label{states} \langle c|\mu\rangle = e^{i\mu c/2}, \qquad \mu\in{\mathbb R}\,.$$ Since these states are by definition normalized, it is clear that the Hilbert space is non-separable (it does not have a countable basis) and that the representation is inequivalent to that assumed in the Wheeler–DeWitt quantization. Basic operators, which quantize $p$ and $c$, also have properties different from $a$ as a multiplication operator or its conjugate as a derivative operator. 
The action of basic operators on states (\[states\]) is given by $$\begin{aligned} \hat{p}|\mu\rangle &=& {\textstyle\frac{1}{6}}{\ell_{\rm P}}^2\mu|\mu\rangle\label{p}\\ \widehat{e^{i\mu'c/2}}|\mu\rangle &=& |\mu+\mu'\rangle \label{c}\end{aligned}$$ with the Planck length ${\ell_{\rm P}}=\sqrt{8\pi G\hbar}$. Thus, since all eigenstates $|\mu\rangle$ of $\hat{p}$ are normalizable, $\hat{p}$ has a discrete spectrum. Moreover, there is only an operator for the exponential of $c$, not $c$ directly. Both properties are very different from the corresponding operators in the Wheeler–DeWitt quantization where the analog of $p$, the scale factor $a$, has a continuous spectrum and its momentum has a direct quantization. On the other hand, the properties of the basic operators (\[p\]), (\[c\]) are analogous to those in the full theory, where also flux operators quantizing the triad have discrete spectra and only holonomies of the connection are well-defined operators but not the connection itself. In the full theory, these properties are consequences of the background independent formulation: One has to smear the basic fields given by the connection and the densitized triad in order to have a well-defined Poisson algebra to represent on a Hilbert space. In field theory this is usually done in a three-dimensional way using the background metric to provide a measure. This is certainly impossible in a background independent formulation, but there are natural, background independent smearings of the connection along one-dimensional curves and of the densitized triad along surfaces. Their algebra, the holonomy-flux algebra, is well-defined and one can then look for representations. Here, it turns out that there is a unique one carrying a unitary action of the diffeomorphism group [@FluxAlg; @Meas; @HolFluxRep; @SuperSel; @WeylRep]. 
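The algebra of the basic operators (\[p\]), (\[c\]) can be checked with elementary bookkeeping; the toy sketch below (with $\ell_{\rm P}=1$, states stored as coefficient dictionaries) verifies $[\hat p,\widehat{e^{i\mu' c/2}}]\,|\mu\rangle = ({\ell_{\rm P}}^2\mu'/6)\,\widehat{e^{i\mu' c/2}}\,|\mu\rangle$ on a basis state:

```python
# a state is a dict {mu: coefficient}; lP = 1 for simplicity
def p_hat(state, lP=1.0):
    # \hat{p}|mu> = (lP^2 mu / 6)|mu>   (eq. (p))
    return {mu: (lP**2 * mu / 6.0) * c for mu, c in state.items()}

def shift_hat(state, mu_prime):
    # \widehat{e^{i mu' c / 2}}|mu> = |mu + mu'>   (eq. (c))
    return {mu + mu_prime: c for mu, c in state.items()}

psi = {2.0: 1.0}                       # the basis state |mu = 2>
lhs = p_hat(shift_hat(psi, 3.0))       # \hat{p} e^{...} |mu>
rhs = shift_hat(p_hat(psi), 3.0)       # e^{...} \hat{p} |mu>
comm = {mu: lhs[mu] - rhs.get(mu, 0.0) for mu in lhs}
print(comm)   # {5.0: 0.5} = (lP^2 mu' / 6) |mu + mu'> with mu' = 3
```

This is pure bookkeeping, but it makes concrete how the shift operator replaces a direct quantization of $c$ while still encoding the canonical pairing with $p$.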
In this representation, fluxes and thus spatial geometric operators which are built from triad components have discrete spectra as a direct consequence of background independence. This representation is then carried over to symmetric models such that triads have discrete spectra, too, and only exponentials of connection components are directly represented. These properties of the loop representation define the structure of the algebra of basic operators, and they have far-reaching consequences: [Diagram: discrete triad $\to$ finite inverse volume; only holonomies $\to$ discrete evolution; both combine to give non-singular evolution, and lead respectively to non-perturbative modifications and to higher order terms.] As a consequence of the discrete triad spectrum, operators quantizing the inverse volume are finite despite the classical divergence. This already signals a more regular behavior at the classical singularity which has to be confirmed by using the quantum dynamics. This dynamics, as a consequence of the second basic property, that only exponentials of $c$ are represented, happens in discrete internal time. Together with the properties of inverse volume operators this combines to give a non-singular cosmological evolution. Moreover, inverse volume operators imply non-perturbative modifications to the classical Friedmann equation, while the second basic property leads to perturbative higher order terms. Both corrections have phenomenological consequences. Non-singular evolution ====================== In this section we discuss the consequences of the basic loop properties for the quantum evolution [@Sing; @DynIn; @Essay]. In the following section we will then turn to phenomenological consequences [@Inflation]. 
Finite inverse volume --------------------- Since, as one of the basic loop effects, the triad operator $\hat{p}$ has a discrete spectrum containing zero it does not have a densely defined inverse. On the other hand, if we want to quantize a matter Hamiltonian such as (\[Hmatter\]) which enters the dynamics, we always need inverse powers of the scale factor in the kinetic term. It seems that quantum cosmology based on a loop representation would already come to an end at this basic step. However, there are general methods in the full theory [@QSDV] which allow us to find a well-defined quantization of $a^{-3}$. To that end we first rewrite the classical expression in an equivalent way which is better suited to quantization. Since such a rewriting can be done in many classically equivalent ways, this in general leads to quantization ambiguities in non-basic operators. For instance, the inverse volume can be written as $$\label{densclass} d(a):=a^{-3}= \left(\frac{3}{8\pi Glj(j+1)(2 j+1)}\sum_{I=1}^3{\rm tr}_j(\tau_I h_I\{h_I^{-1},|p|^l\})\right)^{3/(2-2l)}$$ where $j\in\frac{1}{2}{\mathbb N}$ (denoting the SU(2) representation in which we take the trace of holonomies $h_I=\exp(c\tau_I)$ with SU(2) generators $\tau_I=-i\sigma_I/2$ in terms of Pauli matrices $\sigma_I$) and $0<l<1$ are ambiguity parameters. The advantage of these new expressions is that we now have only positive powers of $p$ on the right hand side which, as well as the holonomies, we can easily quantize. The Poisson bracket will then be turned into a commutator at the quantum level resulting in a well-defined operator whose eigenvalues $$\widehat{d(a)}_{\mu}^{(j,l)} = \left(\frac{9}{\ell_{\rm P}^2lj(j+1)(2j+1)} \sum_{k=-j}^j k|p_{\mu+2k}|^l\right)^{3/(2-2l)}$$ on eigenstates $|\mu\rangle$ follow from the action of basic operators. Since this operator is finite [@InvScale], the classical divergence of $a^{-3}$ is now indeed removed as can be seen from the eigenvalues. 
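The eigenvalue formula can be evaluated directly; a sketch with $\ell_{\rm P}=1$ and the common choice $j=1/2$, $l=1/2$ (illustrative parameters), checking finiteness at $\mu=0$ and the approach to the classical $a^{-3}=|p|^{-3/2}$ for large $\mu$:

```python
def d_eigen(mu, j, l, lP=1.0):
    """Eigenvalue of the inverse volume operator on |mu>, with p_nu = lP^2 nu / 6."""
    p = lambda nu: lP**2 * nu / 6.0
    ks = [-j + i for i in range(int(round(2 * j)) + 1)]   # k = -j, -j+1, ..., j
    s = sum(k * abs(p(mu + 2 * k))**l for k in ks)
    pref = 9.0 / (lP**2 * l * j * (j + 1) * (2 * j + 1))
    return (pref * s)**(3.0 / (2.0 - 2.0 * l))

# finite (in fact vanishing) at the classical singularity mu = 0 ...
print(d_eigen(0, 0.5, 0.5))                        # 0.0
# ... and approaching the classical a^{-3} = |p|^{-3/2} for large mu:
print(d_eigen(100, 0.5, 0.5) * (100.0 / 6.0)**1.5) # close to 1
```

For $j=l=1/2$ the eigenvalue reduces to $(\sqrt{6}/\ell_{\rm P})^3(\sqrt{\mu+1}-\sqrt{|\mu-1|})^3$, which makes both limits transparent.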
In particular for larger $j$, which are sometimes helpful for phenomenological purposes, the exact expression can be difficult to use and the approximation [@Ambig; @ICGC] $$\label{deff} d(a)^{(j,l)}_{\rm eff}:= \widehat{d(a)}_{\mu(a^2)}^{(j,l)}= a^{-3} p_l(3a^2/j\ell_{\rm P}^2)^{3/(2-2l)}$$ with $\mu(p)=6p/\ell_{\rm P}^2$ and $$\begin{aligned} p_l(q) &=&\frac{3}{2l}q^{1-l}\left( \frac{1}{l+2} \left((q+1)^{l+2}-|q-1|^{l+2}\right)\right.\\ && - \left.\frac{1}{l+1}q \left((q+1)^{l+1}-{\rm sgn}(q-1)|q-1|^{l+1}\right)\right)\nonumber\end{aligned}$$ is helpful. Even though there are quantization ambiguities, the important properties such as the finiteness of the operator and its classical limit are robust. Difference equation ------------------- In order to consider dynamics and to decide whether or not the classical singularity persists as a boundary to the evolution, we need to quantize the Friedmann equation [@IsoCosmo]. This is most conveniently expressed in the triad representation given by coefficients $\psi_{\mu}(\phi)$ in an expansion $|\psi\rangle=\sum_{\mu}\psi_{\mu}(\phi)|\mu\rangle$ in triad eigenstates. Since we have to use the basic operator (\[c\]) which is a shift operator on triad eigenstates, the quantized Friedmann equation becomes a difference equation for $\psi_{\mu}$: $$\begin{aligned} && (V_{\mu+5}-V_{\mu+3})e^{ik}\psi_{\mu+4}(\phi)- (2+k^2) (V_{\mu+1}-V_{\mu-1})\psi_{\mu}(\phi)\\\nonumber &&+ (V_{\mu-3}-V_{\mu-5})e^{-ik}\psi_{\mu-4}(\phi) = -\frac{4}{3}\pi G\ell_{\rm P}^2\hat{H}_{\rm matter}(\mu)\psi_{\mu}(\phi)\end{aligned}$$ in terms of volume eigenvalues $V_{\mu}=(\ell_{\rm P}^2|\mu|/6)^{3/2}$. There are also possible ambiguities in this constraint, for instance analogous to the parameter $j$ above which have been analyzed in [@AmbigConstr]. Moreover, a symmetrized version is possible, which we do not discuss here for simplicity. 
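How the recursion passes the classical singularity can be made explicit in the free case $\hat H_{\rm matter}=0$ with $k=0$ (so $e^{\pm ik}=1$) and $\ell_{\rm P}=1$; this is only a schematic check of the mechanism discussed in the next section: the leading coefficient $V_{\mu+5}-V_{\mu+3}$ vanishes exactly when solving for $\psi_0$ (at $\mu=-4$), but $\psi_0$ also drops out of all later equations since $V_1=V_{-1}$, so the wave function remains uniquely determined on both sides of $\mu=0$:

```python
def V(mu, lP=1.0):
    # volume eigenvalues V_mu = (lP^2 |mu| / 6)^{3/2}
    return (lP**2 * abs(mu) / 6.0)**1.5

def step(mu, psi_prev, psi_cur):
    """Solve the k = 0, H_matter = 0 constraint at internal time mu for psi_{mu+4}."""
    return (2.0 * (V(mu + 1) - V(mu - 1)) * psi_cur
            - (V(mu - 3) - V(mu - 5)) * psi_prev) / (V(mu + 5) - V(mu + 3))

psi = {-8: 1.0, -4: 1.0}   # initial data on the lattice 4Z, on the negative side
for mu in (-4, 0, 4):
    if mu == -4:
        # leading coefficient V_1 - V_{-1} = 0: psi_0 is undetermined,
        # but it multiplies a vanishing coefficient in every later equation
        psi[0] = 123.0     # arbitrary value, has no effect below
    else:
        psi[mu + 4] = step(mu, psi[mu - 4], psi[mu])
print(V(1) == V(-1), psi[4], psi[8])   # evolution continues past mu = 0
```

Note that $\psi_4$ comes out independent of the arbitrary value assigned to $\psi_0$, which is the decoupling at work.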
The evolution dictated by this difference equation in internal time $\mu$ does not stop at any finite value of $\mu$. In particular, we can uniquely evolve initial values for the wave function through the classical singularity situated at $\mu=0$. Thus, there is no singularity where energy densities would diverge or the evolution would stop. This comes about as a consequence of the basic loop properties: the discreteness of spatial geometry leads to finite operators for the inverse volume as well as evolution in discrete internal time. Both properties enter in the demonstration of singularity free evolution. Physically, this means that around the classical singularity continuous space-time and with it the classical theory dissolve. Discrete quantum geometry, on the other hand, still makes sense and allows us to evolve to the other side of the classical singularity. Phenomenology ============= The density (\[densclass\]) does not just give us a kinematical hint for the removal of classical singularities, it is also important as an ingredient in matter Hamiltonians such as (\[Hmatter\]). Since at small scales the classical $a^{-3}$ is modified by (\[deff\]), we obtain modified Hamiltonian equations of motion and a modified Friedmann equation. For a scalar they are the effective Friedmann equation $$\label{effFried} 3a\dot{a}^2=8\pi G \left({\textstyle\frac{1}{2}}d(a)_{\rm eff}\, p_{\phi}^2+a^3 V(\phi)\right)\,,$$ and Raychaudhuri equation $$\label{effRay} \frac{\ddot{a}}{a}= -\frac{8\pi G}{3}\left( a^{-3}d(a)_{\rm eff}^{-1}\dot{\phi}^2 \left(1-{\textstyle\frac{1}{4}}a\frac{{{\rm d}}\log(a^3d(a)_{\rm eff})}{{{\rm d}}a}\right) -V(\phi)\right)$$ for the scale factor and the effective Klein–Gordon equation $$\label{effKG} \ddot{\phi}=\dot{\phi}\,\dot{a}\frac{{{\rm d}}\log d(a)_{\rm eff}}{{{\rm d}}a}-a^3d(a)_{\rm eff}V'(\phi)$$ for the scalar. Other matter types have been discussed in [@Metamorph]. 
These modifications from $d(a)$ at small scales lead to diverse effects which give us a new picture of the very early universe. Before discussing these effects we note that even though small scales behave very differently from large ones, there is an interesting duality in the effective equations which can be helpful in analyzing solutions [@JimDual]. Inflation --------- At small $a$, the effective density $d(a)_{\rm eff}\sim a^{3/(1-l)}$ is increasing since $0<l<1$. Thus, in contrast to the classically decreasing $a^{-3}$ the effective density implies a matter energy on the right hand side of the Friedmann equation (\[effFried\]) which increases with the scale factor. Since the negative change of energy with volume defines pressure, this quantum geometry effect naturally implies an inflationary phase in early stages [@Inflation]. As demonstrated in Fig. \[Infl\], inflation automatically ends when the peak of the effective density is reached such that there is no graceful exit problem. ![The effective density (left) implying early inflation (right), which is not realized with the classical density (dotted). \[Infl\]](Infl.eps){width="12cm"} Since the modification is present in the kinetic term of any matter field, we do not need to assume $\phi$ to be an inflaton with special properties. Thus, there are several different scenarios depending on whether or not we assume an inflaton field to drive the expansion. It is, of course, more attractive to work without a special field, but it also leads to complications since many of the techniques to evolve inhomogeneities are not available. Nevertheless, recent results [@GenericInfl; @PowerLoop] suggest that this phase can generate a nearly scale invariant spectrum which is consistent with observations but also provides characteristic signatures compared to other inflation models. 
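The behavior described above is easy to reproduce numerically from (\[deff\]); a sketch with the illustrative choices $j=1/2$, $l=1/2$, $\ell_{\rm P}=1$, showing the effective density increasing below its peak and reverting to the classical $a^{-3}$ above it:

```python
def p_l(q, l):
    """The correction function p_l(q) from the text."""
    s = 1.0 if q >= 1.0 else -1.0                  # sgn(q - 1)
    term1 = ((q + 1.0)**(l + 2) - abs(q - 1.0)**(l + 2)) / (l + 2)
    term2 = q * ((q + 1.0)**(l + 1) - s * abs(q - 1.0)**(l + 1)) / (l + 1)
    return 1.5 / l * q**(1.0 - l) * (term1 - term2)

def d_eff(a, j=0.5, l=0.5, lP=1.0):
    """Effective density d(a)_eff = a^-3 p_l(3 a^2 / (j lP^2))^{3/(2-2l)}."""
    q = 3.0 * a * a / (j * lP**2)
    return a**-3 * p_l(q, l)**(3.0 / (2.0 - 2.0 * l))

# increasing in the quantum regime, peak near 3 a^2 ~ j lP^2, classical a^-3 beyond:
for a in (0.15, 0.25, 0.4, 1.0, 2.0):
    print(a, d_eff(a), a**-3)
```

Since the matter energy then grows with $a$ below the peak, the corresponding pressure is negative there, which is the origin of the inflationary phase; past the peak $d(a)_{\rm eff}a^3\to 1$ and the classical behavior is recovered.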
However, this phase alone cannot lead to a large enough universe (unless one assumes the parameter $j$ to be unnaturally large) so that we need a second phase of accelerated expansion whose properties are not restricted so tightly since it would not give rise to observable anisotropies. Such a second phase of accelerated expansion also follows naturally from loop quantum cosmology with matter fields: The effective Klein–Gordon equation (\[effKG\]) has a $\dot{\phi}$-term which changes sign at small scales. This means that the usual friction in an expanding universe turns into antifriction very early on [@Closed]. Matter fields are then driven away from their minima in the first inflationary phase and, after this phase stops and the classical equations become valid, slowly roll down their potentials (Fig. \[Push\]). As usual, this will lead to a second (or more) inflationary phase which makes the universe big. ![ During the modified phase, $a(t)$ is accelerated and $\phi$ moves up its potential (quadratic in this example) if it starts in a minimum. At the peak value of the effective density, indicated by the dashed lines, this first phase of inflation stops, but there will be a second phase (right hand side) when $\phi$ rolls down its potential before it oscillates around the minimum. Left hand side plots are in slow motion: each tic mark on the right $t$-axis stands at an increase of $t$ by 100. The upper right data are rescaled so as to fit on the same plot. Units for $a$ and $\phi$ are Planck units, and parameters are not necessarily realistic but chosen for plotting purposes. \[Push\]](Push.eps){width="12cm"} This effect also applies if we do have an inflaton field. It will then be driven to large values in the loop inflationary phase, providing the necessary large initial values for the phase of slow-roll inflation.
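The friction-to-antifriction flip in the effective Klein–Gordon equation (\[effKG\]) can be caricatured by a damped versus anti-damped oscillator (a toy model with made-up coefficients, not the coupled $a$–$\phi$ system): the $\dot\phi$-term acts like $-c\,\dot\phi$ classically and like $+c\,\dot\phi$ at small scales.

```python
# Caricature of (effKG): phi'' = c*phi' - phi with a unit-curvature
# potential.  c < 0 mimics classical Hubble friction (-3(adot/a)phi'),
# c > 0 the small-scale antifriction from d log d_eff / da > 0.
# Coefficients are illustrative, not derived from the loop equations.

def final_amplitude(c, phi=0.1, v=0.0, dt=1e-3, steps=20000):
    for _ in range(steps):
        acc = c * v - phi
        v += acc * dt
        phi += v * dt
    return (phi * phi + v * v) ** 0.5  # energy-like amplitude measure

print(final_amplitude(+0.2) > 0.1)  # True: antifriction drives phi away
print(final_amplitude(-0.2) < 0.1)  # True: friction damps phi
```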
Moreover, since slow-roll conditions are violated around the turning point of the scalar, there are deviations at very early stages of the slow-roll phase which can explain the observed loss of power on large scales of the anisotropy spectrum [@InflationWMAP; @Robust]. Bounces ------- Both effects, the modified matter energy in the Friedmann equation and the antifriction term in the Klein–Gordon equation, can also lead to bounces in systems which would be singular classically. This provides intuitive explanations for the absence of singularities [@BounceClosed; @BounceQualitative; @GenericBounce] and can be used for the construction of universe models [@Oscill; @Cyclic; @InflOsc]. For a bounce we need $\dot{a}=0$, $\ddot{a}>0$, which is possible only if we have a negative contribution to the matter energy in the Friedmann equation. This can come from a curvature term or from a negative potential. Classically, the second condition would then be impossible to satisfy generically as a consequence of the singularity theorems. In the isotropic case with a scalar, this can also be seen from the Raychaudhuri equation (\[effRay\]), whose right hand side is negative in both cases: it is strictly negative with a negative potential and, if we have a curvature term, $\dot{\phi}$ will diverge at small $a$ and dominate over the potential. However, when the modification becomes effective, the $\dot{\phi}^2$-term in the Raychaudhuri equation can become positive [@Cyclic]. Moreover, due to antifriction $\dot{\phi}$ will not diverge in the closed case, so that the potential can generically lead to a positive $\ddot{a}$ [@BounceClosed]. Quantum degrees of freedom -------------------------- The modifications used so far relied only on non-perturbative corrections coming from the finiteness of density operators. (In this context also perturbative corrections from the inverse scale factor appear above the peak, but so far they do not seem significant for cosmology [@PowerPert].)
In addition, there are also perturbative corrections which are analogous to higher order or derivative terms in an effective action. Since the methods underlying loop quantum gravity are canonical, deriving an effective action is not possible in a direct manner. Nevertheless, there are methods to derive the corresponding terms in Hamiltonian equations of motion [@Perturb; @Josh], and the appearance of quantum degrees of freedom, as would be the case for effective actions with higher derivative terms, can also be seen here. Since there are usually many different correction terms, it is not always easy to tell which one is dominant, if any at all. This is different from the non-perturbative effects in the density which can be studied in isolation by choosing a large value for the ambiguity parameter $j$. This is not always possible for other corrections, but one can test them numerically as demonstrated in [@Time] where a numerical evolution of the wave function under the Hamiltonian constraint in coordinate time has been compared with solutions to the effective classical equations with non-perturbative as well as perturbative corrections. One example is a term resulting from the spread of the wave packet which can give another explanation for bounces (Fig. \[Bounce\]). ![Contour lines of a wave packet starting at the bottom and moving toward the classical singularity (dotted line) where it bounces off, compared to a solution to the effective classical equation (thick dashed line) derived under the assumption of a Gaussian. After the bounce, the wave function departs rapidly from a Gaussian and deviates from the effective classical solution. \[Bounce\]](Bounce.eps){width="10cm"} A different method to derive effective classical equations is obtained from WKB techniques [@EffHam; @DiscCorr]. Here, it is more difficult to deal with new degrees of freedom arising in higher order approximations. 
Less symmetric models ===================== All the techniques described so far for the isotropic model are now also available for homogeneous but not necessarily isotropic models [@HomCosmo; @Spin]. Again, this leads to a picture at small scales very different from the classical one. For instance, the Bianchi IX model loses its classical chaos when modifications from effective densities in its curvature potential are taken into account [@NonChaos; @ChaosLQC]. This also allows conclusions for the general situation of inhomogeneous singularities, where the Bianchi IX model plays an important role in the BKL scenario [@BKL]. Here, space is imagined as being composed of almost homogeneous patches, each of which evolves according to the Bianchi IX model. Thus, if the Bianchi IX model becomes non-singular and the BKL picture remains valid, one can expect that singularities in general are removed. However, this assumption that a BKL picture is available even at the quantum level is very strong, so a definite conclusion can only be reached by studying inhomogeneous midi-superspace models and eventually the full theory. Some inhomogeneities are now under control, most importantly in spherical symmetry [@SphSymm; @SphSymmVol; @SphSymmHam], which also allows conclusions for black holes [@Horizon; @Collapse]. The singularity issue, however, is more complicated in such a situation and remains to be resolved. Coming back to the BKL picture, we can see that not only the structure of classical singularities but also the approach to them is changed dramatically in effective loop cosmology. The classical Bianchi IX chaos implies that patches in the BKL picture have to be subdivided rapidly if the almost homogeneous approximation is to be maintained. This goes on without limit to the patch size, which implies unlimited fragmentation and a complicated initial geometry classically.
Without the chaos in the loop model, the fragmentation stops eventually giving rise to a minimal patch size. As can be seen [@NonChaos], the minimal patch size is not smaller than the scale of discreteness in loop quantum gravity thus providing a consistent picture. Conclusions =========== We have reviewed the basic ingredients of loop quantum cosmology and discussed phenomenological consequences. Here we focused mainly on new developments after the last report [@ICGC]. These are in the context of structure formation from loop effects even without an inflaton, new terms in effective classical equations, and better techniques for inhomogeneous models. In particular the latter will be developed further in the near future which will not only bring new ingredients for cosmological investigations but also new applications in the context of black holes. References {#references .unnumbered} ========== Rovelli C 1998 [*Liv. Rev. Rel.*]{} [ **1**]{} 1 http://www.livingreviews.org/Articles/Volume1/1998-1rovelli Thiemann T Introduction to Modern Canonical Quantum General Relativity [*Preprint*]{} gr-qc/0110034 Ashtekar A and Lewandowski J 2004 [*Class. Quantum Grav.*]{} [**21**]{} R53–R152 Rovelli C 2004 [*Quantum Gravity*]{} (Cambridge University Press, Cambridge, UK) Barbero G, J F 1995 [*Phys. Rev. D*]{} [**51**]{} 5507–5510 Immirzi G 1997 [*Class. Quantum Grav.*]{} [**14**]{} L177–L181 Ashtekar A, Baez J C, Corichi A, and Krasnov K 1998 [*Phys. Rev. Lett.*]{} [**80**]{} 904–907 Ashtekar A, Baez J C, and Krasnov K 2001 [*Adv. Theor. Math. Phys.*]{} [**4**]{} 1–94 Domagala M and Lewandowski J 2004 [*Class. Quantum Grav.*]{} [**21**]{} 5233–5243 Meissner K A 2004 [*Class. Quantum Grav.*]{} [**21**]{} 5245–5251 DeWitt B S 1967 [*Phys. Rev.*]{} [**160**]{} 1113–1148 Wiltshire D L 1996 In Robson B. Visvanathan N. and Woolcock W. S. editors, [ *Cosmology: The Physics of the Universe*]{} pages 473–531 (World Scientific, Singapore) Bojowald M and Kastrup H A 2000 [*Class. 
Quantum Grav.*]{} [**17**]{} 3009–3043 Bojowald M 2000 [*Class. Quantum Grav.*]{} [**17**]{} 1509–1526 Bojowald M and Morales-Técotl H A 2004 In [ *Proceedings of the Fifth Mexican School (DGFM): The Early Universe and Observational Cosmology*]{}. [*Lect. Notes Phys.*]{} [**646**]{} 421–462 (Springer-Verlag, Berlin) Ashtekar A, Bojowald M, and Lewandowski J 2003 [*Adv. Theor. Math. Phys.*]{} [**7**]{} 233–268 Sahlmann H 2002 Some Comments on the Representation Theory of the Algebra Underlying Loop Quantum Gravity [*Preprint*]{} gr-qc/0207111 Sahlmann H 2002 When Do Measures on the Space of Connections Support the Triad Operators of Loop Quantum Gravity? [*Preprint*]{} gr-qc/0207112 Okolow A and Lewandowski J 2003 [*Class. Quantum Grav.*]{} [**20**]{} 3543–3568 Sahlmann H and Thiemann T 2003 On the superselection theory of the Weyl algebra for diffeomorphism invariant quantum gauge theories [*Preprint*]{} gr-qc/0302090 Fleischhack C 2004 Representations of the Weyl Algebra in Quantum Geometry [*Preprint*]{} math-ph/0407006 Bojowald M 2001 [*Phys. Rev. Lett.*]{} [**86**]{} 5227–5230 Bojowald M 2001 [*Phys. Rev. Lett.*]{} [**87**]{} 121301 Bojowald M 2003 [*Gen. Rel. Grav.*]{} [**35**]{} 1877–1883 Bojowald M 2002 [*Phys. Rev. Lett.*]{} [ **89**]{} 261301 Thiemann T 1998 [*Class. Quantum Grav.*]{} [**15**]{} 1281–1314 Bojowald M 2001 [*Phys. Rev. D*]{} [**64**]{} 084018 Bojowald M 2002 [*Class. Quantum Grav.*]{} [**19**]{} 5113–5130 Bojowald M 2004 In [*Proceedings of the International Conference on Gravitation and Cosmology (ICGC 2004), Cochin, India*]{}. [*Pramana*]{} [**63**]{} 765–776 Bojowald M 2002 [*Class. Quantum Grav.*]{} [**19**]{} 2717–2741 Vandersloot K 2005 On the Hamiltonian Constraint of Loop Quantum Cosmology [*Preprint*]{} gr-qc/0502082 Singh P 2005 Effective State Metamorphosis in Semi-Classical Loop Quantum Cosmology [*Preprint*]{} gr-qc/0502086 Lidsey J E 2004 [*JCAP*]{} [**0412**]{} 007 Date G and Hossain G M 2005 [*Phys. Rev. 
Lett.*]{} [**94**]{} 011301 Hossain G M 2004 Primordial Density Perturbation in Effective Loop Quantum Cosmology [*Preprint*]{} gr-qc/0411012 Bojowald M and Vandersloot K 2003 [*Phys. Rev. D*]{} [**67**]{} 124023 Tsujikawa S, Singh P, and Maartens R 2004 [*Class. Quantum Grav.*]{} [**21**]{} 5767–5775 Bojowald M, Lidsey J E, Mulryne D J, Singh P, and Tavakol R 2004 [*Phys. Rev. D*]{} [**70**]{} 043530 Singh P and Toporensky A 2004 [*Phys. Rev. D*]{} [**69**]{} 104008 Vereshchagin G V 2004 [*JCAP*]{} [**07**]{} 013 Date G and Hossain G M 2005 [*Phys. Rev. Lett.*]{} [**94**]{} 011302 Lidsey J E, Mulryne D J, Nunes N J, and Tavakol R 2004 [*Phys. Rev. D*]{} [**70**]{} 063521 Bojowald M, Maartens R, and Singh P 2004 [*Phys. Rev. D*]{} [**70**]{} 083517 Mulryne D J, Nunes N J, Tavakol R, and Lidsey J 2004 Inflationary Cosmology and Oscillating Universes in Loop Quantum Cosmology [*Int. J. Mod. Phys. A*]{} to appear Hofmann S and Winkler O 2004 The Spectrum of Fluctuations in Inflationary Quantum Cosmology [*Preprint*]{} astro-ph/0411124 Ashtekar A, Bojowald M, and Willis J [*Preprint*]{} in preparation Willis J 2004 PhD thesis The Pennsylvania State University Bojowald M, Singh P, and Skirzewski A 2004 [*Phys. Rev. D*]{} [**70**]{} 124022 Date G and Hossain G M 2004 [*Class. Quantum Grav.*]{} [**21**]{} 4941–4953 Banerjee K and Date S 2005 Discreteness Corrections to the Effective Hamiltonian of Isotropic Loop Quantum Cosmology [*Preprint*]{} gr-qc/0501102 Bojowald M 2003 [*Class. Quantum Grav.*]{} [**20**]{} 2595–2615 Bojowald M, Date G, and Vandersloot K 2004 [*Class. Quantum Grav.*]{} [ **21**]{} 1253–1278 Bojowald M and Date G 2004 [*Phys. Rev. Lett.*]{} [ **92**]{} 071302 Bojowald M, Date G, and Hossain G M 2004 [*Class. Quantum Grav.*]{} [**21**]{} 3541–3569 Belinskii V A, Khalatnikov I M, and Lifschitz E M 1982 [*Adv. Phys.*]{} [**13**]{} 639–667 Bojowald M 2004 [*Class. Quantum Grav.*]{} [**21**]{} 3733–3753 Bojowald M and Swiderski R 2004 [*Class. 
Quantum Grav.*]{} [**21**]{} 4881–4900 Bojowald M and Swiderski R Spherically Symmetric Quantum Geometry: Hamiltonian Constraint [*Preprint*]{} in preparation Bojowald M. and Swiderski R. 2004 Spherically Symmetric Quantum Horizons [*Preprint*]{} gr-qc/0410147 Bojowald M, Goswami R, Maartens R, and Singh P Non-singular gravitational collapse [*Preprint*]{} in preparation
--- abstract: 'ZnSe nanowire heterostructures were grown by molecular beam epitaxy in the vapour-liquid-solid growth mode assisted by gold catalysts. Size, shape and crystal structure are found to strongly depend on the growth conditions. Both zinc-blende and wurtzite crystal structures are observed using transmission electron microscopy. At low growth temperature, cone-shaped nano-needles are formed. For higher growth temperature, the nanowires are uniform and have a high aspect ratio with sizes of 1–2 $\mu$m in length and 20–50 nm in width as observed by scanning electron microscopy. Growing a nanowire on top of a nano-needle allows us to obtain very narrow nanorods with a diameter less than 10 nm and a low density of stacking fault defects. These results allowed us to insert CdSe quantum dots into a ZnSe nanowire. An efficient photon anti-bunching was observed up to 220 K, demonstrating a high-temperature single-photon source.' address: 'Nanophysics and Semiconductors Group, CEA/CNRS/University J. Fourier, 17 rue des Martyrs, 38000 Grenoble, France' author: - Thomas Aichele - Adrien Tribu - Gregory Sallen - Juanita Bocquel - 'Edith Bellet-Amalric' - Catherine Bougerol - 'Jean-Philippe Poizat' - Kuntheak Kheng - Régis André - Serge Tatarenko title: | CdSe quantum dots in ZnSe nanowires as\ efficient source for single photons up to 220 K --- An appealing application for semiconductor nanowires (NWs) is the inclusion of quantum dots (QDs) into the NW. Due to the narrow lateral size, NW QD heterostructures can be directly grown at defined positions and without the necessity of self-assembly. This is especially important for II-VI materials, where self-assembled QD formation occurs only within narrow windows of growth conditions [@RobinAPL]. Recently, II-VI compound semiconductor NWs have been synthesized by Au-catalysed metal-organic chemical vapour deposition (MOCVD) and molecular-beam epitaxy (MBE) methods [@NW-MOCVD; @NW-MBE; @Colli].
QD devices can be utilized as emitters for the effective and controlled generation of single-photon states. Single-photon emission from a GaAsP/GaP NW QD was reported before at cryogenic temperature in ref. [@Zwiller_NW_AB]. High-temperature experiments on individual Stranski-Krastanov (SK) grown QDs were reported for CdSe/ZnSe QDs [@Sebald] and for GaN/AlN QDs [@Kako]. Both experiments showed photon anti-bunching up to 200 K with normalized dip values of 81% and 53%, respectively. Other systems have demonstrated room temperature single photon emission: nanocrystals [@MichlerNC] have the drawback that they suffer from a blinking effect [@BrokmannBlinking]; colour centres in diamond [@Grangier; @Kurtsiefer] have shown very reliable operation but with a very broad spectrum. However, neither nanocrystals nor colour centres in diamond offer the possibility of electrical excitation, which is a realistic and very promising perspective for semiconducting nanowires [@MinotNWLED]. In this paper, we report MBE growth of ZnSe NWs on Si and GaAs substrates and the insertion of CdSe QDs in these NWs. The growth process is based on the Au-catalyzed VLS method. The morphology of the NWs depending on the growth parameters was examined by scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HRTEM). We found two different growth regimes, resulting in either narrow and uniform NWs, or cone-shaped nano-needles. When combining these growth regimes, NWs with a very low density of defects can be achieved. As an important application of these NW QD structures, we present the generation of single photons from a single QD with a deep anti-bunching of the photon correlation even up to a high temperature of 220 K. ![SEM image of a 0.5 nm thick gold layer dewetted on a Si(100) substrate.
The numbers indicate diameters of a few selected gold particles.](fig1){width="\linewidth"} Prior to the growth, the substrates, (100)-, (111)-GaAs and (100)-Si, were annealed in an ultra-high vacuum (UHV) chamber to 580$^\circ$ C in order to degas the surfaces. Additionally, the GaAs surface deoxidizes at this temperature, while the Si substrate remains oxidized. A thin Au layer (0.2–1 nm thick) was then deposited on the substrate at room temperature by e-beam metal deposition. The samples were then introduced in a II-VI MBE chamber. The transfer between the MBE and metal deposition chambers was performed under UHV. In order to generate Au nanoparticles, the gold film was dewetted by annealing the substrate to 450$^\circ$ C for 10 min. At this temperature, the gold forms nanoparticles, as observed by SEM and AFM (Figure 1(a)). Next, growth of ZnSe wires at different sample temperatures was performed. All samples were grown by solid-source MBE. The growth parameters, such as growth temperature, Zn:Se flux ratio, and gold thickness, were varied independently. The growth rate is in the 0.5 nm/s range. ![ZnSe NW grown at 350$^\circ$ C in excess of Se. (a) SEM image of the as-grown sample; (b) TEM image of a single NW.](fig2){width="\linewidth"} First we studied the effect of the sample temperature on the NW structure, while the beam pressures were kept constant (Zn (Se) flux: 2.5 (7.5) $\times10^{-7}$ torr). When growing at a substrate temperature within 350-450$^\circ$ C, a dense carpet of uniform NWs covers the substrate (fig. 2(a)). They have a diameter of about 20–50 nm and a length of 1–2 $\mu$m. The structure is predominantly a wurtzite structure with the \[0001\]-axis as growth direction. As seen in the TEM image in fig. 2(b), stacking faults due to zinc blende insertions are repeatedly observed along the NW. In addition to the NWs, the substrate is covered with highly irregular nano-structures.
The formation of stacking faults and irregular nano-structures is explained by non-ideal growth conditions at the initial stages of the growth process. Possible reasons are the presence of non-uniform gold agglomerations instead of small gold beads and the insertion of impurities during the gold deposition process. The presence of both wurtzite and zinc blende shows that, under the utilized growth conditions, both phases are allowed. ![ZnSe nano-needle grown at 300$^\circ$ C in excess of Se. (a) SEM image of the as-grown sample; (b) TEM image of a single NW.](fig3){width="\linewidth"} When growing at low temperature (300$^\circ$ C), HRTEM and SEM images reveal the formation of nano-needles with a wide base (80 nm diameter) and a sharp tip (5–10 nm diameter) covered by a gold particle, as seen in fig. 3. Similar results are obtained at a growth temperature of 450$^\circ$ C but with inverted Zn:Se flux ratio. The structure is mainly of wurtzite type with the \[11-20\]-axis as growth direction. Towards the base, the structures are again repeatedly intersected by zinc blende domains, while the tip has a pure wurtzite structure. The formation of nano-needles instead of NWs is well accounted for by the slower adatom mobility expected at low temperature, which promotes nucleation on the sidewalls before the adatoms reach the gold catalyst at the nano-needle tip. A similar idea was proposed in ref. [@Colli] for the formation of nanosaws. In contrast to the NWs, the defect planes are here disoriented with respect to the nano-needle axis. It seems that this disorientation hinders the propagation of defects in the growth direction, especially for lower diameters. Defect zones are rapidly blocked on the side walls, providing a higher structural quality towards the nano-needle tip. Interestingly, the nanowire structure and shape depend only little on the underlying substrate. ![Two-step growth of a ZnSe NW on a nano-needle.
(a) SEM image of the as-grown sample; (b) collage of TEM images of two close-by NWs.](fig4){width="\linewidth"} The observation of a decreasing defect density from the base towards the top in the nano-needles motivated us to modify the growth recipe in the following way: In the first part, the sample is grown with excess of Zn for 30 min, leading to the formation of nano-needles. Next, the Zn- and Se-flux were inverted and NWs were grown for another 30 min on top of the nano-needles. Thus, the growth at the side-walls was stopped and re-growth started on defect-free and strain-relaxed nano-needle tips, where the high structural quality of the crystal lattice can be preserved along the narrow NW that is now formed in this second growth step. Fig. 4 shows results obtained from this sample. The structures have a broad base that tapers after a few tens of nanometers to thin NWs with a thickness of 10–15 nm. When studying single NWs by HRTEM, we see that the density of stacking faults indeed strongly decreases towards the thin part of the NW. ![Ensemble spectrum from (a) the nano-needle sample in fig. 3, and (b) the combined NW/nano-needle sample in fig. 4.](fig5){width="\linewidth"} The suppression of such stacking faults becomes important when observing the spectral properties of a single NW QD. The spectra were taken at a sample temperature of 5 K. The samples were excited by a cw laser at 405 nm via a microscope objective. Fig. 5(a) shows the spectra from an ensemble of nano-needles. The peak at 440 nm belongs to ZnSe bandedge emission. This is mostly due to bulk ZnSe that is grown in parallel to the nano-needles on the sample floor. Single nano-needles and NWs (not shown here) only show a very weak contribution of the ZnSe bandedge. More strikingly, we observed a broad emission within 500–600 nm. This can be attributed to excitons localized at the defect zones in the nano-needle [@Philipose2]. In contrast, for the combined NW/nano-needle structure, fig.
5(b), this emission is mostly suppressed, leaving behind only a few spectral lines between 450 and 500 nm. These presumably stem from the broader base of these structures. When single structures are isolated (see below), they will most likely break at the narrow NW part, which contains far fewer defects, so that we find single NWs in which this broad emission is completely suppressed. Under these conditions, it finally becomes possible to grow and study single NW samples with an inserted CdSe QD. In order to prepare QDs, a small region of CdSe was inserted in this high quality part of the ZnSe NW. This is done by interrupting the ZnSe growth after 30 min and changing to CdSe for 30 s. Next, the ZnSe growth is continued for another 15 min. The diameter is of the order of the bulk exciton Bohr diameter for CdSe (11 nm). This means that the carriers in the CdSe QD are in the strong confinement regime. For the study of single NWs, the sample is put in a methanol ultra-sonic bath for 30 s, in which some NWs break off the substrate into the solution. This process also detaches mainly the high quality part of the NW from the nano-needles where many stacking faults are present. Droplets of this solution were next placed on a new substrate, leaving behind a low density of individual NWs. Metal markers were made on the substrate using an optical lithography technique in order to locate the NWs. ![(a) Second-order correlation taken under pulsed excitation at temperatures 4 and 220 K, respectively. The numbers in the graphs are measured values of $g^{(2)}(0)$ (i.e. the area under the peak at $\tau=0$ relative to the peaks at $|\tau|>0$). The numbers in parentheses are the reconstructed values of the [*pure*]{} spectral lines taking into account a spectral background. (b) Spectra from the same NW QD taken in the same experimental run as in (a).](fig6a){width="\linewidth"} Fig. 6(b) shows the spectra from a NW QD at different temperatures.
The spectral line can be attributed to a trion transition. With higher excitation intensity, an exciton and a biexciton transition also become visible. Above 150 K, the spectral lines significantly broaden. We found that the light emission is highly polarized with a contrast of 80–90% (while exciting the NW with circularly polarized laser light). This striking polarization anisotropy of absorption can be explained by the dielectric contrast between the NW material and the surrounding environment, and the orientation of the dipole within the QD [@LieberPolNW; @NiquetPolNW]. We have carried out photon correlation measurements using a Hanbury Brown and Twiss setup. The sample (few NWs on a silicon substrate with micro-markers) was mounted in a He-flow cryostat. The QDs were optically excited with a frequency doubled Ti:sapphire laser emitting 200-fs pulses with a repetition rate of 80 MHz. The laser was focused on the sample by using a microscope objective and the luminescence was collected with the same objective. The QD emission passed through a spectrometer and was then sent to a Hanbury Brown-Twiss correlator based on two silicon avalanche photodiodes (APDs) and a coincidence counter. The graphs in fig. 6(a) are the raw histograms of measured coincidences without any correction for background count events. The area under the peak at time $\tau=0$ was normalized with respect to the average area under the peaks at $|\tau|>0$. Each peak area was calculated by integrating the coincidences within 12-ns windows (repetition time of the excitation laser). The correlation functions were taken at different temperatures between 4 K and 220 K. At 4 K, the peak at $\tau=0$ is suppressed to a normalized value of 7%, showing the high quality of the single-photon generation. With increasing temperature, this value only slightly increases, finally reaching 36% at 220 K.
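The peak-area normalization just described can be sketched numerically (a toy illustration with made-up counts, not the actual analysis code): integrate coincidences in fixed windows around each laser repetition and divide the $\tau=0$ peak area by the mean area of the $|\tau|>0$ peaks.

```python
import numpy as np

# Sketch of the g2(0) estimate described in the text.  The 12-ns
# window follows the value quoted above; the data below are synthetic.

def g2_zero(delays_ns, rep_ns=12.0, n_side=2):
    areas = {}
    for k in range(-n_side, n_side + 1):
        in_window = np.abs(delays_ns - k * rep_ns) < rep_ns / 2
        areas[k] = np.sum(in_window)
    side = np.mean([a for k, a in areas.items() if k != 0])
    return areas[0] / side

# Toy histogram: 1000 coincidences per side peak, 70 in the central
# peak, mimicking the strong anti-bunching of the 4 K data.
rng = np.random.default_rng(0)
delays = np.concatenate(
    [k * 12.0 + rng.normal(0.0, 0.5, 1000) for k in (-2, -1, 1, 2)]
    + [rng.normal(0.0, 0.5, 70)])
print(round(g2_zero(delays), 2))  # 0.07
```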
This value is far below 50%; the emitted light field is thus clearly distinguished from states with two or more photons. Thus, even without correcting for background events, these emitters can be directly used as a high-quality single-photon device even when operating at high temperature, with a strongly suppressed probability for two-photon events. In the correlation measurement obtained at 220 K, a low-resolution grating was used in the spectrometer so that all the photons coming from the broader line could be counted. This broader spectral window of integration leads to a larger background, which is the main origin of the rise of the $\tau=0$-peak above 150 K. In contrast to SK QDs, which often grow with a high density on the substrate, the density of NWs was much smaller and could even be reduced to only one within the microscope focus, which avoids contributions from neighbouring emitters that spectrally overlap with the transition under observation. In summary, we have performed optical studies on single CdSe QDs in ZnSe NWs. The NWs were grown by MBE in a two-step growth recipe, where narrow, mostly defect-free NWs are grown on top of broader, cone-shaped NWs. The single-NW PL features narrow, isolated spectral lines. When filtering individual transitions, non-classical single-photon statistics were retrieved, indicated by strong anti-bunching, where the [*raw*]{} correlation function was reduced down to a normalized value of 7%. This behaviour persists even up to a temperature of 220 K, where this correlation peak is only slightly increased to 36%. For non-blinking QDs, this is the highest reported temperature for single-photon emission with an anti-bunching dip below 50%. At this temperature, Peltier cooling becomes an alternative to liquid helium or nitrogen cooling.
Together with the possibility of integrating NWs into electro-optical circuits [@MinotNWLED], these emitters become an interesting candidate for developing compact, stable and cost-efficient quantum devices operating near room temperature. Acknowledgements {#acknowledgements .unnumbered} ================ T.A. acknowledges support by Deutscher Akademischer Austauschdienst (DAAD). Part of this work was supported by European project QAP (Contract No. 15848). [00]{} I.-C. Robin, R. Andr[é]{}, C. Bougerol, T. Aichele, and S. Tatarenko, Appl. Phys. Lett. [**88**]{} (2006), 233103. R. Solanki, J. Huo, J. L. Freeouf, and B. Miner, Appl. Phys. Lett. [**81**]{} (2002), 3864. Y. F. Chan, X. F. Duan, S. K. Chan, I. K. Sou, X. X. Zhang, and N. Wang, Appl. Phys. Lett. [**83**]{} (2003), 2665; A. Colli, S. Hofmann, A. C. Ferrari, C. Ducati, F. Martelli, S. Rubini, S. Cabrini, A. Franciosi, and J. Robertson, Appl. Phys. Lett. [**86**]{} (2005), 153103. M. T. Borgström, V. Zwiller, E. Müller, A. Imamoglu, Nano Lett. [**5**]{} (2005), 1439. K. Sebald, P. Michler, T. Passow, D. Hommel, G. Bacher, and A. Forchel, Appl. Phys. Lett. [**81**]{} (2002), 2920. S. Kako, C. Santori, K. Hoshino, S. G[ö]{}tzinger, Y. Yamamoto, and Y. Arakawa, Nature Mat. [**5**]{} (2006), 887. P. Michler, A. Imamoglu, M. D. Mason, P. J. Carson, G. F. Strouse, and S. K. Buratto, Nature [**406**]{} (2000), 968. X. Brokmann, J.-P. Hermier, G. Messin, P. Desbiolles, J.-P. Bouchaud, and M. Dahan, Phys. Rev. Lett. [**90**]{} (2003), 120601. R. Brouri, A. Beveratos, J.-P. Poizat, and P. Grangier, Opt. Lett. [**25**]{} (2000), 1294. C. Kurtsiefer, S. Mayer, P. Zarda, and H. Weinfurter, Phys. Rev. Lett. [**85**]{} (2000), 290. E. D. Minot, F. Kelkensberg, M. van Kouwen, J. A. van Dam, L. P. Kouwenhoven, V. Zwiller, M. T. Borgstr[ö]{}m, O. Wunnicke, M. A. Verheijen, and E. P. A. M. Bakkers, Nano Lett. [**7**]{} (2007), 367. U. Philipose, S. Yang, T. Xu, and H. E. Ruda, Appl. Phys. Lett. [**90**]{} (2007), 063103. J.
Wang, M. S. Gudiksen, X. Duan, Y. Cui, and C. M. Lieber, Science [**293**]{} (2001), 1455. Y. M. Niquet and D. C. Mojica, Phys. Rev. B [**77**]{} (2008), 115316.
--- abstract: | We present optical and near infrared spectroscopy obtained at Keck, VLT, and Gemini for a sample of 36 secure strong gravitational lens systems and 17 candidates identified as part of the SL2S survey. The deflectors are massive early-type galaxies in the redshift range $z_d=0.2-0.8$, while the lensed sources are at $z_s=1-3.5$. We combine these data with photometric and lensing measurements presented in the companion paper III and with lenses from the SLACS and LSD surveys to investigate the cosmic evolution of the internal structure of massive early-type galaxies over half the age of the universe. We study the dependence of the slope of the total mass density profile $\gamma'$ ($\rho(r)\propto r^{-\gamma'}$) on stellar mass, size, and redshift. We find that two parameters are sufficient to determine $\gamma'$ with less than 6% residual scatter. At fixed redshift, $\gamma'$ depends solely on the surface stellar mass density, with $\partial \gamma'/ \partial \Sigma_*=0.38\pm 0.07$, i.e. galaxies with denser stars also have steeper slopes. At fixed $M_*$ and $\reff$, $\gamma'$ depends on redshift, in the sense that galaxies at a lower redshift have steeper slopes ($\partial \gamma' / \partial z = -0.31\pm 0.10$). However, the mean redshift evolution of $\gamma'$ for an individual galaxy is consistent with zero ($\mathrm{d}\gamma'/\mathrm{d}z=-0.10\pm0.12$). This result is obtained by combining our measured dependencies of $\gamma'$ on $z,M_*$,$\reff$ with the evolution of the $\reff$-$M_*$ relation taken from the literature, and is broadly consistent with current models of the formation and evolution of massive early-type galaxies. Detailed quantitative comparisons of our results with theory will provide qualitatively new information on the detailed physical processes at work. author: - 'Alessandro Sonnenfeld$^{*}$' - 'Tommaso Treu$^{\dag}$' - Raphaël Gavazzi - 'Sherry H. Suyu' - 'Philip J. Marshall' - 'Matthew W.
Auger' - Carlo Nipoti bibliography: - 'references.bib' title: 'The SL2S Galaxy-scale Lens Sample. IV. The dependence of the total mass density profile of early-type galaxies on redshift, stellar mass, and size' --- Introduction {#sect:intro} ============ The formation and evolution of early-type galaxies (ETGs) is still an open question. Though frequently labeled as “red and dead” and traditionally thought to form in a “monolithic collapse” followed by “passive” pure luminosity evolution, in the past decade a far more complicated history has emerged [e.g., @Ren06 and references therein]. ETGs are thought to harbor supermassive black holes at their centers [@F+M00; @Geb++00] which regulate the conversion of gas into stars [@DeL++06]. Traces of recent star formation are ubiquitously found when sensitive diagnostics are applied [@Tre++02; @Kav10]. Episodes of tidal disturbances and interactions with other systems occur with remarkable frequency even at recent times [e.g. @m+c83; @pvd05; @Tal++09; @Atk++13]. Their structural properties evolve in the sense that their sizes appear to grow with time at fixed stellar mass [@vDo++08; @Dam++11; @New++12; @Hue++13; @Car++13]. The mode of star formation seems to be different from that found in spiral galaxies, resulting in a different stellar initial mass function [@Tre++10; @v+C10; @Aug++10b; @Bre++12; @Cap++13]. Finally, from a demographic point of view, their number density has been found to have evolved significantly since $z\sim2$ [e.g., @Ilb++13]. Reproducing these observations is an enormous challenge for theoretical models. Major and minor mergers are thought to be the main processes driving their structural and morphological evolution, but it is not clear if they can account for the observed evolution while reproducing all the observables [@Nip++09; @Hop++10d; @Ose++12; @Rem++13]. 
Gravitational lensing, by itself and in combination with other probes, can be used to great effect to measure the mass profiles of early-type galaxies, both in the nearby universe and at cosmological distances [@T+K02a; @T+K02b; @RKK03; @T+K04; @R+K05; @Koo++06; @J+K07; @Gav++07; @Aug++10; @Lag++10]. Until recently, however, this approach was severely limited by the small size of the known samples of strong gravitational lenses. This has motivated a number of dedicated searches which have, in the past decade, increased the sample of known strong gravitational lens systems by more than an order of magnitude [e.g., @Bro++03; @Bol++08; @Fau++08; @Tre++11]. In spite of all this progress the number of known lenses at $z\sim0.5$ and above is still a severe limitation. Increasing this sample and using it as a tool to understand the formation and evolution of massive galaxies is the main goal of our SL2S galaxy-scale lens search [@Gav++12] and other independent searches based on a variety of methods [@Bro++12; @Mar++09; @Neg++10; @Paw++12; @Ina++12; @Gon++12; @War++13; @Vie++13]. In our pilot SL2S paper [@Ruf++11] we measured the evolution of the density slope of massive early-type galaxies by combining lensing and dynamics measurements of a sample of just 11 SL2S lenses with similar measurements taken from the literature [@T+K04; @Koo++09; @Aug++10b], finding tentative evidence that the density profile of massive ETGs steepens with cosmic time on average. This trend was later confirmed qualitatively by an independent study of @Bol++12 and agrees with the theoretical work by @Dub++13. However, the picture is not clear: the observed trend is tentative at best, while different theoretical studies find contrasting evolutionary trends [@JNO++12; @Rem++13]. More data and better models are needed to make progress. 
In order to clarify the observational picture, we have collected a much larger sample of objects, more than tripling the sample of secure lenses with all the necessary measurements, with respect to our pilot study. Photometric and strong lensing measurements for this expanded sample are presented in a companion paper [@PaperIII hereafter Paper III]. In this paper we present spectroscopic data for the same objects. Deflector and source redshifts are used to convert the geometry of the lens system into measurements of a physical mass within a physical aperture. Stellar velocity dispersions are used as an independent constraint on the gravitational potential of the lens, allowing for more diagnostic power on the structure of our targets. The combination of the photometric, lensing, and spectroscopic data is used in this paper to study the cosmic evolution of the slope of the average mass density profile of massive early-type galaxies. This is achieved by fitting power law density profiles ($\rho(r) \propto r^{-\gamma'}$; $\gamma'\approx2$ in the local universe) to the measured Einstein radii and velocity dispersions of our lenses. Such a measurement of $\gamma'$ is a good proxy for the mean density slope within the effective radius. The goal of this paper is to measure trends of $\gamma'$ with redshift, in continuity with our previous work [@Ruf++11], as well as with other structural properties of massive ETGs, such as stellar mass and size. Such measurements will help us understand the structural evolution of ETGs from $z=0.8$ to present times. This paper is organized as follows. We briefly summarize the relevant features of the SL2S galaxy scale lens sample in , and show in detail the spectroscopic data set and the measurements of redshifts and velocity dispersions of our lenses in . In we discuss the properties of SL2S lenses in relation with lenses from independent surveys. 
In we briefly explain how lensing and kinematics measurements are combined to infer the density slope $\gamma'$ and discuss the physical meaning of such measurements. In we combine individual $\gamma'$ measurements to infer trends of this parameter across the population of ETGs. After a discussion of our results in we conclude in . Throughout this paper magnitudes are given in the AB system. We assume a concordance cosmology with matter and dark energy density $\Omega_m=0.3$, $\Omega_{\Lambda}=0.7$, and Hubble constant $H_0$=70 km s$^{-1} $Mpc$^{-1}$. The sample {#sect:sample} ========== The gravitational lenses studied in this paper were discovered as part of the Strong Lensing Legacy Survey [@Cab++07 SL2S] with a procedure described in detail in @Gav++12. Lens candidates are identified in imaging data from the CFHT Legacy Survey and then followed up with *Hubble Space Telescope* (*HST*) high resolution imaging and/or spectroscopy. In Paper III we ranked the candidates, assigning them a grade indicating their likelihood of being strong lenses, with the following scheme: grade A for definite lenses, grade B for probable lenses, grade C for possible lenses and grade X for non-lenses. A summary with the number of systems in each category is given in . In this paper we analyze all lenses with spectroscopic data that have not been ruled out as grade X systems. [lccccc]{} Spectroscopic observations {#sect:spec} ========================== The SL2S spectroscopic campaign was started in 2006. The goal of our spectroscopic observations is to measure the lens and source redshifts and lens velocity dispersion for all our systems. Different telescopes (Keck, VLT and Gemini), instruments (LRIS, DEIMOS, X-Shooter, GNIRS) and setups have been used to achieve this goal, reflecting technical advances during the years and the optimization of our strategy. In what follows we describe the procedure used to measure the three key spectroscopic observables. 
A summary of all measurements is given in . Deflector redshifts and velocity dispersions {#ssect:zd} -------------------------------------------- The typical brightness of our lenses is around $i\sim20$. With an 8m class telescope, their redshift can be measured from their optical absorption lines with $\sim10$ minutes of exposure time, while a measurement of their velocity dispersion typically takes from 30 to 120 minutes. Optical spectroscopy data come from three different instruments. For most of the systems we have data obtained with the LRIS spectrograph at Keck [@Oke++95]. The wavelength coverage of LRIS is typically in the range $3500-8000$ Å for data taken before 2009 and extends up to $10000$ Å for later data, after the installation of the new detector with much reduced fringing patterns [@Roc++10]. The spectral resolution is about 140 km s$^{-1}$ FWHM on the red side of the spectrograph. Data reduction for LRIS spectra was performed with a pipeline written by M.W. Auger. For a set of 13 systems we have VLT observations with the instrument X-Shooter[^1]. X-Shooter has both a higher resolution ($\sim50{\, {\rm km\, s}^{-1}}$) and a longer wavelength coverage (from 3500 Å up to 25000 Å) than LRIS. X-Shooter spectra were reduced with the default ESO pipeline[^2]. The observations were done by nodding along a long slit of width $0\farcs9$ for the UVB and VIS arms and $1\farcs0$ for the NIR arm. Finally, six systems presented here have data obtained with DEIMOS at Keck [@Fab++03]. The grating used in all DEIMOS observations is the 600ZD, with a wavelength range between 4500 Å and 9500 Å and a spectral resolution of about $160{\, {\rm km\, s}^{-1}}$. DEIMOS data were reduced with the DEEP2 pipeline [@Coo++12; @New++12]. Both redshifts and velocity dispersions are measured by fitting stellar templates, broadened with a velocity kernel, to the observed spectra. 
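The broadening step at the heart of these fits can be sketched as follows: on a uniform log-wavelength grid a velocity shift is a constant pixel shift, so smearing by the line-of-sight velocity distribution reduces to convolution with a Gaussian kernel whose width in pixels is set by the dispersion. This is only an illustrative sketch of that step, not the actual pipeline, and the function name is our own:

```python
import math

def broaden_template(flux, dloglam, sigma_kms, c_kms=299792.458):
    """Convolve a template sampled on a uniform log-wavelength grid (step
    dloglam in ln-lambda) with a Gaussian velocity kernel of width sigma_kms."""
    # Velocity dispersion -> kernel width in pixels.
    sigma_pix = (sigma_kms / c_kms) / dloglam
    half = int(math.ceil(5 * sigma_pix))
    kernel = [math.exp(-0.5 * (k / sigma_pix) ** 2) for k in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(flux)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kj in enumerate(kernel):
            idx = i + j - half
            if idx < 0:
                idx = 0          # extend the edges with the boundary value
            elif idx >= n:
                idx = n - 1
            acc += kj * flux[idx]
        out.append(acc)
    return out
```

Applied to a sharp absorption feature, the convolution makes the line shallower and wider while leaving a flat continuum untouched, which is exactly the behaviour the velocity-dispersion fit exploits.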
This is done in practice with a Markov chain Monte Carlo adaptation of the velocity dispersion fitting code by @vandermarel1994, written by M. W. Auger and described by @Suy++10. We used 7 different templates of G and F stars, which should provide an adequate description of the stars in red passive galaxies, taken from the Indo-US stellar library. The code also fits for an additive polynomial continuum, to accommodate template mismatch effects or imperfections in the instrumental response correction. In most cases, a polynomial of order five is used. The rest-frame wavelength range typically used in our fits is $3850\AA-5250\AA$, which brackets important absorption lines such as Ca K,H at $3934,3967\AA$, the G-band absorption complex around $4300\AA$ and Mgb at $5175\AA$. Depending on the redshift of the target and the instrument used, this is not always possible, as part of the wavelength region can fall outside the spectral coverage allowed by the detector, or because of telluric absorption. In those cases the fitted rest-frame region is extended. Systematic uncertainties in the velocity dispersion measurements are estimated by varying the fitted wavelength region and the order of the polynomial continuum. These are typically on the order of $20{\, {\rm km\, s}^{-1}}$ and are then added in quadrature to the statistical uncertainty. All the optical spectra of our systems are shown in . Source spectroscopy {#ssect:zs} ------------------- Measuring the redshift of a lensed background source is important not only for determining the geometry of the gravitational lens system, but also to confirm that the arc is actually in the background relative to the lens. The arcs of the lensed sources are relatively faint in broad band photometry ($g\sim24$), implying that their continuum radiation cannot be detected in most cases. However, the sources are selected to be blue [@Gav++12] and are often associated with emission lines from star formation and/or nuclear activity.
The typical redshifts of our arcs are in the range $1<z<3$. This means that optical spectroscopy can effectively detect emission from the \[OII\] doublet at $3727-3729\AA$, for the lowest redshift sources, or Ly-$\alpha$ for objects at $z>2.5$ or so. This is the case for roughly half of the systems observed. The remaining half does not show detectable emission lines in the observed optical part of the spectrum, either because the most important lines fall in the near-infrared, or because the emission is too weak. Emission lines from the arcs can be easily distinguished from features of the lens because they are spatially offset from the lens light. X-Shooter observations proved to be remarkably efficient in measuring source redshifts. This is by virtue of its wavelength range, which extends through the near infrared up to $25000\AA$, and its medium resolution, which limits the degrading effect of emission lines from the atmosphere. Of the 13 systems observed with X-Shooter, 12 yielded a redshift of the background source, all with at least two identified lines. In addition, for four systems we have near infrared spectroscopic observations with the instrument GNIRS on Gemini North (PI Marshall, GN-2012B-Q-78, PI Sonnenfeld, GN-2013A-Q-91), used in cross-dispersed mode, covering the wavelength range $10000\AA-25000\AA$ at once. Of the four systems observed, two show two emission lines from the background source. In most cases when only optical spectroscopy is available, only one emission line is detected over the whole spectrum. The \[OII\] doublet can be easily identified even with relatively low resolution spectrographs. The identification of the Ly-$\alpha$ line is less trivial. Ly-$\alpha$ is typically the brightest emission line in the rest frame wavelength range $1000-3000\AA$ when present, but other emission lines like CIV 1546Å, OIII 1666Å or CIII 1908Å can sometimes be seen.
When we detect an emission line close to the blue end of the spectrum it could in principle be any of those lines. However, a detection of one of the above lines and a non-detection of the other ones is quite unlikely, unless CIII 1908Å falls right at the blue edge of the observed spectrum. In that case, though, we should expect to observe the OII doublet at redder wavelengths. This case is never encountered; therefore, whenever we detect an unresolved emission line bluer than $6000\AA$, and no other lines, we can safely assume it is Ly-$\alpha$. The system SL2SJ022357-065142 is a particular case: we detected an emission line spatially associated with the background source at $9065\AA$, with a $5-\sigma$ significance. Given the low S/N, the line is compatible with being either the OII doublet or an individual line. Possible other lines are OIII 5007Å and H-$\beta$, which cannot be ruled out. Therefore we do not claim a redshift measurement for that source: deeper data are needed to establish whether the line is the OII doublet or not. The 2d spectra around all the detected emission lines for all the systems are shown in . Note that for some systems the line emission is multiply imaged on both sides of the foreground object. This provides a decisive clue to the lens nature of those systems, important when ranking our targets by their likelihood of being lenses (Paper III). Finally, six background sources are bright enough to be visible with continuum radiation and several absorption/emission features can be identified in their spectra. The absorption line spectra of these sources are plotted in . Despite our efforts in acquiring spectroscopic data for our lenses, seven of the 36 grade A lenses with spectroscopic follow-up have no measured source redshifts. In Paper II @Ruf++11 made use of photometric data together with lensing cross-section arguments to estimate source redshifts, with a technique called [*photogeometric redshift*]{}.
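The line-identification argument above can be made mechanical: each candidate identification of a single detected line implies a redshift, which in turn predicts where the companion lines should fall relative to the spectral coverage. A minimal sketch (rest wavelengths as quoted in the text; function names are our own):

```python
# Rest-frame wavelengths in Angstrom of the candidate lines quoted in the text.
REST_LINES = {
    "Lya": 1215.67,
    "CIV": 1546.0,
    "OIII]1666": 1666.0,
    "CIII]1908": 1908.0,
    "OII": 3727.0,
}

def candidate_redshifts(lam_obs):
    """Redshift implied by each possible identification of one detected line."""
    return {name: lam_obs / lam0 - 1.0 for name, lam0 in REST_LINES.items()}

def predicted_companions(lam_obs, assumed_line, coverage=(3500.0, 9500.0)):
    """Assuming one identification, predict where the other candidate lines
    would land, flagging those that fall inside the spectral coverage."""
    z = candidate_redshifts(lam_obs)[assumed_line]
    lo, hi = coverage
    return {name: (lam0 * (1.0 + z), lo <= lam0 * (1.0 + z) <= hi)
            for name, lam0 in REST_LINES.items() if name != assumed_line}
```

For a line detected near the blue edge interpreted as CIII 1908Å, the \[OII\] doublet is predicted well inside the optical coverage, which is exactly the consistency check used in the argument above.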
Here the fraction of lenses with no source redshift is small compared to the sample size, therefore it is not essential to include them in the analysis through the use of this method. [lccccccccccc]{} [lccccccccccc]{} ![image](vdspec_1.eps){width="90.00000%"} ![image](vdspec_2.eps){width="90.00000%"} ![image](vdspec_3.eps){width="90.00000%"} ![image](vdspec_4.eps){width="90.00000%"} ![image](sourcespec.eps){width="90.00000%"} ![image](allemlines.eps){width="90.00000%"} Sample characterization {#sect:class} ======================= In Paper III we presented effective radii, magnitudes, stellar masses and Einstein radii of our lenses. Here we complement this information with lens and source redshifts, and lens velocity dispersions. It is possible at this point to look at the distribution of our lenses in the parameter space defined by these quantities. Since our scientific goal is to measure the evolution in the mean density slope with time, it is very important to assess whether other observables appear to evolve in our sample. In we plot the effective radii, stellar masses and velocity dispersions as a function of redshift for all our objects, and also for lenses from other surveys. Throughout this paper, when dealing with stellar masses we refer to values measured from stellar population synthesis fitting based on a Salpeter initial mass function (IMF). For a fair comparison, all velocity dispersions, which are measured within rectangular apertures of arbitrary sizes, are transformed into velocity dispersions within a circular aperture, $\sigmae2$, with radius $R_{\mathrm{eff}}/2$ following the prescription of @jorgensen1995. The values of $\sigmae2$ for individual SL2S lenses are reported in . SL2S lenses do not appear to differ from objects from independent lensing surveys in the average values of $R_{\mathrm{eff}}$, $M_*$ and $\sigmae2$. 
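The aperture homogenisation just described can be sketched as follows, assuming the power-law correction $\sigma(R)\propto R^{-0.04}$ and the rectangular-to-circular aperture conversion of @jorgensen1995 (both the exponent and the 1.025 prefactor are taken from that prescription; the function names are our own):

```python
import math

def equivalent_circular_radius(slit_width, extraction_length):
    """Radius of the circular aperture approximately equivalent to a
    rectangular slit extraction (prefactor 1.025 as in Jorgensen et al. 1995)."""
    return 1.025 * math.sqrt(slit_width * extraction_length / math.pi)

def sigma_e2(sigma_ap, r_ap, r_eff):
    """Rescale a dispersion measured within aperture radius r_ap to an
    aperture of radius R_eff/2, using sigma(R) ~ R^-0.04."""
    return sigma_ap * (r_ap / (0.5 * r_eff)) ** 0.04
```

Since the dispersion profile declines outward, a measurement made in an aperture smaller than $R_{\mathrm{eff}}/2$ is corrected slightly downward, and vice versa; the correction is only a few per cent for typical slit sizes.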
As far as trends with redshift within the SL2S sample are concerned, there is a mild increase of the stellar mass with $z$ that will need to be taken into account when discussing the evolution of the mass profile of these objects. As an additional test, we examine the correlation between mass and effective radius for SL2S, SLACS and LSD lenses and check it against non-lens galaxies. The goal is to make sure that these surveys do not preferentially select lenses with a larger or smaller size than typical ETGs of their mass. The mass-radius relation is seen to evolve with time [e.g. @Dam++11; @New++12; @Cim++12]. We correct for this evolution by considering effective radii evolved to $z=0$ assuming the trend measured by @New++12: $\log{\reff} (z=0) = \log{\reff} + 0.26z$. Effective radii defined in this way are plotted against measured stellar masses in , together with the mass-radius relation measured by @New++12 for low-redshift SDSS galaxies. Points in the plot of should not be considered as evolutionary tracks of individual objects, as galaxies grow in mass as well as in size. For a given object, its redshift-evolved size $\reff (z=0)$ is equivalent to its measured effective radius rescaled by the average size of galaxies at its redshift and at a reference mass. This allows us to promptly display in a single plot how our lenses compare, in terms of size, to other galaxies of the same mass, regardless of redshift. We see from that lenses from all surveys lie nicely around the relation found for non-lenses, indicating that our sample of lenses does not appear special when compared to the more general population of galaxies of their redshift. ![\[fig:classevol\] Effective radius, stellar mass and velocity dispersion of lenses as a function of redshift. ](classevol.eps){width="\columnwidth"} ![\[fig:mreffz0\] Effective radius vs. 
stellar mass, where $\reff$ values have been corrected for the evolution in the mass-size relation measured by @New++12: $\log{\reff} (z=0) = \log{\reff} + 0.26z$. The dashed line indicates the mass-radius relation for SDSS galaxies measured by @New++12.](mreff_z0.eps){width="\columnwidth"} Power law models {#sect:gammap} ================ We now proceed to combine lensing measurements with stellar kinematics information to infer the total mass density profile of each lens galaxy. We follow the now standard procedure in lensing and dynamics studies [@T+K02a], as used by @Ruf++11. We model the total (dark matter + stars) mass profile as a spherical power law $\rho(r) \propto r^{-\gamma'}$ in the kinematic analysis. The free parameters of the model are the slope $\gamma'$, and the mass normalization. For a given model we calculate the line of sight velocity dispersion within the rectangular aperture of our observation, broadened by the seeing, through the spherical Jeans equation. We assume isotropic orbits and a de Vaucouleurs profile for the distribution of tracers [@deV48], with effective radius fixed to the observed one. We then compare the model to the observed velocity dispersion and Einstein radius to derive posterior probability densities for the free parameters. In spite of the clear approximations, the method has been shown to be very robust when compared to results of more sophisticated models [e.g. @Bar++11a]. The data required for this inference are the Einstein radius of the lens, the redshift of both the deflector galaxy and the lensed source, and the velocity dispersion of the lens. Of the 39 grade A lenses of the SL2S sample, 25 have all the required data. For the few systems with two or more independent measurements of the velocity dispersion, we use the weighted average. The inferred values of $\gamma'$ are reported in Table \[table:lensing\]. 
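For the special case $\gamma'=2$ (a singular isothermal sphere) the lensing side of this inference has a closed form, $\theta_E = 4\pi(\sigma/c)^2 D_{ls}/D_s$, which is useful for order-of-magnitude checks of the measured dispersions; the full analysis instead solves the Jeans equation for general $\gamma'$. A minimal sketch (function name is our own; distances are angular diameter distances in any consistent unit):

```python
import math

C_KMS = 299792.458                     # speed of light [km/s]
ARCSEC = math.pi / (180.0 * 3600.0)    # radians per arcsecond

def sis_sigma(theta_e_arcsec, d_s, d_ls):
    """Velocity dispersion [km/s] of a singular isothermal sphere with
    Einstein radius theta_E, inverting theta_E = 4*pi*(sigma/c)^2 * D_ls/D_s."""
    theta_e = theta_e_arcsec * ARCSEC
    return C_KMS * math.sqrt(theta_e * d_s / (4.0 * math.pi * d_ls))
```

For a one-arcsecond Einstein radius and $D_{ls}/D_s = 0.5$ this gives roughly $260{\, {\rm km\, s}^{-1}}$, in the range typical of the massive deflectors in this sample.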
[lccccccc]{} The meaning of $\gamma'$ ------------------------ Before analyzing the measurements in a statistical sense we need to understand what physical properties the quantity $\gamma'$ is most sensitive to. Observations [@Son++12] and simple arguments (galaxies have a finite mass) suggest that the true density profile deviates from a pure power law, particularly at large radii. Thus our power law fits to the lensing and kinematics data must be interpreted as an approximation of the average density slope over a radial range explored by our data. Since for a typical lens both the Einstein radius and the velocity dispersion probe the region within the effective radius, we expect that the inferred $\gamma'$ will be close to the mean density slope within $R_{\mathrm{eff}}$, as suggested by @D+T13. However we would like to be more quantitative and explore the two following questions: what kind of average over the true density profile $\rho(r)$ best reproduces the lensing+dynamics $\gamma'$? How sensitive to the ratio $\rein/\reff$ is the measured $\gamma'$ for a fixed galaxy mass profile? The former issue is relevant when comparing theoretical models to lensing and dynamics measurements. The latter is important when trying to measure trends of $\gamma'$ with redshift: the ratio $\rein/\reff$ typically increases for purely geometrical reasons, and a dependence of $\gamma'$ on $\rein/\reff$ could in principle bias the inference on the evolution of the slope. In order to answer these questions we simulate $\gamma'$ measurements on a broad range of model mass profiles and compare these with the true density slopes. We consider a pure de Vaucouleurs profile, a sum of a de Vaucouleurs profile with a Navarro, Frenk & White [@NFW97] profile with two values of the dark matter mass fraction $f_{\mathrm{DM}}$ within the 3d effective radius, and the most probable total density profile from the bulge + halo decomposition of the gravitational lens SDSSJ0946+1006 by @Son++12. 
None of these model profiles is a pure power law. We emphasize that the range of models is chosen to be broader than what is likely to be found in real galaxies based on the detailed analysis of SLACS systems by [@Bar++11a]. We again use the spherical Jeans equation to calculate the central velocity dispersion for each of these model galaxies and then fit power law density profiles with fixed total projected mass within different Einstein radii. These simulated measurements of $\gamma'$ are plotted in as a function of $\rein/\reff$ for each model profile. In the same plot we show the local logarithmic density slope $-\mathrm{d}\log{\rho}/\mathrm{d}\log{r}$ as a function of $r$, and also the [*mass-weighted density slope within radius $r$*]{} $$\left<\gamma'(r)\right>_M = \frac{1}{M(r)}\int_0^{r} \gamma'(r')\,4\pi r'^2 \rho(r')\,dr',$$ which has been suggested by @D+T13 to be a good proxy for the lensing + dynamics $\gamma'$. ![\[fig:model\_gammap\] [*Solid lines:*]{} Local logarithmic density slope as a function of 3d radius, in units of the effective radius. [*Dashed lines:*]{} mass-weighted density slope within radius $r$. [*Triangles:*]{} lensing+dynamics $\gamma'$ for $\rein=r$. Different colors indicate the different model mass profiles listed in the text. ](what_is_gammaprime.eps){width="\columnwidth"} shows that measurements of $\gamma'$ (triangles) are remarkably independent of the ratio of the Einstein radius to the effective radius, for all models. This is an important result: it means that the physical interpretation of $\gamma'$ measurements will be stable against different lenses having different values of $\rein/\reff$.
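The mass-weighted slope defined above is straightforward to evaluate numerically for any spherical profile. As an illustration we use a Hernquist sphere, whose local slope runs from 1 in the centre to 4 at large radii (this profile is our own choice for the sketch, not one of the models analysed in the text):

```python
import math

def hernquist_rho(r, a=1.0):
    # Hernquist profile, rho ~ 1 / (r * (r + a)^3), arbitrary normalisation.
    return 1.0 / (r * (r + a) ** 3)

def local_slope(r, a=1.0):
    # -dlog(rho)/dlog(r) = 1 + 3r/(r + a) for the Hernquist profile.
    return 1.0 + 3.0 * r / (r + a)

def mass_weighted_slope(r_max, a=1.0, steps=20000):
    """<gamma'(r)>_M = (1/M(r)) int_0^r gamma'(r') 4 pi r'^2 rho(r') dr',
    evaluated by a simple rectangle sum (the integrand vanishes at r' = 0)."""
    dr = r_max / steps
    num = 0.0
    mass = 0.0
    for i in range(1, steps + 1):
        r = i * dr
        shell = 4.0 * math.pi * r * r * hernquist_rho(r, a) * dr
        num += local_slope(r, a) * shell
        mass += shell
    return num / mass
```

For this profile the mass-weighted slope rises from $\simeq 1$ at small radii and reaches exactly 2 at $r = a$, illustrating how the weighting by enclosed mass smooths the run of the local slope.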
Excluding the pure de Vaucouleurs model, which is ruled out on many grounds [mass-follows-light models fail to reproduce lensing and dynamical data, for example @K+T03], the difference between the mass-weighted slope and the lensing and dynamics slope is generally smaller than the typical measurement errors on $\gamma'$ of $\sim0.1$, particularly in the region $0.5\reff < r < \reff$. However, the radius at which $\gamma'$ and the mass-weighted slope are closest is slightly different for different mass profiles, and so it is difficult to interpret $\gamma'$ precisely in terms of a mass-weighted slope within a fixed radius. For very accurate comparisons with lensing and dynamical data, we recommend simulating a lensing and dynamics measurement of the models. Dependence of the mass density profile slope $\gamma'$ on redshift, stellar mass, and effective radius {#sect:euler} ====================================================================================================== The main goal of this work is to establish whether, and to what extent, $\gamma'$ varies with redshift across the population of ETGs. It is useful to first study the trends of $\gamma'$ with basic parameters () in order to gain insight into the ingredients that will have to be considered in to carry out a rigorous statistical analysis. Qualitative exploration of the dependency of $\gamma'$ on other parameters {#ssec:euler_qual} -------------------------------------------------------------------------- shows the individual lens $\gamma'$ values as a function of $z$ for SL2S galaxies, as well as lenses from the SLACS [@Aug++10] and LSD [@T+K04] surveys. A trend of $\gamma'$ with $z$ is clearly visible, with lower redshift objects having a systematically steeper slope than higher redshift ones, as previously found by @Ruf++11 and @Bol++12.
Before making more quantitative statements on the time evolution of $\gamma'$ we would like to check whether the density slope correlates with quantities other than redshift. Galaxies grow in mass and size during their evolution, and a variation of $\gamma'$ with time might be the result of a more fundamental dependence of the slope on structural properties of ETGs. Dependences of $\gamma'$ on the effective radius and the stellar velocity dispersion were explored by @Aug++10, finding an anticorrelation with the former and no significant correlation with the latter. Here we consider the stellar mass, plotted against $\gamma'$ in . A weak trend is visible, with more massive galaxies having a shallower slope. However the stellar mass is a rather steep function of redshift in our sample (see ) and the trend seen in might just be the result of this selection function. In fact, if we fit for a linear dependence of $\gamma'$ on both $z$ and $M_*$ we find that our data are consistent with $\gamma'$ being independent of $M_*$ at fixed $z$. A quantity that is expected to correlate with $\gamma'$ is the stellar mass density, $\Sigma_* = M_*/(2\pi \reff^2)$: galaxies with a more concentrated stellar distribution should have a steeper overall density profile. This was pointed out by @Aug++10 and @D+T13 and is seen in our data, as shown in . It is therefore important to account for a dependence of $\gamma'$ on $\Sigma_*$, or on the two independent variables on which this quantity depends, $\reff$ and $M_*$, when fitting for the time dependence of the density slope. This is done in the next Section. ![\[fig:gammaprime\] Density slope as a function of redshift for SL2S, SLACS and LSD galaxies.](gammap_vs_z.eps){width="\columnwidth"} ![\[fig:gammap\_mstar\] Density slope as a function of stellar mass. 
A Salpeter IMF is assumed.](gammap_vs_mstar.eps){width="\columnwidth"} ![\[fig:gammap\_sstar\] Density slope as a function of stellar mass density.](gammap_vs_sstar.eps){width="\columnwidth"} Quantitative Inference {#ssec:euler_quant} ---------------------- In this Section we aim to quantify how the mean density slope $\meangamma$ depends on galaxy properties, and on lookback time. The population of ETGs is known to be well-described by two parameters, as revealed by the existence of the Fundamental Plane relation [@D+D87; @Dre++87]. Two parameters are then probably sufficient to capture the variation of $\gamma'$ across the population of ETGs. For our analysis we focus on stellar mass and effective radius (this also includes dependencies on stellar mass density, which is believed to be an important parameter driving $\gamma'$, as discussed above). Our objective is then to measure the trends in $\gamma'$ across the three-dimensional space defined by $(z,M_*,\reff)$. This is done with a simple but rigorous Bayesian inference method. We assume that the values of the slope $\gamma'$ of our lenses are drawn from a Gaussian distribution with mean given by $$\label{eq:linear} \meangamma = \gamma'_0 + \alpha(z-0.3) + \beta(\log{M_*} - 11.5) + \xi\log{(\reff/5)}$$ and dispersion $\sigma_{\gamma'}$. The stellar mass is in solar units and the effective radius in kpc. We also assume that individual stellar masses $M_{*,i}$ are drawn from a parent distribution that we approximate as a Gaussian: $$\pr(M_{*,i}) = \frac{1}{\sigma_{M_{*}}^{\mathrm{(Samp)}}\sqrt{2\pi}} \exp{\left[- \frac{\left(\log{M_{*,\mathit{i}}} - \mu_{M_*}^{\mathrm{(Samp)}}(z_{\mathit{i}})\right)^2} {2\left(\sigma_{M_{*}}^{\mathrm{(Samp)}}\right)^2} \right]}.$$ To account for selection effects, we allow for a different mean stellar mass and dispersion for lenses of different surveys. We also let the mean stellar mass be a function of redshift.
This choice reflects the clear trend of stellar mass with redshift seen in for both the SLACS and the SL2S samples, which in turn is determined by SLACS and SL2S both being magnitude-limited samples. The parameter describing the mean stellar mass is then $$\mu_{M_*}^{\mathrm{(SLACS)}} = \zeta^{\mathrm{(SLACS)}} (z_i - 0.2) + \log{M_{*,0}}^{\mathrm{(SLACS)}}$$ for SLACS galaxies and $$\mu_{M_*}^{\mathrm{(SL2S)}} = \zeta^{\mathrm{(SL2S)}}(z_i - 0.5) + \log{M_{*,0}}^{\mathrm{(SL2S)}}$$ for SL2S and LSD galaxies. We assume flat priors on all the model parameters and fit for them with a Markov chain Monte Carlo following @Kelly07. The stellar masses considered in this model are those measured in Paper III assuming a Salpeter IMF. The full posterior probability distribution function is shown in and the median, 16th and 84th percentiles of the probability distribution for the individual parameters, obtained by marginalizing over the remaining parameters, are given in . The fit is done first with SL2S galaxies only and then repeated by adding SLACS and LSD lenses. For six lenses of the SLACS sample @Aug++10 warn that their velocity dispersions might be significantly incorrect, and we conservatively exclude them from our fit. These are SDSSJ0029$-$0055, SDSSJ0737$+$3216, SDSSJ0819$+$4534, SDSSJ0935$-$0003, SDSSJ1213$+$6708 and SDSSJ1614$+$4522. ![image](gammap_cornerplot.eps){width="\textwidth"} [cccl]{} By using only the 25 SL2S lenses for which $\gamma'$ measurements are possible, we are able to detect a trend of $\meangamma$ with $\reff$ at the 3-sigma level and a dependence on $M_*$ at the 1-sigma level: at fixed $z$ and $M_*$, galaxies with a smaller effective radius have a steeper density profile. Similarly, at fixed $\reff$, galaxies with a larger stellar mass have a marginally larger $\gamma'$.
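The core of this inference is a Gaussian likelihood in which the intrinsic scatter $\sigma_{\gamma'}$ is added in quadrature to each measurement error. A self-contained sketch of that likelihood, checked on synthetic data (the parameter values below are illustrative, not the paper's posterior; the actual fit also models the stellar-mass distribution and samples the posterior with an MCMC):

```python
import math
import random

LOG5 = math.log10(5.0)

def mean_gamma(z, logm, logreff, g0, alpha, beta, xi):
    # Linear model for the population mean slope, as in the text:
    # <gamma'> = g0 + alpha*(z - 0.3) + beta*(logM* - 11.5) + xi*log(Reff/5).
    return g0 + alpha * (z - 0.3) + beta * (logm - 11.5) + xi * (logreff - LOG5)

def log_likelihood(data, g0, alpha, beta, xi, scatter):
    """data: iterable of (z, log10 M*, log10 Reff [kpc], gamma', gamma' error).
    Intrinsic scatter is added in quadrature to each measurement error."""
    ll = 0.0
    for z, logm, logreff, gamma, err in data:
        var = scatter ** 2 + err ** 2
        mu = mean_gamma(z, logm, logreff, g0, alpha, beta, xi)
        ll -= 0.5 * ((gamma - mu) ** 2 / var + math.log(2.0 * math.pi * var))
    return ll
```

On a synthetic sample generated from known parameters, the likelihood at the truth exceeds that at perturbed parameter values, which is the property any sampler of this posterior relies on.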
If we add 53 lenses from SLACS and 4 lenses from the LSD survey, the trends with $M_*$ and $\reff$ are confirmed at a higher significance, and we detect a dependence of $\meangamma$ on redshift at the 3-sigma level. Lower redshift objects appear to have a steeper slope than higher redshift counterparts at fixed mass and size. Incidentally, the median value of $\xi$, the parameter describing the linear dependence of $\meangamma$ on $\log{\reff}$, is nearly $-2$ times $\beta$, the parameter describing the dependence on $\log{M_*}$. This suggests that $\meangamma$ grows roughly as $\beta\log{(M_*/\reff^2)}$, i.e., with the logarithm of the stellar mass density. It appears then that the dependence of $\gamma'$ on the structure of ETGs can be well summarized by a dependence on stellar mass density, leaving little residual dependence on $M_*$ or $\reff$ individually. This confirms and extends the trend with surface mass density seen by @Aug++10 and @D+T13. We then repeated the fit allowing only for a dependence of $\meangamma$ on redshift and stellar mass density: $$\meangamma = \gamma_0 + \alpha(z - 0.3) + \eta(\log{\Sigma_*} - 9.0).$$ This model has one fewer free parameter with respect to . Our inference on the parameter describing the dependence on $\Sigma_*$ is $\eta = 0.38\pm0.07$, and the scatter in $\gamma'$ is $\sigma_{\gamma'} = 0.12\pm0.02$, the same value measured for the more general model of . This again suggests that the dependence of $\gamma'$ on the stellar mass density might be of a more fundamental nature than separate dependences on mass and size. Discussion {#sect:discuss} ========== The main result of the previous section is that the ensemble average total mass density slope of galaxies of a fixed stellar mass increases with cosmic time (i.e. decreases with redshift). This trend with redshift is detected at the $3\sigma$ confidence level and is in good agreement with previous results from @Ruf++11 and @Bol++12.
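As a numerical illustration, the population model of Eq. (\[eq:linear\]) and the near-degeneracy between the $(\beta,\xi)$ and $\eta$ parameterizations can be sketched in a few lines. The normalization $\gamma'_0=2.08$ below is a hypothetical fiducial value (the text states only that $\gamma'$ is close to isothermal); the other numbers are the quoted median posteriors.

```python
import numpy as np

# Population mean slope, Eq. (eq:linear); the pivot points z = 0.3,
# log M* = 11.5 and Reff = 5 kpc are from the text. gamma0 = 2.08 is an
# assumed fiducial normalization, not a value quoted in the text.
def mean_gamma(z, logMstar, logReff,
               gamma0=2.08, alpha=-0.31, beta=0.40, xi=-0.76):
    return (gamma0 + alpha * (z - 0.3)
            + beta * (logMstar - 11.5)
            + xi * (logReff - np.log10(5.0)))

# A denser galaxy (smaller Reff at fixed mass) is predicted to be steeper
g_compact = mean_gamma(0.3, 11.5, np.log10(2.0))
g_diffuse = mean_gamma(0.3, 11.5, np.log10(5.0))

# A pure dependence on Sigma_* = M*/(2 Reff^2), i.e. on logM* - 2 logReff,
# requires eta = beta and xi = -2*beta; the quoted medians (beta = 0.40,
# xi = -0.76, eta = 0.38) satisfy this within the errors.
eta_from_xi = -(-0.76) / 2.0     # = 0.38, equal to the fitted eta
```

The sign pattern reproduces the trends described in the text: both lower redshift and smaller effective radius increase the predicted mean slope.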
Before discussing the physical interpretation of this result, however, it is important to emphasize that what we are measuring is how the population mean density slope changes in the $(z,M_*,\reff)$ space within the population of early-type galaxies, and not how $\gamma'$ changes along the lifetime of an individual galaxy, $\mathrm{d}\gamma'/\mathrm{d}z$. In order to infer the latter quantity we need to evaluate the variation of $\gamma'$ along the evolutionary track of the galaxy as it moves in the $(z,M_*,\reff)$ space. This requires knowing how both the mass and the size of the galaxy change with time, since the slope depends on these parameters. More formally, $$\label{eq:lagrange} \begin{split} \frac{\mathrm{d}\gamma'(z,\log{M_*},\log{\reff})}{\mathrm{d} z} = \\ \frac{\partial \gamma'}{\partial z} + \frac{\partial \gamma'}{\partial \log M_*}\frac{\mathrm{d}\log M_*}{\mathrm{d}z} + \frac{\partial\gamma'}{\partial \log{\reff}}\frac{\mathrm{d}\log{\reff}}{\mathrm{d}z}. \end{split}$$ In a parallel with fluid mechanics, our description of the population of galaxies of is Eulerian, while is a Lagrangian specification of the change in time of the mean slope of an individual galaxy, providing a more straightforward way to physically understand the evolution of ETGs. With all these terms entering , it is no longer clear whether the density slope is indeed getting steeper with time for individual objects. In particular, we have observed that $\gamma'$ depends significantly on stellar mass density (and thus on effective radius). It is then crucial to consider all the terms of the equation before reaching a conclusion. Fortunately this can be done by combining our measurements with results from the literature.
In the context of our model specified in , the partial derivatives introduced above can be identified and evaluated as follows: $$\frac{\partial \gamma'}{\partial z} = \alpha =-0.31\pm0.10,$$ $$\frac{\partial \gamma'}{\partial \log{M_*}} = \beta = 0.40\pm0.16,$$ $$\frac{\partial \gamma'}{\partial \log{\reff}} = \xi = -0.76\pm0.15.$$ Note that we are not considering the effects of scatter: we are assuming that the change in $\gamma'$ is the same as that of a galaxy that evolves while staying at the mean $\gamma'$ as it moves through the $(z,M_*,\reff)$ space. By doing so, the evolution in the slope that we derive from will be representative of the mean change in $\gamma'$ over the population, while individual objects can have different evolutionary tracks, within the limits allowed by our constraints on $\sigma_{\gamma'}$. The remaining quantities to be estimated are the rates of mass and size growth. In the hierarchical merging picture ETGs are expected to grow in stellar mass with time, therefore $\mathrm{d} M_*/\mathrm{d} z < 0$. Observationally, we know massive early-type galaxies grow at most by a factor of two in stellar mass since $z=1$ [see, e.g., @2013ApJ...771...61L and references therein]. Thus we can conservatively take the mean between no growth and a factor-of-two growth, even though we will show below that our conclusions are virtually insensitive to this choice: $$\label{eq:mevol} \frac{\mathrm{d}\log{M_*}}{\mathrm{d}z} =-0.15\pm0.15.$$ The effective radius grows as a result of the growth in mass, but is itself an evolving quantity at fixed $M_*$ [@Dam++11; @New++12; @Cim++12; @Pog++13]: $\reff = \reff(z,M_*(z))$.
[*We assume that ETGs grow while staying on the observed $M_*-\reff$ relation at all times.*]{} Then we can write $$\label{eq:reffevol} \frac{\mathrm{d}\log{\reff}}{\mathrm{d}z} = \frac{\partial\log{\reff}}{\partial z} + \frac{\partial\log{\reff}}{\partial\log{M_*}}\frac{\mathrm{d}\log{M_*}}{\mathrm{d}z}$$ and use the values measured by @New++12, $\partial\log{\reff}/\partial z = -0.26\pm0.02$ and $\partial\log{\reff}/\partial{\log{M_*}} = 0.59\pm0.07$. Plugging these values into we find that $$\begin{split} \frac{\mathrm{d}\gamma'}{\mathrm{d}z} = (-0.31\pm0.10) + (0.40\pm0.15)(-0.15\pm0.15) \\ + (-0.76\pm0.15)[(-0.26\pm0.02) \\ + (-0.15\pm0.15)(0.59\pm0.07)] =-0.10\pm0.12. \end{split}$$ Note that $\mathrm{d}\gamma'/\mathrm{d}z$ has little dependence on the mass growth rate $\mathrm{d}\log{M_*}/\mathrm{d}z$, which is the most poorly known quantity in this model. To be more quantitative, we plot in the total derivative $\mathrm{d}\gamma'/\mathrm{d}z$ as a function of $\mathrm{d}\log{M_*}/\mathrm{d}z$, and show that for any plausible value, spanning over an order of magnitude, the answer is unchanged. Different assumptions on the evolution of the size-mass relation do not significantly change our result. For instance, @Dam++11 find a more rapid evolution of $\reff$ than @New++12, leading to $\mathrm{d}\gamma'/\mathrm{d}z = 0.06\pm0.15$, consistent with no change of the total mass density profile with time. ![\[fig:total\_derivative\] Mean intrinsic change of the density slope with redshift of a massive ETG, as a function of its mass growth rate. ](total_derivative.eps){width="\columnwidth"} Thus, the key result is that, when considering all the terms of , we find that, on average, individual ETGs grow at approximately constant density slope. The observed redshift dependence of $\gamma'$ [*at fixed mass and size*]{} can then be understood as the result of the evolution of the size-mass relation and of the dependence of $\gamma'$ on the stellar mass density.
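The arithmetic above, including the quoted $\pm0.12$ uncertainty, can be reproduced with first-order (linearized) error propagation, treating the six measured inputs as independent; a minimal sketch with the values quoted above:

```python
import math

# Measured inputs as (value, 1-sigma), assumed mutually independent
alpha = (-0.31, 0.10)   # partial gamma'/partial z at fixed M*, Reff
beta  = ( 0.40, 0.15)   # partial gamma'/partial logM*
xi    = (-0.76, 0.15)   # partial gamma'/partial logReff
dm    = (-0.15, 0.15)   # d logM*/dz (Eq. eq:mevol)
rz    = (-0.26, 0.02)   # partial logReff/partial z (Newman et al. values)
rm    = ( 0.59, 0.07)   # partial logReff/partial logM*

# d gamma'/dz = alpha + beta*dm + xi*(rz + dm*rm)  (Eqs. eq:lagrange, eq:reffevol)
val = alpha[0] + beta[0]*dm[0] + xi[0]*(rz[0] + dm[0]*rm[0])

# First-order propagation: quadrature sum of (partial derivative * sigma)^2
partials = {
    'alpha': 1.0,
    'beta':  dm[0],
    'xi':    rz[0] + dm[0]*rm[0],
    'dm':    beta[0] + xi[0]*rm[0],   # ~ -0.05: near-cancellation
    'rz':    xi[0],
    'rm':    xi[0]*dm[0],
}
sigmas = {'alpha': alpha[1], 'beta': beta[1], 'xi': xi[1],
          'dm': dm[1], 'rz': rz[1], 'rm': rm[1]}
sig = math.sqrt(sum((partials[k]*sigmas[k])**2 for k in partials))
# val ~ -0.10, sig ~ 0.12, matching the quoted result
```

The near-cancellation in the sensitivity to the mass growth rate, $\beta + \xi\,\partial\log\reff/\partial\log M_* \approx -0.05$, is exactly why the total derivative depends so weakly on $\mathrm{d}\log M_*/\mathrm{d}z$.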
Qualitatively, in this picture an individual galaxy grows in stellar mass and size so as to decrease its central stellar mass density. During this process, the slope of its total mass density profile does not vary significantly. However, the galaxies that now find themselves at the original stellar mass and effective radius of this galaxy originally had a steeper mass density profile, thus giving rise to the observed trend in $\partial\gamma'/\partial z$. This is illustrated in , where we show a possible scenario consistent with the observations. The evolutionary tracks of two representative galaxies between $z=1$ and $z=0$ are shown as solid black arrows, in the multi-dimensional parameter space of stellar mass, effective radius, effective density, and slope of the mass density profile $\gamma'$. The two galaxies are chosen so that one has at $z=1$ the same mass and effective radius that the other has at $z=0$. Mass and size are evolved following and . We then assign $\gamma'$ at $z=0$ based on the observed correlation with size and stellar mass (effectively with stellar mass density, since $\xi\approx-2\beta$) [*and assume it does not evolve for an individual galaxy*]{}. The apparent evolution of $\gamma'$ at fixed $M_*$ and $\reff$ is consistent with the measured value $\partial \gamma'/\partial z = -0.31\pm0.10$, and is dictated by a difference in the initial stellar density of the two progenitors, which is larger for the more massive object. In the context of simple one-parameter stellar profiles (e.g. de Vaucouleurs), this difference in $\gamma'$ at fixed mass and size for galaxies at different redshift must be ascribed to corresponding differences in the underlying dark matter distribution. The implications of our results for the dark matter profiles of ETGs will be explored in an upcoming paper (Sonnenfeld et al., in prep.).
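The bookkeeping behind this scenario is simple to verify with the adopted growth rates; a sketch using Eqs. (\[eq:mevol\]) and (\[eq:reffevol\]) with the Newman et al. size-evolution values quoted above:

```python
# Evolve a representative galaxy from z = 1 to z = 0 (Delta z = -1)
dlogM_dz  = -0.15                      # Eq. (eq:mevol)
dlogRe_dz = -0.26 + dlogM_dz * 0.59    # Eq. (eq:reffevol): -0.3485

dz = -1.0                              # z = 1 -> 0
dlogM  = dlogM_dz * dz                 # +0.15  : mass grows by ~40%
dlogRe = dlogRe_dz * dz                # +0.3485: size more than doubles

# Sigma_* = M*/(2 Re^2), so log Sigma_* changes by dlogM - 2*dlogRe
dlogSigma = dlogM - 2.0 * dlogRe       # ~ -0.55: central stellar mass
factor = 10.0 ** (-dlogSigma)          # density drops by a factor ~3.5
```

So at these rates a galaxy ends up less dense in stars even though its mass grows, which is the sense of evolution the illustrated tracks require.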
![image](illus.eps){width="90.00000%"} An important assumption at the basis of our analysis is that scaling relations of $\gamma'$ with mass and size measured at low redshift can be used to predict the evolution of the slope for higher redshift objects. This assumption holds well if the evolutionary tracks of higher redshift galaxies stay on parts of the parameter space probed by the lower redshift systems. To first approximation this seems to be the case for the galaxies in our sample. shows the positions of our lenses in the $M_*-\reff$ space, where the effective radius of each object is renormalized by the average $\reff$ of galaxies at its redshift. Under our assumptions, objects evolve along lines parallel to the mass-size relation (dashed line) towards higher masses. There is significant overlap between the high-$z$ SL2S-LSD sample and the lower redshift SLACS sample, implying that SLACS galaxies are informative on the evolution in $\gamma'$ of SL2S-LSD objects. Otherwise, one would have to rely on extrapolations of the scaling relations for $\gamma'$. A more quantitative explanation of our findings would require a detailed comparison with theoretical models and is beyond the scope of this work. However, we can check at least qualitatively how our result compares with published predictions. @NTB09 studied the impact of dissipationless (dry) mergers on $\gamma'$, finding that for an individual galaxy the slope tends to get shallower with time. @JNO++12 looked at the evolution in the slope of nine ETGs in cosmological simulations, finding no clear trend in the redshift range explored by our data. Their simulations include both dry and dissipational (wet) mergers. @Rem++13 examined simulated ETGs in a cosmological framework and in binary mergers. They found slopes that become shallower in time, asymptotically approaching the value $\gamma'\approx2.1$ observed in our data.
They also detected a correlation between the amount of in-situ star formation and slope, with $\gamma'$ being larger in systems that experienced more star formation events. Finally, @Dub++13 produced zoomed cosmological simulations of ETGs with and without AGN feedback. They found that the total density slope becomes steeper with time. They also observed that galaxies with strong AGN feedback have a shallower profile than systems with no AGN feedback and interpreted this result as the AGN shutting off in-situ star formation. Qualitatively, our data are not in stark contrast with any of these models. A more quantitative comparison is required to find out whether the models work in detail. This is left for future work. The combination of constraints from the evolution of the size–stellar mass relation obtained via traditional studies of large samples of ETGs, and our own detailed measurements of the evolution of their internal structure, should provide a stringent test for evolutionary models of ETGs, and thus help us improve our understanding of the baryonic and dark matter physics relevant at kpc scales. Summary and Conclusions {#sect:concl} ======================= We have presented spectroscopic observations from the Keck, VLT, and Gemini Telescopes of a sample of 53 lenses and lens candidates from the SL2S survey. We measured stellar velocity dispersions for 47 of them, and redshifts of both lens and background source for 35 of them. 36 systems are confirmed grade A lenses, and 25 of these could be used for a joint lensing and stellar dynamics analysis. We have shown how spectroscopic observations can be used in combination with ground-based imaging with good seeing ($\sim 0.7''$) to confirm gravitational lens candidates by the presence of multiply imaged emission lines from the lensed background source.
We have also shown how SL2S lenses are comparable with lenses from other surveys in terms of their size, mass and velocity dispersion, and lie on the same $M_*-\reff$ relation as non-lens galaxies. By fitting a power-law density profile ($\rho(r) \propto r^{-\gamma'}$) to the lensing and stellar kinematics data of SL2S, SLACS and LSD lenses we measured the dependence of $\gamma'$ on redshift, stellar mass and galaxy size, over the ranges $z \approx 0.1-1.0$, $\log M_*/M_\odot \approx 11 - 12$, R$_{\rm eff}=1-20$ kpc. Our main results can be summarized as follows: 1. In the context of power-law models for the density profile $\rho_{\rm tot}\propto r^{-\gamma'}$, the (logarithmic) density slope $\gamma'$ of the SL2S lenses is approximately – but not exactly – that of a singular isothermal sphere ($\gamma'=2$), consistent with previous studies of lenses in different samples. This can be understood as the result of the combination of a stellar mass density profile that falls off more steeply than isothermal with a dark matter halo that falls off more shallowly. The relative scaling of the two conspires to produce a power-law index close to isothermal (“bulge-halo” conspiracy). 2. At a given redshift, the mass density slope $\gamma'$ depends on the surface stellar mass density $\Sigma_*=M_*/(2\reff^2)$, in the sense that galaxies with denser stars also have steeper total mass density profiles ($\partial \gamma' / \partial \log \Sigma_* = 0.38\pm0.07$). 3. At fixed $M_*$ and $\reff$, $\meangamma$ depends on redshift, in the sense that galaxies at lower redshifts have on average a steeper slope ($\partial \gamma'/ \partial z = -0.31\pm 0.10$). 4. Once the dependencies of $\gamma'$ on redshift and surface stellar mass density are taken into account, less than 6% intrinsic scatter is left ($\sigma_{\gamma'}=0.12\pm0.02$). 5. The average redshift evolution of $\gamma'$ for an individual galaxy is consistent with zero: $\mathrm{d}\gamma'/\mathrm{d}z=-0.10\pm0.12$.
This result is obtained by combining our measured dependencies of $\meangamma$ on redshift, stellar mass and effective radius with the observed evolution of the size–stellar mass relation taken from the literature. The key result of this work is that the dependency of $\meangamma$ on redshift and stellar mass density does not imply that massive early-type galaxies change their mass density profile over the second half of their lifetime. In fact, at least qualitatively, the observed dependencies can be understood as the result of two effects. Individual galaxies grow in stellar mass and decrease in density over the redshift range 1 to 0, while apparently largely preserving their total mass density profiles. This could be explained by the addition of stellar mass in the outer parts of the galaxies in quantities that are sufficient to explain the decrease in stellar mass density but insufficient to alter the total mass density profile, since those regions are already dark matter dominated. As shown by @Nip++12, the growth in size during this period is slow enough that it could perhaps be explained by the infall of dark matter and stars via a drizzle of minor mergers, with material of decreasing density, tracking the decreasing cosmic density. This process needs to happen while substantially preserving the total mass density profile. Alternatively, the evolution at constant slope can be interpreted as the combined effect of the decrease in stellar mass density and a variation in the dark matter profile (either a steepening of the profile or a decrease of the central dark matter density). The latter effect would be responsible for the term $\partial\gamma'/\partial z$. Checking whether these scenarios can work quantitatively requires detailed comparisons with theoretical calculations, which are beyond the scope of this paper.
The second important result of this work is that the total mass density profile of early-type galaxies depends on their stellar mass density, with very little scatter. Qualitatively this makes sense, as we expect that the more concentrated stellar distributions should have been able to contract the overall profile the most. This difference presumably traces back to differences in past star formation efficiency or merger history. Therefore, the tightness of the observed correlation should provide interesting constraints on these crucial ingredients of our understanding of early-type galaxies. [^1]: ESO/VLT programs 086.B-0407(A) and 089.B-0057(A), PI Gavazzi [^2]: <http://www.eso.org/sci/facilities/paranal/instruments/xshooter/>
--- abstract: 'This Letter takes up the question of what radio emission is produced by electrons at the very acceleration site of a solar flare. Specifically, we calculate the incoherent radio emission produced within two competing acceleration models—stochastic acceleration by cascading MHD turbulence and regular acceleration in collapsing magnetic traps. Our analysis clearly demonstrates that the radio emission from the acceleration sites: (i) has sufficiently strong intensity to be observed by currently available radio instruments and (ii) has spectra and light curves which are distinctly different in these two competing models, which makes them observationally distinguishable. In particular, we suggest that some of the narrowband microwave and decimeter continuum bursts may be a signature of the stochastic acceleration in solar flares.' author: - 'Yixuan Li, Gregory D. Fleishman' title: Radio emission from acceleration sites of solar flares --- Introduction ============ Acceleration of charged particles is an intrinsic property of energy release in solar flares, which has not yet been fully understood in spite of significant progress achieved recently [e.g., @Aschw_2002; @Vilmer_MacKinnon_2003]. A traditional way of getting information on the accelerated electrons in flares is the analysis of the hard X-ray (HXR) emission produced by nonthermal bremsstrahlung. However, because the bremsstrahlung intensity increases with the density of the ambient plasma, it is likely that in most cases the acceleration site and the HXR emission site are spatially separated; therefore, the HXR emission does not carry direct information on the acceleration site. This concept of distinct acceleration, propagation, and emission regions was then inherited by solar radio astronomy [e.g., Fig. 9 in @BBG], which appears relevant to relatively weak electron acceleration events, visualized by the coherent emission of type III groups and of accompanying metric spikes [e.g., Fig. 10 in @BBG].
However, it is well known that a charged particle produces electromagnetic emission as it moves with acceleration. Stated another way, fast electrons must produce radiation immediately at the acceleration region, with intensity and other characteristics defined by the type of acceleration, or more precisely, by the type of fast electron trajectories in the acceleration region. We show in this Letter that typically this emission has a spectral peak in the microwave range, which makes radio observations the most suitable tool to study the acceleration region in flares. By now, a huge variety of acceleration mechanisms and models has been proposed and developed for solar flares: acceleration by DC electric fields, both sub-Dreicer and super-Dreicer [@Holman85; @Tsuneta85; @HolmanBenka; @Litvinenko96; @Litvinenko_2000; @Litvinenko_2003a]; stochastic acceleration by turbulent waves [@Petrosian92; @Petrosian94; @Miller96; @Miller97; @Petrosian97; @Petrosian98; @Byk_Fl_2009]; the classical diffusive shock acceleration [@Aschw_2002]; and the regular (betatron plus Fermi) acceleration in collapsing magnetic traps [@Somov_Kosugi_1997; @Somov_Bogachev_2003; @Karlicky_Kosugi_2004; @Bogachev_Somov_2005; @Bogachev_Somov_2007; @Bogachev_Somov_2009]; all are currently considered in the context of solar flares. To illustrate the potential ability of radio observations to detect the radiation from the flare acceleration site and then to distinguish between competing acceleration mechanisms, we calculate here the radio emission generated within two distinct acceleration models—stochastic acceleration by a turbulence spectrum and regular acceleration in collapsing traps. Radio emission of flares is known to be produced by a variety of emission mechanisms including gyrosynchrotron (GS) emission, bremsstrahlung, transition radiation, and a number of coherent radiative processes [@BBG; @Nindos_etal_2008].
Some of the observed emission types can in fact originate from acceleration sites, while others—from electrons trapped in closed magnetic loops or from electrons propagating along open field lines. Based on our analysis, we suggest that some of the narrowband microwave and decimeter continuum bursts may be a signature of the stochastic acceleration in solar flares, while the collapsing trap acceleration must reveal itself in microwave GS bursts drifting to higher frequencies. Radio Emission from a Region of Stochastic Acceleration ======================================================= Basically, various models of stochastic acceleration differ from each other by the accelerating agent (the plasma or MHD eigen-mode responsible for the wave-particle energy exchange) and by the presence or absence of some pre-acceleration (injection) process. To be specific, we assume a 'pure' stochastic acceleration process in which the electrons are accelerated directly from the thermal pool [@Petrosian92; @Miller96], perhaps as a result of MHD turbulence cascading towards the smallest scales involved in resonant interaction of the waves with thermal or weakly superthermal electrons. Within this model the MHD turbulence is created at some large scale and then a broad spectrum of the turbulence develops due to turbulence cascading. As soon as small-scale waves capable of resonant interaction with electrons from the Maxwellian tail are produced, they start to accelerate those electrons. This process can be modeled by a growing power-law tail [cf., e.g., spectra of accelerated electrons presented by @Petrosian92; @Miller96; @Byk_Fl_2009] $$\label{el_spectrum_st_acc} N(E)=(\delta(t)-1)\frac{n(>E_0)\cdot E_0^{\delta(t)-1}}{E^{\delta(t)}}\exp\left(-\frac{E}{E_{br}(t)}\right),$$ where the time-dependent acceleration is modeled by increasing the break energy $E_{br}(t)$ and hardening the energy spectrum (decreasing the spectral index $\delta(t)$).
This nonthermal distribution of accelerated electrons is assumed to match the original Maxwellian distribution at a certain energy $E_0$; $n(>E_0)$ is evidently defined by the matching condition: $$\label{el_matching_st_acc} n(>E_0)=\frac{2n_e}{\delta(t)-1}\sqrt{\frac{E_0^3}{\pi(kT_e)^3}}\cdot \exp\left(\frac{-E_0}{kT_e}\right) \exp\left(\frac{E_0}{E_{br}(t)}\right), $$ where $n_e$ and $T_e$ are the number density and temperature of the thermal electrons, $k$ is the Boltzmann constant. Figure \[FIG01\] shows a sequence of the electron spectra as the electron acceleration progresses. Let us consider the radio emission produced by accelerated electrons with the spectrum (\[el\_spectrum\_st\_acc\]) at the acceleration region. We note that gyrosynchrotron (GS) emission by nonrelativistic and weakly relativistic electrons, available during an initial phase of the acceleration modeled by Eq. (\[el\_spectrum\_st\_acc\]), is inefficient; the flux of the GS emission typically remains very small until a sufficient number of electrons is accelerated up to a few hundred keV[^1] [@BBG]. However, along with the regular magnetic field, there is a spectrum of turbulent waves (those accelerating the electrons) at the acceleration site. The nonthermal electrons, interacting with those random waves, experience spatial diffusion and so produce so-called Diffusive Synchrotron Radiation [DSR, @Fl_2006a], which we calculate here.
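For concreteness, Eqs. (\[el\_spectrum\_st\_acc\]) and (\[el\_matching\_st\_acc\]) are easy to evaluate numerically. The sketch below, with the assumed site parameters $n_e=10^{10}$ cm$^{-3}$ and $T_e=10^6$ K used later in the text and an illustrative $\delta=6$, checks that the tail has local logarithmic slope $\approx-\delta$ well below $E_{br}$:

```python
import math

kB = 1.381e-16            # Boltzmann constant, erg/K
ne, Te = 1e10, 1e6        # assumed acceleration-site values from the text
kTe = kB * Te             # ~86 eV in erg
E0 = 5.0 * kTe            # matching energy within the quoted E0 = (4-6) kTe
keV = 1.602e-9            # erg per keV
Ebr = 50.0 * keV          # initial break energy
delta = 6.0               # an illustrative spectral index

def n_above_E0(delta, Ebr):
    # Matching condition, Eq. (el_matching_st_acc)
    return (2*ne/(delta-1) * math.sqrt(E0**3/(math.pi*kTe**3))
            * math.exp(-E0/kTe) * math.exp(E0/Ebr))

def N(E, delta, Ebr):
    # Accelerated spectrum, Eq. (el_spectrum_st_acc)
    n0 = n_above_E0(delta, Ebr)
    return (delta-1) * n0 * E0**(delta-1) / E**delta * math.exp(-E/Ebr)

# Local logarithmic slope between 1 and 2 keV (<< Ebr): should be ~ -delta
E1, E2 = 1.0*keV, 2.0*keV
slope = math.log(N(E2, delta, Ebr)/N(E1, delta, Ebr)) / math.log(E2/E1)
```

The exponential factors only matter near $E_{br}$; far below the break the distribution is an almost pure power law, as assumed in the slope estimates that follow.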
Neglecting for simplicity the plasma gyrotropy, we can take the refractive index of the radio waves in the form $$\begin{aligned} \label{ref_ind} n_\sigma =\sqrt{\varepsilon}, \qquad \varepsilon&=&1-\frac{\omega_{pe}^2}{\omega^2}, \qquad \omega_{pe}=\sqrt{\frac{4\pi e^2n_e}{m}}.\end{aligned}$$ Then, the spectral and angular distribution of the energy radiated by a relativistic charged particle with a given Fourier-transformed acceleration $\mathbf{w}_{\omega'}$ during the time $T$ of the particle motion in an external field is given by [@LL_1971] $$\label{cal_E_w_rel} {\cal E}_{{\bf n},\omega}=\sqrt{\varepsilon}\frac{Q^2}{c^3} \left(\frac{\omega}{\omega '}\right)^4 \left| \left[{\bf n}\left[\left({\bf n}-\frac{\bf v}{c}\right){\bf w}_{\omega '}\right]\right] \right|^2 ,$$ where $$\label{omega_prime} \omega '= \omega \left(1-\frac{{\bf nv}}{c}\sqrt{\varepsilon(\omega)}\right).$$ In the nonrelativistic case $v/c \ll 1$ ($\gamma\equiv 1/\sqrt{1-v^2/c^2} \approx 1$) and $\omega' \approx \omega$, Eq. (\[cal\_E\_w\_rel\]) reduces to $$\label{cal_E_w} {\cal E}_{{\bf n},\omega}=\sqrt{\varepsilon}\frac{Q^2}{c^3} \left| \left[{\bf n}\times{\bf w}_{\omega }\right] \right|^2 ,$$ where $Q$ is the particle charge and ${\bf n}$ is the unit wave vector of the radiation. Eq. (\[cal\_E\_w\]) shows that the radiation in a given direction ${\bf n}$ is defined by the acceleration component $\left| {\bf w}_{\omega\bot } \right|^2=\left| \left[{\bf n}\times{\bf w}_{\omega }\right] \right|^2$ transverse to ${\bf n}$.
Similarly to the derivation in the ultrarelativistic case [@Fl_2006a], the transverse component of the acceleration can be expressed via the temporal and spatial Fourier transform of the external Lorentz force, $F^\alpha_{q_0, {\bf q}}$ $$\label{w_perp_3} \mid {\bf w}_{\omega\bot} \mid^2=\frac{(2\pi)^3}{M^2 V} \int dq_0 d{\bf q} \delta(\omega-q_0+{\bf qv})(\delta_{\alpha\beta}-n_\alpha n_\beta) F^\alpha_{q_0, {\bf q}}F^{\beta *}_{q_0, {\bf q}},$$ where $M$ is the mass of the emitting particle and $V$ is the source volume. For the electric component of the Lorentz force $\mathbf{F}=Q\mathbf{E}$ we have $$\label{E_field} (\delta_{\alpha\beta}-n_\alpha n_\beta) F^\alpha_{q_0, {\bf q}}F^{\beta *}_{q_0, {\bf q}}= Q^2(\delta_{\alpha\beta}-n_\alpha n_\beta) E^\alpha_{q_0, {\bf q}}E^{\beta *}_{q_0, {\bf q}},$$ where $$\label{E_corr} E^\alpha_{q_0, {\bf q}}E^{\beta *}_{q_0, {\bf q}} = \frac{TV}{(2\pi)^4} K_{\alpha \beta}(q_0,{\bf q}),$$ $K_{\alpha \beta}(q_0,{\bf q})$ is the correlation tensor of the random electric field, such that $\int dq_0 d{\bf q} K_{\alpha \alpha}(q_0,{\bf q})=\left<E_{st}^2\right>$ [@Toptygin_1985]. For the magnetic component of the Lorentz force the corresponding expression is different: $$\label{B_field} \begin{array}{l} (\delta_{\alpha\beta}-n_\alpha n_\beta) F^\alpha_{q_0, {\bf q}}F^{\beta *}_{q_0, {\bf q}}= \frac{Q^2}{c^2}\left(v^2 \delta_{\alpha\beta} - v_\alpha v_\beta -[\mathbf{n}\times \mathbf{v}]_\alpha [\mathbf{n}\times \mathbf{v}]_\beta \right) B^\alpha_{q_0, {\bf q}}B^{\beta *}_{q_0, {\bf q}}=\\\\ Q^2\frac{v^2}{c^2}\left(n_\alpha n_\beta+\frac{({\bf nv})^2}{v^2}\delta_{\alpha\beta}- ({\bf nv})\frac{v_\alpha n_\beta+n_\alpha v_\beta}{v^2}\right) B^\alpha_{q_0, {\bf q}}B^{\beta *}_{q_0, {\bf q}}. \end{array}$$ Similarly to Eq.
(\[E\_corr\]) we have $$\label{B_corr} B^\alpha_{q_0, {\bf q}}B^{\beta *}_{q_0, {\bf q}} = \frac{TV}{(2\pi)^4} K_{\alpha \beta}(q_0,{\bf q}),$$ where $K_{\alpha \beta}(q_0,{\bf q})$ is the correlation tensor of the random magnetic field, such that $\int dq_0 d{\bf q} K_{\alpha \alpha}(q_0,{\bf q})=\left<B_{st}^2\right>$. Thus, the DSR intensity, $I_{\textbf{n},\omega}={\cal E}_{{\bf n},\omega}/T$, of a nonrelativistic particle in the presence of a random magnetic field is $$\label{I_DSR_nw_gen} I_{\textbf{n},\omega} =\sqrt{\varepsilon} \frac{Q^4 v^2}{2\pi M^2 c^5} \int dq_0 d{\bf q} \delta(\omega-q_0+\textbf{qv}) \left(n_\alpha n_\beta+\frac{({\bf nv})^2}{v^2}\delta_{\alpha\beta}- ({\bf nv})\frac{v_\alpha n_\beta+n_\alpha v_\beta}{v^2}\right)K_{\alpha \beta}(q_0,{\bf q}).$$ This expression is valid for an arbitrary spectrum of magnetic turbulence, including anisotropic distributions. We consider here the DSR produced by accelerated nonrelativistic electrons interacting with the MHD turbulence. In MHD waves $E \sim (v_a/c) B$, where $v_a$ is the Alfvén speed, therefore the magnetic part of the Lorentz force is larger than the electric part for all electrons with $v>v_a$. Assuming this condition to be fulfilled, we calculate only the DSR related to the magnetic field of the MHD turbulence; inclusion of the electric field effect will further increase the DSR intensity. Since we are interested in the overall spectral shapes and flux level of the DSR, rather than model-dependent details of the emission, we consider here the simplest case of isotropic MHD turbulence: $$\label{B_corr_iso} K_{\alpha\beta}=\frac{1}{2}\left({\delta}_{\alpha\beta}-\frac{q_{\alpha}q_{\beta}} {q^{2}}\right)K(\textbf{q})\delta(q_{0}-q_{0}(\textbf{q})).$$ As we assumed $v>v_a$, i.e., the electrons move faster than the waves, we can treat the MHD turbulence as quasi-static, $q_{0}(\textbf{q})=0$.
When the MHD turbulence is isotropic, the accelerated electrons are isotropic as well, and so the radiation produced is also isotropic. Thus, we consider further the radiation produced into the full solid angle $$\label{I_w_def} I_{\omega} = \int I_{\textbf{n},\omega}d\Omega = \sqrt{\varepsilon} \frac{8 Q^2}{3\pi c}\cdot q(\omega) ,$$ where, as in the ultrarelativistic case [@Fl_2006a], we introduce the scattering rate of the nonrelativistic particle by the MHD turbulence, $q(\omega)$: $$\label{q_w_iso} q(\omega)=\frac{\pi}{4}\left(\frac{Q}{Mc}\right)^{2} \frac{v^2}{c^2}\int K(\textbf{q})\delta(\omega+\textbf{qv})\,{d\textbf{q}}.$$ To proceed further we have to specify the shape of the turbulence spectrum $K(\textbf{q})$; we adopt a single power law down to the smallest (resonant to thermal electrons) scales: $$\label{K_pow_law} K(\textbf{q})=\frac{A_{\nu}}{q^{\nu+2}}\qquad A_\nu=\frac{\nu-1}{4\pi}k_0^{\nu-1}\langle B_{st}^2\rangle,$$ where $k_0=2\pi/L_0$ with $L_0$ the largest turbulence scale, $\langle B_{st}^2\rangle$ is the mean square of the turbulent magnetic field, and $\nu$ is the turbulence spectral index.
Then, substituting (\[K\_pow\_law\]) into (\[q\_w\_iso\]), integrating over $d\textbf{q}$, $$\begin{array}l \int\,{d\textbf{q}}K(\textbf{q})\delta(\omega+\textbf{qv}) =2\pi\int\,{d}\cos\theta\cdot\,{d}q\frac{A_\nu}{q^\nu}\delta(\omega+qv\cos\theta)=\frac{2\pi A_\nu}{v}\int\limits_{\frac{\omega}{v}}^{\frac{\omega_{pe}}{v_{pe}}}\frac{\,d q}{q^{\nu+1}} =\\ \frac{2\pi}{\nu}\frac{A_\nu}{v}\left(\frac{v}{\omega}\right)^\nu \left(1-\left(\frac{\omega v_{pe}}{\omega_{pe} v}\right)^\nu\right) \Theta\left(\frac{\omega_{pe}}{v_{pe}}-\frac{\omega}{v}\right) \end{array}$$ where $$\begin{aligned} \label{v_therm} v_{pe}&=&6.74\times10^5\sqrt{T_e}\end{aligned}$$ is the thermal velocity of the plasma electrons (in cm s$^{-1}$, with $T_e$ in K), $\Theta(x)$ is the step function, and using the electron charge $e$ and mass $m$ for $Q$ and $M$, we find $$\label{q_w_PLW} q(\omega)=\frac{\pi^2A_\nu}{2\nu} \frac{e^2 v}{m^2c^4}\left(\frac{v}{\omega}\right)^\nu \left(1-\left(\frac{\omega v_{pe}}{\omega_{pe} v}\right)^\nu\right) \Theta\left(\frac{\omega_{pe}}{v_{pe}}-\frac{\omega}{v}\right), $$ so the DSR spectrum produced by accelerated electrons reads $$\begin{aligned} \label{DSR_w_eps} I_{\omega}&=& \frac{8e^2}{3\pi c}\sqrt{\varepsilon}\cdot q(\omega). $$ Now we calculate the DSR power from $N$ electrons with the spectrum described by Eq. (\[el\_spectrum\_st\_acc\]) $$\label{DSR_ensemble} P_\omega=\int\limits_{E_0}^\infty I_\omega N(E)\,dE.$$ In fact, we are interested in the radio flux observed at the Earth. To transform this radiation power into the flux observed at the Earth, we change the variable $\omega=2\pi f$, so that $I_f=2\pi I_\omega$. Then, the flux is $$\label{DSR_flux} F_f=\frac{2\pi P_\omega V}{4\pi R_{au}^2}=\frac{P_\omega L^3}{2R_{au}^2}\cdot10^{19} \quad {\rm sfu},$$ where $R_{au}=1$ au$=1.49\times10^{13}$ cm is the distance from the Earth to the Sun.
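The closed form (\[q\_w\_PLW\]) can be cross-checked by carrying out the final $q$-integral numerically (after the angular $\delta$-function integration, the integrand is $2\pi A_\nu/(v q^{\nu+1})$ on $\omega/v \le q \le \omega_{pe}/v_{pe}$). A minimal sketch with assumed coronal parameters; the normalization $A_\nu$ cancels in the comparison:

```python
import math

e, me = 4.803e-10, 9.109e-28                  # electron charge/mass, cgs
ne, Te = 1e10, 1e6                            # assumed site parameters
wpe = math.sqrt(4*math.pi*e*e*ne/me)          # plasma frequency, ~5.6e9 rad/s
vpe = 6.74e5*math.sqrt(Te)                    # Eq. (v_therm), ~6.7e8 cm/s
nu = 5.0/3.0                                  # Kolmogorov-like turbulence index
Anu = 1.0                                     # arbitrary: drops out of the ratio

v, w = 5e9, 1e10                              # electron speed and frequency,
                                              # chosen so that v > w*vpe/wpe

# Closed-form result of the displayed integral
a, b = w/v, wpe/vpe                           # integration limits in q
closed = (2*math.pi/nu) * (Anu/v) * (v/w)**nu * (1 - (w*vpe/(wpe*v))**nu)

# Numeric: trapezoid rule on a log-spaced q grid over [a, b]
n = 20000
qs = [a*(b/a)**(i/(n-1)) for i in range(n)]
fs = [2*math.pi*Anu/(v*q**(nu+1)) for q in qs]
numeric = sum(0.5*(fs[i]+fs[i+1])*(qs[i+1]-qs[i]) for i in range(n-1))
# numeric agrees with closed to well below the 0.1% level
```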
To evaluate the DSR from the acceleration region of a solar flare, we adopt some typically assumed parameters of the acceleration site as follows: (a) the size of the site $L\sim 10^8$ cm; (b) the thermal electron number density $n_e\sim 10^{10}$ cm$^{-3}$; (c) the electron temperature $T_e\sim 10^6$ K; (d) the energy density of the magnetic turbulence $W_{st}=\frac{\langle B_{st}^2\rangle}{8\pi}\sim10^3$ erg/cm$^{3}$. Accordingly, the total energy, $W_{tot}\sim W_{st}L^3 \sim 10^{27}$ ergs, corresponds to a very modest solar flare. We assume that the power-law tail of the accelerated electrons grows from $E_0=(4-6)\,kT_e$ with $n(>E_0)$ specified by the matching condition (\[el\_matching\_st\_acc\]); as the acceleration proceeds, the power-law index $\delta$ decreases from 8 to 3 while the break energy $E_{br}$ increases from 50 to 500 keV. Figure \[FIG03\] presents the sequence of calculated DSR spectra for 11 different $\delta$ values ranging from 7 to 3; the spectra are calculated for three different $\nu$ values and for two different $E_0$ values. The blue curves indicate larger $\delta$, while the red ones show smaller $\delta$. Then, Figure \[FIG04\](a) presents the DSR spectra for three different temperature values, $T= (1,3,10)\times 10^6$ K. In addition to spectrum shapes, light curves of the radiation at different frequencies can be informative. To estimate the light curve behavior we adopt a soft-hard-soft spectral evolution, as follows from the theory of spectral evolution for stochastic acceleration [@Byk_Fl_2009] and as is typical of impulsive flares, with the electron energy spectral index $\delta(t)$ changing from 8 to 3 and then back to 8, while the break energy $E_{br}$ increases all the way from 50 keV to $\sim 1$ MeV. Figure \[FIG04\](b) shows the corresponding model light curves at a few frequencies around the spectrum peak.
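The adopted numbers imply a few derived quantities that set the DSR frequency range; a quick sanity check (our own; the thermal velocity is Eq. (\[v\_therm\]), and the plasma frequency is the standard $\omega_{pe}=\sqrt{4\pi n_e e^2/m}$):

```python
import math

L = 1.0e8        # source size, cm
n_e = 1.0e10     # thermal electron number density, cm^-3
T_e = 1.0e6      # electron temperature, K
W_st = 1.0e3     # turbulence energy density, erg/cm^3

W_tot = W_st * L ** 3               # total turbulent energy ~ 1e27 erg
v_pe = 6.74e5 * math.sqrt(T_e)      # thermal velocity, cm/s (Eq. v_therm)

# standard plasma frequency; ~0.9 GHz places the DSR in the
# decimetric/microwave range discussed below
e, m = 4.803e-10, 9.109e-28
f_pe = math.sqrt(4.0 * math.pi * n_e * e ** 2 / m) / (2.0 * math.pi)  # Hz
```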
One can note from the figure that higher frequency light curves have a somewhat shorter duration, although they peak at the same time; thus, no appreciable time delay between the light curves is expected.

We note that the DSR spectra are very narrowband, much narrower than typical gyrosynchrotron spectra. The high-frequency slope of the DSR spectrum can easily be evaluated from Eqns. (\[DSR\_flux\]), (\[DSR\_ensemble\]), (\[q\_w\_PLW\]), and (\[DSR\_w\_eps\]): $F_f \propto f^{3-2\delta}$. Thus, the DSR high-frequency spectral index varies from 11 to 3 as the spectral index of the accelerated electrons changes from 7 to 3, while the GS spectral index would vary from 5 to 1 for the same range of $\delta$ variation. The peak flux of the DSR is highly sensitive to the turbulence spectral index (specified eventually by the MHD cascading law), while it is less sensitive to the plasma temperature and electron spectral index. The peak flux can be very large (up to a few hundred sfu), which makes it easily observable even by full-sun radio instruments. If so, the corresponding radio emission must have been widely observed by available radio spectrometers working in the decimetric and/or microwave range. Indeed, there is a class of radio bursts whose properties resemble the DSR properties described here: the narrowband decimetric and microwave continuum bursts (including type IVdm), which, we suggest, may contain candidate bursts for the radio emission from the regions of stochastic acceleration in solar flares. Although this interpretation is tempting, spatially resolved radio observations of the DSR will be needed to confirm it, to locate the region of stochastic acceleration, and to study it in detail. Another plausible candidate for radio emission from stochastic acceleration episodes is the so-called transient brightenings, whose radio spectra are often narrowband [@Gary_etal_1997].
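The quoted range of spectral indices is a one-line consequence of $F_f \propto f^{3-2\delta}$; a minimal check of the arithmetic:

```python
def dsr_hf_index(delta):
    """Magnitude of the DSR high-frequency spectral index, from F_f ∝ f^(3-2δ)."""
    return 2 * delta - 3

# as the electron spectrum hardens from delta = 7 to delta = 3,
# the DSR high-frequency index varies from 11 to 3
indices = [dsr_hf_index(d) for d in (7, 6, 5, 4, 3)]
```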
Gyrosynchrotron Radio Emission from a Collapsing Trap
=====================================================

Let us consider another model, a collapsing magnetic trap, which can efficiently accelerate charged particles. In contrast to the stochastic acceleration models, no turbulence spectrum is needed to accelerate particles in the collapsing trap model; however, some spectrum of 'pre-accelerated' particles is required, since otherwise the collapse of the trap will only give rise to plasma heating without nonthermal particle generation. Therefore, we assume that just before collapsing the trap contained both a thermal plasma and a nonthermal electron population with a power-law spectrum. To be specific, we adopt the following initial conditions: (a) the magnetic field strength $B_0=30$ G; (b) the minimum and maximum energies of the power-law spectrum $E_{\rm min}=0.01$ MeV and $E_{\rm max}=1$ MeV; (c) the thermal electron density $n_{th}=10^9$ cm$^{-3}$ and the nonthermal electron density $n_{rl}=10^7$ cm$^{-3}$; (d) the source size $L_0=10"$. During the trap contraction, the number of accelerated electrons evolves. For our modeling we adopt a solution obtained by @Bogachev_Somov_2005, see Figure \[FIG05\], which takes into account the betatron and Fermi acceleration and the particle escape from the trap via the loss cone: $$N=N_0\frac{l\sqrt{b_m-b}}{\sqrt{1+(b_m-1)l^2}},$$ where $$\begin{aligned} b=b(t)&=&B(t)/B_0, \\ l=l(t)&=&L(t)/L_0,\end{aligned}$$ so that $b(t)$ increases from $b(0)=1$ to $b_m=B_m/B_0$, where $B_m$ is the largest magnetic field value at the end of the trap collapse, and $l(t)$ decreases from $l(0)=1$ to a very low value, say, $l(t_{\rm max})=0.1$. For the sake of simplicity we assume a self-similar contraction of the collapsing magnetic trap. In this case, the evolution of all parameters of the trap is uniquely defined by their initial values and the dimensionless source scale $l(t)$.
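The quoted Bogachev–Somov solution is easy to evaluate along a contraction track. In the sketch below, the linear time profiles of $b(t)$ and $l(t)$ and the value $b_m=10$ are our own illustrative assumptions; the paper itself only fixes the endpoints $l(0)=1$ and $l(t_{\rm max})=0.1$:

```python
import math

def trapped_number(l, b, b_m, N0=1.0):
    """Electrons remaining in the trap (Bogachev & Somov 2005 solution quoted above)."""
    if b >= b_m:
        return 0.0
    return N0 * l * math.sqrt(b_m - b) / math.sqrt(1.0 + (b_m - 1.0) * l ** 2)

# illustrative contraction track: b rises 1 -> b_m while l falls 1 -> 0.1
# over the 10 s collapse time adopted in the text
b_m, t_max, steps = 10.0, 10.0, 100
track = []
for i in range(steps + 1):
    t = t_max * i / steps
    b = 1.0 + (b_m - 1.0) * t / t_max
    l = 1.0 - 0.9 * t / t_max
    track.append(trapped_number(l, b, b_m))
```

As $b\to b_m$ the trapped population goes to zero: all remaining particles are eventually lost through the loss cone.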
Thus, for any given contraction law $l(t)$, we can easily calculate the corresponding time history of all other relevant source parameters, such as the magnetic field, the thermal electron number density, the source volume and projected area, and the evolution of the nonthermal electron spectrum [@Bogachev_Somov_2005; @Bogachev_Somov_2007]. For our modeling we assume that the trap volume $V$ decreases linearly with time during the trap contraction from $10"^3$ to $1"^3$, and we adopt 10 s for the trap collapse time, which is a few Alfvén times ($\tau_a\sim L/v_a$) for the trap parameters used.

Thus, we can straightforwardly calculate the GS spectra at different time frames and the radio light curves at different frequencies within the adopted collapsing trap model. Figure \[FIG05\] displays the GS spectra at different moments of the trap contraction. In agreement with a statement made in the previous section, at the initial phase of acceleration the GS flux is small (less than 1 sfu) and can only be recorded by high-sensitivity spatially resolved observations. However, during the trap contraction the magnetic field increases and the fast electrons are accelerated, which together lead to a significant increase of the peak flux and the peak frequency of the radio emission produced at the acceleration site; thus, the radio emission becomes easily detectable by available radio instruments soon after the trap starts to contract.

Figure \[FIG07\] presents the light curves of the emission at a number of fixed frequencies: 5, 10, 17, and 34 GHz. Within the adopted model the peak flux increases with frequency, see Figures \[FIG05\] and \[FIG07\]; in fact, this increase may become less pronounced if the Coulomb losses in the collapsing trap are taken into account [@Bogachev_Somov_2009].
A distinctive feature of the light curves, in contrast to those of the DSR produced at the stochastic acceleration sites, is a noticeable time delay: the higher frequency light curves are delayed relative to the lower frequency ones; this time delay will be present even when the Coulomb losses [@Bogachev_Somov_2009] are included. A time delay in the sense predicted by our modeling is frequently observed in solar flares, in particular in those with quasiperiodic pulsations [@Fl_etal_2008]. Observationally, however, the GS emission from a collapsing trap can be contaminated by GS emission from trapped electrons produced by previous acceleration episodes, so an unambiguous detection of the GS emission from the collapsing trap itself requires an additional accurate analysis to separate the contributions, which has not yet been performed.

Discussion
==========

There are many models in which electrons can be accelerated to nonthermal energies. Some mechanisms accelerate a tiny fraction of the electrons, which can only be observed via coherent radio emissions (e.g., type III bursts produced by electron beams, or the accompanying metric spikes), while others produce more powerful acceleration, sufficient to generate observable incoherent radio emission from either the acceleration site itself or from a remote 'radiation site'. The idea of using radio observations to probe energy release/acceleration regions in flares has been around for a while [e.g., @BBG]; however, these studies focused mainly on *coherent* decimeter radio bursts. For example, [@Benz_1986] argued that decimeter narrowband millisecond radio spike clusters can be a signature of electron acceleration in flares, and, if so, the flare energy release must be highly fragmented, with each spike indicating a single energy release/acceleration episode.
However, it has been found [@Aschwanden_Guedel_1992] that the radio spikes are frequently delayed with respect to the associated hard X-ray emission, implying that the spikes are a secondary phenomenon associated with flares. Moreover, spatially resolved observations [@Benz_etal_2002; @Battaglia_Benz_2009] show that the spike sources are typically far away from the main flare locations. Even though higher-frequency microwave radio spikes [@spikes; @Rozh_etal_2008] can be produced at or around the main flare location [@Gary_2009], it seems doubtful that these coherent radio bursts originate from elementary acceleration episodes [@Fl_Meln_1998; @spikes; @Rozh_etal_2008; @Battaglia_Benz_2009]. In contrast, in this Letter we have calculated the *incoherent* radio emission from the acceleration region of a solar flare within two distinct acceleration models—stochastic acceleration by cascading MHD turbulence and regular (betatron and Fermi) acceleration in a collapsing trap. We have demonstrated that the radio emissions produced within these two competing acceleration models are distinctly different, which potentially allows one to distinguish between them by radio observations. In particular, we have found that the stochastic acceleration process is accompanied by a very narrowband DSR continuum radio emission, whose predicted properties are generally consistent with the observed properties of narrowband microwave or decimetric (type IVdm) continuum bursts; thus, we suggest that some of those bursts may be produced at the sites of stochastic acceleration.

This work was supported in part by NSF grants AST-0607544, ATM-0707319, and ATM-0745744, NASA grants NNG06GJ40G, NNX0-7AH78G, and NNX0-8AQ90G to the New Jersey Institute of Technology, and by the Russian Foundation for Basic Research, grants 08-02-92228, 09-02-00226, and 09-02-00624. We have made use of NASA's Astrophysics Data System Abstract Service.

Asai, A., Nakajima, H., Shimojo, M., White, S. M., Hudson, H. S., & Lin, R. P. 2006, , 58, L1

Aschwanden, M. J. 2002, Particle Acceleration and Kinematics in Solar Flares (Dordrecht: Kluwer Academic Publishers; reprinted from Space Science Reviews, Vol. 101, Nos. 1-2)

Aschwanden, M. J., & Güdel, M. 1992, , 401, 736

Bastian, T. S., Benz, A. O., & Gary, D. E. 1998, , 36, 131

Battaglia, M., & Benz, A. O. 2009, , 499, L33

Battaglia, M., Fletcher, L., & Benz, A. O. 2009, , 498, 891

Benz, A. O. 1986, , 104, 99

Benz, A. O., Saint-Hilaire, P., & Vilmer, N. 2002, , 383, 678

Bogachev, S. A., & Somov, B. V. 2005, Astronomy Letters, 31, 537

—. 2007, Astronomy Letters, 33, 54

—. 2009, Astronomy Letters, 35, 57

Bykov, A. M., & Fleishman, G. D. 2009, , 692, L45

Fleishman, G. D. 2006, , 638, 348

Fleishman, G. D., Bastian, T. S., & Gary, D. E. 2008, , 684, 1433

Fleishman, G. D., Gary, D. E., & Nita, G. M. 2003, , 593, 571

Fleishman, G. D., & Melnikov, V. F. 1998, Uspekhi Fizicheskikh Nauk, 41, 1157

Gary, D. E., & Naqvi, M. 2009, AAS Bull., 41, 851

Gary, D. E., Hartl, M. D., & Shimizu, T. 1997, , 477, 958

Hamilton, R. J., & Petrosian, V. 1992, , 398, 350

Holman, G. D. 1985, , 293, 584

Holman, G. D., & Benka, S. G. 1992, , 400, L79

Karlický, M., & Kosugi, T. 2004, , 419, 1159

Landau, L. D., & Lifshitz, E. M. 1971, The Classical Theory of Fields

Litvinenko, Y. E. 1996, , 462, 997

—. 2000, , 194, 327

Litvinenko, Y. E. 2003, in Lecture Notes in Physics, Vol. 612, Energy Conversion and Particle Acceleration in the Solar Corona, ed. L. Klein (Berlin: Springer), 213-229

Miller, J. A. 1997, , 491, 939

Miller, J. A., Larosa, T. N., & Moore, R. L. 1996, , 461, 445

Nindos, A., Aurass, H., Klein, K.-L., & Trottet, G. 2008, , 253, 3

Park, B. T., Petrosian, V., & Schwartz, R. A. 1997, , 489, 358

Petrosian, V., McTiernan, J. M., & Marschhauser, H. 1994, , 434, 747

Pryadko, J. M., & Petrosian, V. 1998, , 495, 377

Rozhansky, I. V., Fleishman, G. D., & Huang, G.-L. 2008, , 681, 1688

Somov, B. V., & Bogachev, S. A. 2003, Astronomy Letters, 29, 621

Somov, B. V., & Kosugi, T. 1997, , 485, 859

Toptygin, I. N. 1985, Cosmic Rays in Interplanetary Magnetic Fields

Tsuneta, S. 1985, , 290, 353

Vilmer, N., & MacKinnon, A. L. 2003, in Lecture Notes in Physics, Vol. 612, Energy Conversion and Particle Acceleration in the Solar Corona, ed. L. Klein (Berlin: Springer), 127-160

[^1]: We note that in the case of big flares, large numbers of GS-producing electrons can already be generated during a preflare phase [@Asai_etal_2006]. In such cases we have in mind an even earlier stage of acceleration [e.g., @Battaglia_etal_2009], when the 100 keV electrons are not yet numerous.
---
author:
- Pham Tien Lam
- Hiori Kino
- Kiyoyuki Terakura
- Takashi Miyake
- Ichigaku Takigawa
- Koji Tsuda
- Dam Hieu Chi
title: Machine learning reveals orbital interaction in crystalline materials
---

Introduction {#introduction .unnumbered}
============

Recently, an increasing volume of available experimental and quantum-computational material data, along with the development of machine-learning techniques, has opened up new opportunities to develop methods for accelerating the discovery of new materials and physical chemistry phenomena. By using machine-learning algorithms, hidden information about materials, including patterns, features, chemical laws, and physical rules, can be automatically discovered from both first-principles-calculated data and experimental data [@data_mining_materials_science_PRB2012; @identifying_zeolite_framework_JPC_C2012; @find_missing_ternary_oxide_chem_mater_2012; @find_DFT_PRL_2012; @Materials_cartography_chem_mater_2015; @PhysRevLett_big_data_materials_descriptors_sheffer; @JCP_parallel_lasso_SMM; @JCP_LMM]. It is common knowledge that, in a material dataset, the most important information for identifying a material is its structure. The structure of a material is usually described using a set of atoms with their coordinates and, for crystalline systems, the periodic unit-cell vectors. From the viewpoint of data science, material data in this primitive representation can be categorized as unstructured data, and the only mathematical basis for such data is the algebra of sets. Therefore, advanced quantitative machine-learning algorithms can hardly be applied directly to conventional material data, owing to the limited algebra of the primitive data representation.
In order to apply well-established machine-learning methods, including predictive and descriptive learning, it is necessary to convert the primitive representation into vectors or matrices such that comparisons and calculations using the new representation reflect the nature of the materials and the actuating mechanisms of chemical and physical phenomena. Various methods for encoding materials have been developed in the field of materials informatics. Behler and coworkers [@BehlerPRL; @BehlerJCP; @Nongnuch_Artrith_nanoparticles; @Eshet; @Eshet1; @Artrith; @Artrith1] utilized atom-distribution-based symmetry functions to represent the local chemical environment of atoms and employed a multilayer perceptron to map this representation to the associated atomic energy. The arrangement of structural fragments has also been used to encode materials for predicting the physical properties of molecular and crystalline systems [@Pilania; @Materials_cartography_chem_mater_2015]. Isayev and coworkers used band-structure and density-of-states (DOS) fingerprint vectors as a representation of materials to visualize the material space [@Materials_cartography_chem_mater_2015]. Rupp and coworkers developed a descriptor known as the Coulomb matrix (CM) for the prediction of atomization energies and formation energies [@Rupps; @Faber_Coulomb_matrix; @Rupp_tutorial]. Although the CM is very successful in predicting the properties of molecules, its performance with regard to the formation energies of crystal systems is relatively poor [@Faber_Coulomb_matrix]. These representations do not include explicit information about the atomic orbitals or the nature of chemical bonding in materials, which is necessary for determining the electronic structure and the resulting physical properties, so the learning results have low interpretability in the language of physical chemistry.
To study materials using machine-learning approaches, both the accuracy and the interpretability of the learnt models are important [@Merckt95aboutbreaking_accuracy_vs_interpretability]. To render data-driven approaches meaningful and useful for materials science studies, it is necessary to design material representations with which the results derived using machine-learning methods can be interpreted in the language of physical chemistry. Further, structural information on the materials should be included explicitly in the learning results to support materials design processes. In this paper, with emphasis on the interpretability of the derived learning results, we propose a novel representation of materials that utilizes domain knowledge in encoding them. It is well established in fundamental chemistry that certain important aspects of the electronic structure can be deduced from a simple description of the nearest valence electrons around an atom in a molecule or crystal system; e.g., the Lewis theory provides powerful tools for studying the structure of molecules [@chemistry_chemcial_reactivity]. The ligand field theory is another example of a theory developed based on this intuition, and several fruitful results have been obtained using it [@molecular_orbital_of_transition_metal_complexes]. In this work, we utilize this domain knowledge to propose a novel representation of materials, named the orbital-field matrix (OFM), based on the coordination of valence orbitals (electrons). A material or a local structure is encoded by counting the valence orbitals of the nearest neighbors. We focus on magnetic materials based on rare earth–transition metal (RT) alloys and on RT alloys including a light element X, which may be B, C, N, or O (RTX). To verify the applicability of the proposed material representation, we first examine decision trees for predicting the magnetic moments of Mn, Fe, Co, and Ni in RT alloys.
The decision trees learnt from the RT alloy data show that the coordination numbers of the occupied $d$ orbitals of transition metals and the occupied $f$ orbitals of rare-earth metals play an important role in determining the local magnetic moment of the transition metal sites. The obtained results confirm the interpretability of our OFM representation in terms of structural and physical chemistry. Kernel ridge regression (KRR) analyses using standard techniques and similarity measures are carried out to learn prediction models for the local magnetic moments and formation energies of the alloy materials. Our computational experiments show that the OFM representation can accurately reproduce the DFT-calculated local magnetic moments of transition-metal sites in RT alloys, the formation energies of crystalline systems, and the atomization energies of molecular systems. The high prediction accuracy confirms the practicability of our OFM representation.

Methodology {#methodology .unnumbered}
===========

Representation of materials {#representation-of-materials .unnumbered}
---------------------------

For designing the representation of a material, we start with the representation of an atom as a building block of the material. We utilize the standard notation for electron configurations to develop the representation of an atom; e.g., the electron configurations of Na and Cl are \[Ne\]$3s^1$ and \[Ne\]$3s^23p^5$, respectively. In order to convert this standard notation into a numerical vector, we borrow the idea of the one-hot vector from the field of natural language processing, in which a word is represented by a bit vector whose dimension equals the number of words in a dictionary. All elements of the vector are 0, with the exception of a single element used uniquely to identify the word.
The representation of an atom is then converted from the standard notation into a one-hot-vector $\vec{O}_{atom}$ by using a dictionary of the valence subshell orbitals: $D = \{s^1, s^2, p^1, p^2, ..., p^6, d^1, d^2, ..., d^{10}, f^1, f^2, ..., f^{14}\}$ (e.g., $d^5$ indicates the electron configuration in which the valence $d$ orbital holds 5 electrons), which consists of 32 elements (Fig \[fig:NaCl-representation-1\]). ![\[fig:NaCl-representation-1\] OFM representation for a Na atom in an octahedral site surrounded by 6 Cl atoms: atomic one-hot-vector for Na (middle), representation for the 6 Cl atoms surrounding the Na atom (left), and representation for the Na atom surrounded by 6 Cl atoms (right). ](material-encode-NaCl){width="80.00000%"} Next, we design the representation of a local chemical environment by considering the sum of the weighted vector representations of all atoms in the environment: $$\vec{O}_{env} = \sum_k \vec{O}_k w_k,$$ where $\vec{O}_k$ is the representation vector of atom $k$ and $w_k$ is the weight of this atom, which measures the contribution of the atom. 
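A minimal sketch of this construction (our own illustrative code: the subshell label strings are our encoding of the dictionary $D$, and the equal weights $w_k=1$ for the octahedral NaCl example assume $\theta_k=\theta_{max}$ for all six equivalent Cl neighbors):

```python
import numpy as np

# Dictionary D of 32 valence-subshell labels: s^1..s^2, p^1..p^6,
# d^1..d^10, f^1..f^14
D = ([f"s{i}" for i in (1, 2)] + [f"p{i}" for i in range(1, 7)]
     + [f"d{i}" for i in range(1, 11)] + [f"f{i}" for i in range(1, 15)])
INDEX = {label: i for i, label in enumerate(D)}

def one_hot(subshells):
    """Atomic vector O_atom: a 1 for each occupied valence-subshell type."""
    v = np.zeros(len(D))
    for s in subshells:
        v[INDEX[s]] = 1.0
    return v

O_Na = one_hot(["s1"])          # Na: [Ne]3s^1
O_Cl = one_hot(["s2", "p5"])    # Cl: [Ne]3s^2 3p^5

# Environment of a Na atom in an octahedral site: 6 Cl neighbors,
# each entering the sum O_env = sum_k w_k O_k with weight w_k = 1
O_env = sum(1.0 * O_Cl for _ in range(6))
```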
An atom at site $p$ in a chemical environment can be represented using the OFM as follows: $$\begin{aligned} \label{eq:eq2} X^{(p)} = &\sum_{k \in n_p} \vec{O}_k^T \times \vec{O}^{(p)}_{center} \times w_k, \notag\\ X_{ij}^{(p)} = &\sum_{k \in n_p }o^k_j o^{(p)}_i \frac{\theta_k^{(p)}}{\theta_{max}^{(p)}}\end{aligned}$$ where $i, j \in \{s^1, s^2, p^1, ..., p^6, d^1, ..., d^{10}, f^1, ..., f^{14}\}$, the set of electron configurations of the valence orbitals; $k$ is the index of the nearest-neighbor atoms; $n_p$ is the number of nearest-neighbor atoms surrounding site $p$; $w_k$ is a weight that represents the contribution of atom $k$ to the coordination number of the center atom $p$; and $o^k_j$ and $o^{(p)}_i$ are elements of the one-hot vectors of the $k^{th}$ neighboring atom and the center atom $p$, respectively ($o^u_v$ equals 1 if the valence orbitals of the atom at site $u$ have electron configuration of type $v$, and 0 otherwise). The weight, $w_k = \theta_k^{(p)}/\theta_{max}^{(p)}$, is determined from a scheme employing the Voronoi polyhedron proposed by O'Keeffe [@Okeeffe_coordination_number], as implemented in the pymatgen code [@pymatgen]. In this expression, $\theta_k^{(p)}$ is the solid angle subtended by the face of the Voronoi polyhedron separating atom $k$ and atom $p$, and $\theta_{max}^{(p)}$ is the maximum among all the solid angles determined by the faces of the Voronoi polyhedron separating atom $p$ and its nearest-neighbor atoms. Additionally, in order to incorporate information on the size of the valence orbitals, the distance $r_{pk}$ between the center atom $p$ and the neighboring atom $k$ should be included in the weight $w_k$. We propose the following form for the calculation of the OFM elements: $$\label{eq:3} X_{ij}^{(p)} = \sum_{k \in n_p }o^k_j o^{(p)}_i \frac{\theta_k^{(p)}}{\theta_{max}^{(p)}} w(r_{pk}),$$ where $w(r_{pk})$ is a function representing the contribution of the distance to the weight.
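As a concrete check of Eq. (\[eq:eq2\]), the following sketch (our own; the label strings are our encoding, and the unit weights again assume the six equivalent Cl neighbors of a regular octahedron, so that $\theta_k^{(p)}/\theta_{max}^{(p)}=1$) builds $X^{(p)}$ for the NaCl example of Fig. \[fig:NaCl-representation-1\]:

```python
import numpy as np

# 32 valence-subshell labels: s^1..2, p^1..6, d^1..10, f^1..14
D = ([f"s{i}" for i in (1, 2)] + [f"p{i}" for i in range(1, 7)]
     + [f"d{i}" for i in range(1, 11)] + [f"f{i}" for i in range(1, 15)])
idx = {s: i for i, s in enumerate(D)}

def one_hot(subshells):
    v = np.zeros(len(D))
    for s in subshells:
        v[idx[s]] = 1.0
    return v

def ofm(center_subshells, neighbors):
    """Orbital-field matrix X^(p) of Eq. (2): sum_k w_k * (center outer neighbor).
    Row index i runs over the center's configuration, column index j over
    the neighbor's configuration."""
    X = np.zeros((len(D), len(D)))
    o_center = one_hot(center_subshells)
    for o_k, w_k in neighbors:
        X += w_k * np.outer(o_center, o_k)
    return X

O_Cl = one_hot(["s2", "p5"])
# Na in an octahedral site: 6 equivalent Cl neighbors, w_k = 1
X_Na = ofm(["s1"], [(O_Cl, 1.0)] * 6)
```

The only nonzero row is the one labeled $s^1$ (the Na configuration), with entries 6 in the $s^2$ and $p^5$ columns, i.e., the coordination numbers of the corresponding Cl subshells.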
In this work, we use the inverse of the distance as the distance-dependent weight function: $w(r_{pk}) = 1 / r_{pk}$. Composing the descriptor for a structure (a molecule or a crystal system) from its local-structure representations requires careful consideration so that as much information as possible is included. In this work, for the atomization energy, we simply take the sum of the descriptors of the local structures as the descriptor for the entire structure. For the average formation energy (per atom), the descriptor for the entire structure is composed by averaging the descriptors of the local structures.

![image](Mn-DecisionTree_1)
![image](Fe-DecisionTree_1)
![image](Co-DecisionTree_1)
![image](Ni-DecisionTree_1)

Results and discussion {#results-and-discussion .unnumbered}
======================

Prediction of local atomic properties {#prediction-of-local-atomic-properties .unnumbered}
-------------------------------------

We now examine how the OFM can be employed to predict the local atomic properties of materials. In this work, we focus on the local magnetic moments of transition metals in RT alloys, whose dataset includes 658 structures collected from the Materials Project database [@MaterialsProject; @materialsAPI]. We select the structures by combining transition metals from $\{$Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Nb, Mo, Tc, Ru, Rh, Pd, Ag, Cd, Hf, Ta, W, Re, Os, Ir, Pt, Au$\}$ with rare-earth metals from $\{$La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu$\}$. Since the local magnetic moment of a transition metal site is determined by the number of unpaired electrons in the $d$ orbitals, our description of the local structure in terms of the coordination of valence electrons is expected to include a significant amount of information for predicting the local magnetic moment.
We first examine which elements in the OFM determine the local magnetic moments of the Mn, Fe, Co, and Ni sites in the RT dataset through decision tree regression analyses. As explained above, the representation of atoms and their surrounding environments is derived by converting the standard notation into a one-hot vector, using a dictionary of the valence subshell orbitals (Fig \[fig:NaCl-representation-1\], Equation 2). The $(d^5,d^5)$, $(d^6,d^6)$, $(d^7,d^7)$, and $(d^8,d^8)$ elements in the OFM correspond, respectively, to the coordination numbers of Mn atoms surrounding a Mn site, Fe atoms surrounding an Fe site, Co atoms surrounding a Co site, and Ni atoms surrounding a Ni site. All the atoms in an RT alloy have valence subshell $s$ orbitals occupied by 2 electrons; therefore, the $(d^5,s^2)$, $(d^6,s^2)$, $(d^7,s^2)$, and $(d^8,s^2)$ elements correspond to the total coordination numbers of a Mn site, an Fe site, a Co site, and a Ni site, respectively. The $(d^5,f^n)$, $(d^6,f^n)$, $(d^7,f^n)$, and $(d^8,f^n)$ elements correspond to the coordination numbers of rare-earth atoms surrounding a Mn site, an Fe site, a Co site, and a Ni site, respectively. Moreover, $(d^n,d^1)$ corresponds to the coordination number of either La, Ce, Gd, or Lu (among which the valence subshell $d$ orbitals are occupied by 1 electron) surrounding a transition metal site. The decision tree regressions for the local magnetic moments of the Mn, Fe, Co, and Ni sites derived from the data are summarized in Figure 2. From these results, it is clear that the $(d^n,d^n)$ elements, namely the coordination number of a transition metal site surrounded by transition metal atoms of the same kind, are important for the local magnetic moments of the Fe, Co, and Ni sites. It is interesting to note that, to obtain a high value of the magnetic moment, the $(d^n,d^n)$ elements of the OFM should be in a specific range.
For instance, Fe sites tend to have a magnetic moment less than $2.2 \mu_B$ when the $(d^6,d^6)$ element is less than 6.6 or greater than 9.15. This implies that an Fe atom appears to have a smaller magnetic moment when surrounded by fewer than 7 or more than 9 Fe atoms. Further, the magnetic moment of Fe sites may be greater than $2.5 \mu_B$ when the $(d^6,d^6)$ element is greater than 6.6 but the $(d^6,s^2)$ element, namely the total coordination number including the contribution of rare-earth metal atoms, is less than 8.73. In contrast, Ni sites tend to have a small magnetic moment (less than $0.2 \mu_B$) when the $(d^8,d^8)$ element is less than 7.22, but a large magnetic moment (greater than $0.4 \mu_B$) can be obtained when the $(d^8,d^8)$ element is greater than 8.25. This implies that a Ni atom appears to have a large magnetic moment when surrounded by more than 9 Ni atoms. The magnetic moment of Ni sites may be greater than $0.4 \mu_B$ when the $(d^8,d^8)$ element is greater than 7.22 but the $(d^8,s^2)$ element, namely the total coordination number including the contribution of rare-earth metal atoms, is less than 9.15. For Co, the existence of rare-earth elements plays a significant role in determining the local magnetic moment, since the $(d^7,f^{12})$ and $(d^7,d^1)$ elements appear as nodes in the decision tree. This means that a suitably small amount of a rare-earth metal in which the valence subshell $d$ orbitals are occupied by 1 electron (La, Ce, Gd, Lu) may effectively increase the local magnetic moment of Co sites. The tree for Mn sites appears to be more complicated than those for the other transition metals, which can be attributed to the complicated magnetic properties of the $d^5$ configuration of Mn.
The obtained decision trees clearly suggest that the coordination numbers of the occupied $d$ orbitals of transition metals and the occupied $f$ orbitals of rare-earth metals play an important role in determining the local magnetic moments of the transition metal sites. This result agrees with the fact that in RT compounds there are three types of interactions: the magnetic interaction between transition-metal (T) atoms in the T sublattices (T–T interaction), the magnetic interaction between rare-earth (R) atoms and the T sublattices (R–T interaction), and the magnetic interaction between R atoms in the R sublattices (R–R interaction). The T–T interaction dominates in RT compounds because the delocalization and spatial extent of the $3d$ electron wave functions of T atoms are much more pronounced than those of the $4f$ electrons. The R–T interaction is weak in comparison to the T–T interaction; however, it plays an important role in determining the magnetic structure of RT compounds. This confirms the interpretability, in terms of structural and physical chemistry, of the learning results obtained from data represented by the OFM descriptors. In the next step, we examine how well the local magnetic moment can be represented by the OFM descriptors, based on the premise that materials with higher similarity (as estimated by the descriptors) should possess similar local magnetic moments. For this purpose, we employ a simple nearest-neighbor regression method to predict the local magnetic moments, and the cross-validated RMSE is used to measure the performance of our descriptors. In nearest-neighbor regression, a property of a data point is deduced from the properties of the nearest-neighbor points in the training data. In this work, we employ the nearest-neighbor regressor implemented in the scikit-learn package [@scikit-learn]. The number of nearest neighbors is fixed at 5, and the nearest neighbors are determined by a brute-force search.
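The nearest-neighbor scheme just described is compact enough to sketch without scikit-learn. The following numpy-only version (our own minimal re-implementation of distance-weighted $k$-NN regression with a brute-force search; the toy data in the usage below are illustrative) mirrors those settings:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """Distance-weighted k-nearest-neighbor regression (brute-force search)."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nn = np.argsort(d)[:k]                    # indices of the k nearest
        if d[nn[0]] == 0.0:                       # exact match in training set
            preds.append(y_train[nn[0]])
            continue
        w = 1.0 / d[nn]                           # inverse-distance weights
        preds.append(np.sum(w * y_train[nn]) / np.sum(w))
    return np.array(preds)
```

With `k=5` and descriptor vectors in place of the toy inputs, this reproduces the configuration described in the text; other distance measures would replace the Euclidean norm.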
The prediction is weighted by the distance to the nearest neighbors.

\[tab:prediction\_local\_magnetic\_moment\]

  Distance    $d_{eucl}$   $d_{man}$   $d_{cos}$   $d_{bar}$   $d_{can}$   $d_{cor}$
  ---------- ------------ ----------- ----------- ----------- ----------- -----------
  RMSE        0.26         0.21        0.23        0.21        0.21        0.23
  $R^2$       0.86         0.90        0.89        0.90        0.90        0.90

  : Cross-validation RMSE ($\mu_B$) and the coefficient of determination $R^2$ in the prediction of the local magnetic moments obtained by nearest-neighbor regression with selected distance measurements.

Table \[tab:prediction\_local\_magnetic\_moment\] summarizes the cross-validation RMSE and the coefficient of determination $R^2$ between the observed and predicted values, obtained with our nearest-neighbor regression and different distance measurements. The results are obtained with the OFM weighted by distance (Eq. \[eq:3\]). It may be noted that, for the prediction of the local magnetic moment, the difference between the distance-weighted and non-distance-weighted (Eq. \[eq:eq2\]) OFM is negligible. We achieve a reasonable performance in the prediction of the local magnetic moments, with an RMSE of approximately $0.2\ \mu_B$ and an $R^2$ of 0.9. This result indicates that materials that are close in our description space of local structures yield similar local magnetic moments, which implies that our data representation captures significant information about the local magnetic moments. To further improve the prediction of the local magnetic moment, we apply KRR as the prediction model. We obtain a cross-validated RMSE of $0.18\ \mu_B$, a cross-validated MAE of $0.05\ \mu_B$, and an $R^2$ value of $0.93$, as indicated in Table \[tab:krr\_local\_m\].
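A minimal KRR counterpart of the nearest-neighbor experiment can be sketched as follows, again on random stand-in data; the Laplacian kernel and hyperparameters are illustrative, not the tuned values used in the paper.

```python
# Hedged KRR sketch on random stand-in data; kernel and hyperparameters
# below are illustrative, not the paper's cross-validated choices.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 32))                     # stand-in OFM vectors
y = np.sin(X[:, 0]) + rng.normal(scale=0.05, size=300)

krr = KernelRidge(kernel="laplacian", gamma=0.05, alpha=1e-3)
pred = cross_val_predict(krr, X, y, cv=10)         # 10-fold CV predictions
rmse = mean_squared_error(y, pred) ** 0.5
mae = mean_absolute_error(y, pred)
print(round(rmse, 2), round(mae, 2), round(r2_score(y, pred), 2))
```

Computing RMSE, MAE, and $R^2$ from out-of-fold predictions mirrors how the scores in Table \[tab:krr\_local\_m\] are reported.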
\[tab:krr\_local\_m\]

  Descriptor   OFM    CM
  ------------ ------ ------
  RMSE         0.18   0.21
  MAE          0.05   0.11
  R$^2$        0.93   0.90

  : Cross-validation RMSE ($\mu_B$), cross-validation MAE ($\mu_B$), and coefficient of determination $R^2$ in the prediction of the local magnetic moments obtained by KRR regression with orbital-field-matrix (OFM) and Coulomb-matrix (CM) descriptors.

For comparison, we adopt the CM descriptor proposed by Rupp and coworkers [@Rupps] to represent the local structure of a center atom and its neighbors, determined by the Voronoi polyhedra scheme. We treat the local structures in the same way as isolated molecules, and the calculated CM descriptors are used for predicting the local magnetic moments by KRR regression. By using this descriptor, we obtain a cross-validated RMSE of approximately 0.21 $\mu_B$, a cross-validated MAE of $0.11\ \mu_B$, and an $R^2$ value of 0.90, as indicated in Table \[tab:krr\_local\_m\]. The obtained results clearly confirm that the OFM descriptor, which includes information on the coordination of valence electrons, is more informative and, consequently, yields a better prediction accuracy than the CM descriptor for the local magnetic moment of RT alloys.

Prediction of material properties {#prediction-of-material-properties .unnumbered}
---------------------------------

For predicting the properties of materials, one must develop descriptors for those materials. In this study, the descriptor for a material is built from the descriptors of its local structures. The prediction accuracy for a physical property will depend strongly on how well the descriptor for a material is composed from the descriptors of its local structures. With the aim of obtaining a prediction model with high accuracy, the representation of materials is usually designed to include as much information as possible, with a large number of descriptors, without considering their interpretability.
In this work, we focus more on developing descriptors with both applicability and interpretability in mind. Therefore, instead of designing a complicated representation for materials, we choose a simple approach in which the descriptor of a material is derived by simply averaging or summing the descriptors of the local structures of its constituent atoms. We implement prediction models for the formation energies of crystalline systems and the atomization energies of molecular systems to examine the applicability of the OFM descriptors.

![\[fig:FormationEnergies\] Comparison of the formation energies calculated using DFT and those predicted through machine learning using the OFM.](prediction.eps){width="52.00000%"}

For crystalline systems, we focus on transition-metal binary alloys, TT, and rare-earth–transition-metal alloys, RT, as well as RTX and TTX, which are RT and TT alloys that include a light element X = B (RTB), C (RTC), N (RTN), or O (RTO). We select the transition metals from $\{$Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Y, Zr, Nb, Mo, Tc, Ru, Rh, Pd, Ag, Cd, Hf, Ta, W, Re, Os, Ir, Pt, Au$\}$, the rare-earth metals from $\{$La, Ce, Pr, Nd, Pm, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu$\}$, and X from $\{$B, C, N, O$\}$. We collect the data of more than four thousand compounds, including their structures and formation energies, from the Materials Project repository: 1510 RTX compounds, 1311 TTX compounds, 692 RT compounds, and 707 TT compounds. We use the average of the descriptors of their local structures to build the descriptor for these materials. For comparison, we also implement the CM descriptors for these crystalline systems based on the Ewald sum, as developed by Faber and coworkers [@Faber_Coulomb_matrix]. We use a KRR model with a Laplacian kernel for both the OFM and CM descriptors. The 10-fold cross-validated comparison between the DFT-calculated formation energies and the ML-predicted formation energies is shown in Fig.
\[fig:FormationEnergies\]. The DFT-calculated and ML-predicted formation energies show good agreement, with an $R^2$ value of 0.98, a cross-validated RMSE of 0.19 eV/atom, and a cross-validated MAE of 0.11 eV/atom. This result is better than that obtained using the CM, with an $R^2$ value of 0.87, a cross-validated RMSE of 0.47 eV/atom, and a cross-validated MAE of 0.39 eV/atom, as summarized in Table \[tab:FormationEnergies\].

  Dataset       RTX                                  QM7
  ------------ ------- ---------------------------- ------- -------------
  Descriptor    OFM     CM [@Faber_Coulomb_matrix]   OFM     CM [@Rupps]
  RMSE          0.190   0.470                        0.043   0.040
  MAE           0.112   0.390                        0.027   0.020
  $R^2$         0.98    0.87                         0.98    0.99

  : Cross-validation RMSE (eV/atom), cross-validation MAE (eV/atom), and coefficient of determination $R^2$ for the RTX and QM7 datasets using orbital-field-matrix (OFM) and Coulomb-matrix (CM) descriptors[]{data-label="tab:FormationEnergies"}

For molecular systems, we focus on the atomization energies of organic molecules. We use the QM7 dataset with 7195 organic molecules [@Rupps; @blum_qm7]. The descriptor of a molecule is built by summing over the descriptors of its local structures. Using our OFM representation and KRR regression, we obtain a cross-validated RMSE of 0.043 eV/atom, a cross-validated MAE of 0.027 eV/atom, and an $R^2$ value of 0.98, whereas the CM yields a cross-validated RMSE of 0.040 eV/atom, a cross-validated MAE of 0.020 eV/atom, and an $R^2$ value of 0.99 [@Rupps; @Faber_Coulomb_matrix; @Rupp_tutorial], as indicated in Table \[tab:FormationEnergies\].
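The composition rule used above — averaging the local descriptors for crystals (formation energies) and summing them for molecules (atomization energies) — can be sketched in a few lines. The local OFM rows below are random stand-ins, and the descriptor length is illustrative.

```python
# Hedged sketch: composing a material-level descriptor from local-structure
# descriptors by averaging (crystals) or summing (molecules). The local
# rows are random stand-ins; the descriptor length is illustrative.
import numpy as np

rng = np.random.default_rng(3)
local_ofm = rng.random((5, 1056))    # 5 atomic sites, one flattened OFM each

crystal_descriptor = local_ofm.mean(axis=0)    # used for formation energies
molecule_descriptor = local_ofm.sum(axis=0)    # used for atomization energies
print(crystal_descriptor.shape, molecule_descriptor.shape)
```

Averaging makes the descriptor independent of cell size (appropriate for per-atom formation energies), while summing keeps the extensive character needed for total atomization energies.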
[0.34]{} ![image](qm7_std){width="1.\textwidth"} [0.4]{} ![image](rtx_std){width="1.\textwidth"}

This result confirms that the construction of the OFM of a material, by averaging or summing the descriptors of all the local structures of the constituent atoms, yields a better prediction accuracy than the CM descriptor for the formation energy of RTX systems, and an accuracy comparable to that of the CM descriptor for the atomization energy of organic molecules in the QM7 dataset. It may be noted that for molecular systems (the QM7 dataset contains only light elements such as C, H, O, N, and S), the CM descriptor yields a slightly better result than our OFM. However, for RTX systems with a wide variety of elements (the RTX dataset contains transition metals, rare-earth metals, and light elements), our OFM shows a superior prediction ability. To capture the difference in the complexity of the QM7 and RTX datasets, we calculate the standard deviation of the OFM over all local structures in each dataset. Fig. \[fig:qm7\_rtx\_std\] shows a comparison between the QM7 and RTX datasets. It is clearly seen that the QM7 dataset contains only a small number of non-zero OFM elements, at the lower left of Fig. \[fig:qm7\_rtx\_std\](a), whereas the RTX dataset exhibits a large variety of OFMs, Fig. \[fig:qm7\_rtx\_std\](b). Moreover, the QM7 dataset presents a small deviation of the OFM, while the RTX dataset has a larger deviation. This implies that the RTX dataset has higher diversity in both composition and structure than the QM7 dataset. This result indicates that the OFM may be used not only for learning properties of both crystalline and molecular systems with large diversity in atomic composition and structure, but also for studying the structural properties of materials.

Conclusion {#conclusion .unnumbered}
==========

We have proposed a novel representation of crystalline materials, named the orbital-field matrix (OFM), based on the distribution of valence shell electrons.
We demonstrated that this new representation can be highly useful in describing and measuring the similarities of materials and local structures in transition-metal–rare-earth-metal bimetal alloys. Our experiments show that the OFM can accurately reproduce the DFT-calculated local magnetic moments of transition-metal sites in RT alloys, with a cross-validated RMSE of 0.18 $\mu_B$ and an $R^2$ value of 0.93. Moreover, it can be interpreted in the language of physical chemistry, namely ligand field theory, when analyzing the local magnetic moment. The decision tree regression shows the importance of the coordination numbers of the occupied $d$ orbitals of transition metals and the occupied $f$ orbitals of rare-earth metals in determining the local magnetic moment of the transition-metal sites. The formation energies of crystalline systems and the atomization energies of molecular systems can be well predicted using the OFM. With KRR regression, the formation energies of the crystalline systems and the atomization energies of the molecular systems can be accurately reproduced with an $R^2$ of approximately $0.98$. With the information about the coordination of the atomic orbitals, the OFM shows a superior applicability for systems with high diversity in atomic composition and structure. The acquired results suggest that the OFM could be useful in mining chemical/physical information of materials from available datasets by using modern machine-learning algorithms.

Computational details {#computational-details .unnumbered}
=====================

First-principles calculation {#first-principles-calculation .unnumbered}
----------------------------

We employed VASP 5.4.1 [@vasp1; @vasp2; @vasp3; @vasp4] with the GGA/PBE exchange-correlation functional [@ggapbe1; @ggapbe2] to calculate the local magnetic moments of these structures.
We followed the Materials Project database in the choice of the PAW projectors [@paw1; @paw2], and employed pymatgen 4.3.0 [@pymatgen] to prepare the VASP input files, with the Gaussian smearing of 0.1 eV of MITRelaxSet and a k-point mesh density of 150 $\AA^{-3}$. The systematic simulations in this study were assisted by OACIS [@oacis].

Machine learning {#machine-learning .unnumbered}
----------------

The parameters $\gamma$ and $\lambda$ are determined in an inner loop of the 10-fold cross-validation by using a logarithmic-scale grid to predict the local magnetic moment. We optimize the hyperparameters of the KRR model for predicting formation energies, the kernel width $\sigma$ and the regularization parameter $\lambda$, by minimizing the 10-fold cross-validated RMSE. The optimized parameters are identified by searching over 2500 pairs of $\sigma$ and $\lambda$ on a 2D logarithmic grid. These procedures are routinely applied in machine learning and statistics to avoid overfitting and overly optimistic error estimates. We employ a decision tree builder using the variance of the explanatory variables, with tree pruning by reduced-error pruning with backfitting (REPTree), implemented in the Weka package [@Weka].
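This tuning loop can be sketched with scikit-learn's `GridSearchCV`; the grid bounds and sizes below are illustrative (the paper searches 2500 $(\sigma,\lambda)$ pairs), and the data are random stand-ins.

```python
# Hedged sketch of the 2D logarithmic hyperparameter grid with 10-fold CV;
# grid bounds/sizes and data are illustrative, not the paper's settings.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 8))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=120)

param_grid = {
    "gamma": np.logspace(-4, 0, 5),   # plays the role of the kernel width
    "alpha": np.logspace(-6, 0, 5),   # the regularization parameter lambda
}
search = GridSearchCV(KernelRidge(kernel="laplacian"), param_grid,
                      cv=10, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```

Nesting this search inside an outer cross-validation loop, as described above, keeps the reported error estimates honest.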
Acknowledgements {#acknowledgements .unnumbered}
================

This work was partly supported by PRESTO and by the “Materials Research by Information Integration” Initiative (MI$^2$I) project of the Support Program for Starting Up Innovation Hub, both from the Japan Science and Technology Agency (JST), Japan; by the Elements Strategy Initiative Project under the auspices of MEXT; and also by MEXT as a social and scientific priority issue (Creation of New Functional Devices and High-Performance Materials to Support Next-Generation Industries; CDMSI) to be tackled by using a post-K computer. We are grateful for the use of the Numerical Materials Simulator at NIMS to execute the systematic simulations in this study.

Author contributions statement {#author-contributions-statement .unnumbered}
==============================

T. L. Pham and H. C. Dam developed the OFM descriptors and performed the decision tree regression and KRR regression analyses. H. Kino performed the DFT calculations. K. Terakura, T. Miyake, and H. Kino analyzed the experiments on the prediction of local magnetic moments. I. Takigawa and K. Tsuda performed the analyses with Coulomb-matrix descriptors. All authors reviewed the manuscript.
---
author:
- 'Weizhang Huang[^1]'
date: 'April 10, 2019'
title: An Introduction to MMPDElab
---

Introduction
============

**MMPDElab** is a package written in MATLAB[^2] for adaptive mesh movement and adaptive moving mesh P1 finite element solution of second-order partial differential equations (PDEs) having continuous solutions. The adaptive mesh movement is based on the new implementation [@HK2014; @HK2015] of the moving mesh partial differential equation (MMPDE) method [@BHR09; @CHR99b; @HRR94b; @HRR94a; @HR97b; @HR99; @HR11]. The mesh equation is integrated using either `ode45` (an explicit MATLAB ODE solver) or `ode15s` (an implicit MATLAB ODE solver), while the physical PDEs are discretized in space using P1 conforming finite elements on moving meshes and integrated in time with the fifth-order Radau IIA method (an implicit Runge-Kutta method) with a two-step error estimator [@Montijano2004] for time step selection. More information on the moving mesh P1 finite element method can be found in recent applications such as [@DHHLY2018; @HKS2015; @LHQ2018; @NH2017; @NH2019; @YH2018; @ZhangFei2017]. The source code of **MMPDElab** can be downloaded at

- https://whuang.ku.edu/MMPDElab/mmpdelabv1.html
- https://github.com/weizhanghuang/MMPDElab

The functions in MMPDElab can be grouped into three categories:

- Matrix operations (with names in the form `Matrix_xxx`)
- Mesh movement (with names in the form `MovMesh_xxx`)
- Moving mesh P1 finite element solution (with names in the form `MovFEM_xxx`)

The functions in the first category, `Matrix_xxx`, perform vectorized computation of basic matrix operations such as multiplication, inversion, and finding transposes and determinants for arrays of matrices of small size (typically $3\times 3$ or smaller). These operations are used by functions in the other two categories, which will be explained in the subsequent sections. We now introduce notation whose understanding is crucial to the use of the package.
A mesh or a triangulation, $\mathcal{T}_h$, of $N$ elements and $N_v$ vertices in $d$ dimensions ($d = 1$, 2, or 3) is represented in MATLAB by the matrices $X$ and $tri$, where $X$ is a matrix of size $N_v \times d$ containing the coordinates of the vertices and $tri$ is a matrix of size $N \times (d+1)$ listing the connectivity of the mesh. More specifically, $X(i,:)$ gives the coordinates of the $i$th vertex ${\mbox{\boldmath $ x $}}_i$ while $tri(j,:)$ contains the global IDs of the vertices of the $j$th element. In **MMPDElab**, the $npde$ components of the physical solution at the vertices are given by the matrix $u$ of size $N_v \times npde$, i.e., $u(i, :)$ contains the values of $u$ at the $i$th vertex. Its derivatives with respect to the physical coordinate ${\mbox{\boldmath $ x $}}$ are saved in the form $$du = \left [ (\nabla u^{(1)})^T, ..., (\nabla u^{(npde)})^T\right ]_{N_v \times (d \ast npde)},$$ where $u^{(k)}$ ($k = 1, ..., npde$) is the $k$th component of $u$ and $\nabla$ is the gradient operator. The metric tensor or the monitor function, $\mathbb{M}$, is calculated at the vertices and saved in the form $$M(i,:) = \left [\mathbb{M}_{11}, ..., \mathbb{M}_{d1}, ..., \mathbb{M}_{1d}, ..., \mathbb{M}_{dd}\right ]({\mbox{\boldmath $ x $}}_i), \quad i = 1, ..., N_v.$$ That is, $M$ has the size $N_v \times (d \ast d)$, with each row containing the entries of a matrix of size $d\times d$. $M$ is a good example of an array of matrices of small size. It is emphasized that when a moving mesh function is called, the mesh connectivity is kept fixed while the location of the vertices varies. The user can decide whether or not to change the connectivity in between the calls. To conclude this section, I am deeply thankful to many colleagues and former graduate students for their invaluable discussions and comments. I am particularly grateful to Dr. Lennard Kamenski, who was involved in the project at its early stage.
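To make the notation concrete, here is a minimal sketch of the mesh representation described in this section, written in NumPy purely for illustration (MMPDElab itself stores the same $X$ and $tri$ arrays as MATLAB matrices, with 1-based vertex IDs):

```python
# A two-triangle mesh of the unit square (d = 2, Nv = 4, N = 2). This is an
# illustration of the X/tri layout only, not part of the MMPDElab package.
import numpy as np

X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # Nv x d
tri = np.array([[1, 2, 3], [1, 3, 4]])                          # N x (d+1)

def element_volume(e):
    """Area of element e computed from its vertex coordinates."""
    v = X[tri[e] - 1]                 # map 1-based vertex IDs to rows of X
    return abs(np.linalg.det(v[1:] - v[0])) / 2.0

vols = [element_volume(e) for e in range(len(tri))]
print(vols)   # each triangle covers half of the unit square
```

Keeping the connectivity in `tri` fixed while moving the rows of `X` is exactly what the moving mesh functions below do.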
MMPDElab is a package written in MATLAB for adaptive mesh movement and adaptive moving mesh P1 finite element solution of partial differential equations having continuous solutions. Copyright (C) 2019 Weizhang Huang (whuang@ku.edu) MMPDElab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. MMPDElab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License at <https://www.gnu.org/licenses/>.

Adaptive mesh movement
======================

The adaptive mesh movement can be carried out by calling `MovMesh()` (based on the $\xi$-formulation of the MMPDE moving mesh method [@HK2014; @HK2015]), `MovMesh_XM()` (based on the $x$-formulation of the MMPDE moving mesh method), or `MovMesh_X()` (based on the $x$-formulation of the MMPDE moving mesh method with the metric tensor $\mathbb{M} = I$, i.e., without mesh adaptation). The corresponding MMPDE is defined as a gradient flow equation of the meshing functional developed in [@Hua01b] based on mesh equidistribution and alignment (with its parameters being chosen as $p = 1.5$ and $\theta = 1/3$). The headers of these functions read as

    [Xnew,Ih,Kmin] = MovMesh(tspan,Xi_ref,X,M,tau,tri,tri_bf,nodes_fixed, ...
                             mmpde_int_method,dt0,abstol)
    [Xnew,Ih,Kmin] = MovMesh_XM(tspan,X,M,tau,tri,tri_bf,nodes_fixed, ...
                             mmpde_int_method,dt0,abstol,Xi_ref)
    [Xnew,Ih,Kmin] = MovMesh_X(tspan,X,tau,tri,tri_bf,nodes_fixed, ...
                             mmpde_int_method,dt0,abstol,Xi_ref)

These functions integrate the corresponding moving mesh equations over a time period specified by [*tspan*]{}.
All of the meshes, [*X*]{} (the current mesh), [*Xnew*]{} (the new mesh), and [*Xi\_ref*]{} (the reference computational mesh), are assumed to have the same number of vertices and elements and the same connectivity (specified by [*tri*]{}). The input and output variables are explained in the following. - [*tspan*]{} is a vector specifying the time interval for integration. - [*X*]{}, of size $N_v \times d$, contains the coordinates of vertices of the current mesh. - [*Xi\_ref*]{}, of size $N_v \times d$, contains the coordinates of vertices of the reference computational mesh. This mesh, typically chosen as the initial physical mesh, is a mandatory input for `MovMesh()` but is optional for `MovMesh_XM()` and `MovMesh_X()`. In the latter case, when [*Xi\_ref*]{} is not supplied, the uniformity of the new physical mesh measured in the metric $\mathbb{M}$ is made with reference to an equilateral simplex. - [*M*]{}, of size $N_v \times (d\ast d)$, contains the values of the metric tensor $\mathbb{M}$ at the vertices of [*X*]{}. More specifically, [*M(i,1:d$\ast$d)*]{} gives the metric tensor at the $i$th vertex, i.e., $ [\mathbb{M}_{11}, ..., \mathbb{M}_{d1}, ..., \mathbb{M}_{1d}, ..., \mathbb{M}_{dd}]({\mbox{\boldmath $ x $}}_i)$. - [*tau*]{} is the positive parameter used for adjusting the time scale of mesh movement. - [*tri*]{}, of size $N \times (d+1)$, lists the connectivity for all meshes. - [*tri\_bf*]{}, of size $N_{bf} \times d$, specifies the boundary facets for all meshes, with each row containing the IDs of the vertices of a facet on the boundary. A boundary facet consists of a point in 1D, a line segment (with two vertices) in 2D, or a triangle (with three vertices) in 3D. [*tri\_bf*]{} can be computed using the Matlab function [*freeBoundary*]{} in 2D and 3D. - [*nodes\_fixed*]{} is a vector containing the IDs of the vertices, such as corners, which are not allowed to move. 
- [*mmpde\_int\_method*]{} is an optional input variable, specifying that either `ode15s` (implicit) or `ode45` (explicit) is used to integrate the moving mesh equation. The default is `ode15s`. - [*dt0*]{} is an optional input variable specifying the initial time step that is used in the time integration of the mesh equation. The default is [*dt0*]{} = ([*tspan*]{}(end)-[*tspan*]{}(1))/10. - [*abstol*]{} is an optional input variable specifying the absolute tolerance used for time step selection in the time integration of the mesh equation. The default is [*abstol*]{} = 1e-6 for `ode15s` and 1e-8 for `ode45`. - [*Xnew*]{}, of size $N_v \times d$, contains the coordinates of vertices of the new mesh. - [*Ih*]{} is an optional output variable giving the value of the meshing functional at the new mesh. - [*Kmin*]{} is an optional output variable giving the minimal element volume. In addition to `MovMesh()`, `MovMesh_XM()`, and `MovMesh_X()`, the following functions can also be used by the user. 1. `[X,tri] = MovMesh_circle2tri(jmax) ` This function creates a triangular mesh ([*X*]{}, [*tri*]{}) for the unit circle. 2. `[X,tri] = MovMesh_cube2tet(x,y,z) ` This function creates a tetrahedral mesh ([*X*]{}, [*tri*]{}) from the cuboid mesh specified by [*x*]{}, [*y*]{}, and [*z*]{} for a cuboid domain. Each subcuboid is divided into 6 tetrahedra. 3. `V = MovMesh_freeBoundary_faceNormal(X,tri,tri_bf) ` This function computes the unit outward normals for the boundary facets. [*V*]{} has the size of $N_{bf} \times d$. 4. `V = MovMesh_freeBoundary_vertexNormal(X,tri,tri_bf) ` This function computes the unit outward normals for the boundary vertices. [*V*]{} has the size of $N_{v} \times d$, with the normals for the interior vertices being set to be $[1, ..., 1]^T/\sqrt{d}$. 5.
`[Grad,Hessian] = MovMesh_GradHessianRecovery(u,X,tri,tri_bf) ` This function computes the gradient and Hessian of function $u$ at the vertices using centroid-vertex-centroid-vertex volume-weighted average. 6. `Grad = MovMesh_GradKRecovery(u,X,tri,tri_bf) ` This function computes the gradient of function $u$ on the elements. 7. `Grad = MovMesh_GradRecovery(u,X,tri,tri_bf) ` This function computes the gradient of function $u$ at the vertices using volume averaging. 8. `fnew = MovMesh_LinInterp(f,X,QP,tri,tri_bf,useDelaunayTri) ` This function performs linear interpolation of [*f*]{} (defined on [*X*]{}) at query points QP using triangulation or Delaunay triangulation. [*useDelaunayTri*]{} is a logical variable with value [*true*]{} or [*false*]{}. 9. `[X,tri] = MovMesh_MeshMerge(X1,tri1,X2,tri2) ` This function merges two non-overlapping meshes ([*X1,tri1*]{}) and ([*X2,tri2*]{}) which may or may not have common boundary segments. 10. `[Qgeo,Qeq,Qali] = MovMesh_MeshQualMeasure(X,tri,M,Linf_norm,Xi_ref) ` This function computes the geometric, equidistribution, and alignment measures (in maximum norm or $L^2$ norm in $\xi$) for the mesh ([*X, tri*]{}) according to the metric tensor. Here, both [*Linf\_norm*]{} and [*Xi\_ref*]{} are optional input variables. 11. `[Qmax,Ql2] = MovMesh_MeshQualMeasure2(X,tri,M,Xi_ref) ` This function computes the maximum and $L^2$ norm of the mesh quality measure based on a single condition combining both equidistribution and alignment. [*Xi\_ref*]{} is an optional input variable. 12. `[X,tri] = MovMesh_MeshRemoveNodes(X1,tri1,ID) ` This function removes the nodes listed in [*ID*]{} from the existing mesh ([*X1,tri1*]{}). 13. `[XF,TriF,TriF_parent] = MovMesh_MeshUniformRefine(X,Tri,Level) ` This function uniformly refines a simplicial mesh ([*Level*]{}) times or levels. On each level, an element is refined into $2^d$ elements. 14. `M = MovMesh_metric_arclength(u,X,tri,tri_bf) ` This function computes the arclength metric tensor. 15.
`MC = MovMesh_metric_F2C(M,Tri,Tri_parent,TriC) ` This function computes the metric tensor on a coarse mesh from the metric tensor defined on a fine mesh. 16. `M = MovMesh_metric_intersection(M1,M2) ` This function computes the intersection of two symmetric and positive definite matrices. When `M1` and `M2` are diagonal, i.e., $\verb|M1| = \text{diag}(\alpha_1, ..., \alpha_d)$ and $\verb|M2| = \text{diag}(\beta_1, ..., \beta_d)$, then $\verb|M| = \text{diag}(\max(\alpha_1, \beta_1), ..., \max(\alpha_d, \beta_d))$. The intersection of two general symmetric and positive definite matrices is defined similarly through simultaneous diagonalization. 17. `M = MovMesh_metric_iso(u,X,tri,tri_bf,alpha,m) ` This function computes the isotropic metric tensor based on the $L^2$ norm or the $H^1$ seminorm of linear interpolation error ($l = 2$ and $m = 0$ or $m = 1$). 18. `MM = MovMesh_metric_smoothing(M,ncycles,X,tri) ` This function smooths the metric tensor [*ncycles*]{} times by local averaging. 19. `M = MovMesh_metric(u,X,tri,tri_bf,alpha,m) ` This function computes the metric tensor based on the $L^2$ norm or the $H^1$ seminorm of linear interpolation error ($l = 2$ and $m = 0$ or $m = 1$). 20. `[X,tri] = MovMesh_rect2tri(x,y,job) ` This function creates a triangular mesh ([*X*]{}, [*tri*]{}) from the rectangular mesh specified by [*x*]{} and [*y*]{} for a rectangular domain. Each rectangle is divided into 2 (for [*job*]{} = 2 or 3) or 4 (for [*job*]{} = 1) triangles. 21. `M1 = Matrix_ceil(M,beta) ` This function puts a ceiling on the eigenvalues of symmetric and positive definite matrix [*M*]{} such that $\lambda_{max}(M1) \le \beta$. Examples using these functions include `ex1d_1.m`, `ex2d_1.m`, `ex2d_2_Lshape.m`, `ex2d_3_hole.m`, `ex2d_4_horseshoe.m`, and `ex3d_1.m` in the subdirectory `./examples`. 
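The metric intersection of item 16 can be sketched for illustration via simultaneous diagonalization; the snippet below uses NumPy/SciPy for concreteness (the package's own implementation is the MATLAB function `MovMesh_metric_intersection`).

```python
# Illustration (not MMPDElab code) of intersecting two SPD matrices by
# simultaneous diagonalization. eigh(M1, M2) returns V with V^T M2 V = I
# and V^T M1 V = diag(w); taking max(w, 1) in that basis intersects the
# metrics, and for diagonal inputs this reduces to the elementwise maximum.
import numpy as np
from scipy.linalg import eigh

def metric_intersection(M1, M2):
    w, V = eigh(M1, M2)               # generalized eigenproblem M1 v = w M2 v
    P = np.linalg.inv(V)
    return P.T @ np.diag(np.maximum(w, 1.0)) @ P

M = metric_intersection(np.diag([2.0, 0.5]), np.diag([1.0, 3.0]))
print(np.round(M, 6))   # diag(max(2, 1), max(0.5, 3)) = diag(2, 3)
```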
**Troubleshooting.** Occasionally one may see an error message like Error using triangulation The coordinates of the input points must be finite values; Inf and NaN are not permitted. Error in MovMesh>MovMesh_rhs (line 296) TR = triangulation(tri2,XI2); when calling `MovMesh()`, `MovMesh_XM()`, or `MovMesh_X()`. Typically this is caused by a stability issue when integrating the MMPDE, and using a smaller initial time step [*dt0*]{} (e.g., [*dt0 = 1e-6*]{}) may solve the problem. Adaptive mesh movement P1 finite element solution of PDEs ========================================================= This package aims to solve the system of PDEs in the weak form: find $u = [u^{(1)}, ..., u^{(npde)}] \in H^1(\Omega) \otimes \cdots \otimes H^1(\Omega)$ such that $$\begin{aligned} \label{PDE-1} &\sum_{i=1}^{npde} \int_\Omega F_i(\nabla u, u, u_t, \nabla v^{(i)}, v^{(i)}, {\mbox{\boldmath $ x $}}, t) d {\mbox{\boldmath $ x $}} + \sum_{i=1}^{npde} \int_{\Gamma_N^{(i)}} G_i(\nabla u, u, v^{(i)}, {\mbox{\boldmath $ x $}}, t) d s = 0, \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \quad \forall v^{(i)} \in V^{(i)}, \quad i = 1, ..., npde, \quad 0 < t \le T \notag\end{aligned}$$ subject to the Dirichlet boundary conditions $$\begin{aligned} R_i(u, {\mbox{\boldmath $ x $}}, t) = 0, \qquad \text{ on } \Gamma_D^{(i)}, \quad i=1, ..., npde \label{BC-1}\end{aligned}$$ where for $i= 1, ..., npde$, $\Gamma_D^{(i)}$ and $\Gamma_N^{(i)}$ are the boundary segments corresponding to the Dirichlet and Neumann boundary conditions for $u^{(i)}$, respectively, $\Gamma_D^{(i)} \cup \Gamma_N^{(i)} = \partial \Omega$, and $V^{(i)} = \{ w \in H^1(\Omega), \; w = 0 \text{ on } \Gamma_D^{(i)} \}$. The headers of `MovFEM()` (Initial-Boundary-Value-Problem solver) and ` MovFEM_bvp()` (Boundary-Value-Problem solver) read as [Unew,dt0,dt1] = MovFEM(t,dt,U,X,Xdot,tri,tri_bf,pdedef, ... 
fixed_step,relTol,absTol,direct_ls,ControlWeights) Unew = MovFEM_bvp(U,X,tri,tri_bf,pdedef,nonlinearsolver,MaxIter,Tol) `MovFEM()` integrates the system of PDEs on a moving mesh over a time step. Its input and output variables are explained in the following. - [*t*]{} is the current time. - [*dt*]{} is the intended time step for integrating the physical PDEs. - [*U*]{}, of size $N_v \times npde$, is the current solution. - [*X*]{}, of size $N_v \times d$, contains the coordinates of vertices of the current mesh. - [*Xdot*]{}, of size $N_v \times d$, is the nodal mesh velocity. - [*tri*]{}, of size $N \times (d+1)$, lists the connectivity for all meshes. - [*tri\_bf*]{}, of size $N_{bf} \times d$, specifies the boundary facets for all meshes. - [*pdedef*]{} is a structure used to define the PDE system in the weak form. It has 5 fields. 1. [*pdedef.bfMark*]{}, of size $N_{bf} \times 1$, is used to mark the boundary segments (boundary facets). This marking is passed to the definitions of boundary integrals and Dirichlet boundary conditions. 2. [*pdedef.bftype*]{}, of size $N_{bf} \times npde$, specifies the types of boundary condition on boundary facets whose numbering is based on [*tri\_bf*]{}. [*pdedef.bftype*]{} = 0 for Neumann BCs and [*pdedef.bftype*]{} = 1 for Dirichlet BCs. For example, [*pdedef.bftype(3,2)*]{} = 1 means that variable $u^{(2)}$ has a Dirichlet BC on the 3rd boundary facet while [*pdedef.bftype(2,1) = 0*]{} specifies that variable $u^{(1)}$ has a Neumann BC on the 2nd boundary facet. 3. `F = pdedef.volumeInt(du, u, ut, dv, v, x, t, i) ` This function is used to define $F_i$ in the weak form (\[PDE-1\]), where $v$ and $dv$ are the test function $v^{(i)}$ and its gradient. 4. `G = pdedef.boundaryInt(du, u, v, x, t, i, bfMark) ` This function is used to define $G_i$ in the weak form (\[PDE-1\]), where $v$ is the test function $v^{(i)}$. 5. `R = pdedef.dirichletRes(u, x, t, i, bfMark) ` This function is used to define $R_i$ in (\[BC-1\]).
- [*fixed\_step*]{} is an optional input logical variable, indicating whether or not a fixed step is used in time integration. The default is [*false*]{}. - [*relTol*]{} and [*absTol*]{} are optional input variables for the relative and absolute tolerances used for time step selection. The defaults are [*relTol*]{} = 1e-4 and [*absTol*]{} = 1e-6. - [*direct\_ls*]{} is an optional input logical variable, indicating whether or not the direct sparse matrix solver is used for solving linear algebraic systems. When [*direct\_ls*]{} = [*false*]{}, the BiConjugate Gradients Stabilized Method `bicgstab` is used. The default is [*true*]{}. - [*ControlWeights*]{} is an optional input variable which is a nonnegative vector of size $(N_v\ast npde) \times 1$ used to define the weights of the components of the solution for the error estimation used in time step selection. - [*Unew*]{}, of size $N_v \times npde$, is the new solution at time [*t + dt0*]{}. - [*dt0*]{} is the time step size actually used to integrate the physical PDEs. - [*dt1*]{} is the time step size predicted for the next step. The input and output variables for `MovFEM_bvp()` are similar to those of `MovFEM()`. The same weak form (\[PDE-1\]) and (\[BC-1\]) is used for both IBVPs and BVPs. In the latter case, $t$ is a parameter that is not used. Here we list the variables used only in the BVP solver. - [*nonlinearsolver*]{} is an optional input variable for the method used for solving nonlinear algebraic systems, with the choices being [*newtons*]{} and `fsolve`. The default is `fsolve`. - [*MaxIter*]{} is an optional input variable for the maximum number of iterations allowed for the solution of nonlinear algebraic systems. The default is [*MaxIter*]{} = 300. - [*Tol*]{} is an optional input variable for the tolerance used in the solution of nonlinear algebraic systems. The default is [*Tol*]{} = 1e-6.
The following two functions are used to compute the error, where the exact solution is available in the form `U = uexact(t,x,varargin)`.

1. `err = MovFEM_Error_P1L2(uexact,t,X,U,tri,tri_bf,varargin)` This function computes the $L^2$ norm of the error in the P1 finite element approximation.

2. `err = MovFEM_Error_P1Linf(uexact,t,X,U,tri,tri_bf,varargin)` This function computes the $L^\infty$ norm of the error in the P1 finite element approximation.

In the following we give several examples to explain how to define the PDE system through [*pdedef*]{}. More examples can be found in the subdirectory `./examples`. A typical flow chart for those examples is shown in Fig. \[fig:ibvp-solver-1\].

Burgers’ equation in 1D {#exam-bergers1d}
-----------------------

This example, implemented in `ex1d_burgers.m`, is the IBVP of Burgers’ equation in 1D, $$\label{burgers1d-1} u_t = \epsilon u_{xx} - u u_x, \quad x \in \Omega \equiv (0,1), \quad t \in (0, 1]$$ subject to the Dirichlet boundary condition $$\label{burgers1d-1-BC} u(t,x) = u_{exact}(t,x), \quad x \text{ on } \partial \Omega, \quad t \in (0, 1]$$ and the initial condition $$\label{burgers1d-1-IC} u(0,x) = u_{exact}(0,x), \quad x \in \Omega$$ where $\epsilon = 10^{-3}$ and $$\label{burgers1d-1-exac} u_{exact}(t, x) = \frac{0.1 e^{\frac{-x+0.5-4.95t}{20 \epsilon}} + 0.5 e^{\frac{-x+0.5-0.75t}{4 \epsilon}} + e^{\frac{-x+0.375}{2 \epsilon}} } {e^{\frac{-x+0.5-4.95t}{20 \epsilon}} + e^{\frac{-x+0.5-0.75t}{4 \epsilon}} + e^{\frac{-x+0.375}{2 \epsilon}} } .$$ The weak formulation of this example reads as $$\label{burgers1d-2} \int_\Omega (u_t v + \epsilon u_x v_x + u u_x v) d x = 0,\quad \forall v \in V \equiv H^1_0(\Omega) .$$

The definition of this example in the code is given as

    % define PDE system and BCs
    % all bcs are dirichlet so no need for marking boundary segments
    pdedef.bfMark = ones(Nbf,1);
    pdedef.bftype = ones(Nbf,npde);

    pdedef.volumeInt = @pdedef_volumeInt;
    pdedef.boundaryInt = @pdedef_boundaryInt;
    pdedef.dirichletRes = @pdedef_dirichletRes;
    ... ...

    function F = pdedef_volumeInt(du, u, ut, dv, v, x, t, ipde)
    global epsilon;
    F = ut(:,1).*v(:) + epsilon*du(:,1).*dv(:,1) + u(:,1).*du(:,1).*v(:);

    function G = pdedef_boundaryInt(du, u, v, x, t, ipde, bfMark)
    G = zeros(size(x,1),1);

    function Res = pdedef_dirichletRes(u, x, t, ipde, bfMark)
    Res = u - uexact(t, x);

The heat equation in 2D {#exam-heat2d}
-----------------------

This example, implemented in `ex2d_heat.m`, is the IBVP for the heat equation in 2D, $$\label{heat2d-1} u_t = u_{xx} + u_{yy} + (13\pi^2-1) e^{-t} \sin(2\pi x) \sin(3 \pi y), \quad (x,y) \in \Omega \equiv (0,1)\times (0, 1), \quad t \in (0, 1]$$ subject to the boundary conditions $$\label{heat2d-1-BC} \begin{cases} u(t,x,y) = 0, & \quad (x,y) \text{ on } x = 0 \text{ and } y = 0, \quad t \in (0, 1] \\ \frac{\partial u}{\partial x} = 2\pi e^{-t} \sin(3 \pi y), &\quad (x,y) \text{ on } x = 1, \quad t \in (0, 1] \\ \frac{\partial u}{\partial y} = - 3\pi e^{-t} \sin(2\pi x), &\quad (x,y) \text{ on } y = 1, \quad t \in (0, 1] \end{cases}$$ and the initial condition $$\label{heat2d-1-IC} u(0,x,y) = \sin(2\pi x) \sin(3 \pi y), \quad (x,y) \in \Omega .$$ This problem has the exact solution $$\label{heat2d-1-exac} u_{exact}(t,x, y) = e^{-t} \sin(2\pi x) \sin(3 \pi y).$$ The weak formulation reads as $$\begin{aligned} \label{heat2d-2} \int_\Omega (u_t v + u_x v_x + u_y v_y ) d x dy & + \int_0^1 \left ( - 2\pi e^{-t} \sin(3 \pi y) \right ) v(1, y) d y \\ & + \int_0^1 \left ( 3\pi e^{-t} \sin(2 \pi x) \right ) v(x, 1) d x = 0,\quad \forall v \in V \notag\end{aligned}$$ where $V = \{ w \in H^1(\Omega), w = 0 \text{ on } x = 0 \text{ and } y = 0\}$.
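As a quick sanity check (not part of the package), one can verify numerically that the exact solution (\[heat2d-1-exac\]) satisfies the PDE (\[heat2d-1\]): substituting gives $u_t - u_{xx} - u_{yy} = (-1 + 4\pi^2 + 9\pi^2)\,u = (13\pi^2-1)\,u$, which is exactly the source term. The sketch below is in Python (rather than MATLAB), with an arbitrary test point and step size:

```python
import math

def u_exact(t, x, y):
    # exact solution of (heat2d-1): e^{-t} sin(2*pi*x) sin(3*pi*y)
    return math.exp(-t) * math.sin(2 * math.pi * x) * math.sin(3 * math.pi * y)

def residual(t, x, y, h=1e-4):
    # u_t - u_xx - u_yy - source, all derivatives by centered differences
    u_t = (u_exact(t + h, x, y) - u_exact(t - h, x, y)) / (2 * h)
    u_xx = (u_exact(t, x + h, y) - 2 * u_exact(t, x, y) + u_exact(t, x - h, y)) / h**2
    u_yy = (u_exact(t, x, y + h) - 2 * u_exact(t, x, y) + u_exact(t, x, y - h)) / h**2
    source = (13 * math.pi**2 - 1) * math.exp(-t) \
             * math.sin(2 * math.pi * x) * math.sin(3 * math.pi * y)
    return u_t - u_xx - u_yy - source
```

The residual vanishes up to the $O(h^2)$ discretization error at interior points, and $u_{exact}$ clearly satisfies the homogeneous Dirichlet conditions on $x=0$ and $y=0$.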
The definition of this example in the code is given as

    % define PDE system and BCs
    % mark boundary segments
    pdedef.bfMark = ones(Nbf,1); % for y = 0 (b1)
    Xbfm = (X(tri_bf(:,1),:)+X(tri_bf(:,2),:))*0.5;
    pdedef.bfMark(Xbfm(:,1)<1e-8) = 4;   % for x = 0 (b4)
    pdedef.bfMark(Xbfm(:,1)>1-1e-8) = 2; % for x = 1 (b2)
    pdedef.bfMark(Xbfm(:,2)>1-1e-8) = 3; % for y = 1 (b3)
    % define boundary types
    pdedef.bftype = ones(Nbf,npde);
    % for neumann bcs:
    pdedef.bftype(pdedef.bfMark==2|pdedef.bfMark==3,npde) = 0;

    pdedef.volumeInt = @pdedef_volumeInt;
    pdedef.boundaryInt = @pdedef_boundaryInt;
    pdedef.dirichletRes = @pdedef_dirichletRes;
    ... ...

    function F = pdedef_volumeInt(du, u, ut, dv, v, x, t, ipde)
    F = (13*pi*pi-1)*uexact(t,x);
    F = ut(:,1).*v(:)+du(:,1).*dv(:,1)+du(:,2).*dv(:,2)-F.*v(:);

    function G = pdedef_boundaryInt(du, u, v, x, t, ipde, bfMark)
    G = zeros(size(x,1),1);
    ID = find(bfMark==2);
    G(ID) = -2*pi*exp(-t)*sin(3*pi*x(ID,2)).*v(ID);
    ID = find(bfMark==3);
    G(ID) = 3*pi*exp(-t)*sin(2*pi*x(ID,1)).*v(ID);

    function Res = pdedef_dirichletRes(u, x, t, ipde, bfMark)
    Res = zeros(size(x,1),1);
    ID = find(bfMark==1|bfMark==4);
    Res(ID) = u(ID,1)-0.0;

A combustion model in 2D {#exam-combustion2d}
------------------------

This example, implemented in `ex2d_combustion.m`, is the IBVP for a combustion model (a system of two PDEs) in 2D (see [@Lang1998]), $$\label{combustion2d-1} \begin{cases} \theta_t = \theta_{xx} + \theta_{yy} + \frac{\beta^2}{2 Le} Y e^{-\frac{\beta (1-\theta)}{(1-\alpha (1-\theta))}}, &\quad (x,y) \in \Omega , \quad t \in (0, 60] \\ Y_t = \frac{1}{Le} (Y_{xx} + Y_{yy}) - \frac{\beta^2}{2 Le} Y e^{-\frac{\beta (1-\theta)}{(1-\alpha (1-\theta))}}, &\quad (x,y) \in \Omega , \quad t \in (0, 60] \end{cases}$$ subject to the boundary conditions $$\label{combustion2d-1-BC} \begin{cases} \theta = 1, \quad Y = 0, & \text{ on bfMark = 2}\\ \frac{\partial \theta}{\partial n} = 0, \quad \frac{\partial Y}{\partial n} = 0, & \text{ on bfMark = 1}\\ \frac{\partial \theta}{\partial n} = -k
\theta , \quad \frac{\partial Y}{\partial n} = 0, & \text{ on bfMark = 3}\\ \end{cases}$$ and the initial condition $$\label{combustion2d-1-IC} \begin{cases} \theta= 1, \quad Y = 0, & \text{ for } x \le 7.5 \\ \theta = e^{7.5-x}, \quad Y = 1- e^{Le (7.5-x)}, & \text{ for } x > 7.5 \end{cases}$$ where $\Omega$ and the marking of its boundary segments are shown in Fig. \[fig:combustion\_domain\], and $Le = 1$, $\alpha = 0.8$, $\beta = 10$, and $k=0.1$. The analytical expression of the exact solution is not available. The weak formulation reads as $$\begin{aligned} \label{combustion2d-2} & \int_\Omega \left ( \theta_t v^{(1)} + \theta_x v^{(1)}_x + \theta_y v^{(1)}_y - \frac{\beta^2}{2 Le} Y v^{(1)} e^{-\frac{\beta (1-\theta)}{(1-\alpha (1-\theta))}} \right ) d x dy \\ & \qquad + \int_\Omega \left (Y_t v^{(2)} + \frac{1}{Le} Y_x v^{(2)}_x + \frac{1}{Le} Y_y v^{(2)}_y + \frac{\beta^2}{2 Le} Y v^{(2)} e^{-\frac{\beta (1-\theta)}{(1-\alpha (1-\theta))}}\right ) d x dy = 0,\quad \forall v^{(1)}, v^{(2)} \in V \notag\end{aligned}$$ where $V = \{ v \in H^1(\Omega), v = 0 \text{ on bfMark = 2}\}$.

The definition of this example in the code is given as

    % define PDE system and BCs
    pdedef.bfMark = ones(Nbf,1);
    Xbfm = (X(tri_bf(:,1),:)+X(tri_bf(:,2),:))*0.5;
    pdedef.bfMark(Xbfm(:,1) < 1e-8) = 2;
    pdedef.bfMark(abs(Xbfm(:,1)-15) < 1e-8) = 3;
    pdedef.bfMark(abs(Xbfm(:,1)-30) < 1e-8) = 3;
    pdedef.bfMark((abs(Xbfm(:,2)-4) < 1e-8) & ...
                  (Xbfm(:,1) > 15 & Xbfm(:,1) < 30)) = 3;
    pdedef.bfMark((abs(Xbfm(:,2)-12) < 1e-8) & ...
                  (Xbfm(:,1) > 15 & Xbfm(:,1) < 30)) = 3;

    pdedef.bftype = ones(Nbf,npde);
    pdedef.bftype(pdedef.bfMark==1|pdedef.bfMark==3,:) = 0;

    pdedef.volumeInt = @pdedef_volumeInt;
    pdedef.boundaryInt = @pdedef_boundaryInt;
    pdedef.dirichletRes = @pdedef_dirichletRes;
    ... ...
    function F = pdedef_volumeInt(du, u, ut, dv, v, x, t, ipde)
    beta = 10; alpha = 0.8; Le = 1;
    w = beta^2/(2*Le)*u(:,2).*exp(-beta*(1-u(:,1))./(1-alpha*(1-u(:,1))));
    if (ipde==1)
       F = ut(:,1).*v+du(:,1).*dv(:,1)+du(:,2).*dv(:,2) - w.*v;
    else
       F = ut(:,2).*v+(du(:,3).*dv(:,1)+du(:,4).*dv(:,2))/Le + w.*v;
    end

    function G = pdedef_boundaryInt(du, u, v, x, t, ipde, bfMark)
    k = 0.1;
    G = zeros(size(x,1),1);
    if ipde==1
       ID = find(bfMark==3);
       G(ID) = k*u(ID,1).*v(ID);
    end

    function Res = pdedef_dirichletRes(u, x, t, ipde, bfMark)
    Res = zeros(size(x,1),1);
    ID = find(bfMark==2);
    if (ipde==1)
       Res(ID) = u(ID,1)-1;
    else
       Res(ID) = u(ID,2)-0;
    end

Poisson’s equation in 3D {#exam-poisson3d}
------------------------

This example, implemented in `ex3d_poisson.m`, is the BVP for Poisson’s equation in 3D, $$\label{poisson3d-1} - (u_{xx} + u_{yy} + u_{zz}) = f, \quad (x,y,z) \in \Omega \equiv (0,1)\times (0, 1) \times (0,1)$$ subject to the boundary conditions $$\label{poisson3d-1-BC} \begin{cases} \frac{\partial u}{\partial x} = 2\pi\sin(3 \pi y)\sin(\pi z), & \quad (x,y,z) \text{ on } \Gamma_N \\ u = u_{exact}(x,y,z), & \quad (x,y,z) \text{ on } \Gamma_D \end{cases}$$ where $\Gamma_N = \{ x = 1\}$, $\Gamma_D = \partial \Omega \setminus \Gamma_N$, and $f$ is chosen such that the exact solution of this example is $$\label{poisson3d-1-exac} u_{exact}(x, y, z) = \sin(2\pi x) \sin(3 \pi y) \sin(\pi z) .$$ The weak formulation of this example reads as $$\begin{aligned} \label{poisson3d-2} \int_\Omega ( u_x v_x + u_y v_y + u_z v_z) d x dy d z + \int_{\Gamma_N} (- 2\pi\sin(3 \pi y)\sin(\pi z)) v(1,y, z) dy d z = 0,\quad \forall v \in V \notag\end{aligned}$$ where $V = \{ w \in H^1(\Omega), w = 0 \text{ on } \Gamma_D \}$.
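Since $f$ is only described implicitly, note that each sine factor of (\[poisson3d-1-exac\]) contributes the square of its frequency under the Laplacian, so $-(u_{xx}+u_{yy}+u_{zz}) = \big((2\pi)^2+(3\pi)^2+\pi^2\big)u = 14\pi^2 u$, which is the `14*pi^2*...` expression used in `pdedef_volumeInt`. A hedged Python check (not part of the package; test point and step size are arbitrary):

```python
import math

def u_exact(x, y, z):
    # exact solution (poisson3d-1-exac)
    return math.sin(2 * math.pi * x) * math.sin(3 * math.pi * y) * math.sin(math.pi * z)

def f_source(x, y, z):
    # (2*pi)^2 + (3*pi)^2 + pi^2 = 14*pi^2
    return 14 * math.pi**2 * u_exact(x, y, z)

def neg_laplacian(x, y, z, h=1e-4):
    # -(u_xx + u_yy + u_zz) by centered second differences
    u = u_exact
    uxx = (u(x + h, y, z) - 2 * u(x, y, z) + u(x - h, y, z)) / h**2
    uyy = (u(x, y + h, z) - 2 * u(x, y, z) + u(x, y - h, z)) / h**2
    uzz = (u(x, y, z + h) - 2 * u(x, y, z) + u(x, y, z - h)) / h**2
    return -(uxx + uyy + uzz)
```

At interior points the finite-difference Laplacian agrees with $f$ up to $O(h^2)$.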
The definition of this example in the code is given as

    % define PDE system and BCs
    pdedef.bfMark = ones(Nbf,1);
    Xbfm = (X(tri_bf(:,1),:)+X(tri_bf(:,2),:)+X(tri_bf(:,3),:))/3;
    pdedef.bfMark(Xbfm(:,1)>1-1e-8) = 2; % for x=1

    pdedef.bftype = ones(Nbf,npde);
    pdedef.bftype(pdedef.bfMark==2,npde) = 0; % neumann bc for x=1

    pdedef.volumeInt = @pdedef_volumeInt;
    pdedef.boundaryInt = @pdedef_boundaryInt;
    pdedef.dirichletRes = @pdedef_dirichletRes;
    ... ...

    function F = pdedef_volumeInt(du, u, ut, dv, v, x, t, ipde)
    F = 14*pi^2*sin(2*pi*x(:,1)).*sin(3*pi*x(:,2)).*sin(pi*x(:,3));
    F = du(:,1).*dv(:,1)+du(:,2).*dv(:,2)+du(:,3).*dv(:,3)-F.*v(:);

    function G = pdedef_boundaryInt(du, u, v, x, t, ipde, bfMark)
    G = zeros(size(x,1),1);
    ID = find(bfMark==2);
    G(ID) = -2*pi*sin(3*pi*x(ID,2)).*sin(pi*x(ID,3)).*v(ID);

    function Res = pdedef_dirichletRes(u, x, t, ipde, bfMark)
    Res = u(:,1) - uexact(t,x);

C. J. Budd, W. Huang, and R. D. Russell. Adaptivity with moving grids. , 18:111–241, 2009.

W. Cao, W. Huang, and R. D. Russell. An $r$-adaptive finite element method based upon moving mesh [PDEs]{}. , 149:221–244, 1999.

K. DiPietro, R. Haynes, W. Huang, A. Lindsay, and Y. Yu. Moving mesh simulation of contact sets in two dimensional models of elastic-electrostatic deflection problems. , 375:763–782, 2018.

J. Fröhlich and J. Lang. Two-dimensional cascadic finite element computations of combustion problems. [*Comput. Methods Appl. Mech. Engrg.*]{}, 158:255–267, 1998.

S. Gonz[á]{}lez-Pinto, J. I. Montijano, and S. P[é]{}rez-Rodr[í]{}guez. Two-step error estimators for implicit [R]{}unge-[K]{}utta methods applied to stiff systems. , 30:1–18, 2004.

W. Huang. Variational mesh adaptation: isotropy and equidistribution. , 174:903–924, 2001.

W. Huang and L. Kamenski. A geometric discretization and a simple implementation for variational mesh generation and adaptation. , 301:322–337, 2015.

W. Huang and L. Kamenski. On the mesh nonsingularity of the moving mesh [PDE]{} method. , 87:1887–1911, 2018.
W. Huang, L. Kamenski, and H. Si. Mesh smoothing: An MMPDE approach. [*Procedia Engineering*]{} (2015) (Research Note of 24th International Meshing Roundtable (IMR24)).

W. Huang, Y. Ren, and R. D. Russell. Moving mesh methods based on moving mesh partial differential equations. , 113:279–290, 1994.

W. Huang, Y. Ren, and R. D. Russell. Moving mesh partial differential equations ([MMPDEs]{}) based upon the equidistribution principle. , 31:709–730, 1994.

W. Huang and R. D. Russell. A high dimensional moving mesh strategy. , 26:63–76, 1998.

W. Huang and R. D. Russell. Moving mesh strategy based upon a gradient flow equation for two dimensional problems. , 20:998–1015, 1999.

W. Huang and R. D. Russell. . Springer, New York, 2011. Applied Mathematical Sciences Series, Vol. 174.

C. Lu, W. Huang, and J. Qiu. An adaptive moving mesh finite element solution of the Regularized Long Wave equation. [*J. Sci. Comput.*]{}, 74:122–144, 2017.

C. Ngo and W. Huang. A study on moving mesh finite element solution of porous medium equation. , 331:357–380, 2017.

C. Ngo and W. Huang. Adaptive finite element solution of the porous medium equation in pressure formulation. [*Numer. Meth. P.D.E.*]{}, 35:1224–1242, 2019.

Y. Yu and W. Huang. Selection of the regularization parameter in the Ambrosio-Tortorelli approximation of the Mumford-Shah functional for image segmentation. [*Numer. Math. Theor. Meth. Appl.*]{}, 11:211–234, 2018.

F. Zhang, W. Huang, X. Li, and S. Zhang. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of [N]{}ewton’s iteration. , 356:127–149, 2018.

[^1]: Department of Mathematics, the University of Kansas, Lawrence, KS 66045, U.S.A. ([*whuang@ku.edu*]{}).

[^2]: MATLAB$^{\tiny{\textregistered}}$ is a trademark of The MathWorks, Inc., Natick, MA 01760.
--- abstract: 'The adversarial wiretap channel (AWTC) model is a secure communication model in which the eavesdropper can directly read and write fractions of the transmitted bits of the legitimate communication. In this paper we propose a secure polar coding scheme to provide secure and reliable communication over the AWTC model. For the adversarial reading and writing actions, we present a $\rho$ equivalent channel block and apply non-stationary polarization to it. By comparing the polarization result of the $\rho$ equivalent channel block with that of a $\rho$ BEC block (a channel block of BECs with erase probability $\rho$), we find that the polarized subsets of the $\rho$ BEC block are fully contained in the polarized subsets of the $\rho$ equivalent channel block when the polarization parameter $\beta$ is chosen properly. Based on this observation, we construct a secure polar coding scheme on the $\rho$ BEC blocks for the AWTC model. We theoretically prove that the proposed scheme achieves the secrecy capacity of the AWTC model under both the reliability and strong security criteria as the block length $N$ goes to infinity. Further, by simulations, we show that the proposed scheme can provide secure and reliable communication over the AWTC model with a finite block length $N$.' author: - 'Yizhi Zhao  and Hongmei Chi[^1][^2]' title: Secure Polar Coding for Adversarial Wiretap Channel --- adversarial wiretap channels, polar codes, non-stationary polarization, secrecy capacity. Introduction ============ The wiretap channel (WTC) model, introduced by Wyner in 1975 [@Wyner1975], is a primitive secure communication model in which two legitimate users, Alice and Bob, communicate through a noisy main channel while an eavesdropper, Eve, wiretaps through a noisy wiretap channel. Later, in [@Ozarow1984], the wiretap channel type II (WTC-II) model was introduced, in which the main channel is noiseless and the eavesdropper Eve can arbitrarily choose and directly read a fixed fraction of the transmitted bits.
Further, in [@Wang2015], the WTC-II model was extended to the adversarial wiretap channel (AWTC) model, in which the eavesdropper Eve can directly read and write the transmitted bits with a pair of fixed fractions. According to Wyner’s secure coding theory, the channel noise of the WTC model can be used to provide perfect secrecy, and the goal of secure code construction is to achieve the secrecy capacity [@Wyner1975]. Polar codes [@Arikan2009], known as the first capacity achieving codes with low complexity, have shown excellent performance in providing perfect secrecy over WTC models. Explicit secure polar codes have already achieved the secrecy capacities of Wyner’s WTC model [@Mahdavifar2011; @Vard2013strong] and several extended WTC models [@Wei2015; @Chou2016; @Si2016; @Gulcu2017; @Chou2018]. These successes of polar codes on WTC models make polar codes a very good candidate for achieving the secrecy capacity of the AWTC model. However, compared with the WTC models, secure code construction for the AWTC model is much more complicated, because in the AWTC model both adversarial reading and writing act directly on the transmitted bits, and the corresponding bit index sets are arbitrarily chosen and unknown to the legitimate parties. It is hard to construct secure polar codes without precise information about the eavesdropper’s actions or constant channel blocks. Our Contributions ----------------- In this work, we consider the secrecy capacity achieving problem of the AWTC model and intend to construct a corresponding secure polar coding scheme to provide secure and reliable communication. Our contributions are summarized as follows: - For the adversarial reading and writing operations of the AWTC model, we present an equivalent model, the *$\rho$ equivalent channel block* (Def. \[def\_rec\]), which is an $N$-length non-stationary channel block composed of full-noise BECs with fraction $\rho$ and noiseless BECs with fraction $1-\rho$.
Then we apply the non-stationary channel polarization [@Zhao2020] to the $\rho$ equivalent channel block and find that it can be fully polarized by the channel polarization operation $\mathbf{G}_N$ into perfect full-noise channels with index set $\mathcal{H}_\rho$ and perfect noiseless channels with index set $\mathcal{L}_\rho$, even if $N$ is finite. - Next, we take a *$\rho$ BEC block* (Def. \[def\_rbec\]), which is an $N$-length stationary BEC block with erase probability $\rho$, and polarize it by the channel polarization operation $\mathbf{G}_N$ into almost full-noise channels with index set $\mathcal{H}_{\epsilon_\rho}$ and almost noiseless channels with index set $\mathcal{L}_{\epsilon_\rho}$. Then we analyze the relationship between the $\rho$ equivalent channel block and the $\rho$ BEC block under channel polarization by calculating the *excluding rate* $\mathrm{R_e}$ (Def. \[def\_er\]), which is the sum rate of $\mathcal{H}_{\epsilon_\rho}$ not included in $\mathcal{H}_\rho$ and $\mathcal{L}_{\epsilon_\rho}$ not included in $\mathcal{L}_\rho$. We carry out a simulation to test $\mathrm{R_e}$ for different values of the triple $(N,\beta,\rho)$. From the simulation results (Fig. \[fig\_er\_sim\]), we observe that by choosing a proper $\beta=\beta^*$ for given $(N,\rho)$, we have $\mathrm{R_e}=0$, $\mathcal{H}_{\epsilon_\rho}\subseteq \mathcal{H}_\rho$ and $\mathcal{L}_{\epsilon_\rho}\subseteq \mathcal{L}_\rho$ (Prop. \[prop\_including\]). - Next, for the AWTC model, we have the actual index partition $(\mathcal{I},\mathcal{R},\mathcal{F},\mathcal{B})$ according to the polarization of the $\rho$ equivalent channel block, which is unknown to the legitimate parties, and the index partition $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$ according to the polarization of the $\rho$ BEC block, which is constructed by the legitimate parties. By comparing these two partitions (Fig.
\[fig\_csd\]), we find that using $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$ as a substitute for $(\mathcal{I},\mathcal{R},\mathcal{F},\mathcal{B})$ with a proper $\beta^*$ will not compromise security or reliability. Thus we construct the secure polar coding scheme for the AWTC model by applying the multi-block chaining structure [@Vard2013strong] to the index subsets $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$. - Finally, we theoretically analyze the performance of the proposed secure polar coding scheme and prove that the secrecy capacity of the AWTC model can be achieved as $N$ goes to infinity. We also run simulations to test the performance of the scheme for finite $N$. The simulation results match our theoretical analysis and show that the proposed scheme successfully provides reliable and secure communication over the AWTC model with finite block length $N$. Related Works ------------- The first secrecy capacity achieving secure polar codes for the WTC model were proposed in [@Mahdavifar2011], which set up a basic way of secure polar code construction: use the differences between the polarizations of the main channel block and the wiretap channel block to divide the channel indices into subsets with distinct reliability and security properties. Further, in [@Vard2013strong], a multi-block chaining structure was constructed as a refinement of the secure polar coding scheme of [@Mahdavifar2011]; it is known as one of the standard methods for strong security polar coding construction and is widely applied in follow-up works such as [@Wei2015; @Gulcu2017]. One remarkable advantage of this standard strong security polar coding method is the low computational complexity of both the encoding and decoding processes, $O(N\log N)$.
When the AWTC model was proposed in [@Wang2015], an effective explicit secrecy capacity achieving code family, named the *capacity achieving AWTC code family*, was also proposed, which contains three building blocks: an algebraic manipulation detection code (AMD code), subspace evasive sets, and a folded Reed-Solomon code (FRS code). For the capacity achieving AWTC code family in [@Wang2015], the encoding complexity is $O((N\log q)^2)$, where $q$ is a prime satisfying $q>Nu$ for a $u$-folded RS code; the combined computational complexity of the FRS decoding algorithm and the subspace evasive sets intersection algorithm is $\mathrm{poly}(1/\xi)^{D/\xi\log\log1/\xi}$; and the AMD verification costs $O((N\log q)^2)$; thus the total complexity of the decoding is $\mathrm{poly}(N)$. For our proposed secure polar coding scheme, since we only apply the standard method of strong security polar coding construction to the $\rho$ BEC blocks, both the encoding and decoding computational complexities are $O(N\log N)$, which is lower than that of the secure codes in [@Wang2015]. Paper Organization ------------------- The rest of this paper is organized as follows. Section \[sec\_awtc\] presents the AWTC model. Section \[sec\_code\] presents our construction of the secure polar coding scheme for the AWTC model. Section \[sec\_performance\] presents the theoretical analysis of the performance and the simulations. Finally, Section \[sec\_con\] concludes the paper. The Adversarial Wiretap Channel Model {#sec_awtc} ===================================== *Notations:* We define the integer interval $[\![a,b]\!]$ as the set of integers between $\lfloor a\rfloor$ and $\lceil b\rceil$. For $n\in \mathbb{N}$, define $N\triangleq 2^n$. Denote by $X$, $Y$, $Z$,... random variables (RVs) taking values in alphabets $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$,..., and the sample values of these RVs are denoted by $x$, $y$, $z$,... respectively.
Then $p_{XY}$ denotes the joint probability of $X$ and $Y$, and $p_X$, $p_Y$ denote the marginal probabilities. In particular, for a channel $W$, the transition probability is defined as $W_{Y|X}$, written $W$ for simplicity. We also denote an $N$-size vector $X^{1:N}\triangleq (X^1,X^2,...,X^N)$. When the context makes clear that we are dealing with vectors, we write $X^N$ in place of $X^{1:N}$. For any index set $\mathcal{A}\subseteqq [\![1,N]\!]$, we define $X^{\mathcal{A}}\triangleq \{X^i\}_{i\in \mathcal{A}}$. For the polar codes, we denote by $\mathbf{G}_N$ the generator matrix, $\mathbf{R}$ the bit-reversal matrix, $\mathbf{F}= \begin{bmatrix}\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\end{bmatrix}$ and $\otimes$ the Kronecker product, and we have $\mathbf{G}_N=\mathbf{R}\mathbf{F}^{\otimes n}$. Denote by $\mathbb{A}[\cdot]$ the average. Now we present the definition of the adversarial wiretap channel (AWTC) model [@Wang2015]. \[def\_awt\] The adversarial wiretap channel model is defined as $(\mathcal{X},\mathcal{Y},\mathcal{Z}, \rho_r,\rho_w,\mathcal{S}_r,\mathcal{S}_w)$. In the model, the legitimate parties communicate through a noiseless channel with channel input alphabet $\mathcal{X}$. For $N$-length transmitted codewords $X^{1:N}\in \mathcal{X}^N$, there are two types of adversarial actions: reading and writing. - **Reading:** The eavesdropper can arbitrarily select an index subset $\mathcal{S}_r\subseteq[\![1,N]\!]$ with fixed fraction $\rho_r=\frac{|\mathcal{S}_r|}{N}$ and directly read the corresponding transmitted codeword bits $X^{\mathcal{S}_r}$. The bits obtained by the eavesdropper, $Z^N$ with alphabet $\mathcal{Z}$, satisfy $$Z^i= \begin{cases} X^i&\text{~if~}i\in\mathcal{S}_r\\ ?&\text{~if~}i\in\mathcal{S}_r^c \end{cases}$$ where “$?$" is the dummy letter.
- **Writing:** The eavesdropper can arbitrarily select an index subset $\mathcal{S}_w\subseteq[\![1,N]\!]$ with fixed fraction $\rho_w=\frac{|\mathcal{S}_w|}{N}$ and directly write the corresponding transmitted codeword bits $X^{\mathcal{S}_w}$. The channel output at the legitimate receiver, $Y^N$ with alphabet $\mathcal{Y}$, satisfies $$Y^i= \begin{cases} ? &\text{~if~}i\in\mathcal{S}_w\\ X^i &\text{~if~}i\in\mathcal{S}_w^c \end{cases}$$ where “$?$" is the dummy letter. ![The adversarial wiretap channel model.[]{data-label="fig_awt"}](zhao1.pdf){width="12cm"} The communication process over the AWTC is illustrated in Fig. \[fig\_awt\]. Legitimate user Alice wants to send a confidential message $M$ to legitimate user Bob in the presence of an active eavesdropper Eve. Alice encodes the message $M$ into the channel input $X^N$ and transmits $X^N$ to Bob through a noiseless main channel. Eve arbitrarily reads $X^{\mathcal{S}_r}$ with fixed rate $\rho_r$ and obtains the corresponding $Z^N$. Eve also arbitrarily writes $X^{\mathcal{S}_w}$ with fixed rate $\rho_w$. Then Bob receives the modified channel output $Y^N$ and decodes it into the estimated confidential message $\hat{M}$. \[def\_criterion\] For any $(2^{NR},N)$ secure code for the AWTC, the performance is measured as follows. - Reliability is measured by the error probability $\mathrm{P_e}=\Pr(M\neq\hat{M})$. The reliability criterion is $\lim_{N\rightarrow\infty}\mathrm{P_e}=0$. - Security is measured by the information leakage $\mathrm{L}=I(Z^N;M)$. The weak security criterion is $\lim_{N\rightarrow\infty}\frac{\mathrm{L}}{N}=0$; the strong security criterion is $\lim_{N\rightarrow\infty}\mathrm{L}=0$. The secrecy capacity of the AWTC under the reliability and security criteria has been characterized in [@Wang2015], and is presented as follows.
\[theo\_cs\](Secrecy capacity[@Wang2015]) The perfect secrecy capacity of the AWTC with $(\rho_r,\rho_w)$ is $$\mathrm{C_s}=1-\rho_r-\rho_w.$$ In this paper, we intend to present a polar coding based solution to the problem of secure and reliable communication over the AWTC model, and to achieve the secrecy capacity of Theo. \[theo\_cs\]. Secure Polar Codes {#sec_code} ================== In this section, we present a secure polar coding scheme for the AWTC model. BEC Based Equivalent AWTC Model ------------------------------- Unlike in the ordinary WTC model [@Wyner1975], the adversarial actions in the AWTC are not carried out through a wiretap channel but act directly on the transmitted bits in the communication channel. Since secure polar codes are built on polarized channels, without a wiretap channel secure polar codes cannot be directly constructed for the AWTC. Thus we first need to present a channel-based expression for the reading and writing actions of the AWTC. Denote by $W_{\epsilon_1}:\{0,1\}\rightarrow\{0,1,?\}$ the full-noise binary erasure channel (BEC) with erase probability $\epsilon=1$ and $I(W_{\epsilon_1})=0$. Denote by $W_{\epsilon_0}:\{0,1\}\rightarrow\{0,1,?\}$ the noiseless BEC with erase probability $\epsilon=0$ and $I(W_{\epsilon_0})=1$. Without loss of generality, we consider the binary input case of the AWTC model in Def. \[def\_awt\], that is, $\mathcal{X}=\{0,1\}$, $\mathcal{Y}=\{0,1,?\}$ and $\mathcal{Z}=\{0,1,?\}$. Comparing the reading and writing actions with $W_{\epsilon_1}$ and $W_{\epsilon_0}$, we can observe that - For the reading action, the read bits are equivalent to bits transmitted through $W_{\epsilon_0}$ to Eve, and the unread bits to bits transmitted through $W_{\epsilon_1}$ to Eve. - For the writing action, the written bits are equivalent to bits transmitted through $W_{\epsilon_1}$ to Bob, and the unwritten bits to bits transmitted through $W_{\epsilon_0}$ to Bob. Thus we have the following BEC based equivalent model for the AWTC model.
\[def\_eqvawtc\] The BEC based equivalent AWTC model is defined as $(W_w^N,W_r^N):\mathcal{X}^N\rightarrow \mathcal{Y}^N, \mathcal{Z}^N$ where - $W_w^N:\mathcal{X}^N\rightarrow \mathcal{Y}^N$ is the writing-equivalent $N$-length main channel block. For an arbitrarily chosen $\mathcal{S}_w\subseteq[\![1,N]\!]$ with fixed fraction $\rho_w$, $$W_w^i= \begin{cases} W_{\epsilon_1}:W_w^i(y=?|x)=1&~\text{if}~i\in \mathcal{S}_w\\ W_{\epsilon_0}:W_w^i(y=x|x)=1&~\text{if}~i\in \mathcal{S}_w^c. \end{cases}$$ - $W_r^N:\mathcal{X}^N\rightarrow \mathcal{Z}^N$ is the reading-equivalent $N$-length wiretap channel block. For an arbitrarily chosen $\mathcal{S}_r\subseteq[\![1,N]\!]$ with fixed fraction $\rho_r$, $$W_r^i= \begin{cases} W_{\epsilon_0}:W_r^i(z=x|x)=1&~\text{if}~i\in \mathcal{S}_r\\ W_{\epsilon_1}:W_r^i(z=?|x)=1&~\text{if}~i\in \mathcal{S}_r^c. \end{cases}$$ Comparing $W_w^N$ and $W_r^N$ in Def. \[def\_eqvawtc\], we can observe that both equivalent channel blocks have the same form, as follows. \[def\_rec\] The $\rho$ equivalent channel block is defined as $W_\rho^N:\mathcal{X}^N\rightarrow \mathcal{Y}^N$ such that for an arbitrarily chosen $\mathcal{S}_\rho\subseteq[\![1,N]\!]$ with fixed fraction $\rho$, $$W_\rho^i= \begin{cases} W_{\epsilon_1}:W_\rho^i(y=?|x)=1&~\text{if}~i\in \mathcal{S}_\rho\\ W_{\epsilon_0}:W_\rho^i(y=x|x)=1&~\text{if}~i\in \mathcal{S}_\rho^c. \end{cases}$$ The average erase probability of the channel block is $\rho$. By setting $\rho=\rho_w$, $W_\rho^N=W_{\rho_w}^N$, which is the writing-equivalent channel block; by setting $\rho=1-\rho_r$, $W_\rho^N=W_{1-\rho_r}^N$, which is the reading-equivalent channel block. Therefore, by analyzing the channel polarization results of the channel operation $\mathbf{G}_N$ on the $\rho$ equivalent channel block, we can find a way to construct the secure polar codes. Discussion of Polarization -------------------------- Now we discuss the polarization of the $\rho$ equivalent channel block.
As defined, the $\rho$ equivalent channel block is an $N$-length *non-stationary channel sequence* consisting of the full-noise BEC $W_{\epsilon_1}$ with fraction $\rho$ and the noiseless BEC $W_{\epsilon_0}$ with fraction $1-\rho$. The polarization theory of non-stationary channel sequences has been studied in [@Alsan2016; @Mahdavifar2018; @Zhao2020]: the channel operation $\mathbf{G}_N$ has a similar polarization effect on a non-stationary channel sequence. \[theo\_polarization\] (Non-stationary polarization[@Zhao2020 Theo. 2]) For any B-DMCs $W^{1:N}$ with different transition probabilities, the channels ${W_N^{(i)}}$ generated from the non-stationary channel transformation $\mathbf{G}_N$ are polarized in the sense that, for any fixed $\delta\in(0,1)$, as $N\rightarrow\infty$, the fraction of indices $i\in[\![1,N]\!]$ for which $I(W_N^{(i)})\in (1-\delta,1]$ goes to $\mathbb{A}[I(W^{1:N })]$ and the fraction for which $I(W_N^{(i)})\in [0,\delta)$ goes to $1-\mathbb{A}[I(W^{1:N })]$. This can also be written as $$I_\infty= \begin{cases} 1 &~\text{w.p.} ~\mathbb{A}[I(W^{1:N })]\\ 0 &~\text{w.p.}~1-\mathbb{A}[I(W^{1:N })], \end{cases}$$ where $\mathbb{A}[I(W^{1:N })]$ is the average of the initial $I(W^{(i)})$ over all $i\in[\![1,N]\!]$. For the non-stationary polarization theory, the $2\times2$ kernel transformation of the channel operation $\mathbf{G}_N$ is defined as follows. ![The non-stationary $2\times2$ kernel transformation.[]{data-label="fig_ikt"}](zhao2.pdf){width="4cm"} \[def\_ikt\] ([@Zhao2020]) Define $(W^1,W^2)\mapsto(W^-,W^+)$ as the non-stationary $2\times2$ kernel transformation, illustrated in Fig. \[fig\_ikt\], which contains a pair of channel operations $(\boxminus,\boxplus)$ such that $$W^-=W^1\boxminus W^2\text{~and~}W^+=W^1\boxplus W^2,$$ defined respectively as $$\begin{split} &W^-(f(y^1,y^2)|u^1)=\sum_{u^2\in \mathcal{U}}\frac{1}{2}W^1(y^1|u^1\oplus u^2)W^2(y^2|u^2),\\ &W^+(f(y^1,y^2),u^1|u^2)=\frac{1}{2}W^1(y^1|u^1\oplus u^2)W^2(y^2|u^2).
\end{split}$$

\[def\_z\] ([@Arikan2009]) For any given B-DMC $W:\mathcal{X}\rightarrow\mathcal{Y}$, the Bhattacharyya parameter is defined as $$Z(W)\triangleq\sum_{y\in\mathcal{Y}}\sqrt{W(y|0)W(y|1)}.$$ For a BEC, we have $I(W)=1-Z(W)$.

\[lem\_z\_irregular\] ([@Zhao2020 Lem. 2]) For the non-stationary $2\times2$ kernel transformation $(W^1,W^2)\mapsto(W^-,W^+)$, we have $$\begin{split} &Z(W^+)=Z(W^1)Z(W^2),\\ &Z(W^-)\leq Z(W^1)+Z(W^2)-Z(W^1)Z(W^2). \end{split}$$ The second relation holds with equality when $W^1$ and $W^2$ are BECs.

Now we consider the non-stationary channel transformation of the $\rho$ equivalent channel block $W_\rho^N$. Let $N=2^n$ and let $W_{\rho N}^{(1:N)}$ be the channel sequence generated from $W_\rho^N$ by the non-stationary channel operation $\mathbf{G}_N$ with $n$ levels of recursive $2\times2$ kernel transformations $(W^1,W^2)\mapsto(W^-,W^+)$. Since $W_\rho^N$ is formed by $W_{\epsilon_1}$ and $W_{\epsilon_0}$ with $Z(W_{\epsilon_1})=1$ and $Z(W_{\epsilon_0})=0$, a fraction $\rho$ of the $Z(W_{\rho}^i)$, $i\in[\![1,N]\!]$, equal $1$ and a fraction $1-\rho$ equal $0$. Then, by Lem. \[lem\_z\_irregular\], at each recursion level of the $2\times2$ kernel channel transformation there are only three possible cases for the Bhattacharyya parameter values $\left[Z(W^1),Z(W^2)\right]\mapsto\left[ Z(W^-),Z(W^+)\right]$, namely $(1,1)\mapsto(1,1)$, $(1,0)\mapsto(1,0)$ and $(0,0)\mapsto(0,0)$. Hence we can observe that after the $n$ levels of kernel transformations, the fraction of indices with $Z(W_{\rho N}^{(i)})=1$ remains $\rho$ and the fraction with $Z(W_{\rho N}^{(i)})=0$ remains $1-\rho$.
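For BECs both relations in Lem. \[lem\_z\_irregular\] hold with equality, so the kernel acts on Bhattacharyya parameters as $(z^1,z^2)\mapsto(z^1+z^2-z^1z^2,\,z^1z^2)$. The following sketch tracks this recursion; the pairing of index $i$ with $i+N/2$ at each level is one possible convention (the polarized fractions do not depend on it).

```python
def polarize_bec_z(z):
    """Recursively apply the 2x2 kernel to a list of BEC Bhattacharyya
    parameters: Z(W^-) = z1 + z2 - z1*z2, Z(W^+) = z1*z2 (both with
    equality for BECs). Input length must be a power of two."""
    if len(z) == 1:
        return z
    h = len(z) // 2
    minus = [z[i] + z[i + h] - z[i] * z[i + h] for i in range(h)]
    plus = [z[i] * z[i + h] for i in range(h)]
    return polarize_bec_z(minus) + polarize_bec_z(plus)
```

On a $\rho$ equivalent block (entries in $\{0,1\}$) each kernel step maps $(a,b)$ to $(a\lor b,\,a\land b)$, which only permutes the pair, so the number of ones, and hence the fraction $\rho$, is invariant, matching the observation above.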
Thus, for the non-stationary channel polarization of $W_\rho^N$, we have the following polarized index sets of $[\![1,N]\!]$: $$\begin{split} &\mathcal{H}_\rho=\{ i\in [\![1,N]\!]: Z(W_{\rho N}^{(i)})=1\},\\ &\mathcal{L}_\rho=\{ i\in [\![1,N]\!]: Z(W_{\rho N}^{(i)})=0\}, \end{split}$$ where $\mathcal{H}_\rho$ is the polarized full-noise index set and $\mathcal{L}_\rho$ is the polarized noiseless index set. The $\rho$ equivalent channel block $W_\rho^N$ can be fully polarized by the non-stationary polarization operation $\mathbf{G}_N$, in the sense that $\mathcal{H}_\rho^c=\mathcal{L}_\rho$ and $\mathcal{H}_\rho=\mathcal{L}_\rho^c$, even if $N$ is finite.

Note that, from the perspective of the legitimate parties, because the index set $\mathcal{S}_\rho$ is arbitrarily chosen with fixed fraction $\rho$ by the eavesdropper and different $\mathcal{S}_\rho$ can result in different $\mathcal{H}_\rho$ and $\mathcal{L}_\rho$, they cannot know the precise $\mathcal{H}_\rho$ and $\mathcal{L}_\rho$ for constructing the secure polar codes.

\[def\_rbec\] Define the $\rho$ BEC block $W_{\epsilon_\rho}^N$ as an $N$-length stationary sequence of the channel $W_{\epsilon_\rho}$, where $W_{\epsilon_\rho}:\{0,1\}\rightarrow\{0,1,?\}$ is the BEC with erase probability $\epsilon=\rho$.

Comparing the $\rho$ BEC block $W_{\epsilon_\rho}^N$ in Def. \[def\_rbec\] with the $\rho$ equivalent channel block $W_{\rho}^N$ in Def. \[def\_rec\], both channel models have the same average erase probability $\rho$. From the perspective of the transmission effect, $W_{\rho}^N$ is the expectation of $W_{\epsilon_\rho}^N$. Thus, by *Bernoulli’s law of large numbers*, as $N$ goes to infinity, $W_{\epsilon_\rho}^N$ becomes equivalent to $W_{\rho}^N$. However, when $N$ is finite, these two channel models are not equivalent. Next we analyze their relationship under channel polarization.
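The law-of-large-numbers statement can be checked numerically: sampling the stationary block $W_{\epsilon_\rho}^N$ repeatedly, the empirical fraction of erased positions concentrates around $\rho$ as $N$ grows. A Monte Carlo sketch (parameter choices are our own):

```python
import random

def mean_erase_fraction(N, rho, trials, seed=0):
    """Average, over `trials` samples of the stationary rho BEC block
    W_{eps_rho}^N, of the fraction of erased positions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        erased = sum(1 for _ in range(N) if rng.random() < rho)
        total += erased / N
    return total / trials
```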
According to channel polarization theory, under the channel operation $\mathbf{G}_N$, the $\rho$ BEC block $W_{\epsilon_\rho}^N$ is polarized as follows. Let $W_{\epsilon_\rho N}^{(1:N)}$ be the channel sequence generated from $W_{\epsilon_\rho}^N$ by the channel polarization operation. Then, for any $0<\beta<\frac{1}{2}$ and $\delta_N=2^{-N^\beta}$, the polarized index sets of $[\![1,N]\!]$ are $$\begin{split} &\mathcal{H}_{\epsilon_\rho}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_\rho N}^{(i)})\geq1-\delta_{N}\},\\ &\mathcal{L}_{\epsilon_\rho}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_\rho N}^{(i)})\leq\delta_{N}\}, \end{split} \label{eq_polar_w}$$ where $\mathcal{H}_{\epsilon_\rho}$ is the polarized full-noise index set and $\mathcal{L}_{\epsilon_\rho}$ is the polarized noiseless index set.

For polar codes, only in the case of infinite $N$ can the channel block be fully polarized for any $0<\beta<\frac{1}{2}$, satisfying $\mathcal{H}_{\epsilon_\rho}^c=\mathcal{L}_{\epsilon_\rho}$ and $\mathcal{H}_{\epsilon_\rho}=\mathcal{L}_{\epsilon_\rho}^c$. In the case of finite $N$, however, the polarization cannot be perfect and is directly influenced by the value of $\beta$. If $\beta$ decreases, both the transmission rate and the upper bound of the decoding bit error rate increase. If $\beta$ increases, both the code rate $|\mathcal{L}_{\epsilon_\rho}|/N$ and the upper bound of the decoding bit error rate decrease.

To analyze the polarization relationship between $W_{\epsilon_\rho}^N$ and $W_{\rho}^N$, we use the *excluding rate* $\mathrm{R_e}$ in Def. \[def\_er\] to measure the total fraction of $\mathcal{H}_{\epsilon_\rho}$ excluded from $\mathcal{H}_{\rho}$ and of $\mathcal{L}_{\epsilon_\rho}$ excluded from $\mathcal{L}_{\rho}$. If $\mathrm{R_e}=0$, then $\mathcal{H}_{\epsilon_\rho}\subseteq\mathcal{H}_{\rho}$ and $\mathcal{L}_{\epsilon_\rho}\subseteq \mathcal{L}_{\rho}$.
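In code, the threshold-based polarized-set extraction and this excluding-rate measure can be sketched as follows (Python; 0-based indices and set operations are our own implementation choices):

```python
def polarized_sets(z, delta):
    """Split channel indices by Bhattacharyya parameter: H collects the
    full-noise indices (Z >= 1 - delta), L the noiseless ones (Z <= delta)."""
    H = {i for i, zi in enumerate(z) if zi >= 1.0 - delta}
    L = {i for i, zi in enumerate(z) if zi <= delta}
    return H, L

def excluding_rate(H_eps, L_eps, H_rho, L_rho, N):
    """R_e = (|H_eps \\ H_rho| + |L_eps \\ L_rho|) / N."""
    return (len(H_eps - H_rho) + len(L_eps - L_rho)) / N
```

`excluding_rate` returns `0.0` exactly when both containments hold.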
\[def\_er\] The excluding rate $\mathrm{R_e}$ is defined as $$\mathrm{R_e}=\frac{1}{N}(|\mathcal{H}_{\epsilon_\rho}\setminus\mathcal{H}_{\rho}|+|\mathcal{L}_{\epsilon_\rho}\setminus\mathcal{L}_{\rho}|),$$ where $(\mathcal{H}_{\epsilon_\rho},\mathcal{L}_{\epsilon_\rho})$ are the polarized subsets of the $\rho$ BEC block $W_{\epsilon_\rho}^N$ and $(\mathcal{H}_{\rho},\mathcal{L}_{\rho})$ are the polarized subsets of the $\rho$ equivalent channel block $W_{\rho}^N$.

Four factors can influence $\mathrm{R_e}$: the block length $N$, the polarization parameter $0<\beta<\frac{1}{2}$, the fraction $0\leq\rho\leq1$ and the index set $\mathcal{S}_\rho$. Thus we choose $N$ from $2^6$ to $2^{15}$, $\beta$ from $0.01$ to $0.49$ at intervals of $0.03$, and $\rho$ from $0.1$ to $0.9$ at intervals of $0.1$. Then, in order to cover the randomness of the index set $\mathcal{S}_\rho$, for each value of the triple $(N,\beta,\rho)$ we calculate the excluding rates for $100$ arbitrarily chosen $\mathcal{S}_\rho$ and take the average. The calculated excluding rates are illustrated in Fig. \[fig\_er\_sim\].

\[prop\_including\] From Fig. \[fig\_er\_sim\], it can be observed that for given $\rho$ and $N$ with arbitrarily chosen $\mathcal{S}_\rho$, there exists $\beta=\beta^*$ which makes $\mathrm{R_e}=0$, i.e., $\mathcal{H}_{\epsilon_\rho}\subseteq\mathcal{H}_{\rho}$ and $\mathcal{L}_{\epsilon_\rho}\subseteq \mathcal{L}_{\rho}$. As the block length $N$ increases, the lower bound of $\beta^*$ decreases.

Secure Polar Coding Scheme
--------------------------

Based on Prop. \[prop\_including\], we can now construct the secure polar code for the AWTC model. Recall that in the AWTC model, for the adversarial writing and reading operations, the fraction pair $(\rho_w,\rho_r)$ is publicly known and fixed, but the index pair $(\mathcal{S}_w,\mathcal{S}_r)$ is arbitrarily chosen by the eavesdropper and unknown to the legitimate parties.
Also recall that for the adversarial writing operation with fraction $\rho_w$, the equivalent channel block is $W_w^N=W_{\rho_w}^N$ and the corresponding $\rho$ BEC block is $W_{\epsilon_{\rho_w}}^N$; for the adversarial reading operation with fraction $\rho_r$, the equivalent channel block is $W_r^N=W_{1-\rho_r}^N$ and the corresponding $\rho$ BEC block is $W_{\epsilon_{1-\rho_r}}^N$.

For arbitrarily chosen $(\mathcal{S}_w,\mathcal{S}_r)$, the actual non-stationary polarizations of the equivalent channel blocks $W_w^N$ and $W_r^N$ are as follows: $$\label{eq_rec_polar} \begin{split} &\mathcal{H}_w=\{ i\in [\![1,N]\!]: Z(W_{w N}^{(i)})=1\},\\ &\mathcal{L}_w=\{ i\in [\![1,N]\!]: Z(W_{w N}^{(i)})=0\},\\ &\mathcal{H}_r=\{ i\in [\![1,N]\!]: Z(W_{r N}^{(i)})=1\},\\ &\mathcal{L}_r=\{ i\in [\![1,N]\!]: Z(W_{r N}^{(i)})=0\}, \end{split}$$ where $W_{w N}^{(i)}$ and $W_{r N}^{(i)}$ are the channels generated by the non-stationary channel operation $\mathbf{G}_N$ from $W_w^N$ and $W_r^N$, respectively. With these actual polarization results, the index set $[\![1,N]\!]$ can be divided into the following four subsets: $$\label{eq_dv1} \begin{split} \mathcal{I}=\mathcal{L}_w \cap \mathcal{H}_r,\\ \mathcal{R}=\mathcal{L}_w \cap \mathcal{H}_r^c,\\ \mathcal{F}=\mathcal{L}_w^c \cap \mathcal{H}_r,\\ \mathcal{B}=\mathcal{L}_w^c \cap \mathcal{H}_r^c, \end{split}$$ where $\mathcal{I}$ is secure and reliable, $\mathcal{R}$ is insecure but reliable, $\mathcal{F}$ is secure but unreliable, and $\mathcal{B}$ is insecure and unreliable. Note that since the index sets $(\mathcal{S}_w,\mathcal{S}_r)$ are arbitrarily chosen by the eavesdropper, the legitimate parties cannot know the actual polarization results $(\mathcal{H}_w,\mathcal{L}_w,\mathcal{H}_r,\mathcal{L}_r)$ or the actual division $(\mathcal{I},\mathcal{R},\mathcal{F},\mathcal{B})$.
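The four-way division can be computed directly with set operations (a sketch; 0-based index sets are our own convention):

```python
def divide_indices(L_w, H_r, N):
    """Divide {0,...,N-1} into the four subsets:
    I = L_w & H_r    : secure and reliable,
    R = L_w & H_r^c  : reliable but insecure,
    F = L_w^c & H_r  : secure but unreliable,
    B = L_w^c & H_r^c: insecure and unreliable."""
    idx = set(range(N))
    I = L_w & H_r
    R = L_w - H_r
    F = H_r - L_w
    B = idx - L_w - H_r
    return I, R, F, B
```

By construction the four subsets are pairwise disjoint and cover the whole index set.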
For the legitimate parties, because they know the fraction pair $(\rho_w,\rho_r)$, what they can do is build the corresponding $\rho$ BEC blocks: $W_{\epsilon_{\rho_w}}^N$ for the writing operation and $W_{\epsilon_{1-\rho_r}}^N$ for the reading operation. Then, according to Prop. \[prop\_including\], for the given $(N,\rho_w,\rho_r)$ they can choose a proper $\beta^*$ and $\delta_N=2^{-N^{\beta^*}}$ to polarize the block pair $(W_{\epsilon_{\rho_w}}^N,W_{\epsilon_{1-\rho_r}}^N)$ as $$\label{eq_rbec_polar} \begin{split} &\mathcal{H}_{\epsilon_{\rho_w}}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_{\rho_w} N}^{(i)})\geq1-\delta_{N}\},\\ &\mathcal{L}_{\epsilon_{\rho_w}}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_{\rho_w} N}^{(i)})\leq\delta_{N}\},\\ &\mathcal{H}_{\epsilon_{1-\rho_r}}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_{1-\rho_r} N}^{(i)})\geq1-\delta_{N}\},\\ &\mathcal{L}_{\epsilon_{1-\rho_r}}=\{ i\in [\![1,N]\!]: Z(W_{\epsilon_{1-\rho_r} N}^{(i)})\leq\delta_{N}\}, \end{split}$$ which satisfies $\mathcal{H}_{\epsilon_{\rho_w}}\subseteq\mathcal{H}_w$, $\mathcal{L}_{\epsilon_{\rho_w}}\subseteq\mathcal{L}_w$, $\mathcal{H}_{\epsilon_{1-\rho_r}}\subseteq\mathcal{H}_r$ and $\mathcal{L}_{\epsilon_{1-\rho_r}}\subseteq\mathcal{L}_r$. Further, they can divide the index set $[\![1,N]\!]$ as $$\label{eq_dv2} \begin{split} \mathcal{I}_\epsilon=\mathcal{L}_{\epsilon_{\rho_w}}\cap\mathcal{H}_{\epsilon_{1-\rho_r}},\\ \mathcal{R}_\epsilon=\mathcal{L}_{\epsilon_{\rho_w}}\cap\mathcal{H}_{\epsilon_{1-\rho_r}}^c,\\ \mathcal{F}_\epsilon=\mathcal{L}_{\epsilon_{\rho_w}}^c\cap\mathcal{H}_{\epsilon_{1-\rho_r}},\\ \mathcal{B}_\epsilon=\mathcal{L}_{\epsilon_{\rho_w}}^c\cap\mathcal{H}_{\epsilon_{1-\rho_r}}^c. \end{split}$$

Now we analyze the actual properties of the divided subsets above. Fig. \[fig\_csd\] illustrates the comparison of the two subset divisions $(\mathcal{I},\mathcal{R},\mathcal{F},\mathcal{B})$ and $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$.
![Comparison of the two subsets divisions and .[]{data-label="fig_csd"}](zhao4.pdf){width="5.4cm"}

- Subset $\mathcal{I}_\epsilon$: by choosing a proper $\beta^*$, we have $\mathcal{L}_{\epsilon_{\rho_w}}\subseteq\mathcal{L}_w$ and $\mathcal{H}_{\epsilon_{1-\rho_r}}\subseteq\mathcal{H}_r$, thus $\mathcal{I}_\epsilon\subseteq\mathcal{I}$, which indicates that the actual polarized channels in $\mathcal{I}_\epsilon$ are secure and reliable.

- Subset $\mathcal{R}_\epsilon$: by choosing a proper $\beta^*$, we have $\mathcal{R}_\epsilon\subseteq\mathcal{L}_{\epsilon_{\rho_w}}\subseteq\mathcal{L}_w$, thus $\mathcal{R}_\epsilon$ has no intersection with $(\mathcal{F},\mathcal{B})$, which indicates that the actual polarized channels in $\mathcal{R}_\epsilon$ are reliable. As shown in Fig. \[fig\_csd\], the part of $\mathcal{R}_\epsilon$ included in $\mathcal{R}$ is reliable but insecure, while the rest of $\mathcal{R}_\epsilon$ is included in $\mathcal{I}$, which is secure and reliable.

- Subset $\mathcal{F}_\epsilon$: by choosing a proper $\beta^*$, we have $\mathcal{F}_\epsilon\subseteq\mathcal{H}_{\epsilon_{1-\rho_r}}\subseteq\mathcal{H}_r$, thus $\mathcal{F}_\epsilon$ has no intersection with $(\mathcal{R},\mathcal{B})$, which indicates that the actual polarized channels in $\mathcal{F}_\epsilon$ are secure. As shown in Fig. \[fig\_csd\], the part of $\mathcal{F}_\epsilon$ included in $\mathcal{F}$ is secure but unreliable, while the rest of $\mathcal{F}_\epsilon$ is included in $\mathcal{I}$, which is secure and reliable.

- Subset $\mathcal{B}_\epsilon$: by choosing a proper $\beta^*$, we have $\mathcal{L}_w^c\subseteq\mathcal{L}_{\epsilon_{\rho_w}}^c$ and $\mathcal{H}_r^c\subseteq\mathcal{H}_{\epsilon_{1-\rho_r}}^c$, thus $\mathcal{B}\subseteq\mathcal{B}_\epsilon$, i.e., $\mathcal{B}_\epsilon$ covers all the insecure and unreliable actual polarized channels. As shown in Fig.
\[fig\_csd\], except for the entire subset $\mathcal{B}$, parts of $\mathcal{I}$, $\mathcal{R}$ and $\mathcal{F}$ are also included in subset $\mathcal{B}_\epsilon$.

Since the divided subsets $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$ of the $\rho$ BEC blocks are fixed and known, based on the actual properties of these divided subsets we can apply the multi-block chaining structure [@Vard2013strong] as follows to construct a strong security polar coding scheme.

- To build the chaining structure, separate a subset $\mathcal{E}_\epsilon$ from $\mathcal{I}_\epsilon$ that satisfies $|\mathcal{E}_\epsilon|=|\mathcal{B}_\epsilon|$.

- For subset $\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon$, since the corresponding actual polarized channels are secure and reliable, they can be used to transmit information bits.

- For subset $\mathcal{R}_\epsilon$, since the corresponding actual polarized channels are reliable but not certainly secure, it is used to transmit uniformly distributed random bits, so that security can be guaranteed.

- For subset $\mathcal{F}_\epsilon$, since the corresponding actual polarized channels are secure but not certainly reliable, it is used to transmit publicly known frozen bits, so that reliability can be guaranteed.

- For subset $\mathcal{E}_\epsilon$, since the corresponding actual polarized channels are secure and reliable, it is used to convey uniformly distributed random bits for the subset $\mathcal{B}_\epsilon$ of the next block.

- For subset $\mathcal{B}_\epsilon$, since it includes all the insecure and unreliable actual polarized channels of subset $\mathcal{B}$, we use the chaining structure for it. In the first block, $\mathcal{B}_\epsilon$ is used to transmit random bits pre-shared by the legitimate parties; in the remaining blocks, $\mathcal{B}_\epsilon$ is used to transmit the bits conveyed in $\mathcal{E}_\epsilon$ of the previous block. Therefore both reliability and security can be guaranteed.
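The per-block bit assignment of the chaining structure, together with the polar transform $x^N=u^N\mathbf{G}_N$, can be sketched as follows. The 0-based index sets, the single frozen value, and the omission of the bit-reversal permutation in $\mathbf{G}_N$ are our own simplifications.

```python
import random

def assign_block(info_bits, I_minus_E, R, F, E, B, prev_E_bits, rng):
    """Fill u per the chaining rules; prev_E_bits is None only for the
    first block, where B carries pre-shared random bits."""
    N = len(I_minus_E) + len(R) + len(F) + len(E) + len(B)
    u = [0] * N
    for i, b in zip(sorted(I_minus_E), info_bits):
        u[i] = b                                   # information bits
    for i in sorted(R) + sorted(E):
        u[i] = rng.randrange(2)                    # uniform random bits
    for i in sorted(F):
        u[i] = 0                                   # publicly known frozen bits
    b_bits = prev_E_bits if prev_E_bits is not None \
        else [rng.randrange(2) for _ in range(len(B))]
    for i, b in zip(sorted(B), b_bits):
        u[i] = b                                   # chained / pre-shared bits
    return u, [u[i] for i in sorted(E)]            # E bits feed the next block

def polar_transform(u):
    """x = u F^{(tensor) n} over GF(2) via the butterfly network."""
    x, step = list(u), 1
    while step < len(x):
        for start in range(0, len(x), 2 * step):
            for i in range(start, start + step):
                x[i] ^= x[i + step]
        step *= 2
    return x
```

Since $F^{\otimes n}$ is an involution over GF(2), applying `polar_transform` twice recovers $u$.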
Finally, we propose the *secure polar coding scheme for the AWTC model.*

- *Preparation:*
    - consider the multi-block case with block number $T$, block length $N=2^n$ and fixed fraction pair $(\rho_w,\rho_r)$;
    - build the $\rho$ BEC blocks $W_{\epsilon_{\rho_w}}^N$ for the writing operation and $W_{\epsilon_{1-\rho_r}}^N$ for the reading operation;
    - choose a proper $\beta^*$ and polarize the block pair $(W_{\epsilon_{\rho_w}}^N,W_{\epsilon_{1-\rho_r}}^N)$ into $(W_{\epsilon_{\rho_w}N}^{(1:N)},W_{\epsilon_{1-\rho_r}N}^{(1:N)})$ to obtain $\mathcal{H}_{\epsilon_{\rho_w}}$, $\mathcal{L}_{\epsilon_{\rho_w}}$, $\mathcal{H}_{\epsilon_{1-\rho_r}}$ and $\mathcal{L}_{\epsilon_{1-\rho_r}}$ by ;
    - divide the index set into $(\mathcal{I}_\epsilon,\mathcal{R}_\epsilon,\mathcal{F}_\epsilon,\mathcal{B}_\epsilon)$ by and separate $\mathcal{E}_\epsilon\subset\mathcal{I}_\epsilon$ such that $|\mathcal{E}_\epsilon|=|\mathcal{B}_\epsilon|$.

- *Encoding:*
    - $u^{\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon}$ are assigned with information bits;
    - $u^{\mathcal{E}_\epsilon\cup\mathcal{R}_\epsilon}$ are assigned with uniformly distributed random bits;
    - $u^{\mathcal{F}_\epsilon}$ are assigned with publicly known frozen bits;
    - if $t=1$, $u^{\mathcal{B}_\epsilon}$ are assigned with pre-shared random bits;
    - if $t\geq2$, $u^{\mathcal{B}_\epsilon}$ are assigned with the bits of $u^{\mathcal{E}_\epsilon}$ of block $t-1$;
    - encode $u^N$ into the channel inputs $x^N$ by $x^N=u^N\mathbf{G}_N$.

- *Transmission:*
    - Alice transmits $x^N$ to Bob through a noiseless communication channel;
    - Eve arbitrarily chooses $\mathcal{S}_w$ with fraction $\rho_w$ and overwrites $x^{\mathcal{S}_w}$ with $``?"$;
    - Bob receives the modified channel outputs as $y^N$;
    - Eve arbitrarily chooses $\mathcal{S}_r$ with fraction $\rho_r$ and reads $x^{\mathcal{S}_r}$ to obtain $z^N$.
- *Decoding:* Bob uses successive cancellation (SC) decoding [@Arikan2009] to decode $y^N$ into $\hat{u}^N$ as follows:
    - if $i\in\mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon$, $$\hat{u}^i=\arg \max\limits_{u\in\{0,1\}} W_{\epsilon_{\rho_w} N}^{(i)}(u|\hat{u}^{1:i-1},y^{1:N});$$
    - if $i\in\mathcal{F}_\epsilon$, $\hat{u}^i$ is decoded as the publicly known frozen bit;
    - if $i\in\mathcal{B}_\epsilon$, in the case $t=1$, $\hat{u}^i$ is decoded as the pre-shared random bit; in the case $t\geq2$, $\hat{u}^i$ is decoded as the corresponding bit of $\hat{u}^{\mathcal{E}_\epsilon}$ of block $t-1$.

Performance {#sec_performance}
===========

In this section, we discuss the performance of the proposed secure polar coding scheme for the AWTC model.

Theoretical Analysis
--------------------

Consider the following theorem for polar codes.

\[theo\_drr\] (Decoding error rate of polar codes [@Mahdavifar2011 Prop. 3]) For any B-DMC channel block $W^N$, let $\mathcal{A}$ be an arbitrary subset of the index set $[\![1,N]\!]$ used as the information set for polar codes. Then the corresponding block error rate of SC decoding satisfies $$\mathrm{P_e}\leq\sum_{i\in \mathcal{A}}Z(W_N^{(i)}).$$

Now we analyze the performance of the proposed secure polar coding scheme in terms of reliability, security and achievable secrecy rate.

\[prop\_reliability\] (Reliability) By choosing a proper $\beta^*$, reliability can be achieved by the proposed secure polar coding scheme over the AWTC model.

Since the frozen bits in $\mathcal{F}_\epsilon$ are publicly known and Bob knows the pre-shared bits for the first $\mathcal{B}_\epsilon$, according to the multi-block chaining structure, the decoding error rate over the entire $T$ blocks is determined by the SC decoding of the $T$ blocks’ $\mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon$ and the $T-1$ blocks’ $\mathcal{E}_\epsilon$.
For each codeword bit $X^i$, which is transmitted through the actual equivalent channel $W_w^i$ but decoded according to the BEC $W_{\epsilon_{\rho_w}}^i$ by the legitimate parties, we use $\max[Z(W_{wN}^{(i)}),Z(W_{\epsilon_{\rho_w} N}^{(i)})]$ to analyze the reliability of polar decoding. Then, for the decoding error over $T$ blocks, we have $$\label{eq_ber} \begin{split} \mathrm{P_e}(T)&\leq T\sum_{i\in \mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon}\max\left[Z(W_{wN}^{(i)}),Z(W_{\epsilon_{\rho_w} N}^{(i)})\right]+(T-1)\sum_{i\in \mathcal{E}_\epsilon}\max\left[Z(W_{wN}^{(i)}),Z(W_{\epsilon_{\rho_w} N}^{(i)})\right]\\ &\overset{(a)}=T\sum_{i\in \mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon}Z(W_{\epsilon_{\rho_w} N}^{(i)})+(T-1)\sum_{i\in \mathcal{E}_\epsilon}Z(W_{\epsilon_{\rho_w} N}^{(i)})\\ &\overset{(b)}\leq (2T-1)o(2^{-N^{\beta^*}}), \end{split}$$ where $(a)$ is because $\mathcal{E}_\epsilon\subset\mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon\subseteq \mathcal{L}_w$, so that the corresponding $Z(W_{wN}^{(i)})=0$; and $(b)$ is due to Theo. \[theo\_drr\]. Therefore, when $N$ is infinite, we have $\lim_{N\rightarrow\infty}\mathrm{P_e}(T)=0$, which indicates that reliability can be achieved. When $N$ is finite, it can be observed that if $\beta^*$ increases, the decoding error rate decreases and a better reliability performance is achieved.

\[prop\_security\] (Security) By choosing a proper $\beta^*$, strong security can be achieved by the proposed secure polar coding scheme over the AWTC model.

For block $t$, let $\mathrm{M}^t=U^{\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon}$, $\mathrm{Z}^t=Z^N$, $\mathrm{E}^t=U^{\mathcal{E}_\epsilon}$ and $\mathrm{F}^t=U^{\mathcal{F}_\epsilon}$. Then the information leakage over the entire $T$ blocks is $\mathrm{L}(T)=I(\mathrm{M}^{1:T};\mathrm{Z}^{1:T})$.
For the multi-block chaining structure, as deduced in [@Vard2013strong Section IV-B], with publicly known frozen bits we have $$\mathrm{L}(T)\leq\sum_{t=1}^T I(\mathrm{M}^t,\mathrm{E}^t,\mathrm{F}^t;\mathrm{Z}^t)+I(\mathrm{E}^0;\mathrm{Z}^0),$$ where $I(\mathrm{E}^0;\mathrm{Z}^0)$ refers to the information leakage of the pre-shared bits before transmission, which is $0$. Let $\mathrm{a}^1<\mathrm{a}^2<...<\mathrm{a}^{|\mathcal{A}|}$ be the corresponding indices of the elements of $U^\mathcal{A}$ for any subset $\mathcal{A}$, such that $U^\mathcal{A}\triangleq U^{\mathrm{a}^1:\mathrm{a}^{|\mathcal{A}|}}=U^{\mathrm{a}^1},...,U^{\mathrm{a}^{|\mathcal{A}|}}$. Since the subsets $\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon$ and $\mathcal{R}_\epsilon$ match the construction of the induced channel [@Mahdavifar2011 Lem. 15], we have $$\label{eq_laekage} \begin{split} I(\mathrm{M}^t,\mathrm{E}^t,\mathrm{F}^t;\mathrm{Z}^t) =&I(U^{\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon};Z^N)\\ =&\sum_{i=1}^{|\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon|}I(U^{\mathrm{a}^i};Z^N|U^{\mathrm{a}^1:\mathrm{a}^{i-1}})\\ \overset{(a)}{=}& \sum_{i=1}^{|\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon|}I(U^{\mathrm{a}^i};U^{\mathrm{a}^1:\mathrm{a}^{i-1}},Z^N)\\ \leq&\sum_{i=1}^{|\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon|}I(U^{\mathrm{a}^i};U^{1:\mathrm{a}^{i}-1},Z^N)\\ \overset{(b)}=& \sum_{j\in\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon} \max \left[ I(W_{rN}^{(j)}), I(W_{\epsilon_{1-\rho_r}N}^{(j)})\right]\\ \overset{(c)}\leq& o(N2^{-N^{\beta^*}}), \end{split}$$ where $(a)$ is because the $U^{\mathrm{a}^{i}}$ are independent of each other; $(b)$ is because Eve can use either $W_r^N$ or $W_{\epsilon_{1-\rho_r}}^N$ for SC decoding and her best strategy is to choose the one with the higher capacity between $W_{rN}^{(j)}$ and $W_{\epsilon_{1-\rho_r}N}^{(j)}$; and $(c)$ is because, by choosing a proper $\beta^*$, we have $(\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon)\subseteq\mathcal{H}_r$ and from we have
$Z(W_{rN}^{(j)})=1$ for all $j\in\mathcal{H}_r$, thus from , we have $I(W_{rN}^{(j)})=0$ for $j\in\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon$; $(c)$ also uses and , which give, for $j\in\mathcal{I}_\epsilon\cup\mathcal{F}_\epsilon$, $I(W_{\epsilon_{1-\rho_r}N}^{(j)})=1-Z(W_{\epsilon_{1-\rho_r}N}^{(j)})\leq 2^{-N^{\beta^*}}$. Thus we have $\mathrm{L}(T)\leq o(TN2^{-N^{\beta^*}})$. When $N$ is infinite, we have $\lim_{N\rightarrow\infty}\mathrm{L}(T)=0$, which indicates that strong security can be achieved. When $N$ is finite, it can be observed that if $\beta^*$ increases, the information leakage decreases and a better security performance is achieved.

\[prop\_secrecyrate\] (Secrecy rate) When $N$ is infinite, by choosing a proper $\beta^*$, the proposed secure polar coding scheme can achieve the secrecy capacity of the AWTC model.

Since over the entire $T$ blocks the message bits $\mathrm{M}^{1:T}$ are transmitted over the subset $\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon$, which is proven secure and reliable for a proper $\beta^*$, the secrecy rate is $$\label{eq_srate} \mathrm{R_s}(T)=\frac{\sum_{t=1}^{T}|\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon|}{TN}=\frac{|\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon|}{N}.$$ Then, when $N$ is infinite, we have $$\begin{split} \lim_{N\rightarrow\infty}\mathrm{R_s}(T)&=\lim_{N\rightarrow\infty}\frac{|\mathcal{I}_\epsilon\setminus\mathcal{E}_\epsilon|}{N}\\ &=\lim_{N\rightarrow\infty}\frac{|\mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon|-|\mathcal{E}_\epsilon\cup\mathcal{R}_\epsilon|}{N}\\ &=\lim_{N\rightarrow\infty}\frac{|\mathcal{I}_\epsilon\cup\mathcal{R}_\epsilon|-|\mathcal{B}_\epsilon\cup\mathcal{R}_\epsilon|}{N}\\ &=\lim_{N\rightarrow\infty}\frac{|\mathcal{L}_{\epsilon_{\rho_w}}|-|\mathcal{H}_{\epsilon_{1-\rho_r}}^c|}{N}\\ &=1-\rho_w-\rho_r, \end{split}$$ where the last equality is due to [@Mahdavifar2011 Theo.
1], which gives $$\begin{split} &\lim_{N\rightarrow\infty}\frac{|\mathcal{L}_{\epsilon_{\rho_w}}|}{N}=I(W_{\epsilon_{\rho_w}})=1-\rho_w, \\ &\lim_{N\rightarrow\infty}\frac{|\mathcal{H}_{\epsilon_{1-\rho_r}}^c|}{N}=I(W_{\epsilon_{1-\rho_r}})=\rho_r. \end{split}$$ Thus the secrecy capacity of the AWTC model can be achieved with a proper $\beta^*$ and infinite $N$.

Simulations
-----------

Next, we test the performance of the proposed secure polar coding scheme with finite block length $N$. First, we test the upper bound of the information leakage in , the upper bound of the legitimate BER in , and the secrecy rate in . In particular, we let $\rho_w=0.2$, $\rho_r=0.4$, block length $N=2^8$ to $2^{18}$, parameter $\beta=0.20$ to $0.32$ and block number $T=300$. The simulation results are illustrated in Fig. \[fig\_theo\_sim\].

Fig. \[fig\_ulb\] shows the decrease of the upper bound of the legitimate BER as the block length $N$ grows for all $\beta$, which matches our analysis of reliability in Prop. \[prop\_reliability\]. There are some unstable points in Fig. \[fig\_ulb\], because the corresponding value of $\beta$ is not in the range of proper $\beta^*$ for that block length $N$. As the block length $N$ increases, this unstable phenomenon disappears, which matches Prop. \[prop\_including\]. Fig. \[fig\_uil\] shows the decrease of the upper bound of the information leakage as the block length $N$ grows for all $\beta$, which matches our analysis of security in Prop. \[prop\_security\]. Fig. \[fig\_sr\] shows the increase of the secrecy rate as the block length $N$ grows for all $\beta$, which matches our analysis of the secrecy rate in Prop. \[prop\_secrecyrate\].

Next, we test the actual BERs for both Bob and Eve by implementing the entire secure communication process over the AWTC model. In particular, we let $\rho_w=0.2$, $\rho_r=0.4$, block length $N=2^8$ to $2^{12}$, parameter $\beta=0.22$ to $0.30$ and block number $T=1000$.
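A minimal end-to-end sketch of such a finite-$N$ secrecy-rate evaluation is given below. The simplifications are our own: the Bhattacharyya recursion is specialized to BECs, indices are 0-based, and $|\mathcal{E}_\epsilon|=|\mathcal{B}_\epsilon|$ is used only through the set sizes.

```python
def bec_z(z):
    """Bhattacharyya recursion for BECs: (z1, z2) -> (z1+z2-z1*z2, z1*z2)."""
    if len(z) == 1:
        return z
    h = len(z) // 2
    return (bec_z([z[i] + z[i + h] - z[i] * z[i + h] for i in range(h)])
            + bec_z([z[i] * z[i + h] for i in range(h)]))

def secrecy_rate(n, rho_w, rho_r, beta):
    """Finite-N secrecy rate |I_eps \\ E_eps| / N = (|I_eps| - |B_eps|) / N
    for N = 2^n, with threshold delta_N = 2^(-N^beta)."""
    N = 2 ** n
    delta = 2.0 ** (-(N ** beta))
    zw = bec_z([rho_w] * N)        # writing-side rho BEC block
    zr = bec_z([1.0 - rho_r] * N)  # reading-side rho BEC block
    L_w = {i for i in range(N) if zw[i] <= delta}
    H_r = {i for i in range(N) if zr[i] >= 1.0 - delta}
    I = L_w & H_r
    B = set(range(N)) - L_w - H_r
    return (len(I) - len(B)) / N
```

In the degenerate cases $(\rho_w,\rho_r)=(0,0)$ and $(0,1)$ the rate equals $1-\rho_w-\rho_r$ exactly even at small $N$; for intermediate parameters it approaches that limit as $N$ grows.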
For the transmitted message, we use uniformly distributed binary random bits. The simulation results are illustrated in Fig. \[fig\_ber\_sim\]. Fig. \[fig\_ler\] shows the BER over the entire $1000$ blocks for the legitimate user Bob decoding the message bits: the legitimate BER decreases significantly with increasing block length $N$ for all $\beta$. Fig. \[fig\_eer\] shows the BER over the entire $1000$ blocks for the eavesdropper Eve decoding the message bits: the eavesdropper BER remains close to $0.5$ with increasing block length $N$ for all $\beta$. Therefore, Fig. \[fig\_ber\_sim\] demonstrates that the proposed secure polar coding scheme can provide secure and reliable communication over the AWTC model with finite block length $N$.

Conclusion {#sec_con}
==========

In this paper, we have considered the secure coding problem of the AWTC model. We have presented a channel-based equivalent model of the AWTC model by using the $\rho$ equivalent channel block. Then we have studied the polarization relationship between the $\rho$ equivalent channel block and the $\rho$ BEC block and found that, by choosing a proper polarization parameter $\beta=\beta^*$, the polarized subsets of the $\rho$ BEC block are fully contained in the polarized subsets of the $\rho$ equivalent channel block (Prop. \[prop\_including\]). Based on this result, we have constructed a secure polar coding scheme for the AWTC model and analyzed its performance. Theoretically, we have proven that the proposed scheme achieves the secrecy capacity of the AWTC model under both the reliability and strong security criteria with infinite block length $N$. Further, for the case of finite block length $N$, we have carried out simulations which show that the proposed secure polar coding scheme can provide secure and reliable communication over the AWTC model. The containment relationship of the polarized subsets of the $\rho$ equivalent channel block and the $\rho$ BEC block in Prop.
\[prop\_including\] is the key element of our secure polar coding construction. In this work, we obtained this key result by observation from simulation tests, which is not fully rigorous, although its correctness is supported by the communication experiments of the proposed secure polar coding scheme. Thus the theoretical proof of Prop. \[prop\_including\] will be our future work.

Acknowledgment {#acknowledgment .unnumbered}
==============

This work is supported in part by the Natural Science Foundation of Hubei Province (Grant No. 2019CFB137) and the Fundamental Research Funds for the Central Universities (Grant No. 2662017QD042, No. 2662018JC007).

[1]{} A. D. Wyner, “The wire-tap channel", *Bell System Tech. J.*, vol. 54, no. 8, pp. 1355-1387, Oct. 1975. L. H. Ozarow and A. D. Wyner, “Wire-tap channel II", *AT&T Bell Laboratories Technical Journal*, vol. 63, no. 10, pp. 2135-2157, 1984. P. Wang and R. Safavi-Naini, “A model for adversarial wiretap channels", *IEEE Trans. Inf. Theory*, vol. 62, no. 2, pp. 970-983, Nov. 2015. E. Ar[i]{}kan, “Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels", *IEEE Trans. Inf. Theory*, vol. 55, no. 7, pp. 3051-3073, Jul. 2009. H. Mahdavifar and A. Vardy, “Achieving the secrecy capacity of wiretap channels using polar codes", *IEEE Trans. Inf. Theory*, vol. 57, no. 10, pp. 6428-6443, Oct. 2011. E. Şaşoğlu and A. Vardy, “A new polar coding scheme for strong security on wiretap channels", *IEEE Int. Symp. Inf. Theory (ISIT)*, pp. 1117-1121, Jul. 2013. Y.-P. Wei and S. Ulukus, “Polar coding for the general wiretap channel", in *Proc. Information Theory Workshop*, pp. 1-5, Apr. 26/May 1 2015. R. A. Chou and M. R. Bloch, “Polar coding for the broadcast channel with confidential messages: a random binning analogy," *IEEE Trans. Inf. Theory*, vol. 62, no. 5, pp. 2410-2429, May 2016. H. Si, O. O. Koyluoglu and S.
Vishwanath, “Hierarchical polar coding for achieving secrecy over state-dependent wiretap channels without any instantaneous CSI," *IEEE Trans. Comm.*, vol. 64, no. 9, pp. 3609-3623, Sept. 2016. T. C. Gulcu and A. Barg, “Achieving secrecy capacity of the wiretap channel and broadcast channel with a confidential component," *IEEE Trans. Inf. Theory*, vol. 63, no. 2, pp. 1311-1324, Feb. 2017. R. A. Chou and A. Yener, “Polar coding for the multiple access wiretap channel via rate-splitting and cooperative jamming," *IEEE Trans. Inf. Theory*, vol. 64, no. 12, pp. 7903-7921, Dec. 2018. M. Alsan and E. Telatar, “A simple proof of polarization and polarization for non-stationary memoryless channels", *IEEE Trans. Inf. Theory*, vol. 62, no. 9, pp. 4873-4878, Sep. 2016. H. Mahdavifar, “Polar coding for non-stationary channels", *Online: https://arxiv.org/abs/1611.04203v3*. Y. Zhao, “Non-stationary polarization: proof and application", *Online: https://arxiv.org/abs/1812.00160*.

[Yizhi Zhao]{} received the Ph.D. degree in the School of Optical and Electronic Information from the Huazhong University of Science and Technology, Wuhan, China, in 2017. He is currently an Assistant Professor with the College of Informatics, Huazhong Agricultural University. His research interests include physical layer coding, information theory and machine learning.

[Hongmei Chi]{} received her Ph.D. degree in the School of Mathematics and Statistics from Wuhan University, Wuhan, China, in 2014. Currently, she is an Assistant Professor with the College of Science, Huazhong Agricultural University. Her research interests are statistical learning, stochastic analysis and information theory.

[^1]: Y. Zhao is with the College of Informatics, Huazhong Agricultural University, Wuhan, Hubei, China. E-mail: zhaoyz@mail.hzau.edu.cn.

[^2]: H. Chi is with the College of Science, Huazhong Agricultural University, Wuhan, Hubei, China. E-mail: chihongmei@mail.hzau.edu.cn.
---
author:
- 'H. Beuther'
- 'P. Schilke'
- 'F. Gueth'
- 'M. McCaughrean'
- 'M. Andersen'
- 'T.K. Sridharan'
- 'K.M. Menten'
date: 'Received ... /Accepted ...'
title: 'IRAS 05358+3543: Multiple outflows at the earliest stages of massive star formation'
---

Introduction
============

It is still not known whether the physical processes leading to massive stars are similar to those of low-mass star formation, or whether different processes are taking place. Classical star formation scenarios predict moderate accretion rates around $10^{-6}-10^{-5}$ M$_{\odot}$ yr$^{-1}$ [@shu; @1977], which are incapable of overcoming the radiation pressure for sources more massive than approximately 10 M$_{\odot}$ [@wolfire; @1987]. Observations over the last decade have shown that massive star formation most likely occurs in a clustered mode, and theories were proposed that advocate the coalescence of protostars in dense cluster centers to build up the most massive sources [@bonnell; @1998; @stahler; @2000]. In contrast, according to other theories following the classical accretion scenario, it is possible to form massive stars via enhanced accretion and disks [@jijina; @1996; @norberg; @2000; @tan; @2002; @yorke; @2002]. For more details on massive star formation see recent reviews, e.g., @stahler [@2000], @richer [@2000], @churchwell [@2000], and @kurtz [@2000]. Because molecular outflows provide, on large angular scales, a wealth of information about the innermost parts of star-forming regions, massive outflows have been investigated in more detail in recent years. Such outflow studies seemed to indicate a lower degree of collimation for massive outflows than is known for low-mass sources. Collimation factors between 1 and 2 were reported, as compared to low-mass outflows, where collimation factors up to 10 are found (e.g., @richer [@2000]).
Based on those observations, new outflow scenarios were suggested; e.g., @churchwell [@2000] proposed that massive outflows might be produced by deflection of infalling material from the central protostar. However, such analyses did not properly take into account the spatial resolution of single-dish observations. Massive star formation sites are on average far more distant (a few kpc) than well known low-mass sources (a few hundred pc), and thus their angular sizes are smaller in spite of larger linear sizes. Recent statistical work on massive outflows with $11''$ resolution by @beuther [@2001b] shows that the average collimation – even at this spatial resolution – is higher than previously thought. Additionally, @beuther [@2001b] find that low-mass correlations of outflow and core parameters continue up to the high-mass regime, suggesting that similar star formation processes are responsible for forming stars of all masses. But their data also indicate that even higher angular resolution is needed to disentangle real source and outflow structures, and thus detailed studies with mm interferometers are needed. There have been recent interferometric studies of massive outflow sources, of which prominent examples are [*IRAS*]{} 20126+4104 [@cesaroni; @1997; @cesaroni; @1999] or G192.16-3.82 [@shepherd; @1998; @shepherd; @1999]. While the outflow of [*IRAS*]{} 20126+4104 is rather collimated, G192.16-3.82 does not show high degrees of collimation, and @shepherd [@1999] propose as outflow mechanism a combination of a strong, wide-angle wind with a weak jet. In this paper, we present a detailed interferometric and single-dish study at mm wavelengths of an extremely young and deeply embedded high-mass protostellar object, [*IRAS*]{} 05358+3543, which is also known in the literature as S233IR. This source is part of a larger sample presented and discussed by @sridha and @beuther [@2001a; @beuther; @2001b; @beuther; @2001c].
[*IRAS*]{} 05358+3543 has different peak positions in the [*IRAS*]{} 12 $\mu$m and 100 $\mu$m bands. While the position in the Point Source Catalog is based on the 12 $\mu$m emission due to an infrared cluster, the 100 $\mu$m emission peaks approximately $40''$ away to the north-east and traces the high-mass protostellar cluster we are focusing on (Fig. \[mosaic\]). As the mm dust continuum emission coincides with the 100 $\mu$m peak, we will refer to that source as [*IRAS*]{} 05358+3543mm. Based on [*IRAS*]{} infrared data, we estimate the bolometric luminosity of the source to be $\sim 6300$ L$_{\odot}$, and the dust temperature to be around 47 K (Table \[calc\], @sridha). The first CO 1–0 observations of this region are reported by @snell [@1990], who detected a 15 M$_{\odot}$ bipolar outflow and classify the source as a massive star formation site. The estimated distance of 1.8 kpc is based on observations of the nearby H[ii]{} region S233 [@snell; @1990]. As additional massive star formation and outflow signposts, Class [ii]{} CH$_3$OH [@menten; @1991; @sridha] and H$_2$O maser emission [@tofani; @1995; @beuther; @2001c] is observed towards [*IRAS*]{} 05358+3543mm. Additionally, OH maser emission is found in the region with low spatial resolution [@wouterloot; @1988]. As reported by @tofani [@1995] and confirmed by @sridha, [*IRAS*]{} 05358+3543mm does not show any 3.6 cm emission down to 1 mJy. Thus, based on the total luminosity of the region and a regular initial mass function, it is likely that we are witnessing a very early stage of massive star formation (see @sridha). The spectral type of the most massive object of the cluster is likely between B2 and B1 ($10~{\rm{M_{\odot}}}\leq M \leq 13~{\rm{M_{\odot}}}$). 
@porras [@2000], @yao [@2000] and @jiang [@2001] have published a series of near-infrared imaging and polarimetric studies of this region, showing that there are two young embedded stellar clusters, one associated with the 12 $\mu$m source (age $\sim$3 Myr) and one with [*IRAS*]{} 05358+3543mm (age $\leq 2$ Myr). Narrow-band observations in the v=1–0 S(1) line of molecular hydrogen at 2.122${\mbox{\,${\mu}$m}}$ delineate at least two bipolar jets emanating from the younger cluster, and polarimetric imaging identifies the likely source of the outflows as being deeply embedded and undetected at near-infrared wavelengths [@yao; @2000; @jiang; @2001]. In an accompanying paper, @mccaughrean [@2001] present new, more sensitive, and higher-spatial-resolution near- and mid-infrared observations of the region, which reveal new information concerning the shocked H$_2$ outflows and their driving sources.

Observations
============

We have observed [*IRAS*]{} 05358+3543mm in several tracers with different telescopes. Table \[parameter\] summarizes the observations described below.

---------------- --------- ------ ----------------- ---------------- ------------
                 freq.     Obs    HPBW              $T_{\rm{sys}}$   $\Delta v$
                 \[GHz\]          \[$''$\]          \[K\]            \[km/s\]
CO 1–0           115.27    PV     22                120              0.2
CO 1–0           115.27    PdBI   $4.1\times 3.3$   300              0.4
SiO 2–1          86.85     PV     29                85               3.5
SiO 2–1          86.85     PdBI   $5.8\times 5.6$   150              0.3
H$^{13}$CO$^+$   86.75     PV     29                85               3.5
H$^{13}$CO$^+$   86.75     PdBI   $5.8\times 5.6$   150              0.3
$^{13}$CO 1–0    110.20    PV     22                120              0.8
CO 2–1           230.54    PV     11                250              0.1
CO 6–5           691.47    CSO    11                3000             0.02
CH$_3$OH         241.79    PV     11                250              0.1
---------------- --------- ------ ----------------- ---------------- ------------

: Observation parameters of spectral line observations; observatories are the 30 m at Pico Veleta (PV), the Plateau de Bure Interferometer (PdBI), the Caltech Submillimeter Observatory (CSO), and the Very Large Array (VLA); quoted system temperatures $T_{\rm{sys}}$ are average values, $\Delta v$ is the velocity resolution.
\[parameter\]

Plateau de Bure Interferometer (PdBI)
-------------------------------------

[*IRAS*]{} 05358+3543mm was observed with the IRAM Plateau de Bure millimeter array [@guilloteau; @1992] between August and October 1999 in two frequency setups[^1]. Five 15 m antennas equipped with dual-frequency receivers were used in three different configurations (4D1, 4C2 & 4C2+N09) with baselines ranging from 24 m to 180 m. Only four antennas were used during the August observations. Because of the poor summer weather conditions, the quality of the 1 mm data was not satisfactory in both setups, and they were only used for phase corrections. The 3 mm receivers were tuned to 115.27 GHz (USB) and 86.8 GHz (LSB) to cover the CO 1–0 line, and the SiO 2–1 and H$^{13}$CO$^+$ 1–0 lines simultaneously. The phase noise was lower than 30$^{\circ}$ and atmospheric phase correction based on the 1.3 mm total power was applied. For continuum measurements, we placed two 160 MHz correlator units in each band. Temporal fluctuations of amplitude and phase were calibrated with frequent observations of the quasars 2145+067 and 0548+378. The amplitude scale was derived from measurements of MWC349 and CRL618. We estimate the final flux density accuracy to be $\sim 15\%$. The reference center is R.A.\[J2000\] 05:39:13.0 and Dec.\[J2000\] 35:45:54, and the $v_{\rm{LSR}}$ is $-17.6$ km s$^{-1}$. Ten fields (see Figure \[mosaic\]) were observed to cover the whole source, with the exception of the first configuration which covered only the 6 eastern fields.

Pico Veleta (PV)
----------------

[*Line observations:*]{}\
To obtain large scale information about [*IRAS*]{} 05358+3543mm, we used the IRAM 30 m telescope at Pico Veleta in the Sierra Nevada, Spain in April 1999 and mapped the whole region on-the-fly in the CO 2–1, CO 1–0, $^{13}$CO 1–0, H$^{13}$CO$^+$ 1–0, SiO 2–1 and CH$_3$OH $5_0-4_0(A+)$ lines (near 241.79 GHz).
Due to the continuous movement of the telescope during on-the-fly observations, spurious stripes in the scanning direction (R.A.) are found in some maps (see Fig. \[sd\]). Except for CO 1–0 (observed later \[May 2000\] to add the short spacings, see §\[short\]), all the other lines were observed simultaneously, which guarantees the best alignment between the different maps. The on-the-fly coverages were done twice with 2 sec integration time per dump and a $4''$ grid (i.e., Nyquist-sampling at 241 GHz). As backends we used the autocorrelator and two 1 MHz filterbanks.\
[*Bolometer observations:*]{}\
The 1.2 mm single-dish continuum observations were conducted with the MAMBO array at the IRAM 30 m telescope. For details on the observations and analysis of these data see @beuther [@2001a].

Merging the interferometric and single-dish data {#short}
------------------------------------------------

The IRAM 30 m data (CO 1–0, SiO 2–1 and H$^{13}$CO$^+$ 1–0) were used to derive short-spacing information and thereby complement the interferometric PdBI data. The algorithm used to derive the visibilities corresponding to each pointing center of the mosaic is described by @gueth [@1996]. The single-dish and interferometer visibilities are subsequently processed together. Relative weighting has been chosen to minimize the negative side-lobes in the resulting dirty beam while keeping the highest angular resolution possible. Images were again produced using natural weighting, and a CLEAN-based deconvolution of the mosaic was then performed. The final beam sizes are $4.7''\times 3.8''$ (P.A. $51^{\circ}$) for CO 1–0, and $5.9''\times 5.5''$ (P.A. $65^{\circ}$) for H$^{13}$CO$^+$ 1–0 and SiO 2–1. The beam size for the 115 GHz continuum data, where only the PdBI data were used, is $4''\times 3''$ (P.A. $46^{\circ}$).
Caltech Submillimeter Observatory (CSO)
---------------------------------------

We used the CSO[^2] 10.4 m telescope to obtain a CO 6–5 map of [*IRAS*]{} 05358+3543mm in the on-the-fly mode on December 15, 1999. At this frequency, the angular resolution of the CSO is $11''$, well suited for comparison with the CO 2–1 data from Pico Veleta. As backend we used the facility AOS. Further observation parameters are given in Table \[parameter\].

Observational results
=====================

Single-dish observations
------------------------

### Dust continuum emission {#dust}

Figure 1 presents the 1.2 mm continuum image of [*IRAS*]{} 05358+3543mm. The main peak is at the [*IRAS*]{} 100 $\mu$m peak position and has more extended emission to the north-west and the south-west. The latter sub-source is directly next to the 12 $\mu$m peak position, which is the center of a slightly older cluster [@porras; @2000] and can be interpreted as a remnant of this older star-forming region. Additionally, there exists a north-western elongation in the dust map, which will be shown to be associated with molecular line emission (see section \[single\_dish\_line\]). Assuming that the mm continuum is mainly produced by optically thin dust emission with a grain emissivity index $\beta=2$, @beuther [@2001a] estimate the total gas mass $M$ to be $\sim 600~\rm{M}_{\odot}$ and the column density $N_{\rm{H_2}}$ to be a few times $10^{24}$ cm$^{-2}$ (Table \[calc\]), which shows that we are really dealing with a high-mass star formation site. The column density converts to a visual absorption magnitude $A_{\rm{v}}$ of $\sim 1000$ ($A_{\rm{v}}=N_{\rm{H_2}}/(0.94\times 10^{21})$, @frerking [@1982]). If a central star has already ignited, its free-free emission could be quenched by the infalling core [@walmsley; @1995].
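As a quick numerical check of the extinction conversion above (a minimal sketch; the column density is taken at the lower end of the "a few times $10^{24}$ cm$^{-2}$" range quoted in the text):

```python
# Visual extinction from the H2 column density, A_v = N(H2) / (0.94e21),
# the Frerking et al. (1982) conversion used in the text.
N_H2 = 1.0e24               # cm^-2, lower end of "a few times 10^24"
A_v = N_H2 / 0.94e21        # magnitudes
print(round(A_v))           # -> 1064, i.e. A_v of order 1000 as stated
```

With the full range of quoted column densities the extinction stays of order $10^3$ magnitudes, consistent with the deeply embedded nature of the source.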
----------------------------------------- ---------------------
$M$ \[M$_{\odot}$\]                       610
$N_{\rm{H_2}}$ \[cm$^{-2}$\]              $4.0\times 10^{24}$
$L$ \[L$_{\odot}$\]                       6300
$T_{\rm{dust}}$ \[K\]                     47
$M_{\rm{out}}$ \[M$_{\odot}$\]            20
$\dot{M}_{\rm{out}}$ \[M$_{\odot}$/yr\]   $6\times 10^{-4}$
$t$ \[yr\]                                36000
----------------------------------------- ---------------------

: Physical parameters of the observed region from 1.2 mm continuum data ($M$ and $N_{\rm{H_2}}$, §\[dust\]), [*IRAS*]{} ($L$ and $T_{\rm{dust}}$, @sridha) and CO 2–1 observations ($M_{\rm{out}}$, $\dot{M}_{\rm{out}}$ and dynamical age $t$, §\[single\_dish\_line\])

\[calc\]

### Molecular emission {#single_dish_line}

Figure \[sd\] presents single-dish data obtained at the 30 m telescope on Pico Veleta. The $^{12}$CO 2–1 lines show self-absorption at the line center (Fig. \[spectrum\]), and blue and red wing emission due to molecular outflows. Blue (v=\[$-32,-21$\] km s$^{-1}$) and red (v=\[$-12,-4$\] km s$^{-1}$) $^{12}$CO wing emission maps were produced by integrating the wing-part of the spectra. It is difficult to disentangle bipolar structure in spite of the known outflow features presented by @porras [@2000], but the outflow lobes are centered at the mm core. As outlined in §\[pdb\], the inclination angle of the outflow is low, and we use the mean intensities of the red and blue line-wing maps in the spatial area known from the Plateau de Bure observations for further outflow parameter determination. Opacity-corrected H$_2$ column densities $N_{\rm{b}}$ and $N_{\rm{r}}$ in both outflow lobes can be calculated by assuming a constant $^{13}$CO/$^{12}$CO $2-1$ line wing ratio throughout the outflow [@cabrit; @1990]. @choi [@1993] found an average $^{13}$CO/$^{12}$CO $2-1$ line wing ratio around 0.1 in 7 massive star-forming regions, corresponding to a $\tau (^{13}$CO $2\to1)=0.1$. We adopt this value for our sample as well, and we assume 30 K as average temperature in the outflow.
The outflow mass $M_{\rm{out}}$, the dynamical timescale $t$, and the mass entrainment rate $\dot{M}_{\rm{out}}$ are calculated via: $$\begin{aligned}
M_{\rm{out}} &=& (N_{\rm{b}} \times \rm{size_b} + N_{\rm{r}} \times \rm{size_r})\ m_{\rm{H_2}} \\
t &=& \frac{r}{(v_{\rm{max}_b}+v_{\rm{max}_r})/2}\\
\dot{M}_{\rm{out}} &=& \frac{M_{\rm{out}}}{t}\end{aligned}$$ where $\rm{size_b}$ and $\rm{size_r}$ are the areas of the blue and red outflow lobes, respectively, $m_{\rm{H_2}}$ the mass of the H$_2$ molecule, and $v_{\rm{max}_b}$ and $v_{\rm{max}_r}$ the maximum velocities observed in each line wing. A more detailed description of how the outflow parameters are determined is given in @beuther [@2001b]. According to @cabrit [@1990], derived masses are accurate to a factor 2 to 4, whereas the error in the determination of the dynamical parameters is higher, up to a factor 10. The derived total outflow mass $M_{\rm{out}}$ is around 20 $M_{\odot}$ and the mass entrainment rate of molecular gas $\dot{M}_{\rm{out}}$ approximately $6\times 10^{-4}$ M$_{\odot}$yr$^{-1}$ (see Table \[calc\]). The latter value is derived by dividing $M_{\rm{out}}$ by the dynamical timescale $t$. As outlined in @beuther [@2001b], such a mass entrainment rate results in accretion rate estimates $\dot{M}_{\rm{accr}}$ around $10^{-4}$ M$_{\odot}$yr$^{-1}$. These are lower limits, because the outflow is not strongly inclined to the plane of the sky (see section \[pdb\]), which minimizes the wing emission and thereby the detectable (and separable) outflow emission. In spite of the low accuracy, these values are high and confirm the outflow to be massive. All other molecular line maps presented in Figure \[sd\] peak at positions that are clearly offset from the main mm core. This is in contrast with the $^{12}$CO emission, where not only the wing emission is centered on the main mm peak, but also the integrated emission (see the $^{12}$CO 6–5 map in Figure \[single\]).
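The outflow relations above translate directly into code. The sketch below uses placeholder lobe sizes, column densities, and wing velocities for illustration only, not the measured map values:

```python
# Sketch of the outflow-parameter relations M_out, t, Mdot_out.
# All input numbers below are placeholders, not the measured values.
M_H2 = 3.35e-24    # g, mass of one H2 molecule
MSUN = 1.989e33    # g
PC = 3.086e18      # cm
YR = 3.156e7       # s

def outflow_parameters(N_b, N_r, size_b, size_r, r, v_max_b, v_max_r):
    """N_*: mean wing column densities [cm^-2]; size_*: lobe areas [cm^2];
    r: lobe length [cm]; v_max_*: maximum wing velocities [cm/s].
    Returns (M_out [Msun], t [yr], Mdot_out [Msun/yr])."""
    M_out = (N_b * size_b + N_r * size_r) * M_H2 / MSUN
    t = r / ((v_max_b + v_max_r) / 2.0) / YR
    return M_out, t, M_out / t

M_out, t, Mdot = outflow_parameters(
    N_b=5e20, N_r=5e20,                      # placeholder column densities
    size_b=(0.3 * PC)**2, size_r=(0.3 * PC)**2,
    r=0.5 * PC,                              # placeholder lobe length
    v_max_b=15e5, v_max_r=13e5)              # ~15 and ~13 km/s wings
# t comes out near 3.5e4 yr for these inputs, comparable to Table [calc]
```

For the real maps, $N_{\rm b}$ and $N_{\rm r}$ are the opacity-corrected wing column densities and the sizes come from the wing emission maps, as described in the text.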
All data at Pico Veleta were taken simultaneously, thus pointing errors cannot produce these offsets. The $^{13}$CO 1–0, H$^{13}$CO$^+$, SiO and CH$_3$OH images at the line center show a similar north-western elongation as the mm continuum map. The low spatial and spectral resolution of the H$^{13}$CO$^+$ and SiO data prevents further analysis of these data, but the line wings of the CH$_3$OH data show a bipolar distribution slightly shifted to the west with respect to the CO outflow, indicating that the CH$_3$OH emission might trace a different outflow in the west (see §\[pdb\]).

### Temperature gradients {#temp}

Multi-line studies of rotationally excited CO provide a good tool to study temperature variations in molecular clouds. In thermalized gas, where the local thermodynamical equilibrium (LTE) approximation applies, the low-$J$ CO 1–0 and 2–1 lines trace cooler material while the mid-$J$ CO 6–5 line is sensitive to warmer molecular gas (e.g., @beuther [@2000], @hirano [@2001]). To get an idea of the temperature distribution in [*IRAS*]{} 05358+3543mm, we derived $^{12}\rm{CO}~6-5/2-1$ and $^{12}\rm{CO}~1-0/2-1$ ratio maps for the blue and red wing emission as defined in §\[single\_dish\_line\] (Figure \[co\_ratios\]). All data are smoothed to the resolution of the CO 1–0 transition ($22''$), even the 6–5/2–1 ratio map, to increase its signal-to-noise ratio. In most regions rather uniform ratios around 0.5 are observed, but each of the ratio maps has a prominent region with line ratios larger than 1. The blue 6–5/2–1 ratios rise slightly east of the [*IRAS*]{} position (named P1 and marked by an asterisk in Figure \[co\_ratios\]), and the red 6–5/2–1 ratios rise south-east of the main mm peak (P2, triangle in Figure \[co\_ratios\]), which is not prominent in the blue wings.
In contrast to the increases in the 6–5/2–1 ratios around P1 and P2, the 1–0/2–1 ratios are rather uniform around 0.5 throughout those regions, but they rise – independent of the line wings – around $30''$ west of the mm peak, where the mm emission drops significantly (P3, circle in Figure \[co\_ratios\]). The blue 6–5/2–1 ratios are lowest there. As the average $^{13}$CO/$^{12}$CO $2-1=0.1$ line wing ratio (§\[single\_dish\_line\]) corresponds to a $^{13}\rm{CO}\,2-1$ opacity of $\approx 0.1$, the average $^{12}\rm{CO}\,2-1$ opacity in the wings is about 6 [@langer; @1990]. To quantify the temperatures in those regions, we ran several Large Velocity Gradient (LVG) models for optically thick CO emission ($\tau (^{12}\rm{CO}\,2-1) \approx 6$). The regions around P1 and P2 have average densities of a few times $10^5$ cm$^{-3}$ [@beuther; @2001a], which is sufficient to thermalize even the $J=6-5$ transition. To thermalize the CO 2–1 transition, densities of $10^4$ cm$^{-3}$ are sufficient, which is a reasonable assumption at position P3. Fig. \[ttt\] presents the resulting line ratios versus the kinetic temperatures for the above estimated densities, as calculated in the LVG approximation. Increasing $^{12}\rm{CO}~1-0/2-1$ ratios indicate decreasing temperatures, whereas increasing $^{12}\rm{CO}~6-5/2-1$ ratios signpost enhanced temperatures in such regions. The LVG calculations result in temperatures $\geq 80$ K at P1 and P2, and temperatures below 20 K around P3.

Interferometric observations
----------------------------

### The main core at high resolution

Zooming in on the main single-dish mm core with the PdBI at 2.6 mm, at a spatial resolution of $4''\times 3''$ (7000$\times$5000 AU), the core resolves into three massive sub-sources with separations between $4''$ and $6''$. We label those three sources mm1, mm2 and mm3 (see Figure \[out\]).
The major and minor core sizes were determined by fitting two-dimensional Gaussians to the data; the integrated emission is derived within approximately these areas. Assuming optically thin dust emission, the masses and column densities presented in Table \[mmm\] were calculated at 30 K (for more details on the calculations see @beuther [@2001a]). Source mm1 coincides within less than $1''$ with the main mid-infrared source (at $11.7~\mu$m) presented by @mccaughrean [@2001] as well as with the deeply embedded source found by @yao [@2000] and @jiang [@2001] in their K-band polarimetric imaging study.

                                        mm1                mm2                mm3
--------------------------------------- ------------------ ------------------ ------------------
R.A. \[J2000.0\]                        5:39:13.08         5:39:12.78         5:39:12.49
Dec. \[J2000.0\]                        35:45:50.5         35:45:50.6         35:45:55.2
maj.$\times$min. \[$''$\]               $5.6 \times 4.5$   $5.6 \times 4.1$   $6.1 \times 4.1$
peak \[mJy/beam\]                       23                 16                 16
int. \[mJy\]                            30                 22                 23
$M$ \[M$_{\odot}$\]                     100                73                 77
$N_{\rm{H_2}}$ \[$10^{24}$cm$^{-2}$\]   5.1                3.5                3.5
--------------------------------------- ------------------ ------------------ ------------------

: Parameters derived for the PdBI mm cores: positions, major and minor core sizes, peak and integrated intensities, and masses and column densities.\[mmm\]

Three H$_2$O maser features found by @tofani [@1995] are associated with mm1, while one is associated with mm2. In 1999, @beuther [@2001c] detected only one of the previously known H$_2$O maser features south of mm1 (Fig. \[out\]). The CH$_3$OH maser feature is located near the mm cores as well, but its position is based on single-dish observations [@menten; @1991] and not accurate enough for a closer interpretation. @porras [@2000] found 3 infrared sources (IR56, IR58 and IR93 in Figure \[out\]) that are all offset from mm1 and mm2. @yao [@2000] and @jiang [@2001] then showed that the emission from IR58 and IR93 is highly polarized, and thus both are not independent sources but reflection nebulae powered by mm1.
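The core masses above follow the standard optically thin dust-emission relation $M = F_\nu d^2 / (\kappa_\nu B_\nu(T_{\rm dust}))$. The sketch below assumes a dust opacity $\kappa_{115\,\rm GHz} \approx 4\times10^{-4}$ cm$^2$ per gram of gas; this value is a hypothetical choice consistent with $\beta=2$ opacities, not necessarily the one used by @beuther [@2001a]:

```python
import math

H = 6.626e-27; K_B = 1.381e-16; C = 2.998e10   # cgs constants

def planck(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / C**2 / (math.exp(H * nu / (K_B * T)) - 1.0)

def dust_mass_msun(F_Jy, d_kpc, nu_GHz, T=30.0, kappa=4e-4):
    """Optically thin gas+dust mass [Msun]; kappa is the assumed
    opacity per gram of gas [cm^2 g^-1] at frequency nu."""
    F = F_Jy * 1e-23                  # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_kpc * 3.086e21              # kpc -> cm
    return F * d**2 / (kappa * planck(nu_GHz * 1e9, T)) / 1.989e33

# mm1: 30 mJy integrated flux at 115 GHz, d = 1.8 kpc, T = 30 K
# -> of order 100 Msun, matching Table [mmm] for this choice of kappa
m_mm1 = dust_mass_msun(0.030, 1.8, 115.0)
```

Since the mass scales as $1/\kappa_\nu$, a different opacity choice shifts the absolute values but not the relative masses of mm1, mm2 and mm3.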
### Three (at least) bipolar outflows {#pdb}

The most striking features of [*IRAS*]{} 05358+3543mm are the three outflows observed with the Plateau de Bure Interferometer in CO and SiO (Fig. \[obs\]).\
[**A large scale highly collimated CO outflow ($\mathcal{A}$)**]{}\
Figure \[obs\]a shows a highly collimated large scale ($\approx 1$ pc) molecular outflow observed in CO in the east of the region. We present the PdBI data without merging the single-dish data in Figure \[obs\]a, because the spatial filtering properties of the interferometer help to isolate the flow from the ambient gas (the combined data are presented in §\[channel2\]). To highlight the differences between outflow ($\mathcal{A}$) and the high-velocity outflow ($\mathcal{B}$), which is presented in the next paragraph, we used slightly different velocity windows compared to the single-dish image (Fig. \[sd\]). Outflow ($\mathcal{A}$) is slightly bent and oriented from north to south, terminating in H$_2$ bow shocks. This corresponds to one of the two outflows discussed by @porras [@2000] and by @mccaughrean [@2001] based on H$_2$ data. To quantify the degree of collimation, we divide the length of the outflow by its width, which results in a collimation factor of 10. A number of emission peaks along the outflow show red and blue emission in CO and SiO (Fig. \[obs\]a,b), which is a typical feature for expanding bow shocks near the plane of the sky. Thus, we conclude that the outflow is not strongly inclined to the plane of the sky. @porras [@2000] claim that IR93 is powering the outflow, but @yao [@2000] and @jiang [@2001] show that IR93 is highly polarized and thus not a separate source.
Our mm data, the mid-infrared images by @mccaughrean [@2001], and the polarimetric images by @yao [@2000] and @jiang [@2001] strongly suggest that a protostar within mm1 is the powering source of outflow ($\mathcal{A}$).\
[**A high-velocity CO outflow ($\mathcal{B}$)**]{}\
A second bipolar outflow is seen in CO at high velocities relative to the system velocity (Fig. \[out\], blue lobe v=\[–43,–29\] km s$^{-1}$, red lobe v=\[–9,0\] km s$^{-1}$). As Fig. \[out\] shows, this outflow is most likely driven by a source within the same protostellar condensation as outflow ($\mathcal{A}$), and both outflows together form a quadrupolar system, inclined at a position angle PA of $\approx 40^{\circ}$. @mccaughrean [@2001] speculate instead that the second mid-infrared source $2''$ east might be the powering source. Blue and red lobes are clearly resolved by the PdBI, with the red one to the south-east and the blue one to the north-west. The high-velocity outflow corresponds to the second outflow detected in H$_2$ emission by @porras [@2000] and further discussed by @mccaughrean [@2001]. It has a collimation factor around 6 (estimated from combined CO and H$_2$ data). At the tip of the south-eastern lobe, shocked SiO and H$_2$ emission is observed (Figure \[obs\]a,b). This is also the position of the warm region P2 (see section \[temp\]), which suggests that the temperature increase may be caused by shock interaction of the outflow with the ambient medium.\
[**A third outflow mainly observed in SiO ($\mathcal{C}$)**]{}\
Further to the west, a third outflow is observed mainly in SiO 2–1 on large scales in north-south direction (Fig. \[obs\]b). Red and blue lobes are resolved with three symmetric bullet-like features in each lobe. But as the region is very complicated, we present two alternate scenarios for the observed outflow features in this region.
[**(1) Outflow ($\mathcal{C}_{\rm{bent}}$):**]{} The first hypothesis of the outflow structure is outlined by the two western arrows in Fig. \[obs\]. In this scenario, the outflow shows the same bending as the large scale CO outflow ($\mathcal{A}$) in the east, and it follows to a large degree the mm emission as outlined in Fig. \[single\]. CO emission is found there with the same bending structure (Figure \[obs\]a). The collimation factor of this outflow is around 3, and the powering source is possibly the western H$^{13}$CO$^+$\[3\] core (§\[h13co+\]). [**(2) Outflow ($\mathcal{C}_{\rm{cavity}}$):**]{} The other possible interpretation of the observation is sketched by the two ellipses presented in Fig. \[obs\]. In this scenario, the northern SiO emission does not trace the main part of the outflow, but rather the western cavity wall. The northern elongated CO feature (Figure \[obs\]a), which is not accounted for in the previous scenario, could then represent the bow shock at the tip of the cavity. A problem with this interpretation is that the SiO emission in the north is mainly red-shifted with regard to the system velocity, whereas the northern CO emission is on the blue side of the spectrum, with a difference of approximately 10 km s$^{-1}$. A possible solution of this discrepancy is that the outflow is in the plane of the sky at the front side of the dense gas traced by the 1.2 mm dust continuum emission. Thus, the SiO emission is produced at the backside of the cavity, where the outflow interacts with the dense gas and the material moves away from the observer. The blue CO feature could then be produced in the final expanding bow shock, when the outflow interacts with less dense gas at the tip of the flow, which is in front of the outflow and thus pushed towards the observer, producing blue-shifted emission.
In this interpretation, not H$^{13}$CO$^+$\[3\] but rather the H$^{13}$CO$^+$ core \[2\] is the center of the outflow (§\[h13co+\]), and the bending structure, especially of the northern lobe, is less prominent. The collimation factor of this structure is approximately 3 as well. The southern part of the outflow seems to be less complicated and can be interpreted in both scenarios. At the edge of the more evolved [*IRAS*]{} 12 $\mu$m cluster, we find the hot blue region P1 described in section \[temp\]. The temperature increase of the outflowing gas in this region could be caused by UV heating from the adjacent more evolved cluster. We point out that the details of the morphological interpretation of outflow ($\mathcal{C}$) have no bearing on the overall energetics of the whole outflow system, which is dominated by outflow ($\mathcal{A}$).\
[**Outflow parameters**]{}\
We calculated the outflow parameters from the merged CO Plateau de Bure and 30 m data with the same assumptions and within the same velocity range as outlined in §\[single\_dish\_line\]. The derived values (Table \[flowpdb\]) agree reasonably well with the single-dish results (Table \[calc\]), and make us confident that the orders of magnitude are correct. For these calculations we cannot properly disentangle the contributions of the outflows ($\mathcal{A}$) and ($\mathcal{B}$), thus the derived values include both outflows. But from the spatial extent of both outflows, it becomes clear that most of the outflowing mass and of the mass entrainment rate is due to the large scale flow ($\mathcal{A}$).
The values for outflow ($\mathcal{C}$) are calculated for the morphologically bent interpretation ($\mathcal{C}_{\rm{bent}}$).

--------------------------------- ----------------- ----------------------------------- --------
                                  $M_{\rm{out}}$    $\dot{M}_{\rm{out}}$                $t$
                                  \[M$_{\odot}$\]   \[$10^{-4}$M$_{\odot}$yr$^{-1}$\]   \[yr\]
($\mathcal{A}$)+($\mathcal{B}$)   9.6               2.6                                 37000
($\mathcal{C}_{\rm{bent}}$)       4.4               1.4                                 31000
All emission                      16.9              4.6                                 37000
--------------------------------- ----------------- ----------------------------------- --------

: Outflow parameters from the merged PdBI and Pico Veleta observations. \[flowpdb\]

[*Additional features*]{}\
Are there even more outflows? We tentatively identify one more outflow oriented in east-west direction. Molecular line emission of CO and SiO shows extensions in that direction, in the interferometric data as well as in the single-dish observations (dotted east-west line in Figures \[single\], \[obs\], and \[channel\]). Furthermore, at the western end of these molecular extensions, H$_2$ emission is detected. One of the mm sources mm2 or mm3 might be the center of this tentative flow. Finally, we note that even more outflows are indicated in the H$_2$ data of @mccaughrean [@2001].

### H$^{13}$CO$^+$ emission {#h13co+}

The PdBI data of H$^{13}$CO$^+$ 1–0 show three peaks in east-west orientation, and more extended emission to the north and south. One H$^{13}$CO$^+$ peak (H$^{13}$CO$^+$\[1\]) is associated with the three mm continuum peaks (mm1 to mm3). The other H$^{13}$CO$^+$ peaks \[2\] and \[3\] are both close to the center of outflow ($\mathcal{C}$), but we cannot determine with certainty which of the two is the powering source of this flow (§\[pdb\]). Assuming local thermodynamic equilibrium, an HCO$^+$ abundance of $1\times 10^{-9}$ [@vdishoek; @1993] and a $^{12}$C to $^{13}$C ratio of 67 [@langer; @1990], we can estimate the approximate masses of the H$^{13}$CO$^+$ clumps, which are listed in Table \[h13\].
The temperatures decrease to the west according to the line ratio results found in §\[temp\].

H$^{13}$CO$^+$         \[1\]   \[2\]   \[3\]
---------------------- ------- ------- -------
mass \[M$_{\odot}$\]   19      6       8
T \[K\]                30      20      15

: Masses of the H$^{13}$CO$^+$ clumps at the given temperatures. The temperatures decrease to the west (§\[temp\]). \[h13\]

The three H$^{13}$CO$^+$ clumps are aligned in east-west direction with similar fluxes in each clump. Morphologically, this is different from the mm continuum emission, which decreases towards the west (see Fig. \[mosaic\]). Therefore, the H$^{13}$CO$^+$ peaks correspond only weakly to the column density and mass concentrations traced by the mm continuum. As we do not have other high-resolution molecular line observations, it is difficult to discriminate between abundance and density variations, and it is probable that the H$^{13}$CO$^+$ peaks \[2\] and \[3\] are caused by an interplay of different processes. The mass of H$^{13}$CO$^+$\[1\] is about an order of magnitude below the value we derive from the mm continuum (Tables \[calc\] & \[mmm\]). Thus, in this region H$^{13}$CO$^+$ is likely depleted. Remarkably, we detect no mm source near H$^{13}$CO$^+$\[2\] and \[3\], and therefore none at the center of outflow ($\mathcal{C}$). The 2.6 mm PdBI data show just a 2$\sigma$ peak at the H$^{13}$CO$^+$\[3\] position, and our 3$\sigma$ mm sensitivity corresponds to a mass limit of $\sim 50$ M$_{\odot}$ (at 15 K). Even H$^{13}$CO$^+$ abundances enhanced by a factor of 5 would only lower the mass estimates based on the H$^{13}$CO$^+$ data, and the clump would still stay undetected in the mm continuum at our sensitivity limit. This is also the region P3, where the temperature decreases significantly according to the CO $1-0$/$2-1$ ratios (§\[temp\]), and where only weak emission is observed in the integrated CO $6-5$ map (Fig. \[single\]).
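The abundance argument above is a simple inverse scaling, since an LTE mass derived from H$^{13}$CO$^+$ goes as $1/X({\rm H^{13}CO^+})$. A minimal sketch with the numbers quoted in the text:

```python
# LTE masses from H13CO+ scale inversely with the assumed abundance.
X_HCOP = 1.0e-9                  # assumed HCO+ abundance (van Dishoeck et al. 1993)
C12_OVER_C13 = 67                # 12C/13C ratio (Langer & Penzias 1990)
X_H13COP = X_HCOP / C12_OVER_C13 # ~1.5e-11

def rescaled_mass(M_nominal, enhancement):
    """Mass estimate if the true abundance is `enhancement` times the assumed one."""
    return M_nominal / enhancement

M_clump3 = 8.0                        # Msun, clump [3] at the nominal abundance
print(rescaled_mass(M_clump3, 5.0))   # -> 1.6 Msun, far below the ~50 Msun mm limit
```

Conversely, the true abundance would have to be depleted by more than a factor of 6 before the H$^{13}$CO$^+$-based mass of clump \[3\] reached the mm detection limit.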
### Channel maps of the CO data {#channel2}

In Figure \[obs\]a we show for clarity the CO 1–0 data from the interferometer only, since the large-scale flow is more pronounced than in the merged data. Figure \[channel\] now presents a channel map of the Plateau de Bure data merged with the short spacings obtained at Pico Veleta. The eastern large scale outflow is still visible (e.g., channels at $-25.5$, $-23.5$, $-7.5$ and $-5.5$ \[km s$^{-1}$\]), and new features appear by adding the short spacings due to bright extended emission at velocities close to the system velocity. The ring-like structures seen in a number of channels are not artefacts of merging the single-dish with the interferometric data, because the ring is also seen in the $^{12}$CO 6–5 single-dish data (Fig. \[single\]). At $-21.5$ km s$^{-1}$, the ring-like structure seems to surround the position of the H$^{13}$CO$^+$\[3\] peak, which suggests that self absorption could cause the ring-like structure. But as the center of this structure moves spatially between the different channels, the ring might instead be produced by different overlapping outflows. The east-west structure at $-23.5$ km s$^{-1}$ has the same orientation as the putative fourth flow, which supports the presence of this additional outflow.

Interpretation of single-dish maps with PdBI data
-------------------------------------------------

Some of the puzzling single-dish features outlined in section \[single\_dish\_line\] can be explained with the high-resolution data. Being near the plane of the sky, outflow ($\mathcal{A}$) is difficult to separate from the ambient gas in the single-dish data, and the outflows are far more clearly visible with the Plateau de Bure Interferometer alone. Its higher spatial resolution is capable of resolving the different flows spatially, whereas the interferometric filtering of large-scale uniform emission makes outflow ($\mathcal{A}$) a very prominent structure (Fig. \[obs\]a).
Interestingly, the offsets between the molecular line peaks of H$^{13}$CO$^+$, SiO, and CH$_3$OH and the 1.2 mm continuum peak are based on different physical processes in spite of their similar spatial distribution in the single-dish observation (Fig. \[sd\]). In the case of H$^{13}$CO$^+$, the PdBI data resolve the single-dish map into three sub-sources with an extension along the north-western 1.2 mm continuum ridge, which makes H$^{13}$CO$^+$ at least partly a tracer of core structure, though likely with varying relative abundances (§\[h13co+\]). In contrast, SiO is only found in the outflows, but because [*IRAS*]{} 05358+3543mm exhibits at least three outflows, the single-dish map looks very similar to the H$^{13}$CO$^+$ image. The situation is different with regard to the thermal CH$_3$OH emission at 241 GHz. While the line center emission corresponds to the dust emission, the wing emission seems to be associated with outflow ($\mathcal{C}$). From these data it appears that CH$_3$OH is a tracer of outflows as well as of core emission.

Discussion
==========

The 1 pc, highly collimated outflow ($\mathcal{A}$) {#flow}
---------------------------------------------------

The eastern large scale outflow ($\mathcal{A}$) in [*IRAS*]{} 05358+3543mm is the first example of a highly collimated, jet-like, bipolar and massive outflow with an extension of $\approx$ 1 pc. Its collimation factor of 10 is as high as the highest values found in low-mass flows. Such high degrees of collimation are difficult to explain by stellar winds or deflection of infalling matter. Outflow models involving highly collimated jets entraining the surrounding material are much more likely [@cabrit; @1997]. @churchwell [@2000] argues that the predictions of shock-entrainment models for the amount of material that can be entrained are far too low to account for the several tens of solar masses found in massive outflows, as is the case for [*IRAS*]{} 05358+3543mm (Table \[calc\]).
But he also stresses the caveats in these estimates, because they are highly sensitive to the underlying assumptions, especially the entrainment efficiency in the dense interstellar medium found at massive star formation sites. It is quite possible that the entrainment efficiency rises significantly in the very dense interstellar medium, and higher efficiencies could easily account for the high outflow masses observed in massive star-forming regions. Therefore, outflow ($\mathcal{A}$) links massive outflow phenomena to scenarios already known from low-mass star formation. It strongly suggests that entrainment by bow shocks propagating in a collimated jet also has an important role in the formation of massive outflows.

The high-velocity outflow ($\mathcal{B}$) and the quadrupolar structure
-----------------------------------------------------------------------

Emanating at a PA of $\approx 40^{\circ}$, most likely from the same mm core mm1 (Fig. \[out\]), the less extended high-velocity outflow ($\mathcal{B}$) is clearly distinct from the large scale outflow ($\mathcal{A}$). Different quadrupolar outflow mechanisms are discussed in the literature (for a brief compilation and discussion see @gueth [@2001]), and the most appealing explanation in the case of [*IRAS*]{} 05358+3543mm seems to be that the two outflows are produced independently by adjacent protostars inside the same mm condensation mm1. An interpretation as parts of one and the same outflow is unlikely, mainly because the large scale outflow does not show any elongation at a PA of $40^{\circ}$, which would be expected if the quadrupolar structure were due to deflected or precessing material. Assuming half of the HPBW ($\approx 1.75''\approx 3100$ AU) at a distance of 1.8 kpc as the maximal separation of the two powering sources, this is well within the range of typical binary separations.
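The quoted upper limit on the binary separation is just the small-angle relation, under which an angle in arcseconds multiplied by the distance in parsecs gives the projected size in AU; a minimal sketch of the conversion with the numbers above:

```python
# Small-angle conversion: 1 AU subtends 1 arcsec at a distance of 1 pc,
# so a projected linear size in AU is theta[arcsec] * d[pc].
def projected_size_au(theta_arcsec, distance_pc):
    return theta_arcsec * distance_pc

# Half of the HPBW (~1.75 arcsec) at the 1.8 kpc source distance:
sep_au = projected_size_au(1.75, 1800.0)
print(f"maximal separation ~ {sep_au:.0f} AU")  # 3150 AU, i.e. the ~3100 AU quoted above
```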
The SiO outflow ($\mathcal{C}$)
-------------------------------

In the bent interpretation, the SiO outflow ($\mathcal{C}_{\rm{bent}}$) closely follows the north-south elongation of the dust continuum emission. It is slightly less collimated than the large CO outflow ($\mathcal{A}$), and blue and red lobes are separated in SiO as well as in CO. The northern and southern lobes show three peaks each that are symmetric with respect to the assumed origin, suggesting three different ejection events. In contrast to this, the cavity interpretation of outflow ($\mathcal{C}_{\rm{cavity}}$) does not imply that the outflow has to follow the core structure, but rather that SiO is only observed in that part of the flow where the densities are high enough to excite SiO sufficiently. As already mentioned, outflow ($\mathcal{C}_{\rm{cavity}}$) is then likely to be located at the front side of the core outlined in Fig. \[mosaic\]. The bullet-like structures and their interpretation as three ejection events are independent of the exact scenario. It is a rather intriguing fact that SiO, which forms in shocks from dust disruption [@schilke; @1997], is only found in regions with strong dust emission (Figure \[single\]). Outflow ($\mathcal{C}$) follows to a large degree the north-south dust filament, while the eastern large scale CO outflow leaves the dense core very soon. Thus, throughout most parts of outflow ($\mathcal{A}$), column densities are low and the gas densities there are not high enough to excite SiO sufficiently. Another reason may be that C-shocks (shock velocities $v_s\leq 50$ km s$^{-1}$) are needed to produce SiO [@schilke; @1997]. In the less massive outflow ($\mathcal{C}$) C-shocks are likely to be more common than in the more massive outflow ($\mathcal{A}$), where higher jet velocities are expected ($>500$ km s$^{-1}$, e.g., @eisloeffel [@2000]) and shocks are more likely of J-type ($v_s\geq 50$ km s$^{-1}$), dissociating the molecular material.
These different scenarios have to be checked for a larger sample.

The large scale bending
-----------------------

Assuming the bent interpretation of outflow ($\mathcal{C}_{\rm{bent}}$), both large scale outflows and the mm dust core show the same bent morphology. In the cavity interpretation of outflow ($\mathcal{C}_{\rm{cavity}}$), outflow ($\mathcal{A}$), the southern lobe of outflow ($\mathcal{C}_{\rm{cavity}}$), and the mm dust emission still exhibit the same bent structure. Possible bending mechanisms for protostellar jets are discussed, e.g., in Fendt & Zinnecker (1998). Internal bending scenarios on small scales seem to be ruled out in the case of [*IRAS*]{} 05358+3543mm, because it is unlikely that exactly the same small scale physics takes place in both large scale outflows (e.g., acceleration of the jet source by a binary component, or precession of an accretion disk in a binary system), and additionally the dust core follows a similar morphology. Considering the whole source structure, external effects seem more plausible as an explanation of the structure of this massive star-forming site. On larger scales (4 to 10 pc) other H[ii]{} regions are found: in the north-east S233 (at a distance of $\approx 4$ pc), in the west S235 (at a distance of $\approx 10$ pc), in the north-west S231 (distance $\approx 4$ pc) and a bit further away S232 (distance $\approx 15$ pc; for large scale images see, e.g., @porras [@2000]). As a whole, the region contains different massive star formation sites at the edge of [*IRAS*]{} 05358+3543mm, and the most likely explanation of the overall bending of this youngest site is energy input (UV radiation as well as stellar winds) from the more evolved star-forming regions in the vicinity. Thus, we are possibly observing an example of sequential star formation and of interactions between regions of different ages.
Outflows from young stellar objects of all masses
-------------------------------------------------

A key question triggering this observational study was whether massive outflows are generated by different physical mechanisms than their low-mass counterparts. We therefore compare [*IRAS*]{} 05358+3543mm with other outflow sources of different core masses.

[*A low-mass object:*]{} a prototype of a low-mass outflow is HH211, emanating from a dust condensation of $\sim 0.2$ M$_{\odot}$ [@gueth; @1999]. It includes a highly collimated, high-velocity molecular jet surrounded by a less collimated, cavity-like outflow at lower velocity. A shock-entrainment model nicely explains the outflow structure.

[*An intermediate-mass object:*]{} @gueth [@2001] recently observed the intermediate-mass young stellar object powering HH288 (core mass between 6 and 30 M$_{\odot}$). This outflow is quadrupolar and most likely due to two powering sources not separable at the resolution of the observation. But in spite of being quadrupolar, the whole system can be explained by two outflows with shock-entrained material, similar to HH211.

[*A high-mass object:*]{} our observations of [*IRAS*]{} 05358+3543mm show an example of a massive star-forming cluster. High-resolution Plateau de Bure observations resolve the single-dish features into a number of different outflows. So far, just a few massive outflows have been mapped with high spatial resolution (see Introduction), and we stress that the large scale eastern outflow is the first massive outflow observed with such a high degree of collimation (collimation factor $\sim 10$). We estimate the accretion rate to be $10^{-4}$ M$_{\odot}$yr$^{-1}$, and @beuther [@2001b] recently showed that mass entrainment rates of massive star-forming regions are usually of that order.
This is high enough to overcome the radiation pressure of the central object and build up more massive stars [@wolfire; @1987; @jijina; @1996; @norberg; @2000; @tan; @2002; @yorke; @2002]. Additionally, jet models require disks, which are believed to be observed in a few massive objects (e.g., [*IRAS*]{} 20126+4104, @cesaroni [@1997], @zhang [@1998]; [*IRAS*]{} 23385+6053, @molinari [@1998]). Thus, to explain the outflow features observed in [*IRAS*]{} 05358+3543mm, shock-entrainment models are sufficient (e.g., @gueth [@1999], @richer [@2000], and @cabrit [@1997]), and no other formation mechanism is needed. It cannot be ruled out that in other sources different physical mechanisms are at work, but these observations indicate that other massive star formation sites of rather chaotic appearance might be disentangled into simpler structures if they are observed at higher angular resolution.

Summary
=======

[*IRAS*]{} 05358+3543mm is the first example of a massive ($>10$ M$_{\odot}$) bipolar outflow with a high degree of collimation on scales of 1 pc (collimation factor $\sim 10$). High-angular-resolution observations with the Plateau de Bure Interferometer resolve the single-dish observations into at least three different outflows. Two of them form a quadrupolar system, most likely emanating from adjacent protostars within the same mm core. Our data resolve three massive mm cores (between 75 M$_{\odot}$ and 100 M$_{\odot}$) at the center of the quadrupolar outflow. The data suggest that the physical processes associated with this massive outflow are similar to those driving their low-mass counterparts. It is likely that many massive star formation sites can be shown by interferometric high-resolution observations to be composed of basic features known from low-mass star formation.
The accretion rate is high ($\sim 10^{-4}$ M$_{\odot}$yr$^{-1}$) and consistent with disk-accretion scenarios explaining the formation of massive stars (e.g., @wolfire [@1987], @jijina [@1996], @norberg [@2000], @tan [@2002], and @yorke [@2002]).

The overall distribution of H$^{13}$CO$^+$, SiO and thermal CH$_3$OH looks very similar, but the higher resolution PdBI observations show varying morphologies traced by the different lines. While SiO is observed mainly in the outflows, H$^{13}$CO$^+$ traces core-like structures, which do not coincide exactly with the dust cores because of varying relative H$^{13}$CO$^+$ abundances. Finally, CH$_3$OH can be decomposed into a core-tracing component at the line center and wing emission tracing the outflows.

Ratio maps between CO 6–5, 2–1 and 1–0 reveal local temperature gradients. At the tip of the high-velocity outflow ($\mathcal{B}$), we find a temperature increase ($\geq 80$ K) caused by shock interaction of the outflow with the ambient medium. Additionally, the southern lobe of the third outflow ($\mathcal{C}$) is much warmer (again $\geq 80$ K) than the rest of the outflow, possibly due to UV heating by a nearby, more evolved cluster. In contrast to these temperature increases, the possible center H$^{13}$CO$^+$\[3\] of outflow ($\mathcal{C}$) is cold, below 15 K.

We would like to thank the anonymous referee for helpful comments on the initial draft of this paper. H. Beuther is supported by the [*Deutsche Forschungsgemeinschaft, DFG*]{} project number SPP 471.

Bonnell I., Bate M., Clarke C., Pringle J., 1997, MNRAS, 285, 201
Bonnell I., Bate M., Zinnecker H., 1998, MNRAS, 298, 93
Beuther H., Kramer C., Deiss B., Stutzki J., 2000, A&A, 362, 1109
Beuther H., Sridharan T.K., Schilke P., Menten K.M., Wyrowski F., 2002a, ApJ in press
Beuther H., Schilke P., Sridharan T.K., Menten K.M., Walmsley C.M., Wyrowski F., 2002b, A&A in press
Beuther H., Walsh A., Schilke P., Sridharan T.K., Menten K.M., Wyrowski F., 2002c, subm.
to A&A
Cabrit S., Bertout C., 1990, ApJ, 348, 530
Cabrit S., Raga A., Gueth F., 1997, in: IAU Symposium 182
Canto J., Raga A.C., 1991, ApJ, 372, 646
Cesaroni R., Felli M., Testi L., Walmsley C.M., Olmi L., 1997, A&A, 325, 725
Cesaroni R., Felli M., Jenness T., Neri R., Olmi L., Robberto M., Testi L., Walmsley C.M., 1999, A&A, 345, 949
Chan S., Henning T., Schreyer K., 1996, A&AS, 115, 285
Choi M., Evans N.J. II, Jaffe D.T., 1993, ApJ, 417, 624
Churchwell E., 2000, in: The Origins of Stars and Planetary Systems, eds. Lada C.J. & Kylafis N.D., Kluwer Academic Press
Eislöffel J., Mundt R., Ray T.P., Rodriguez L.F., 2000, in: ProtoStars & Planets IV, ed. V. Mannings
Fendt C., Zinnecker H., 1998, A&A, 334, 750
Frerking M., Langer W., Wilson R., 1982, ApJ, 262, 590
Gueth F., Guilloteau S., Bachiller R., 1996, A&A, 307, 891
Gueth F., Guilloteau S., 1999, A&A, 343, 571
Gueth F., Schilke P., McCaughrean M., et al., 2001, A&A, 375, 1018
Guilloteau S., Delannoy J., Downes D., et al., 1992, A&A, 323, 943
Hildebrand R., 1983, QJRAS, 24, 267
Hirano N., Taniguchi Y., 2001, ApJ, L219
Hunter T., 1997, Ph.D. Thesis, Caltech
Jiang Z., Yao Y., Yang J., Ishii M., Nagata T., Nakaya H., Sato S., 2001, AJ, 122, 313
Jijina J., Adams F., 1996, ApJ, 462, 874
Kurtz S., Cesaroni R., Churchwell E., Walmsley C.M., 2000, in: ProtoStars & Planets IV, ed. V. Mannings
Langer W., Penzias A., 1990, ApJ, 357, 477
Lis D., Serabyn E., Keene J., Dowell C., Benford D., Phillips T., 1998, ApJ, 509, 299
McCaughrean M.J., Stanke T., Andersen M., A&A, in prep.
Menten K., 1991, ApJ, 380, L75
Mezger P., Wink J., Zylka R., 1990, A&A, 228, 95
Molinari S., Testi L., Brand J., Cesaroni R., Palla F., 1998, ApJ, 505, L39
Molinari S., Brand J., Cesaroni R., Palla F., 2000, A&A, 355, 617
Norberg P., Maeder A., 2000, A&A, 359, 1025
Porras A., Cruz-Gonzales I., Salas L., 2000, A&A, 361, 660
Ramesh B., Sridharan T.K., 1997, MNRAS, 284, 1001, (RS)
Richer J., Shepherd D., Cabrit S., Bachiller R., Churchwell E., 2000, in: ProtoStars & Planets IV, ed. V. Mannings
Schilke P., Walmsley C., Pineau des Forêts, Flower D., 1997, A&A, 321, 293
Shepherd D., Churchwell E., 1996, ApJ, 472, 225
Shepherd D., Watson A., Sargent A., Churchwell E., 1998, ApJ, 507, 861
Shepherd D., Kurtz S., 1999, ApJ, 523, 690
Shu F., 1977, ApJ, 214, 488
Snell R., Dickman R., Huang Y., 1990, ApJ, 352, 139
Sridharan T.K., Beuther H., Schilke P., Menten K., Wyrowski F., ApJ in press
Stahler S., Palla F., Ho P., 2000, in: ProtoStars & Planets IV, The University of Arizona Press
Tan J., McKee C., 2002, Proceedings of “The earliest stages of massive star formation”, to be published in ASP Conf. Series, ed. P. Crowther
Synthesis Imaging in Radio Astronomy II, 1999, ASP Conference Series, Vol. 180, eds. Taylor G.B., Carilli C.L., Perley R.A.
Tofani G., Felli M., Taylor G., Hunter T., 1995, A&AS, 112, 299
van Dishoeck E.F., Blake G.A., Draine B.T., Lunine J.I., 1993, in: ProtoStars & Planets III, The University of Arizona Press
Walmsley M., 1995, RMxAC, 1, 137
Wolfire M., Cassinelli J., 1987, ApJ, 319, 850
Wouterloot J., Brand J., Henkel C., 1988, A&A, 191, 323
Yao Y., Ishii M., Nagata T., Nakaya H., Sato S., 2000, ApJ, 542, 392
Yorke H., 2002, Proceedings of “The earliest stages of massive star formation”, to be published in ASP Conf. Series, ed. P. Crowther
Zhang Q., Hunter T.R., Sridharan T.K., 1998, ApJ, 505, L151

[^1]: The IRAM Plateau de Bure Interferometer is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
[^2]: The CSO is operated by the California Institute of Technology under funding from the National Science Foundation, Grant No. AST 9980846.
--- abstract: 'Let $q$ be a prime and $A$ an elementary abelian group of order at least $q^3$ acting by automorphisms on a finite $q''$-group $G$. It is proved that if $|\gamma_{\infty}(C_{G}(a))|\leq m$ for any $a\in A^{\#}$, then the order of $\gamma_{\infty}(G)$ is $m$-bounded. If $F(C_{G}(a))$ has index at most $m$ in $C_G(a)$ for any $a \in A^{\#}$, then the index of $F_2(G)$ is $m$-bounded.' address: - 'Department of Mathematics, University of Brasília, Brasília-DF 70910-900, Brazil' - 'Department of Mathematics, University of Brasília, Brasília-DF, 70910-900, Brazil' author: - Emerson de Melo - Pavel Shumyatsky date: 2017 title: Fitting subgroup and nilpotent residual of fixed points --- Introduction ============ Suppose that a finite group $A$ acts by automorphisms on a finite group $G$. The action is coprime if the groups $A$ and $G$ have coprime orders. We denote by $C_G(A)$ the set $$\{g\in G\ |\ g^a=g \ \textrm{for all} \ a\in A\},$$ the centralizer of $A$ in $G$ (the fixed-point subgroup). In what follows we denote by $A^\#$ the set of nontrivial elements of $A$. It has been known that centralizers of coprime automorphisms have strong influence on the structure of $G$. Ward showed that if $A$ is an elementary abelian $q$-group of rank at least 3 and if $C_G(a)$ is nilpotent for any $a\in A^\#$, then the group $G$ is nilpotent [@War]. Later the second author showed that if, under these hypotheses, $C_G(a)$ is nilpotent of class at most $c$ for any $a\in A^\#$, then the group $G$ is nilpotent with $(c,q)$-bounded nilpotency class [@Sh1]. Throughout the paper we use the expression “$(a,b,\dots )$-bounded” to abbreviate “bounded from above in terms of $a,b,\dots$ only”. Subsequently the above result was extended to the case where $A$ is not necessarily abelian. 
Namely, it was shown in [@Eme1] that if $A$ is a finite group of prime exponent $q$ and order at least $q^3$ acting on a finite $q'$-group $G$ in such a manner that $C_G(a)$ is nilpotent of class at most $c$ for any $a\in A^{\#}$, then $G$ is nilpotent with class bounded solely in terms of $c$ and $q$. Many other results illustrating the influence of centralizers of automorphisms on the structure of $G$ can be found in [@khukhro]. In the present article we address the case where $A$ is an elementary abelian $q$-group of rank at least 3 and $C_G(a)$ is “almost" nilpotent for any $a\in A^\#$. Recall that the nilpotent residual of a finite group $G$ is the intersection of all terms of the lower central series of $G$. This will be denoted by $\gamma_\infty(G)$. One of the results obtained in [@Eme2] says that if $A$ and $G$ are as above and $\gamma_\infty(C_G(a))$ has order at most $m$ for any $a\in A^\#$, then the order of $\gamma_\infty(G)$ is $(m,q)$-bounded. The purpose of the present article is to obtain a better result by showing that the order of $\gamma_\infty(G)$ is $m$-bounded and, in particular, the order of $\gamma_\infty(G)$ can be bounded by a number independent of the order of $A$. \[main1\] Let $q$ be a prime and $m$ a positive integer. Let $A$ be an elementary abelian group of order at least $q^3$ acting by automorphisms on a finite $q'$-group $G$. Assume that $|\gamma_{\infty}(C_{G}(a))|\leq m$ for any $a\in A^{\#}$. Then $|\gamma_{\infty}(G)|$ is $m$-bounded. Further, suppose that the Fitting subgroup of $C_G(a)$ has index at most $m$ in $C_G(a)$ for any $a\in A^\#$. It was shown in [@Sh] that under this assumption the index of the Fitting subgroup of $G$ is $(m,q)$-bounded. In view of Theorem \[main1\] it is natural to conjecture that in fact the index of the Fitting subgroup of $G$ can be bounded in terms of $m$ alone. We have not been able to confirm this. Our next result should be regarded as an evidence in favor of the conjecture. 
Recall that the second Fitting subgroup $F_2(G)$ of a finite group $G$ is defined as the inverse image of $F(G/F(G))$, that is, $F_2(G)/F(G)=F(G/F(G))$. Here $F(G)$ stands for the Fitting subgroup of $G$. \[main2\] Let $q$ be a prime and $m$ a positive integer. Let $A$ be an elementary abelian group of order at least $q^3$ acting by automorphisms on a finite $q'$-group $G$. Assume that $F(C_{G}(a))$ has index at most $m$ in $C_G(a)$ for any $a \in A^{\#}$. Then the index of $F_2(G)$ is $m$-bounded. In the next section we give some helpful lemmas that will be used in the proofs of the above results. Section 3 deals with the proof of Theorem \[main2\]. In Section 4 we prove Theorem \[main1\]. Preliminaries ============= If $A$ is a group of automorphisms of a group $G$, the subgroup generated by elements of the form $g^{-1}g^\alpha$ with $g\in G$ and $\alpha\in A$ is denoted by $[G,A]$. The subgroup $[G,A]$ is an $A$-invariant normal subgroup in $G$. Our first lemma is a collection of well-known facts on coprime actions (see for example [@GO]). Throughout the paper we will use it without explicit references. \[111\] Let $A$ be a group of automorphisms of a finite group $G$ such that $(|G|,|A|)=1$. Then 1. $G=[G,A]C_{G}(A)$. 2. $[G,A,A]=[G,A]$. 3. $A$ leaves invariant some Sylow $p$-subgroup of $G$ for each prime $p\in\pi(G)$. 4. $C_{G/N}(A)=C_G(A)N/N$ for any $A$-invariant normal subgroup $N$ of $G$. 5. If $A$ is a noncyclic elementary abelian group and $A_1,\dots,A_s$ are the maximal subgroups in $A$, then $G=\langle C_G(A_1),\ldots,C_G(A_s)\rangle$. Furthermore, if $G$ is nilpotent, then $G=\prod_i C_G(A_i)$. The following lemma was proved in [@almo]. The case where the group $G$ is soluble was established in Goldschmidt [@gold Lemma 2.1]. \[golds\] Let $G$ be a finite group acted on by a finite group $A$ such that $(|A|,|G|)=1$. Then $[G,A]$ is generated by all nilpotent subgroups $T$ such that $T=[T,A]$. 
\[L1\] Let $q$ be a prime and $A$ an elementary abelian group of order at least $q^2$ acting by automorphisms on a finite $q'$-group $G$. Let $A_1,\dots,A_s$ be the subgroups of index $q$ in $A$. Then $[G,A]$ is generated by the subgroups $[C_G(A_i),A]$.

If $G$ is abelian, the result is immediate from Lemma \[111\](v) since the subgroups $C_G(A_i)$ are $A$-invariant. If $G$ is nilpotent, the result can be obtained by considering the action of $A$ on the abelian group $G/\Phi(G)$. Finally, the general case follows from the nilpotent case and Lemma \[golds\].

The following lemma is an application of the three subgroup lemma.

\[lz\] Let $A$ be a group of automorphisms of a finite group $G$ and let $N$ be a normal subgroup of $G$ contained in $C_G(A)$. Then $[[G,A],N]=1$. In particular, if $G=[G,A]$, then $N\leq Z(G)$.

Indeed, by the hypotheses, $[N,G,A]=[A,N,G]=1$. Thus, $[G,A,N]=1$ and the lemma follows.

In the next lemma we will employ the fact that if $A$ is any coprime group of automorphisms of a finite simple group, then $A$ is cyclic (see for example [@GS]). We denote by $R(H)$ the soluble radical of a finite group $H$, that is, the largest normal soluble subgroup of $H$.

\[solu\] Let $q$ be a prime and $m$ a positive integer such that $m<q$. Let $A$ be an elementary abelian group of order $q^2$ acting on a finite $q'$-group $G$ in such a way that the index of $R(C_G(a))$ in $C_G(a)$ is at most $m$ for any $a\in A^{\#}$. Then $[G,A]$ is soluble.

We argue by contradiction. Choose a counterexample $G$ of minimal order. Then $G=[G,A]$ and $R(G)=1$. Suppose that $G$ contains a proper normal $A$-invariant subgroup $N$. Since $[N,A]$ is subnormal, we conclude that $[N,A]=1$ and so $N=C_N(A)$. In that case, by Lemma \[lz\], $N$ is central and, in view of $R(G)=1$, we have a contradiction. Hence $G$ has no proper normal $A$-invariant subgroups and so $G=S_1\times\dots\times S_l$, where the $S_i$ are isomorphic nonabelian simple subgroups transitively permuted by $A$.
We will prove that under these assumptions $G$ has order at most $m$. If $l=1$, then $G$ is a simple group and so $G=C_G(a)$ for some $a\in A^\#$. In this case we conclude that $G$ has order at most $m$ by the hypotheses. Suppose that $l\neq1$, so that $l=q$ or $l=q^2$. In the first case $G=S\times S^a\times\dots\times S^{a^{q-1}}$ for some $a\in A$ and there exists $b\in A$ such that $S^b=S$. Here $S=S_1$. We see that $C_G(a)$ is the “diagonal” of the direct product. In particular $C_G(a)\cong S$ is a simple group and so $C_G(a)$ is of order at most $m$. Since $m<q$ and $b$ leaves $C_G(a)$ invariant, we conclude that $C_G(a)\leq C_G(b)$. Combining this with the fact that $b$ stabilizes all simple factors, we deduce that $b$ acts trivially on $G$. It follows that $|G|\leq m$. Finally, suppose that $G$ is a product of $q^2$ simple factors which are transitively permuted by $A$. For each $a\in A^{\#}$ we see that $C_G(a)$ is a product of $q$ “diagonal” subgroups. In particular, $C_G(a)$ contains a direct product of $q$ nonabelian simple groups. This is a contradiction since $[C_G(a):R(C_G(a))]$ is at most $m$ and $m<q$. This proves that $G$ has order at most $m$. Then of course $A$ acts trivially on $G$. We conclude that $[G,A]=1$, which is a contradiction, and the proof is complete.

Proof of Theorem \[main2\]
==========================

Assume the hypothesis of Theorem \[main2\]. Thus, $A$ is an elementary abelian group of order at least $q^3$ acting on a finite $q'$-group $G$ in such a manner that $F(C_{G}(a))$ has index at most $m$ in $C_G(a)$ for any $a\in A^{\#}$. We wish to show that $F_2(G)$ has $m$-bounded index in $G$. It is clear that $A$ contains a subgroup of order $q^3$. Thus, replacing if necessary $A$ by such a subgroup, we may assume that $A$ has order $q^3$. In what follows $A_1,\dots,A_s$ denote the subgroups of index $q$ in $A$. It was proved in [@Sh 2.11] that under this hypothesis the subgroup $F(G)$ has $(q,m)$-bounded index in $G$.
Hence if $q\leq m$, the subgroup $F(G)$ (and consequently $F_2(G)$) has $m$-bounded index. We will therefore assume that $q>m$. In this case $A$ acts trivially on $C_G(a)/F(C_G(a))$ for any $a\in A^{\#}$. Consequently $[C_G(a),A]\leq F(C_G(a))$ for any $a\in A^{\#}$. Observe that $\langle[C_G(A_i),A],[C_G(A_j),A]\rangle$ is nilpotent for any $1\leq i,j\leq s$. This is because the intersection $A_i\cap A_j$ contains a nontrivial element $a$ and the subgroups $[C_G(A_i),A]$ and $[C_G(A_j),A]$ are both contained in the nilpotent subgroup $[C_G(a),A]$. \[nilp\] The subgroup $[G,A]$ is nilpotent. We argue by contradiction. Suppose $G$ is a counterexample of minimal possible order. By Lemma \[solu\] the subgroup $[G,A]$ is soluble. Let $V$ be a minimal $A$-invariant normal subgroup of $G$. Then $V$ is an elementary abelian $p$-group and $G/V$ is an $r$-group for some primes $p\neq r$. Write $G=VH$ where $H$ is an $A$-invariant Sylow $r$-subgroup such that $H=[H,A]$. Lemma \[L1\] says that $H$ is generated by the subgroups $[C_H(A_i),A]$. Thus, $H$ centralizes $[V,A]$ since $[C_V(A_i),A]$ and $[C_H(A_j),A]$ have coprime order for each $1\leq i,j\leq s$. Hence $[V,A]\leq Z(G)$ and by the minimality we conclude that $[V,A]=1$ and $V=C_V(A)$. But then by Lemma \[lz\] $V\leq Z(G)$ since $V$ is a normal subgroup and $G=[G,A]$. This is a contradiction and the lemma is proved. We can now easily complete the proof of Theorem \[main2\]. By the above lemma $A$ acts trivially on the quotient $G/F(G)$. Therefore $G=F(G)C_G(A)$. This shows that $F(C_G(A))\leq F_2(G)$. Since the index of $F(C_G(A))$ in $C_G(A)$ is at most $m$, the result follows. Proof of Theorem \[main1\] ========================== We say that a finite group $G$ is metanilpotent if $\gamma_{\infty}(G)\leq F(G)$. The following elementary lemma will be useful (for the proof see for example [@AST Lemma 2.4]). \[I3\] Let $G$ be a metanilpotent finite group. 
Let $P$ be a Sylow $p$-subgroup of $\gamma_{\infty}(G)$ and $H$ be a Hall $p'$-subgroup of $G$. Then $P=[P,H]$.

Let us now assume the hypothesis of Theorem \[main1\]. Thus, $A$ is an elementary abelian group of order at least $q^3$ acting on a finite $q'$-group $G$ in such a manner that $\gamma_{\infty}(C_{G}(a))$ has order at most $m$ for any $a\in A^{\#}$. We wish to show that $\gamma_{\infty}(G)$ has $m$-bounded order. Replacing if necessary $A$ by a subgroup, we may assume that $A$ has order $q^3$. Since $\gamma_{\infty}(C_{G}(a))$ has order at most $m$, we obtain that $F(C_G(a))$ has index at most $m!$ in $C_G(a)$ (see for example [@khukhro 2.4.5]). By [@Eme2 Theorem 1.1] $\gamma_{\infty}(G)$ has $(q,m)$-bounded order. Without loss of generality we will assume that $m!<q$. In particular, $[G,A]$ is nilpotent by Lemma \[nilp\].

\[gam\] If $G$ is soluble, then $\gamma_{\infty}(G)=\gamma_{\infty}(C_G(A))$.

We will use induction on the Fitting height $h$ of $G$. Suppose first that $G$ is metanilpotent. Let $P$ be a Sylow $p$-subgroup of $\gamma_{\infty}(G)$ and $H$ a Hall $A$-invariant $p'$-subgroup of $G$. By Lemma \[I3\] we have $\gamma_{\infty}(G)=[P,H]=P$. It is sufficient to show that $P\leq\gamma_{\infty}(C_G(A))$. Therefore without loss of generality we assume that $G=PH$. With this in mind, observe that $\gamma_{\infty}(C_G(a))=[C_P(a),C_H(a)]$ for any $a\in A^{\#}$. We will prove that $P=[C_P(A),C_H(A)]$. Note that $A$ acts trivially on $\gamma_{\infty}(C_G(a))$ for any $a\in A^{\#}$ since $m<q$. Hence $\gamma_{\infty}(C_G(a))\leq C_P(A)$ for any $a\in A^{\#}$. Let $a,b\in A^{\#}$. We have $[\gamma_{\infty}(C_G(a)),C_H(b)]\leq[C_P(A),C_H(b)]\leq\gamma_{\infty}(C_G(b))$. Let us show that $P=C_P(A)$. Assume first that $P$ is abelian. Observe that the subgroup $N=\prod_{a\in A^{\#}}\gamma_{\infty}(C_G(a))$ is normal in $G$. Since $N$ is $A$-invariant, we obtain that $A$ acts on $G/N$ in such a way that $C_G(a)$ is nilpotent for any $a\in A^{\#}$.
Thus $G/N$ is nilpotent by [@War]. Therefore $P=\prod_{a\in A^{\#}}\gamma_{\infty}(C_G(a))$. In particular $P=C_P(A)$. Suppose now that $P$ is not abelian. Consider the action of $A$ on $G/\Phi(P)$. By the above, $P/\Phi(P)=C_P(A)\Phi(P)/\Phi(P)$, which implies that $P=C_P(A)$. Since $P=C_P(A)$ is a normal subgroup of $G$, by Lemma \[lz\] we deduce that $[H,A]$ centralizes $P$. Therefore, $P=[C_P(A),C_H(A)]$ since $H=[H,A]C_H(A)$. This completes the proof for metanilpotent groups. If $G$ is soluble and has Fitting height $h>2$, we consider the quotient group $G/\gamma_{\infty} (F_2(G))$ which has Fitting height $h-1$. Clearly $\gamma_{\infty} (F_2(G))\leq \gamma_{\infty}(G)$. Hence, we deduce that $\gamma_{\infty}(G)=\gamma_{\infty}(C_G(A))$. Recall that under our assumptions $[G,A]$ is nilpotent and $C_G(A)$ has a normal nilpotent subgroup of index at most $m!$. Let $R$ be the soluble radical of $G$. Since $G=[G,A]C_G(A)$, the index of $R$ in $G$ is at most $m!$. Lemma \[gam\] shows that the order of $\gamma_{\infty}(R)$ is at most $m$. We pass to the quotient $G/\gamma_{\infty}(R)$ and without loss of generality assume that $R$ is nilpotent. If $G=R$, we have nothing to prove. Therefore assume that $R<G$ and use induction on the index of $R$ in $G$. Since $[G,A]\leq R$, it follows that each subgroup of $G$ containing $R$ is $A$-invariant. If $T$ is any proper normal subgroup of $G$ containing $R$, by induction the order of $\gamma_{\infty}(T)$ is $m$-bounded and the theorem follows. Hence, we can assume that $G/R$ is a nonabelian simple group. We know that $G/R$ is isomorphic to a quotient of $C_G(A)$ and so, being simple, $G/R$ has order at most $m$. As usual, given a set of primes $\pi$, we write $O_\pi(U)$ to denote the maximal normal $\pi$-subgroup of a finite group $U$. Let $\pi=\pi(m!)$ be the set of primes at most $m$. Let $N=O_{\pi'}(G)$. Our assumptions imply that $G/N$ is a $\pi$-group and $N\leq F(G)$. 
Thus, by the Schur-Zassenhaus theorem [@GO Theorem 6.2.1] the group $G$ has an $A$-invariant $\pi$-subgroup $K$ such that $G=NK$. Let $K_0=O_\pi(G)$. Suppose that $K_0=1$. Then $G$ is a semidirect product of $N$ by $K=C_K(A)$. For an automorphism $a\in A^\#$ observe that $[C_N(a),K]\leq\gamma_\infty(C_G(a))$ since $C_N(a)$ and $K$ have coprime order. On the one hand, being a subgroup of $\gamma_\infty(C_G(a))$, the subgroup $[C_N(a),K]$ must be a $\pi$-group. On the other hand, being a subgroup of $N$, the subgroup $[C_N(a),K]$ must be a $\pi'$-group. We conclude that $[C_N(a),K]=1$ for each $a\in A^\#$. Since $N$ is a product of all such centralizers $C_N(a)$, it follows that $[N,K]=1$. Since $K_0=1$ and $K$ is a $\pi$-group, we deduce that $K=1$ and so $G=N$ is a nilpotent group. In general $K_0$ does not have to be trivial. However, considering the quotient $G/K_0$ and taking into account the above paragraph, we deduce that $G=N\times K$. In particular, $\gamma_\infty(G)=\gamma_\infty(K)$ and so without loss of generality we can assume that $G$ is a $\pi$-group. It follows that the number of prime divisors of $|R|$ is $m$-bounded and we can use induction on this number. It will be convenient to prove our theorem first under the additional assumption that $G=G'$. Suppose that $R$ is a $p$-group for some prime $p\in\pi$. Note that if $s$ is a prime different from $p$ and $H$ is an $A$-invariant Sylow $s$-subgroup of $G$, then in view of Lemma \[gam\] we have $\gamma_{\infty}(RH)\leq\gamma_{\infty}(C_G(A))$ because $RH$ is soluble. We will require the following observation about finite simple groups (for the proof see for example [@Eme2 Lemma 3.2]). \[uu\] Let $D$ be a nonabelian finite simple group and $p$ a prime. There exists a prime $s$ different from $p$ such that $D$ is generated by two Sylow $s$-subgroups.
In view of Lemma \[uu\] and the fact that $G/R$ is simple, we deduce that $G/R$ is generated by the images of two Sylow $s$-subgroups $H_1$ and $H_2$, where $s$ is a prime different from $p$. Both subgroups $RH_1$ and $RH_2$ are soluble and $A$-invariant since $[G,A]\leq R$. Therefore both $[R,H_1]$ and $[R,H_2]$ are contained in $\gamma_{\infty}(C_G(A))$. Let $H=\langle H_1,H_2\rangle$. Thus $G=RH$. Since $G=G'$, it is clear that $G=[R,H]H$ and $[R,G]=[R,H]$. We have $[R,H]=[R,H_1][R,H_2]$ and therefore the order of $[R,H]$ is $m$-bounded. Passing to the quotient $G/[R,G]$ we can assume that $R=Z(G)$. So we are in the situation where $G/Z(G)$ has order at most $m$. By a theorem of Schur the order of $G'$ is $m$-bounded as well (see for example [@khukhro 2.4.1]). Taking into account that $G=G'$ we conclude that the order of $G$ is $m$-bounded. Suppose now that $\pi(R)=\{p_1,\dots,p_t\}$, where $t\geq2$. For each $i=1,\dots,t$ consider the quotient $G/O_{p_i'}(G)$. The above paragraph shows that the order of $G/O_{p_i'}(G)$ is $m$-bounded. Since also $t$ is $m$-bounded, the result follows. Thus, in the case where $G=G'$ the theorem is proved. Let us now deal with the case where $G\neq G'$. Let $G^{(l)}$ be the last term of the derived series of $G$. The previous paragraph shows that $|G^{(l)}|$ is $m$-bounded. Consequently, $|\gamma_{\infty}(G)|$ is $m$-bounded since $G/G^{(l)}$ is soluble and $G^{(l)}\leq\gamma_{\infty}(G)$. The proof is now complete. [ABC]{} C. Acciarri, P. Shumyatsky, A. Thillaisundaram, [*Conciseness of coprime commutators in finite groups*]{}, Bull. Aust. Math. Soc. [**89**]{} (2014), 252-258. E. de Melo, P. Shumyatsky, [*Finite groups and their coprime automorphisms*]{}, Proc. Amer. Math. Soc. [**145**]{} (2017), 3755-3760. E. de Melo, A. S. Lima, P. Shumyatsky, [*Nilpotent residual of fixed points*]{}, Archiv der Mathematik [**111**]{} (2018), 13-21. D. M. Goldschmidt, [*Weakly Embedded 2-Local Subgroups of Finite Groups*]{}, J. Algebra
[**21**]{} (1972), 341-351. D. Gorenstein, [*Finite Groups*]{}, Harper and Row, New York (1968). R. Guralnick, P. Shumyatsky, [*Derived Subgroups of Fixed Points*]{}, Israel J. Math. [**126**]{} (2001), 345-362. E. I. Khukhro, [*Nilpotent Groups and their Automorphisms*]{}, de Gruyter, Berlin (1993). P. Shumyatsky, [*Finite groups and the fixed points of coprime automorphisms*]{}, Proc. Amer. Math. Soc. [**129**]{} (2001), 3479-3484. P. Shumyatsky, [*Positive Laws in fixed points*]{}, Trans. Amer. Math. Soc. [**356**]{} (2003), 2081-2091. P. Shumyatsky, [*Linear groups with almost right Engel elements*]{}, Proc. Edinburgh Math. Soc., to appear. J. N. Ward, [*On finite groups admitting automorphisms with nilpotent fixed-point*]{}, Bull. Aust. Math. Soc. [**5**]{} (1971), 281-282.
--- abstract: 'JEM–EUSO is an international collaboration for the development of space–based Ultra High Energy Cosmic Ray (UHECR) detectors. The instrument consists of a wide Field Of View (FOV) camera for the detection of the UV light emitted by Extensive Air Showers (EAS) in the atmosphere. Within the JEM–EUSO framework several pathfinders have been developed or are under development: EUSO–TA, EUSO–Balloon, EUSO–SPB and Mini–EUSO. In the near future the K–EUSO detector is foreseen to detect cosmic rays from space. In this paper we present the JEM–EUSO project and give an overview of the pathfinders and of their results.' author: - 'F. Fenu, for the JEM–EUSO collaboration' title: 'The JEM–EUSO program' --- Introduction ============ The Extreme Universe Space Observatory on board the JEM exposure facility of the ISS (JEM–EUSO) is a detector concept for the study of UHECRs [@JEIntro]. This mission focuses on the spectrum above 5 $\times 10^{19}$ eV, where the flux is extremely low and the required detector areas are extremely large. The detector consists of a wide FOV ($\pm$ 30 deg) downward–looking UV camera orbiting at 400 km altitude, which monitors a $\sim$ $10^{5}$ km$^{2}$ area of atmosphere. JEM–EUSO aims at the detection of the fluorescence light emitted by EAS in the atmosphere (see Fig. \[fig1\]). In this way we can reconstruct the direction, energy and X$_\mathrm{max}$ of the showers at the most extreme energies. ![the JEM–EUSO observational principle[]{data-label="fig1"}](jemeuso_fig2.eps){width="45mm"} The identification of the direction of arrival, jointly with a proper energy reconstruction, is mandatory for anisotropy studies, while the possibility to measure the longitudinal profile of the shower is needed to constrain the average mass at 10$^{20}$ eV.
Another big advantage of this technique with respect to ground based detectors is the uniformity of the sky coverage: JEM–EUSO, orbiting the entire Earth every 90 minutes at 400 km, covers both hemispheres with good uniformity. The main scientific objectives of the project are: the study of the anisotropies at the extreme energies with unprecedented statistics, the identification of the sources (and possibly the reconstruction of their spectra) and the high statistics measurement of the trans–GZK spectrum [@Olinto2015]. We define moreover several other exploratory objectives: the search for UHE neutrinos, the search for UHE gamma photons and the study of the galactic and extragalactic magnetic fields through the measurement of the magnetic point spread function of a source. The nature of the detector also allows the study of other phenomena such as TLEs, lightning, the airglow, auroras, meteors, space debris and bioluminescence in the oceans, and the search for (or setting of limits on the flux of) hypothetical phenomena like nuclearites. JEM–EUSO is a refractor consisting of a system of 3 Fresnel lenses focusing the light on the focal surface [@JEInstrument]. The focal surface is made of $\sim$ 5000 Multi Anode Photomultipliers (MAPMT) produced by Hamamatsu (R11265–M64), capable of single photon counting with a double pulse resolution of the order of $\sim$ 10 ns. The chosen PMT is 2.7 cm on a side and has 64 pixels, each of 3 $\times$ 3 mm. Four MAPMTs are organized in the so-called Elementary Cells (EC), while 9 ECs, i.e. a block of 6$\times$6 MAPMTs, make up a Photo Detection Module (PDM). The electronics is modular, in order to guarantee high redundancy, and organized in several levels: the SPACIROC ASIC for the front end electronics, an FPGA for the PDM–level electronics, the Cluster Control Board and the central CPU. The front end electronics counts the photoelectrons within a Gate Time Unit (GTU) window of 2.5 microseconds.
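A back-of-envelope sketch (our own arithmetic; the one-byte-per-pixel readout depth is an assumption, not a figure from the text) shows how these numbers translate into the raw data rate that the trigger chain must then reduce:

```python
# Rough raw data-rate estimate for the JEM-EUSO focal surface.
# Assumption (ours): each pixel contributes ~1 byte per GTU readout.
mapmts = 5000            # ~5000 MAPMTs quoted for the focal surface
pixels = mapmts * 64     # 64 pixels per MAPMT
gtu_s = 2.5e-6           # Gate Time Unit of 2.5 microseconds
bytes_per_pixel = 1      # assumed readout depth per pixel per GTU

raw_rate_gbs = pixels * bytes_per_pixel / gtu_s / 1e9
print(f"raw front-end rate ~ {raw_rate_gbs:.0f} GB/s")     # ~128 GB/s

# Telemetry allows ~3 GB/day, so the trigger chain must reduce by:
reduction = raw_rate_gbs * 86400 / 3
print(f"required reduction factor ~ {reduction:.1e}")      # ~3.7e6
```

The result, of order 10$^{2}$ GB/s, is consistent with the $\sim$ 100 GB/s trigger input quoted in the text, and implies a reduction by roughly six orders of magnitude to fit the 3 GB/day telemetry budget.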
The trigger is organized hierarchically and has the challenging task of reducing the data rate from $\sim$ 100 GB/s down to the 3 GB/day allowed by the telemetry, while still keeping the physical events. Prototypes ========== The high complexity and risk of this mission demand the development of several pathfinders. We present here all of them: EUSO–TA, which is taking data at the Black Rock Mesa (BRM) site of the Telescope Array (TA) observatory; EUSO–Balloon (operated by CNES), which flew in August 2014; EUSO–SPB, which is foreseen to fly in spring 2017 on a NASA Super Pressure Balloon (SPB) to detect CRs; and finally Mini–EUSO, the first space prototype, to be launched to the ISS within 2017, whose main goal is to measure the airglow and several atmospheric phenomena visible from space. To conclude, we will present the K–EUSO detector, which is capable of doing UHECR science from space and is foreseen to fly on the ISS by the year 2020. EUSO–TA ------- EUSO–TA is a pathfinder of the JEM–EUSO detector [@EUSO-TA] which is taking data at the BRM site of the TA observatory in Utah (see Fig. \[fig3\]). This detector consists of two square Fresnel lenses measuring $\sim$ 1 m on a side and a single PDM of $\sim$ 16 cm side. ![the EUSO–TA detector in front of the BRM site of the TA observatory.[]{data-label="fig3"}](EUSO-TASite_2.eps){width="65mm"} The detector points to the sky, overlapping with the TA field of view. Each of the 2304 pixels covers $\sim$ 0.18$\times$0.18 degrees$^2$. The elevation of the telescope can be changed from 0 to 25 degrees. The purpose of this pathfinder is the detection of CRs in coincidence with TA. The trigger signal of TA is therefore used as an external trigger, and the acquisition of a packet of 128 GTUs is started in coincidence with the TA trigger. EUSO–TA has detected in this way several CR events (one example in Fig. \[fig4\]). Another purpose was the calibration of the detector, also relative to TA.
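For scale, the per-pixel field of view quoted above can be turned into a linear footprint at the distance of a nearby shower (illustrative arithmetic only; 2.6 km is the distance quoted for the event of Fig. \[fig4\]):

```python
import math

# Projected size of one EUSO-TA pixel (~0.18 deg FOV per pixel) at the
# ~2.6 km distance quoted for the CR event of Fig. [fig4]: this is why
# a nearby shower is seen with magnified resolution.
pixel_fov_deg = 0.18
distance_m = 2600
footprint_m = distance_m * math.tan(math.radians(pixel_fov_deg))
print(f"pixel footprint at 2.6 km ~ {footprint_m:.1f} m")  # ~8.2 m
```

An 8 m sampling scale across the shower track is far finer than what the same pixel achieves from orbit, which is the point of using nearby TA events for characterization.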
We used the TA Central Laser Facility (CLF) and other dedicated mobile laser sources to test the technology with events reproducing the kinematics of CR events. The laser was shot up into the sky and the scattered light was detected by EUSO–TA so as to mimic the signal of a cosmic ray shower. We also used flashers, flat screens and LEDs to perform a complete characterization of the instrument. ![a CR event detected by EUSO–TA thanks to the TA trigger signal. A very preliminary estimate of the energy is 10$^{18.3}$ eV. The shower falls 2.6 km away, so EUSO–TA sees part of it with magnified resolution.[]{data-label="fig4"}](EUSO_TA_Event2_2.eps){width="65mm"} We also detected meteors, lightning, planes and clouds, and thereby tested the detection and analysis of such events for JEM–EUSO. The EUSO–TA detector could also be used to develop the JEM–EUSO autonomous trigger. Indeed, EUSO–TA data have been used offline to test and optimize the trigger algorithms. As a further step, the PDM board with the autonomous trigger was installed on the EUSO–TA detector and the trigger was tested online on laser events. The trigger performance is compliant with expectations. EUSO–Balloon ------------ EUSO–Balloon is a CNES stratospheric balloon pathfinder of the JEM–EUSO mission, which flew on August 25$^\mathrm{th}$ 2014 from Timmins, Canada (see Fig. \[fig5\]). The purpose of this pathfinder was to test the JEM–EUSO technology in a near-space environment and to test the response of the detector to several artificial CR events [@EUSO-Balloon]. The detector consisted of a system of two square Fresnel lenses, 1 m on a side, and of a single PDM. The flight lasted one night and successfully proved the capability of the detector to operate in stratospheric conditions (38 km altitude). Unexpectedly, the detector landed in water, incidentally proving the water tightness of the instrument.
The FOV was 8 $\times$ 8 km$^2$ and each single pixel covered a projected area of 120 $\times$ 120 m$^2$ on the ground. The balloon covered a distance of roughly 100 km, passing over different landscapes including forests, lakes and cities, under both clear and cloudy sky. ![EUSO–Balloon launch from Timmins, Canada. August 25$^{th}$ 2014. In the box: the integrated image of a laser shot together with the flasher and LED.[]{data-label="fig5"}](paallone_2.eps){width="85mm"} During the flight the balloon was followed at 3,000 m altitude by a helicopter equipped with a flasher, a LED and a laser to generate artificial light events (see one example in Fig. \[fig5\]). We could therefore detect several hundred artificial events, which were used to test the electronics. We used such data to test the algorithms for the reconstruction of the direction of the laser events [@eserTrigger]. The RMS of the angular reconstruction was estimated in [@eserTrigger] to be a few degrees. A map of the background in the UV range has been produced and is shown in Fig. \[fig7\]. Thanks to the present study we could give a preliminary estimate (excluding the direct airglow component) of the UV background which will be seen by JEM–EUSO in orbit, both in clear sky and cloudy conditions. ![the integrated image of the UV background as measured by EUSO–Balloon during the 8-hour flight.[]{data-label="fig7"}](uvmap.eps){width="85mm"} EUSO–SPB -------- In spring 2017 the EUSO–SPB prototype will be launched from Wanaka, New Zealand [@EUSO-SPB]. The detector will consist of a system of two Fresnel lenses, 1 m on a side, and one single PDM. The balloon flight will make use of a Super Pressure Balloon (SPB), a new technology currently under development by NASA. The test flights performed by NASA in 2015 and 2016 achieved durations of 32 and 45 days. The detector is assembled and currently (Feb. 2017) in New Zealand in the launch preparatory phase.
The detector must now be fully autonomous and is therefore provided with solar panels, rechargeable batteries, and antennas for data and command transfer. The main objective of EUSO–SPB is the autonomous detection of CR events, for the first time from above, using the fluorescence technique. The electronics is therefore equipped with a fully autonomous trigger, which was already tested at the TA site in October 2016. We show in Fig. \[fig9\] the image of a laser shot which autonomously triggered EUSO–SPB. The collaboration estimated the expected rate of CRs to be of the order of a few events during the entire flight. ![the integrated image of a laser shot as seen by EUSO–SPB during the October 2016 testing campaign in Utah. A single pixel threshold has been applied.[]{data-label="fig9"}](CLF_Laser_Integrated_2.eps){width="65mm"} Mini–EUSO --------- Mini–EUSO will be launched in 2017 and consists of a system of circular Fresnel lenses and one single PDM, to be accommodated inside the ISS [@mini-EUSO]. The detector will be placed in the Russian segment of the ISS, behind a UV–transparent downward–looking window, and will monitor the atmosphere from 400 km height in the same conditions as JEM–EUSO. The main objective of this prototype is the measurement of the atmospheric UV emission from space. Given the small diameter of the lens (25 cm), no detection of CRs can be expected. Nevertheless, the collaboration plans to use artificial sources (with a luminosity equivalent to a 10$^{21}$ eV CR), as in the balloon missions, in order to mimic a CR signal. ![the integrated simulated image of a meteor for the Mini–EUSO configuration. A single pixel threshold has been applied.[]{data-label="fig10"}](meteorMiniEUSO_45deg70kms2s_clean.eps){width="65mm"} Other targets of the mission are the study of TLEs, meteors and auroras. We see in Fig. \[fig10\] the image of a simulated meteor.
An interesting application of this prototype is the detection, during twilight, of space debris, which flies in lower orbits than the ISS. K–EUSO ====== K–EUSO is the first large-size detector being developed in the framework of the JEM–EUSO project [@K-EUSO]. The optical design follows a so–called Schmidt optics, namely a combination of a mirror and a corrector lens. The detector exploits the experience gathered in the JEM–EUSO and KLYPVE collaborations. ![the K–EUSO detector exposure curve compared to JEM–EUSO, KLYPVE and ground based detectors [@K-EUSO][]{data-label="fig11"}](efficienza_K-EUSO.eps){width="85mm"} The detector is planned to fly in the early 2020s, attached to the Russian section of the ISS. It will be the first detector capable of doing full-scale UHECR science from space through fluorescence. We see in Fig. \[fig11\] the comparison between the exposure curves of several existing and planned detectors and that of K–EUSO. The main goal of this detector is the study of both small and large scale anisotropies, in particular the investigation of the TA hot spot and of the differences between the northern and southern hemispheres, thanks to its roughly uniform full-sky coverage. This represents a major issue and challenge of current research in the field of UHECRs. Conclusions =========== We gave a brief introduction to the JEM–EUSO program. All the pathfinders were presented and, for the already existing ones, a brief summary of their main results was given. The collaboration is on track in its development of space–based UHECR detectors. The capability of the JEM–EUSO electronics to trigger CR–like events has been proven, together with the technological readiness of the detector. The triggered events are used to develop the reconstruction of their direction. The response of the JEM–EUSO detector with respect to the background is being studied.
The instrument is being calibrated with respect to ground based observatories. Within 2017 the collaboration will perform autonomous fluorescence CR detection from a balloon and a complete mapping of the UV background from space. Such achievements will be useful to build a detector capable of detecting CRs from space. K–EUSO should mark the first significant step, providing for the first time full–sky coverage with a single instrument, with Auger–like statistics in the GZK energy range. This work has been partially funded by the Italian Ministry of Foreign Affairs and International Cooperation. [9]{} J. Adams, et al., “The JEM–EUSO mission: an introduction”, Experimental Astronomy, **Vol. 40**, pp 3–17, 2015 A. Olinto, et al., “JEM–EUSO science”, 34$^{th}$ ICRC, ID 623, 2015 J. Adams, et al., “The JEM–EUSO instrument”, Experimental Astronomy, **Vol. 40**, pp 19–44, 2015 J. Adams, et al., “Ground–based tests of JEM–EUSO components at the Telescope Array site, EUSO–TA”, Experimental Astronomy, **Vol. 40**, pp 301–314, 2015 J. Adams, et al., “The EUSO–Balloon pathfinder”, Experimental Astronomy, **Vol. 40**, pp 281–299, 2015 J. Eser, “EUSO–Balloon: observation and measurement of tracks from a laser in a Helicopter”, 34$^{th}$ ICRC, ID 638, 2015 L. Wiencke, “EUSO–Balloon mission to record air showers from near space”, 34$^{th}$ ICRC, 2015 M. Ricci, et al., “Mini–EUSO: a pathfinder for JEM–EUSO to measure Earth’s UV background from the ISS”, 34$^{th}$ ICRC, ID 599, 2015 F. Kajino, et al., “K–EUSO: An improved optical system for KLYPVE ultra–high energy cosmic ray space telescope”, 34$^{th}$ ICRC, ID 634, 2015
--- abstract: 'It is argued that all notions associated with the origin of life should be related to the participatory anthropic principle of Wheeler and must be extended into the realm of the multiverse. Also discussed is the notion that life can only be possible in a given universe during a finite period along which such a universe expands in an accelerated fashion. Finally, we advance the idea that life, cosmic accelerated expansion and quantum theory are nothing but three distinct faces of a single coin which describes physical reality.' author: - 'Pedro F. González-Díaz' title: Life originated during accelerating expansion in the multiverse --- [**1.**]{} The idea that life and cosmology are intimately linked to one another is not new \[1\]. Nor is the set of assumed likely connections between the notion of life and the basic principles of quantum mechanics at all recent \[2\]. On the other hand, the current period of accelerating expansion of the universe could be related to a state where the universe adopts a quantum mechanical behavior (see later), and therefore that period might also be most straightforwardly related to the emergence of life in the universe \[2\]. All of these questions are the subject of a new scientific discipline which could most naturally be dubbed “astrobiology”, an activity which is flourishing in new research centers spread throughout the entire world. The aim of this paper is nevertheless more akin to what can rather be denoted as biocosmology, hypothesized to be a branch of cosmology making use of the above biological and cosmological ideas together with the anthropic principles and the help of dark and phantom energy; indeed, this paper aims at presenting the hypothesis that, for all notions related to the origin of life and Wheeler's participatory anthropic principle \[3\] to become effective, they should be extended into the realm of the multiverse.
[**2.**]{} It appears to be a widely accepted opinion that the origin of life is a cosmological problem \[1\]. In order for life to be an operative concept in this way, two conditions are simultaneously required to hold: the formation of self-replicating long molecules and amino acids, and the synthesis of conveniently folded proteins made out of such amino acids. However, if we assume these two conditions to be fulfilled as a consequence of the evolution of the universe, then we are confronted with two big problems. On the one hand, as was stressed many times by Hoyle \[4\], the probability that self-replicating molecules able to support life had been formed at any place in the universe is similar to that of a tornado being able to assemble a Boeing 747 out of the materials of a junkyard. On the other hand, the well-known Levinthal paradox \[5\] makes it certain that a supercomputer based on plausible physical-chemical and spectroscopic rules (such as internal hindered rotation, bending or wagging vibrations, etc.) would take $10^{127}$ years to find the native (active for life) configuration of a protein made of some 100 amino acids, one which is properly folded and has a suitable biological behavior. It follows that during its entire evolution until now the universe would only allow an extremely tiny room for life to be created anywhere in it. It has been believed for many years that only the smallest particles or objects show a quantum-mechanical behavior. However, recent years have witnessed the emergence of the idea that such a belief is no longer valid in the realm of current accelerating cosmology.
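To make the scale of the Levinthal argument concrete, a toy version of the conformational count, with illustrative numbers chosen here (far milder than those behind the $10^{127}$-year figure quoted above), already yields astronomical search times:

```python
# Toy Levinthal-style count (illustrative numbers, not those behind the
# 10^127-year estimate quoted in the text): ~100 residues, 3 backbone
# states each, sampled at an optimistic 10^13 conformations per second.
residues = 100
states_per_residue = 3
samples_per_second = 1e13
seconds_per_year = 3.15e7

total_conformations = states_per_residue ** residues   # 3^100 ~ 5e47
search_time_years = total_conformations / samples_per_second / seconds_per_year
print(f"exhaustive search time ~ {search_time_years:.1e} years")  # ~1.6e27 years
```

Even these mild assumptions give a search time of order $10^{27}$ years, vastly exceeding the age of the universe; the $10^{127}$-year figure quoted above presumably reflects a much finer count of spectroscopic states.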
In fact, (i) if the present universe is filled with phantom energy (as appears to be most supported by astronomical data \[6\]), then the larger the universe size the greater its energy density, and therefore a sharper quantum-mechanical behavior should be expected to be manifested by the current universe as it rapidly expands with time, tending to a true singularity when the size of the universe and its energy density are both simultaneously infinite, at the big rip \[7\]. On the other hand, and quite more importantly, (ii) it has been recently shown \[8\] that the ultimate cause of the current speeding-up of the universe is a universal quantum entanglement, and that one should expect the very existence of the universe to imply the violation of the Bell inequalities and hence the collapse of the superposed cosmic quantum state into the universe we are able to observe, its associated complementarity between cosmological and microscopic laws, and all the other aspects that characterize a quantum system as well. Actually, the formation of molecules able to self-replicate is by itself a quantum process. The billions of smaller molecules in the primordial soup collided and quantum-mechanically formed trillions of new molecules through quantum processes. In any event, the probability that these random collisions would produce a molecule able to self-replicate is actually tiny, so tiny that such a molecule could never have been formed on Earth or on billions of planets alike \[4\]. Likewise, protein folding is also a process governed by quantum rules and describable by a wave equation that contains a power-law potential which can be expressed in terms of an order parameter, expressible as a scalar field that jointly represents the set of all internal motions of the molecule along its normal coordinate modes \[9\].
A dependence of such a potential on temperature allows us to describe the protein folding process by means of a mechanism of spontaneous symmetry breaking, the symmetry being the number of contacts among hydrophobic groups in the protein \[9\]. In any case, the probability for the whole process above, leading to the emergence of life, to occur in a single universe is really very small. It follows that the generation of life in a single universe like ours is an extremely unlikely process, in spite of such a single universe being a quantum-mechanical system without any classical analog and of all physical biological processes leading to life having a deep quantum character. Even if there were yet-unknown processes linking protein folding with the creation of self-replicating molecules, in such a way that once such molecules are synthesized the required protein folding process would automatically take place, the creation of life in a single universe would still be extremely unlikely. The anthropic principles correspond to a notion that somehow runs against such a conclusion; in particular, the Wheeler notion of the participatory principle \[3\], according to which we exist in a universe which creates itself along a self-reference process. Of course, this idea has a quantum-mechanical origin as well, and predicts the existence of observers who are by themselves able to create all the physical reality that they are able to observe, even the Big Bang and themselves. Thus, rather than intelligent observers being created by the universe, what matters here is a universe which is created and evolved by the observers, or at least an entity for which one does not know what came first, the hen or the egg. [**3.**]{} Another crucial notion is that of Boltzmann brains \[10\].
Perhaps ours is a typical civilization \[10,11\] that was created by some random fluctuation from the vacuum, and its condition of mediocrity \[11\] (typicalness) created the universe we are able to observe or imagine. Such a solution to the problem of unlikeness is actually not a solution, because a Boltzmann spontaneous fluctuation is a process which is also extremely unlikely to occur along the life of the universe from the big bang until now. It has been said above that the probability that molecules able to self-replicate be synthesized, and that for even the smallest proteins to fold properly into their native structure, are actually tiny. Even so, strictly speaking, such probabilities are not exactly zero. Therefore, since everything that may happen with however small but still nonzero probability actually happens with real certainty in the realm of the quantum multiverse \[12\], our main hypothesis is that molecules able to self-replicate and fold properly must have been immediately synthesized, and hence life must necessarily have emerged in the context of the quantum multiverse. That happened, and it happened rapidly. The solution of the problem of the origin of life in the context of the quantum multiverse must actually be based on an analysis of conditional probability rather than just probability. In fact, one can always be sure that, however small the probability for protein folding to occur may be, provided the corresponding biological molecules able to self-replicate have already been synthesized, it will not be strictly zero, and therefore the whole process originating life in the quantum multiverse must actually be true certainty.
However, for that to become physically feasible in the cosmic realm we are dealing with, it ought to conform both to the Wheeler participatory anthropic principle and to a civilization able to be typical with respect to the universe we know; that is to say, if we extend the notion of typicalness \[10\], and hence of mediocrity \[11\], to the whole multiverse, a civilization which is typical in a given universe would be so also in the whole multiverse, meaning that either the typical observers can in some way observe the universes they are not living in, or that such universes do not actually physically exist, at least from a participatory physical standpoint. Physical space-time connections between two universes of the multiverse, through which the observers can retrieve some relevant physical information from both universes, can only be achieved by means of Lorentzian wormholes with the relative speeds between their mouths left completely unspecified. The latter feature expresses the otherwise mutual independence between the space-times of the universes that make up the multiverse, and makes it impossible to establish any kind of simultaneity among distinct civilizations potentially living in different universes. Therefore, even though life originated almost immediately in the whole context of the multiverse, it can only be realized in just one universe if we want observers to be typical, no matter whether they are able to perceive just one universe or many through wormhole connections, the second possibility certainly being more probable. [**4.**]{} The great scientist and science popularizer Carl Sagan used to declare \[13\] that we all were somehow present in the primeval Big Bang and later became what has been dubbed stardust. If life actually is an endeavor of the whole multiverse rather than a matter concerning particular universes, then Sagan's idea has to be extended to the context of the multiverse.
One could well say, then, that we all were somehow present at the moment in which the whole multiverse was created, that is to say, quite likely, at eternity. Panspermia is a theory which keeps gaining credit and which shifts the origin of life on Earth from the Earth itself to the general cosmological context \[14\]. Etymologically, it means seeds everywhere, expressing the idea that the seeds of life are spread rather homogeneously throughout the entire universe, and that such seeds once reached the Earth, where they developed into the known living beings. If life is a matter concerning the whole extent of the multiverse, then one has to replace the notion of panspermia with what could be dubbed holospermia. Etymologically, holospermia would mean [*seeds in the wholeness*]{} and would express instead the idea that the seeds of life were spread throughout the whole multiverse in our remote past, and that such seeds eventually reached our own universe, possibly through a wormhole. It was Sir Fred Hoyle who coined the term panspermia for his cosmic theory of the origin of life on Earth \[14\]. It is possible, and actually claimed by the notion of holospermia, that rather than being present at the big bang we were all originated in the set of all universes making up the multiverse. Actually, if, as stressed many times by Hoyle himself \[4\], the probability for life to have been spontaneously generated in our own universe is extremely tiny while, on the contrary, it becomes full certainty in the set of all universes making up the multiverse, then the probability for holospermia to be responsible for the origin of life in our universe is far greater than that for panspermia doing that job. [**5.**]{} It has been shown \[15\] that whereas life cannot be maintained in the future of a de Sitter or decelerating universe, it can be extended indefinitely in the case that the universe is filled with dark energy.
We can also see, in a rather straightforward way, that the latter result remains valid in the case of a universe filled with phantom energy. In fact, for a constant equation of state with $w= Const. <-1$, the condition $$\frac{dH}{dt}=-\frac{3}{2}(1+w)H^2 ,$$ amounts to $H\propto -t^{-1}$, so that the Hawking temperature turns out to be expressed as $T_H \propto H \propto -t^{-1}$, which, in Dyson's notation, implies $q=1$, thus preserving the Dyson requirement \[15\] and hence the eternal endurance of life in the future of a phantom universe. The emergence of a big rip singularity at a finite time in the future would at first sight seem to indicate that there will be a doomsday at that singularity, where life, together with all other physical objects and the laws of science themselves, will inexorably perish. Intervening wormholes connecting both sides of the singularity might slightly -in cosmic terms- delay the final destructive destiny of life; but even if some living patches were to bridge the singularity abyss and get to the other side, the space-time there is contracting rather than expanding, and hence life would again have its days numbered by application of the Dyson argument. The only way for future civilizations and living beings to find a trail into eternity would be to use the big trip connections among an infinite number of universes \[16\]. Thus, even in the case that the universes are filled with phantom energy, life could endure eternally in the realm of an infinite number of universes. In what follows we will argue that whereas life will in this way persist eternally in the future of an ever accelerating universe, it is bound to be confined to times later than the coincidence time in the past of that universe.
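The step from the constant-$w$ equation above to $H\propto -t^{-1}$ can be made explicit; a minimal derivation, under the stated assumption $w=\mathrm{Const.}<-1$:

```latex
% Integrate dH/dt = -(3/2)(1+w)H^2 with w = Const. < -1:
\int \frac{dH}{H^{2}} = -\frac{3}{2}(1+w)\int dt
\;\Longrightarrow\;
H(t) = \frac{2}{3\,|1+w|\,\left(t_{\mathrm{br}}-t\right)},
```

where the integration constant fixes the big rip time $t_{\mathrm{br}}$ at which $H$ diverges; shifting the origin of time to $t_{\mathrm{br}}$ gives $H\propto -t^{-1}$, hence $T_H\propto H\propto -t^{-1}$ and the Dyson exponent $q=1$.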
The reason for that confinement is simple: since any decelerating equation of state does not allow life to persist long enough in the future of any evolutive hypersurface, when we trace evolution back to a sufficiently early time one would always have a situation which is lifeless before the coincidence time. This result would confirm the intuition that ultimately life is nothing but a property of the accelerated period of the universe, a period which, on the other hand, is so closely related to the deep quantum-mechanical character of the universe that it should somehow be connected with sharply quantum properties such as entanglement, wave packet reduction and non-locality \[8\]. In this way, life, cosmic accelerated expansion and quantum theory are nothing but three distinct faces of a single, unique coin. Let us finally briefly consider the issue of the survival of life in relation to the second law of thermodynamics, in the contexts of our single universe and of the multiverse. We notice that such an issue can be dealt with by using the following two analogies. On the one hand, one has the well known idea of Schrödinger, advanced in his famous book "What is Life?", that life is nothing but information (in the Shannon sense) or, in Schrödinger's terminology, negative entropy or negentropy; on the other hand, it has been stressed many times that the biological process of self-replication is equivalent to computation, that is to say, a living being is like a computer. By adopting these standpoints, one can deduce that in an accelerating universe one can see less space rather than more of it: the bounds of the observable universe shrink as the space between objects expands ever faster, because no light from objects outside a range of about 13.7 billion light years - set by the time elapsed since the birth of the universe - has enough time to reach the Earth.
It follows that entropy in our universe should increase very quickly in the presence of dark or phantom energy. Moreover, since entropy increases rapidly in an ever accelerating universe, a computer could not run forever in such a universe as ours. Therefore, life cannot last forever in the presence of dark or phantom energy in a single universe. Clearly, it can only be in the context of the multiverse that this entropic effect can be compensated by the opening of an infinite number of classical (and possibly quantum) information channels, which can ultimately allow life itself to last forever. Such classical information channels would be made of the inter-universal wormhole connections alluded to above. Moreover, a computer (self-replicating biological system) which can run forever in this way has a potentially infinite amount of memory available. It then follows that, in the context of the multiverse, every thought will be destined not to be forgotten and then re-discovered, but rather to be preserved forever. In this Letter we have discussed some relations between the current evolution of our accelerating universe and the origin of life and intelligent civilizations in the full context of a multiverse model where the distinct universes are linked to each other by means of traversable Lorentzian wormholes. Adhering to the recent view that the emergence of life is a business of the whole multiverse rather than of individual universes, we argue in favor of the ideas that once life appears in the multiverse it lasts forever, and that life, the accelerating universe and the deepest aspects of the quantum theory are nothing but three distinct faces of a single coin describing physical reality. Also favored is the idea that the knowledge being achieved by the civilizations will be accumulated and preserved forever. The author thanks Carmen L.
Sigüenza for useful discussions and the members of the theater group of Medellín, Spain, who allowed me to do this work in a nice scientific-artistic working atmosphere. This work was supported by MEC under Research Project No. FIS2008-06332.

See, for example, L. Smolin, [*The Life of the Cosmos*]{} (Phoenix, London, UK, 1997), and references therein; Investigación y Ciencia, Temas 52: El Origen de la Vida.

[*Quantum Aspects of Life*]{}, edited by D. Abbott, P.C.W. Davies and A.K. Pati (World Scientific Publishing, UK, 2008).

J.A. Wheeler, Law without law, in [*Quantum Theory and Measurement*]{}, edited by J.A. Wheeler and W.H. Zurek (Princeton University Press, USA, 1983), pp. 182-213.

F. Hoyle, [*The Intelligent Universe*]{} (Michael Joseph Limited, London, UK, 1983), pp. 18-19.

C. Levinthal, Are there pathways for protein folding?, J. Chim. Phys. et de Physico-Chimie Biologique 65 (1968) 44.

D. J. Mortlock and R. L. Webster, The statistics of wide-separation lensed quasars, Mon. Not. Roy. Astron. Soc. [**319**]{} (2000) 872 \[arXiv:astro-ph/0008081\]; A. G. Riess [*et al.*]{} \[Supernova Search Team Collaboration\], Astron. J. [**116**]{} (1998) 1009 \[arXiv:astro-ph/9805201\]; S. Perlmutter [*et al.*]{} \[Supernova Cosmology Project Collaboration\], Astrophys. J. [**517**]{} (1999) 565 \[arXiv:astro-ph/9812133\]; J. L. Tonry [*et al.*]{} \[Supernova Search Team Collaboration\], Astrophys. J. [**594**]{} (2003) 1; D. N. Spergel [*et al.*]{} \[WMAP Collaboration\], Astrophys. J. Suppl. [**148**]{} (2003) 175; C. L. Bennett [*et al.*]{}, Astrophys. J. Suppl. [**148**]{} (2003) 1; M. Tegmark [*et al.*]{} \[SDSS Collaboration\], Phys. Rev. D [**69**]{} (2004) 103501.

R.R. Caldwell, Phys. Lett. B545 (2002) 23; R.R. Caldwell, M. Kamionkowski and N.N. Weinberg, Phys. Rev. Lett. 91 (2003) 071301; P.F. González-Díaz, Phys. Lett. B586 (2004) 1.

P.F. González-Díaz and S. Robles-Pérez, Phys. Lett. B679 (2009) 298.

P.F. González-Díaz and C.L. Sigüenza, Protein folding and cosmology, astro-ph/9706040.

L. Dyson, M. Kleban and L. Susskind, JHEP 0210 (2002) 011.

A. Vilenkin, Phys. Rev. Lett. 81 (1998) 5501.

See the contributions to [*Universe or Multiverse?*]{}, edited by B. Carr (Cambridge University Press, Cambridge, UK, 2007).

C. Sagan, TV series "Cosmos", Episode 1.

The idea of an extraterrestrial origin of life can be traced back to the Greek philosopher Anaxagoras (5th century B.C.) and was continued by scientists such as H. von Helmholtz (1879) and S. Arrhenius (1903). The modern theory of panspermia was established by F. Hoyle and N.C. Wickramasinghe, [*Astronomical Origins of Life: Steps towards Panspermia*]{} (Kluwer Academic Press, USA, 2000).

F.J. Dyson, Rev. Mod. Phys. 51 (1979) 447.

P.F. González-Díaz, Phys. Rev. Lett. 93 (2004) 071301.
--- abstract: 'We present CARMA CO ($J=1\rightarrow0$) observations and *Herschel* PACS spectroscopy, characterizing the outflow properties toward extremely young and deeply embedded protostars in the Orion molecular clouds. The sample comprises a subset of the Orion protostars known as the PACS Bright Red Sources (PBRS) (Stutz et al.). We observed 14 PBRS with CARMA and 8 of these 14 with *Herschel*, acquiring full spectral scans from 55 $\mu$m to 200 $\mu$m. Outflows are detected in CO ($J=1\rightarrow0$) from 8 of 14 PBRS, with two additional tentative detections; outflows are also detected from the outbursting protostar HOPS 223 (V2775 Ori) and the Class I protostar HOPS 68. The outflows have a range of morphologies: some are spatially compact, $<$10000 AU in extent, while others extend beyond the primary beam. The outflow velocities and morphologies are consistent with being dominated by intermediate inclination angles (80$^{\circ}$ $\ge$ $i$ $\ge$ 20$^{\circ}$). This confirms the interpretation of the very red 24 $\mu$m to 70 $\mu$m colors of the PBRS as a signpost of high envelope densities, with only one (possibly two) cases of the red colors resulting from edge-on inclinations. We detect high-J (J$_{up}$ $>$ 13) CO lines and/or H$_2$O lines from 5 of 8 PBRS, and only for those with detected CO outflows. The far-infrared CO rotation temperatures of the detected PBRS are marginally colder ($\sim$230 K) than those observed for most protostars ($\sim$300 K), and only one of these 5 PBRS has detected \[OI\] 63 $\mu$m emission. The high envelope densities could be obscuring some \[OI\] emission and cause a $\sim$20 K reduction to the CO rotation temperatures.' author: - 'John J. Tobin, Amelia M. Stutz, P. Manoj, S. Thomas Megeath, Agata Karska, Zsofia Nagy, Friedrich Wyrowski, William Fischer, Dan M. Watson, Thomas Stanke' bibliography: - 'ms.bib' title: 'Characterizing the Youngest Herschel-detected Protostars II.
Molecular Outflows from the Millimeter and the Far-infrared' --- Introduction ============ The earliest stage of the star formation process is characterized by a dense, infalling envelope of gas and dust surrounding a nascent protostar. This early phase, in particular, is known to be associated with powerful outflows [@arce2007; @frank2014]. These outflows may ultimately play a role in halting the mass infall process and dispersing the envelope [@arce2006], thereby contributing to the overall low efficiency of the star formation process [@offner2014]. These outflows develop rapidly, and with velocities of $\sim$10 - 100 km s$^{-1}$ they may propagate 0.1 pc on timescales of 10,000 yr to 1,000 yr. Therefore, it is important to characterize outflows at the youngest possible ages in order to understand their early evolution. The youngest identified protostars are known as Class 0 sources [@andre1993]; they are distinguished from more-evolved Class I sources by their cold bolometric temperatures (T$_{bol}$ $<$ 70 K; [@myers1993]) and/or a ratio of submillimeter luminosity (L$_{submm}$) to bolometric luminosity (L$_{bol}$) greater than 0.5%. These diagnostics indicate that Class 0 sources typically have denser and more massive infalling envelopes than Class I sources. In addition to the Class 0 sources, an even earlier phase of the star formation process has been postulated, the first hydrostatic cores [FHSC; e.g., @larson1969]. A number of candidate FHSCs have been identified [@enoch2010; @chen2010; @pineda2011; @schnee2012]; moreover, candidate FHSCs have quite low luminosities and bear some similarity to the *Spitzer*-identified very low-luminosity sources [VeLLOs; @young2004; @dunham2006]. The exact nature of the VeLLOs and candidate FHSCs remains unclear, as it is difficult to distinguish bonafide FHSCs from sources that will go on to form very low mass stars [@dunham2014].
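The Class 0 diagnostics quoted above reduce to two simple thresholds; the helper below is purely illustrative (the function name and example values are ours, not the paper's), encoding T$_{bol}$ $<$ 70 K and/or L$_{submm}$/L$_{bol}$ $>$ 0.5%.

```python
# Illustrative helper (not from the paper): encode the Class 0 diagnostics
# quoted in the text: T_bol < 70 K and/or L_submm / L_bol > 0.5%.

def is_class0(t_bol_K, l_submm, l_bol):
    """Return True if either Class 0 diagnostic is satisfied."""
    return t_bol_K < 70.0 or (l_submm / l_bol) > 0.005

# A cold, envelope-dominated source satisfies both criteria:
print(is_class0(t_bol_K=45.0, l_submm=0.05, l_bol=3.0))    # True
# A warmer source with a weak submillimeter excess satisfies neither:
print(is_class0(t_bol_K=250.0, l_submm=0.001, l_bol=1.0))  # False
```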
As part of the *Herschel* Orion Protostar Survey (HOPS) [e.g., @fischer2010; @stanke2010; @ali2010; @manoj2013; @furlan2016], a sample of 19 protostars with bright 70 $\mu$m and 160 $\mu$m emission and correspondingly faint or undetected (8 sources) 24 $\mu$m emission were detected in the Orion star forming region [@stutz2013 hereafter ST13]. We refer to these protostars as the PACS Bright Red Sources (PBRS); of the 19 PBRS, 12 were first identified as protostars by *Herschel*, and 7 *Spitzer*-identified protostars also fulfilled the 24 $\mu$m to 70 $\mu$m color criteria (ST13). The PBRS are *not* low-luminosity like the VeLLOs and candidate FHSCs; they have bolometric luminosities (L$_{bol}$) ranging between 0.65 L$_{\sun}$ and 30.6 L$_{\sun}$, with a median L$_{bol}$ of $\sim$3 L$_{\sun}$. Thus, the PBRS are the largest sample of extremely young protostars with typical luminosities; the median luminosity of Class 0 protostars is 3.5 L$_{\sun}$ in Orion and 1.4 L$_{\sun}$ in the nearby clouds [@dunham2014]. While the PBRS have only been well-characterized in Orion, similar examples are present in more nearby clouds (e.g., VLA 1623, IRAS 16293-2422), and @sadavoy2014 identified several protostars in Perseus that had not been classified as protostars with *Spitzer* or were undetected at 24 $\mu$m [e.g., HH211-mms; @rebull2007]. We further characterized the envelopes of 14 PBRS using observations of the 2.9 mm dust continuum [@tobin2015]; that study, hereafter Paper I, focused specifically on the most deeply embedded and *Herschel*-identified sources. The observed PBRS were all detected and found to have among the largest 2.9 mm luminosities of known Class 0 protostars. We also found that 6 out of 14 have visibility amplitudes that are flat with increasing uv-distance. The flat visibility amplitudes indicate that the 2.9 mm emission is very concentrated, and this finding, together with the high 2.9 mm luminosities, confirms that most PBRS have dense envelopes.
This corroborates the interpretation of the spectral energy distribution (SED) model comparisons in ST13. The characterization of the PBRS from both the SEDs and the millimeter continuum has led us to conclude that the PBRS may be among the youngest Class 0 objects. If the PBRS represent a distinct portion of early Class 0 evolution, as suggested by ST13, then the relative number of PBRS to Class 0 sources in Orion indicates that a 'PBRS phase' could last $\sim$25,000 yr. This estimate assumes that the Class 0 phase lasts $\sim$150,000 yr [@dunham2014]. A remaining source of uncertainty in the interpretation of the PBRS as the youngest Class 0 protostars is their unknown disk/envelope inclination angles with respect to the plane of the sky. There is a degeneracy between high envelope densities and high (nearly edge-on) inclinations that could not be mitigated due to the lack of emission shortward of 10 $\mu$m toward most PBRS [e.g., @whitney2003; @furlan2016]. Assuming that outflows are perpendicular to the disk or envelope midplanes, observations of outflows to constrain their orientations (e.g., in molecular lines) are an excellent way to estimate disk/envelope inclinations and further constrain the envelope properties. Furthermore, if the PBRS are among the youngest Class 0 protostars, then the sample as a whole represents an opportunity to examine the outflow properties of the youngest protostars. The jets and outflows from protostars are detected with a variety of complementary methods, and the types of outflows and the ways to detect them also vary with evolution. Collimated jets detected in optical or near-infrared line emission are typically associated with more evolved Class I or Class II sources [e.g., HH111; @reipurth1997; @reipurth2010], while Class 0 protostars typically have a molecular outflow observable only in millimeter lines of CO and other molecules [@arce2007; @frank2014].
However, this does not mean there is no collimated jet emission, just that it may be undetectable due to high levels of obscuration. The molecular outflow emission toward some low-mass protostars has an angular dependence of velocity, with low-velocity material at the edges of the outflow cavity and velocities as high as $\sim$100 km s$^{-1}$ along the main axis of the outflow [e.g., @santiago2009; @hirano2011]. Jet-like features can also be seen in shock-tracing molecules such as SiO and SO [e.g., @lee2008; @lee2009]. The velocity gradients along the outflow axis also offer crucial information on the disk-protostar orientation [e.g., @cabrit1986; @lee2000]. Far-infrared spectroscopy with the *Infrared Space Observatory* and the *Herschel Space Observatory* has also been found to be an excellent probe of the physical conditions of outflows from young stars. The high-J CO (J$_u$ $>$ 13) and H$_2$O transitions, in addition to OH and \[OI\] transitions, probe the warm and hot outflow conditions on scales very near the protostar and the jet driving source [e.g., @vankempen2010; @karska2013; @green2013; @manoj2013]. The lines are thought to be excited primarily by shocks [@manoj2013], with UV radiation photo-dissociating H$_2$O and causing lower abundances relative to non-irradiated shock models [@karska2014]. The initial development of the outflows and their subsequent breakout from their surrounding envelopes are still quite uncertain. Outflows have also been detected from VeLLOs and candidate FHSCs [@dunham2011; @pineda2011; @schnee2012; @tobin2015]. Theory has predicted that such young objects can indeed produce the slow outflows ($\sim$2 - 7 km s$^{-1}$) that have been observed [@price2012], and the outflows may develop prior to the formation of a rotationally-supported accretion disk [e.g., @li2013; @li2014].
However, it is still uncertain how quickly more powerful outflows emerge in protostars: do the outflows grow steadily in power as the source luminosity (from accretion) increases, or do they only become powerful once a certain threshold in luminosity is reached? In order to examine the outflow conditions of the youngest known Class 0 protostars, we have obtained interferometric observations of the CO ($J=1\rightarrow0$) molecular line and far-infrared spectroscopy with the *Herschel Space Observatory* toward the PBRS in the Orion A and B molecular clouds. The youth and number of PBRS sources in Orion offer a unique opportunity to examine the properties of outflows from objects that are consistent with being among the youngest protostars. Furthermore, spectrally and spatially resolved observations of the molecular outflows toward these protostars enable us to constrain the range of possible inclination angles of the protostellar sources, ensuring that their characterization as the youngest protostars is not strongly influenced by orientation. We have observed 14 PBRS (from the full sample of 19 cataloged by @stutz2013 and Paper I) with the Combined Array for Research in Millimeter-wave Astronomy (CARMA), focusing on the *Herschel*-detected PBRS sample. We observed the protostars in both dust continuum and spectral line emission to examine the envelope and outflow properties of these sources. We describe the observations in Section 2, present our outflow results from CO ($J=1\rightarrow0$) and *Herschel* spectroscopy in Section 3, discuss the results in Section 4, and summarize our main conclusions in Section 5.
Observations and Data Reduction =============================== CARMA Observations ------------------ We conducted observations toward 14 of the 19 PBRS identified in ST13 with CARMA in D-configuration ($\sim$5$''$ resolution) during late 2012 and early 2014, with follow-up observations in C-configuration ($\sim$2$''$ resolution) for some sources in early 2014. The observations were conducted with the main CARMA array, comprising six 10.4 m and nine 6.1 m antennas. We observed two or three sources per track and configured the correlator with four 500 MHz continuum windows, two 8 MHz windows to observe para-NH$_2$D ($J=1_{11}\rightarrow1_{01}$) and C$^{18}$O ($J=1\rightarrow0$), and two 31 MHz windows for observation of $^{13}$CO ($J=1\rightarrow0$) and $^{12}$CO ($J=1\rightarrow0$). The C-configuration observations had five 500 MHz continuum windows because we did not observe para-NH$_2$D in that configuration. The continuum observations were presented in @tobin2015; here we present only the $^{12}$CO ($J=1\rightarrow0$) results, because the other lines did not yield strong detections. Our sensitivity is typically 0.15 Jy beam$^{-1}$ channel$^{-1}$ for CO ($J=1\rightarrow0$) in 0.5 km s$^{-1}$ channels. We used standard procedures within the MIRIAD software package [@sault1995] to edit, reduce, and image the data; all maps were reconstructed with natural weighting. The CARMA observation log is given in Table 1. The absolute flux calibration uncertainty is $\sim$10-20%. The largest angular scale that can be recovered from the observations is $\sim$20$''$; we estimate this number as twice the minimum baseline length. *Herschel* PACS Spectroscopy Observations ----------------------------------------- We also observed 8 PBRS sources with the Photodetector Array Camera and Spectrometer [PACS; @poglitsch2010] on the *Herschel Space Observatory* [@pilbratt2010] as part of program OT2\_jtobin\_2; we also observed the Class I protostar HOPS 347.
The PACS spectrometer is a far-infrared integral field spectrograph with a 5$\times$5 spaxel (spatial pixel) footprint and spaxel sizes of 9$''$; for more information see @poglitsch2010. We conducted full range scans of the entire spectral range from $\sim$55 $\mu$m to $\sim$200 $\mu$m in standard chop-nod mode. Table 2 lists the observation dates and observation IDs for the observed sources. The PACS range scan spectra were reduced using HIPE 13.0 SPG v11.1.0, calibration tree version 56. The root-mean-squared absolute flux calibration uncertainty of the PACS spectra is $\sim$12%. The line spectroscopy observations of the \[OI\] 63.18 $\mu$m transition were conducted in unchopped mode. The unchopped mode uses separately defined off positions away from the cloud, to prevent corrupting the \[OI\] line with a contaminated off position as could happen in chop-nod mode. This mode was necessary because extended \[OI\] emission is very prevalent in the Orion molecular cloud. The use of unchopped mode will, however, preserve foreground/background \[OI\] emission from the surrounding molecular cloud, in addition to that of the protostar itself. These observations were taken in bright line mode, which has less redundancy at each wavelength than faint line mode. The data used in this paper are from the default archive reduction, science product generation version 12.1.0, utilizing PACS calibration tree version 65. In this paper, we make use of the flux densities derived from the central spaxel, corrected for point spread function losses. For flat-fielding, we use the observed relative spectral response function (RSRF) rather than the telescope background method. Magellan Near-infrared Observations ----------------------------------- We observed the source HOPS 68 with the Magellan Baade telescope, located at Las Campanas in Chile, on 2009 January 17.
The observations were conducted with the Persson Auxiliary Nasmyth Infrared Camera [PANIC, @martini2004], which has a 2$'$ $\times$ 2$'$ field of view on a 1024 $\times$ 1024 pixel detector. HOPS 68 was observed in Ks-band using a 3$\times$3 dither pattern with 20 second integrations at each dither position and 15$''$ steps between dither positions. The sky image was constructed from a median combination of the on-source frames, thereby losing some large-scale emission. The data were reduced using the Image Reduction and Analysis Facility (IRAF) with standard methods for near-infrared imaging observations; see @tobin2010 for a description of the methods used. Sample and Sub-samples ---------------------- The observations and results presented in this paper are based on sub-samples of the PBRS sample presented in ST13. ST13 identified 18 sources with $[24\,\mu{\rm m}]-[70\,\mu{\rm m}]$ colors (in log($\lambda$F$_{\lambda}$) space) redder than 1.65. Of this sample, 11 were first discovered with the *Herschel* observations and 7 were previously known HOPS protostars from the *Spitzer* surveys of the region that met the redness criterion. Furthermore, an additional PBRS (135003) was not included in ST13 but was first presented in Paper I, bringing the total number of PBRS to 19. We list the full sample of PBRS in Table 3 and identify those that have been followed up with CARMA and *Herschel* PACS spectroscopy. The CARMA follow-up concentrated primarily on sources that had not previously been identified by *Spitzer* as protostars due to their deeply embedded nature, which renders them faint or undetected at 24 $\mu$m. The *Herschel* PACS spectroscopy then concentrated on the *Herschel*-identified PBRS that had been found in the HOPS data analyzed prior to the *Herschel* Open Time 2 proposal deadline.
Thus, our source follow-up is not homogeneous, but there is enough overlap to identify characteristic trends within the sample and sub-samples, which we will detail in the following sections. Results ======= We have compiled a significant amount of data to further characterize the PBRS and their outflow properties. We will first discuss the cold molecular outflows probed by CARMA CO ($J=1\rightarrow0$), and probe scales beyond those examined by CARMA using *Spitzer* 4.5 $\mu$m emission. Lastly, we will discuss the results for the warm and hot components of the molecular outflows from *Herschel* PACS spectroscopy, and place the properties of the PBRS outflows in the context of larger protostar samples observed with far-infrared spectroscopy. While the three datasets do not cover the same samples (see Table 3) and the spatial scales examined are different, they all contribute to a deeper understanding of the PBRS than any considered on its own. We will attempt to concentrate on overarching trends in the following discussion of results; discussions of individual sources can be found in the Appendix. Molecular Outflows ------------------ The $^{12}$CO ($J=1\rightarrow0$) molecular line was observed to examine the outflow activity toward each source; this is the canonical tracer of outflowing gas toward protostellar objects [@snell1980]. Outflows are generally characterized by distinct red and blue-shifted emission located on either side of the protostellar source, modulo inclination effects. The pervasiveness of CO in the Orion molecular cloud complicates the analysis of outflows. Emission within $\pm$2 km s$^{-1}$ of the systemic velocity cannot be analyzed because the $^{12}$CO ($J=1\rightarrow0$) emission is resolved out due to confusion with the extended molecular cloud. Therefore, we are generally only able to detect outflow features with velocities high enough to emit outside this $\pm$2 km s$^{-1}$ velocity range.
### Detections and Morphologies We detect clear CO outflows toward 7 PBRS sources, 093005, 090003, 082012, 119019, 135003, HOPS 373, and 019003 (Figures \[093005\], \[090003\], \[082012\], \[119019\], \[135003\], \[HOPS373\], & \[019003\]), as well as from the Class I source HOPS 68 in the field of 019003, shown in Figure \[HOPS68\]. Tentative detections are found toward 3 additional PBRS, 302002, 061012, and HOPS 372 (Figures \[302002\], \[061012\], & \[082012\]). The HOPS 372 outflow is apparent in the low-velocity panel of Figure \[082012\], but at higher velocities the outflow emission is dominated by 082012. We did not detect outflow emission toward four PBRS, 091015, 091016, 097002, and 082005; this does not mean that these sources do not have outflows, only that outflows were not detectable at our resolution and sensitivity. The outflows have a variety of morphologies; there is not a typical CO outflow morphology among the PBRS sources. The PBRS 093005 and 090003 have spatially compact outflows, with the total lengths of the red and blue-shifted lobes being less than 0.05 pc (Figures \[093005\] & \[090003\]). The outflows toward 119019, 082012, 135003, and HOPS 373 all extend outside the CARMA primary beam, with total lengths greater than 0.1 pc (Figures \[082012\], \[119019\], \[135003\], and \[HOPS373\]). The outflows toward 082012 and 135003 also have emission extending to velocities $>\pm$10 km s$^{-1}$ from the systemic velocity, with jet-like morphologies. Toward 061012 there is evidence for an outflow, but it is unclear due to confusion with the wide-angle outflow of its neighbor HOPS 223 (Figure \[061012\]). HOPS 223 (also known as V2775 Ori) is an outbursting Class I source [@fischer2012], and this is the first clear detection of a CO outflow toward this source; the *Spitzer* imaging, however, already showed strong evidence for outflow-associated features.
Toward 302002 (Figure \[302002\]) there appears to be low-velocity $^{12}$CO emission in its vicinity that appears outflow-like, but its detection is not definitive. The outflow toward 119019 is distinct from those of the other PBRS in that it has a large spatial extent but low velocities; the full velocity width is only 6 km s$^{-1}$ for both the red and blue sides of the outflow. Moreover, the spatial overlap between the redshifted and blueshifted emission is strong evidence that this source is viewed close to edge-on. The non-detections of outflows toward 091015, 091016, 097002, and 082005 could result from the outflows having low velocities and being confused with the emission from the molecular cloud. Also, there is a tentative trend between detectable outflows and L$_{bol}$: the PBRS 119019 was the lowest luminosity source (L$_{bol}$ = 1.56 L$_{\sun}$) with a clear outflow detection, while the tentative outflow detections and non-detections have luminosities between 0.65 L$_{\sun}$ and 1.56 L$_{\sun}$. The outflow properties of individual sources are described in more detail in the Appendix. The outflow from HOPS 68 (Figure \[HOPS68\]) is worth mentioning because it was also found to have quite high velocities, and the relative position angle of the red and blue-shifted lobes changes from high to low velocity. At low velocities the outflow is oriented northeast to southwest, but at high velocities the red-shifted side is oriented northwest to southeast, while the blue lobe still appears extended in the same direction as at low velocities. We overlaid the high-velocity CO contours on a Ks-band (2.15 $\mu$m) image from Magellan PANIC (Figure \[HOPS68\]), and we see two sets of bow-shock features that overlap with the blue-shifted CO emission: one set in the southeast direction and the other in the southwest direction. Thus, the change in position angle of the CO emission from low to high velocities is likely indicative of two outflows from HOPS 68.
### Outflow Parameters

We calculate the outflow mass, momentum, and energy following the procedure used by @plunkett2013 [based on @bally1999], and give these values in Table 4. The analysis by @plunkett2013 uses $^{13}$CO optical depths and an excitation temperature derived from $^{12}$CO (assuming optically thick emission) in order to calculate column densities, from which the mass, momentum, and energy can be calculated. However, our observations did not have enough sensitivity to detect the $^{13}$CO ($J=1\rightarrow0$) outflow emission; we therefore adopted a $^{12}$CO/$^{13}$CO ratio of 62 [@langer1993] and divided the $^{12}$CO ($J=1\rightarrow0$) intensities by this ratio, under the assumption that the $^{12}$CO emission is optically thin at all velocities. This assumption is not valid at all velocities, but it is most reasonable for the higher-velocity ($>\pm$10 km s$^{-1}$) emission. The principal effect will be an underestimate of the CO column densities, making all the outflow parameters lower limits. @dunham2014 showed that opacity corrections to the outflow parameters can be up to an order of magnitude; missing flux will also affect the parameters, but this is more difficult to quantify since the low-velocity emission with the highest opacity will be the most severely affected by spatial filtering. The $^{13}$CO abundance is taken to be N($^{13}$CO) = N(H$_2$)/7$\times$10$^{5}$ [@frerking1982], and the excitation temperature, calculated from the $^{12}$CO brightness temperature, was between 15 K and 40 K in our observations [see Equation 3 in @plunkett2013]. We do not attempt to correct the outflow properties for the effects of inclination. The observed outflow properties (mass, momenta, energy, and force; see Table 4) of the PBRS are generally consistent with results from @plunkett2013; however, there is a general tendency toward lower values of mass, momentum, and energy for the PBRS, which could result from the lack of $^{13}$CO detections.
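The optically thin bookkeeping described above can be sketched as follows. This is a minimal illustration, not the exact @plunkett2013 implementation: the LTE column-density prefactor is a commonly used optically thin expression for $^{13}$CO ($J=1\rightarrow0$), and the channel grid, excitation temperature, and emitting area are hypothetical inputs.

```python
import numpy as np

# Physical constants (cgs); values are standard
M_H = 1.6737e-24   # hydrogen atom mass [g]
MU = 2.8           # mean mass per H2 molecule in units of M_H (includes He)
MSUN = 1.989e33    # solar mass [g]
PC = 3.086e18      # parsec [cm]
KMS = 1.0e5        # km/s in cm/s

def outflow_parameters(v_chan, tb_12co, dv, area_cm2, t_ex=25.0,
                       ratio_12_13=62.0, x_13co=1.0 / 7.0e5):
    """Mass [Msun], momentum [Msun km/s], and energy [erg] of an outflow
    lobe from 12CO(1-0) channel brightness temperatures, assuming the
    emission is optically thin.

    v_chan   : channel velocities relative to systemic [km/s]
    tb_12co  : brightness temperature per channel [K]
    dv       : channel width [km/s]
    area_cm2 : projected emitting area [cm^2] (illustrative input)
    """
    v = np.asarray(v_chan, dtype=float)
    # Scale 12CO to a 13CO-equivalent intensity (ratio of 62, as in the text).
    tb_13 = np.asarray(tb_12co, dtype=float) / ratio_12_13
    # Simplified optically thin LTE column of 13CO per channel; the
    # prefactor is illustrative, standing in for Eq. 3 of Plunkett et al.
    n_13co = 2.42e14 * (t_ex + 0.88) / (1.0 - np.exp(-5.29 / t_ex)) \
             * tb_13 * dv                       # [cm^-2]
    n_h2 = n_13co / x_13co                      # N(H2) = 7e5 * N(13CO)
    m_chan = n_h2 * MU * M_H * area_cm2         # mass per channel [g]
    mass = m_chan.sum()
    mom = (m_chan * np.abs(v) * KMS).sum()      # [g cm/s]
    energy = 0.5 * (m_chan * (v * KMS) ** 2).sum()  # [erg]
    return mass / MSUN, mom / (MSUN * KMS), energy
```

Because every step is linear in the (divided-down) intensities, underestimating the opacity propagates directly into lower-limit masses, momenta, and energies, as stated above.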
We also computed the outflow force (F$_{CO}$) and dynamical time based on the apparent outflow size and the maximum observed CO velocity. We examined the relationship between L$_{bol}$ and F$_{CO}$ in Figure \[outflow-lbol\]. For the PBRS with a detected outflow, there is no clear correlation between F$_{CO}$ and L$_{bol}$, but more luminous sources tend to have greater values of F$_{CO}$. We have also plotted the relationships derived by @bontemps1996 and @vdmarel2013 for comparison. These relationships were derived from samples primarily comprised of Class I protostars, and Class 0 protostars lie above the relationship, not below; the CO ($J=6\rightarrow5$) measurements from @yildiz2015 for Class 0 sources are also above the @bontemps1996 relationship. The relations do pass through our observed points, but four PBRS fall below the @bontemps1996 relationship. On the other hand, we used interferometer data without zero-spacings, while the other studies used single-dish maps. We also did not have $^{13}$CO detections, making our values lower limits. We do not calculate upper limits for the sources with non-detections because the large amount of resolved emission near the source velocities results in these values having little physical meaning. However, their outflow parameters will (at a minimum) be lower than those measured for 090003/093005.

### Outflow Inclinations

The outflow inclinations are difficult to measure precisely; however, we qualitatively compared our data with the simulations of @cabrit1986, which show model PV plots and integrated intensity plots for accelerating outflows. Model outflows are shown for a fixed opening angle and outflow length at inclinations of 5$^\circ$, 30$^\circ$, 50$^\circ$, and 80$^\circ$. As such, the uncertainty in our estimates of the outflow inclination is likely $\pm\sim$20$^\circ$.
The outflows of 090003, 093005, HOPS 223, and 019003 are consistent with outflow inclinations near 30$^\circ$, given their compact extent and the distribution of the highest velocities near the source. The well-collimated outflows of 082012, 135003, and HOPS 68 appear most consistent with an inclination near 50$^\circ$. Both the wide-angle outflow toward HOPS 373 and the tenuous outflow toward 302002 are consistent with inclinations between 50$^\circ$ and 80$^\circ$; based on their velocity distributions, HOPS 373 is likely closer to 50$^\circ$, while 302002 is likely closer to 80$^\circ$. The PBRS 061012 appears to have an outflow, but the data do not lend themselves to a reasonable estimate of the inclination. Finally, 119019 is the only PBRS that is consistent with a near edge-on inclination, as indicated by the CO emission being detected only at low velocities and by the extended spatial overlap of the red- and blue-shifted emission toward 119019. We can broadly conclude that for the PBRS with detected outflows, extreme edge-on orientations cannot be the cause of their extremely red 24 $\mu$m to 70 $\mu$m colors, except for 119019. The estimated inclinations for the PBRS are also given in Table 5. Furthermore, while there is a large degree of uncertainty in the outflow inclinations, the distribution of inclination angles appears dominated by intermediate inclinations (80$^\circ$ $\ge$ $i$ $\ge$ 20$^\circ$). While our numbers are small, the distribution is likely consistent with a random distribution of inclinations (the average inclination for a random distribution is 60$^\circ$), as expected for a collection of sources whose selection criteria are not particularly biased toward a particular geometric orientation. A previous concern was that the PBRS could simply have been edge-on sources, and the outflow data show that this is clearly not the case. Given the uncertainty in the inclination angles, we have not corrected the derived outflow parameters in Table 4 for this effect.
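The quoted mean inclination of $\sim$60$^\circ$ for randomly oriented sources follows from weighting inclination by $\sin i$ (orientations near edge-on subtend more solid angle than near pole-on ones); the exact expectation value is 1 radian $\approx$ 57.3$^\circ$. A quick numerical check (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)

# Isotropic orientations: cos(i) is uniform on [0, 1], with i measured
# from pole-on (i = 90 deg is edge-on).
cos_i = rng.uniform(0.0, 1.0, size=200_000)
incl_deg = np.degrees(np.arccos(cos_i))

# Expectation value is E[arccos(U)] = 1 rad = 57.3 deg, i.e. ~60 deg.
mean_incl = incl_deg.mean()
```

The Monte Carlo mean converges to 57.3$^\circ$, consistent with the $\sim$60$^\circ$ figure used above.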
Evidence for Extended Outflows
------------------------------

The CARMA $^{12}$CO observations are only sensitive to emission within the 30$''$ (12600 AU) radius primary beam; hence other observations are needed to determine whether the outflows extend to larger scales. We examined the *Spitzer* 4.5 $\mu$m images of all the sources from @megeath2012. The emission at 4.5 $\mu$m can trace both scattered light in the outflow cavities near the protostars and shock-excited H$_2$ emission along the outflows. Smooth 4.5 $\mu$m emission near the source is likely indicative of scattered light, while knotty or bow shock-like features along the outflow are likely H$_2$ emission [e.g., @tobin2007]. Images of the 4.5 $\mu$m emission are shown for all the sources in Figures \[spitzer-a\] and \[spitzer-b\]. Toward the sources HOPS 373, 093005, 302002, and 090003 there is 4.5 $\mu$m emission within 0.05 pc of the sources and no apparent evidence for emission on larger scales that is likely to have originated from these systems. Thus, for 093005, 302002, and 090003 we are likely covering the full extent of the outflow with our CO observations; for HOPS 373, however, the outflow extends out of the primary beam, but perhaps not much further [@gibb2000]. There are a few cases where the association of the 4.5 $\mu$m emission with the outflow is ambiguous. For 135003, there are some knotty features along the direction of the known outflow, extending $\sim$0.15 pc, which were identified as an outflow candidate (SMZ 1-38) by @stanke2002. In the case of 019003, we see a feature adjacent to the position of the protostellar source from the 2.9 mm continuum, and possibly an extended feature in the direction of the blue-shifted outflow lobe; the crowding and the number of imaging artifacts from bright sources make this field difficult to interpret. We only find clear evidence for 4.5 $\mu$m emission extended $>$ 0.1 pc for three sources: 082012, 061012, and 119019.
The bow-shock directions or trails of H$_2$ knots indicate a likely origin from the PBRS source. The emission from 061012 and 119019 appears to extend $\sim$0.3 pc, and the emission from 082012 extends $\sim$0.2 pc. If we assume an outflow propagation speed of 10 - 100 km s$^{-1}$, then the dynamical time is between 3000 - 30000 yr for 0.3 pc and 2000 - 20000 yr for 0.2 pc. Thus, even though there is evidence for outflows toward these sources extending relatively large distances, extreme youth is still likely. Toward the sources without detected CO ($J=1\rightarrow0$) outflows (091015, 091016, 082005, and 097002), there is also no evidence for 4.5 $\mu$m emission (or emission shortward of 70 $\mu$m) associated with the sources, as shown in Figures \[spitzer-a\] and \[spitzer-b\]. In contrast, the sources with compact emission at 4.5 $\mu$m also had detections of CO outflows.

Warm/Hot Outflow Gas
--------------------

We obtained *Herschel* PACS spectroscopy toward a subset of the PBRS (eight observed with PACS). This subset samples luminosities between 0.65 L$_{\sun}$ and 12 L$_{\sun}$ and a variety of $^{12}$CO molecular outflow emission properties; thus, this subsample should be reasonably representative of the PBRS as a whole. PACS spectroscopy offers a complementary view of the outflow emission from protostars: rather than the cold, entrained gas traced by the CO ($J=1\rightarrow$0) line, the PACS lines trace the warm/hot, shock-heated portion of the outflow concentrated on scales $<$2000 AU. The continuum-subtracted PACS spectra for all observed sources, extracted from the central spaxel, are shown in Figure \[pacsspectra\]. The spectra have a wide variety of emission line strengths; detections in high-J CO and water are found toward 5 out of the 8 observed PBRS. The spectrum toward HOPS 373 is particularly strong and rich in line emission, with detected CO transitions having J$_{u}$ $>$ 30.
Also, lines in the PACS spectrometer range are detected toward all sources that exhibit a clear outflow in the CO ($J=1\rightarrow0$) transition. We calculate the total high-J CO luminosities and give their values in Table 6. Figure \[co14-13spectra\] shows the non-continuum-subtracted CO ($J=14\rightarrow13$) spectra for all observed sources. The PBRS 061012 has a tentative detection (2.5$\sigma$) in the CO ($J=14\rightarrow13$) line, though its detection was not immediately apparent in the full spectrum shown in Figure \[pacsspectra\]. However, 061012 does not have detected emission in the 179.5 $\mu$m H$_2$O 2$_{12}$-1$_{01}$ line, which typically has a line flux greater than or equal to that of the CO ($J=14\rightarrow13$) line. Thus, the detection toward 061012 is considered tentative. Observations were also obtained toward all the PBRS in unchopped line spectroscopy observations of the \[OI\] 63.18 $\mu$m transition. This emission line is thought to be a tracer of the protostellar jet, perhaps even before the molecular outflow is well-established [@hollenbach1989]. Since these observations were conducted in the unchopped mode, extended \[OI\] emission from the cloud is present in the spectral cubes and must be subtracted from the data in order to isolate the \[OI\] emission from the protostar itself. To remove the extended \[OI\] (and continuum) emission, we have calculated the median intensity at each wavelength in the spectral cube using the 18 edge spaxels. We also compute the standard deviation of the edge spaxel intensities at each wavelength; this is representative of the uncertainty in the background emission subtracted at each wavelength. We use the median intensity of the edge spaxels rather than the mean because some spaxels have very high intensities, and the mean would be skewed toward a value larger than most of the edge spaxel intensities.
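The edge-spaxel background subtraction just described can be sketched as below. This is a schematic, not the actual pipeline: the cube shape and edge mask are hypothetical inputs (the real reduction uses the 18 edge spaxels of the PACS footprint).

```python
import numpy as np

def subtract_edge_background(cube, edge_mask):
    """Subtract the median edge-spaxel spectrum from a spectral cube.

    cube      : (nwave, ny, nx) intensities
    edge_mask : (ny, nx) boolean mask selecting the edge spaxels
    Returns the background-subtracted cube and the per-wavelength
    standard deviation of the edge spaxels (the background uncertainty).
    """
    edges = cube[:, edge_mask]               # (nwave, n_edge)
    # Median, not mean: a few anomalously bright edge spaxels would
    # skew a mean upward, as noted in the text.
    background = np.median(edges, axis=1)
    sigma = np.std(edges, axis=1)
    return cube - background[:, None, None], sigma
```

A single very bright edge spaxel leaves the median background estimate essentially unchanged, which is the property the text relies on.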
The background-subtracted \[OI\] spectra toward each source are shown in Figure \[OI-spectra\] as the thick solid line, and the standard deviation of the background at each wavelength is shown as the thin dashed line. The only PBRS with a clear detection of the \[OI\] line is HOPS 373; 019003 at first glance appears to have a detection, but it is only $\sim$2$\sigma$ above the uncertainty of the subtracted background \[OI\] emission, so this detection is tentative. Furthermore, 119019 and 061012 have apparent peaks at the location of the \[OI\] line; however, both of these are only 2$\sigma$ detections above the noise, and other features of the same significance are found in those spectra that do not correspond to an expected spectral feature. Therefore, neither of these sources is regarded as a detection. @nisini2015 showed a sample of protostellar sources with extended \[OI\] emission in their jets and outflows. This highlights the possibility that some of the PBRS may have extended emission along their outflows, and that our subtraction of background \[OI\] emission from the edge spaxels may remove \[OI\] emission from the source. However, we inspected the spectral cubes before and after subtraction of extended \[OI\] emission, and we do not detect any enhancement of \[OI\] emission along the outflow directions (for the PBRS with detected outflows), nor do the 8 pixels adjacent to the central spaxel show emission after background subtraction toward HOPS 373. Therefore, we conclude that the well-detected emission toward HOPS 373 appears only in the central spaxel, that we are not missing extended flux at our sensitivity ($\sigma_{[OI]}$ $\sim$ 1 Jy channel$^{-1}$), and that we are not subtracting off extended emission associated with the PBRS outflows. We have examined the \[OI\] line luminosities with respect to the larger protostar samples from [@green2013] and Mottram et al. (2016, submitted).
The $\sim$2$\sigma$ \[OI\] detections for 119019 and 061012 and the 3$\sigma$ upper limits for 093005, 091015, and 091016 have \[OI\] luminosity upper limits consistent with the detected range of \[OI\] luminosities for a given L$_{bol}$ [L$_{[OI]}$ = 10$^{-5}$ - 10$^{-2}$ L$_{\sun}$; @green2013]. Thus, the \[OI\] line is not found to be particularly strong toward the PBRS, but we cannot say that the \[OI\] emission is anomalously weak toward the PBRS, given that the upper limits do not indicate \[OI\] luminosities significantly lower than those of other protostars with a similar L$_{bol}$. In addition to the \[OI\] 63.18 $\mu$m line, we examined the spectra for \[OI\] emission at 145.5 $\mu$m in the range scans. As shown in Figure \[pacsspectra\], this line is only detected toward 019003. However, we do not think this is emission from the protostar itself, but rather extended emission that was not fully subtracted from the off position, as some spaxels have a negative feature while others have emission. The \[OI\] 63 $\mu$m luminosity from post-J-shock gas can be used to calculate the mass flow rate through the shock [@hollenbach1989; @hollenbach1985]: $\dot{M}$ = L(\[OI\]) $\times$ 8.1$\times$ 10$^{-5}$ M$_{\sun}~{\rm yr}^{-1}$ L$_{\sun}^{-1}$. Since our observations encompass, in each case, all the regions in which the outflows from our targets drive shocks, the result is the mass-loss rate from the protostar, averaged over the outflow dynamical time. The \[OI\] luminosities and the outflow rates inferred from the line luminosities (and their upper limits) are given in Table 6.

### Extended emission

The high-J CO and water line emission is extended across multiple spaxels in some sources, the most obvious of which is 135003. We overlay the spectra in each spaxel on the CO ($J=1\rightarrow0$) map in Figure \[135003-footprint\] for the longer and shorter wavelength ends of the PACS spectrometer red channel.
H$_2$O and CO emission is detected in all spaxels that overlap with the blue-shifted side of the CO ($J=1\rightarrow0$) outflow, and the line emission is actually brighter than that of the central spaxel. However, there is no corresponding line emission extended along the red-shifted side of the outflow, possibly indicating that the southern side of the outflow is being driven into a less dense medium. Similarly, 019003 also has some extended H$_2$O and CO emission on the blue-shifted side of the CO ($J=1\rightarrow0$) outflow, and as for 135003 the extended emission is brighter than the central spaxel.

### CO Luminosities and Rotation Temperatures

We have calculated the high-J CO luminosities and rotation temperatures for the 5 PBRS with multiple detected CO transitions. We calculate the column densities and luminosity of each CO line following @manoj2013; however, instead of fitting Gaussian functions to the unresolved line profiles, we directly sum the spectral elements around the wavelength of a particular CO line and subtract the background emission estimated from line-free continuum regions adjacent to the emission line. We regard this method as more reliable than fitting Gaussians given the low spectral resolution of the data; similar results are obtained with the Gaussian method, however (Manoj et al. 2016, submitted). We show the rotation diagrams for the 5 sources with robust CO detections in multiple lines in Figure \[rotdiagrams\]. All sources show the characteristic warm component ($\sim$300 K) of the CO rotation diagrams [e.g. @vankempen2010; @karska2013], and only HOPS 373 shows evidence of another temperature component in CO lines with J$_u$ $\ge$ 25; all other PBRS have non-detections for CO lines with J$_u$ $\ge$ 25. Thus, we fit a linear slope to the rotation diagrams for all detected CO lines with J$_u$ $\le$ 25, finding T$_{rot}$ between 216 K and 282 K. HOPS 373 has the highest T$_{rot}$ and 119019 has the lowest.
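The linear fit described above rests on the LTE relation $\ln(N_u/g_u) = \ln(N_{tot}/Q) - E_u/(k\,T_{rot})$, so the slope of the rotation diagram is $-1/T_{rot}$. A minimal sketch with synthetic level populations (the 250 K input and the chosen $J_u$ levels are illustrative, not our measured columns):

```python
import numpy as np

def fit_rotation_temperature(e_upper_K, n_upper, g_upper):
    """Fit T_rot from a rotation diagram: under LTE, the slope of
    ln(N_u/g_u) versus E_u/k [K] is -1/T_rot."""
    y = np.log(np.asarray(n_upper, dtype=float) /
               np.asarray(g_upper, dtype=float))
    slope, _intercept = np.polyfit(np.asarray(e_upper_K, dtype=float), y, 1)
    return -1.0 / slope

# Synthetic diagram at 250 K for CO J_u = 14, 16, 18, 20.
j_u = np.array([14, 16, 18, 20])
e_u = 2.765 * j_u * (j_u + 1.0)   # E_u/k [K]; B/k ~ 2.77 K for CO
g_u = 2 * j_u + 1
n_u = g_u * np.exp(-e_u / 250.0)  # Boltzmann populations at 250 K

t_rot = fit_rotation_temperature(e_u, n_u, g_u)   # recovers ~250 K
```

Applying the same fit to the measured column densities over the detected lines with J$_u$ $\le$ 25 yields the T$_{rot}$ values reported in the text.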
We plot the PACS CO luminosities (L(CO)) versus L$_{bol}$ and T$_{bol}$ in Figure \[LCO-others\]. The PBRS have CO luminosities that are consistent with the observations from the HOPS, WISH, WILL and DIGIT[^1] samples [@karska2013; @manoj2013; @green2013; @karska2014 Mottram et al. submitted; Karska et al. in prep.][^2]. However, among protostars with L$_{bol}$ $<$ 30 L$_{\sun}$, HOPS 373 has nearly the highest CO line luminosity of all protostars in the samples considered here. Looking at L(CO) vs. T$_{bol}$, also in Figure \[LCO-others\], the PBRS are comparable to other sources with low values of T$_{bol}$. The comparison of CO T$_{rot}$ to the HOPS/WISH/WILL/DIGIT samples is shown in Figure \[Trot-others\]; these rotation temperatures are all measured using CO lines with 14 $\le$ J$_u$ $\le$ 25. The PBRS have T$_{rot}$ values that are among the lowest observed for all protostars in the other samples at any luminosity. However, given the uncertainties in our own measurements and those in the literature, the PBRS are consistent with the observed distribution of T$_{rot}$, but on the low side of the distribution. We discuss the possible causes of the lower T$_{rot}$ values of the PBRS further in Section 4.4.

### Far Infrared Line Ratios

We calculated diagnostic line ratios that have been used by @karska2014 to compare the WISH and WILL observations with various shock models [@kaufman1996; @flower2010; @flower2015] and list them in Table 7 for the sources with detected lines. For most ratios, the values calculated for the PBRS are either within the range observed in the WISH/WILL samples [@karska2014] or within 1$\sigma$ of the observed range. The primary line ratio that is systematically different from the WISH/WILL samples is CO ($J=16\rightarrow15$)/CO ($J=21\rightarrow20$); the ratios are systematically larger for all the PBRS. This likely reflects the colder CO T$_{rot}$ values derived for the PBRS relative to the WISH/WILL sources.
We also list ratios for CO ($J=17\rightarrow16$)/CO ($J=22\rightarrow21$), CO ($J=16\rightarrow15$)/CO ($J=17\rightarrow16$), and CO ($J=21\rightarrow20$)/CO ($J=22\rightarrow21$) because CO ($J=17\rightarrow16$) and CO ($J=22\rightarrow21$) are also accessible from SOFIA[^3]. One source, HOPS 373, also had detections of OH transitions, enabling further comparison to the WISH/WILL results. Note that one of the OH 84 $\mu$m doublet lines is contaminated by CO ($J=31\rightarrow30$); to correct for this, we measured the flux of the uncontaminated doublet line and multiplied it by two. The ratio of OH 84 $\mu$m to OH 79 $\mu$m is larger than in WISH/WILL, but within the uncertainties; H$_2$O (4$_{04}$-3$_{13}$) to OH 84 $\mu$m is consistent with WISH/WILL, and CO ($J=16\rightarrow15$) to OH 84 $\mu$m is slightly in excess of the WISH/WILL results. Thus, for HOPS 373, the H$_2$O line emission relative to OH is weaker than predicted by the shock models, consistent with the interpretation of @karska2014 that UV irradiation of the shocks is needed in order to explain the H$_2$O and OH line ratios.

Discussion
==========

The PBRS have been demonstrated, through multiple lines of evidence, to be consistent with being the youngest known Class 0 protostars. Their SEDs indicate that they are surrounded by very dense envelopes (ST13), and this was further confirmed by the CARMA 2.9 mm dust continuum luminosities (Paper I). If these sources truly are a sample of the youngest protostars, the results from the outflow diagnostics presented here can offer valuable clues to the properties of outflows toward very young protostars. Given the multitude of the data presented, including the continuum results from Paper I, we have compiled a list of PBRS properties determined from the follow-up observational data and present a summary in Table 5.
Nature of the PBRS Very Red Colors
----------------------------------

A principal uncertainty in the characterization of the PBRS was whether the extremely red 24 $\mu$m to 70 $\mu$m colors observed by ST13 were strongly influenced by source viewing angle. If the PBRS were typical Class 0 sources observed in an exactly edge-on orientation, then the combined opacity of the envelope and disk midplane could result in the very red 24 $\mu$m to 70 $\mu$m colors. However, ST13 showed that even if the PBRS were all viewed edge-on, the envelope densities would still have to be $>$2$\times$ higher than typically found toward HOPS protostars; the median envelope density for Class 0 protostars in HOPS at a radius of 1000 AU is found to be 5.9 $\times$ 10$^{-18}$ g cm$^{-3}$ from SED modeling [@furlan2016]. For the sources with detected CO ($J=1\rightarrow0$) outflows, the clear spatial separation of the blue- and redshifted CO emission shows that 093005, 090003, 082012, HOPS 372, HOPS 373, 135003, and 019004 are *not* observed with an edge-on orientation and must be observed at an intermediate viewing angle (neither edge-on nor face-on). The distribution of inclinations is consistent with being random; therefore, the extremely red colors of these protostars are not the result of an extreme edge-on viewing angle, but are due to the high density of the infalling envelope itself. We are unable to make a definitive conclusion about 061012 since the outflow is not clearly detected, but there appear to be separated blue and red lobes. However, for two sources, 119019 and 302002, only low-velocity CO emission is found for their outflows. The outflow toward 302002 (Figure \[302002\]) had only a small velocity gradient across the source, and we noted in Section 3.1.3 that the inclination is likely between 50$^\circ$ and 80$^\circ$, but closer to 80$^\circ$.
In the more extreme case of 119019, this PBRS had no detectable velocity gradient, and there are roughly equal amounts of emission at both blue- and redshifted velocities (Figure \[119019\]). Thus, these two sources may only have been classified as PBRS because of their edge-on (or nearly edge-on) orientation. In summary, we confirm that the extremely red colors of the PBRS are not the result of inclination for 7 out of 9 sources with detected CO ($J=1\rightarrow0$) outflows. The sources without detections of CO ($J=1\rightarrow0$) outflows may have low-velocity outflows that are confused with the cloud emission, or their outflows may still be too small in spatial extent and not bright enough to detect at the sensitivity of our current observations.

Outflow Properties
------------------

The outflows exhibit a range of masses, momenta, energies, and forces; HOPS 373 has outflow properties typical of those in @plunkett2013, and 082012 has outflow properties in excess of them (Table 4). In contrast, the two most compact outflows in the sample (090003 and 093005) have quite low outflow masses, momenta, energies, and forces. Since the @plunkett2013 sample includes single-dish data to measure the total flux, a comparison with @arce2006, using interferometer-only data, is more appropriate. The ranges of the observed outflow parameters from @arce2006 and @plunkett2013 are given in Table 4. We note, however, that neither of those studies computed F$_{CO}$. The sources 093005, 090003, and 302002 all have values less than the range from @arce2006, HOPS 223 is within the range, and HOPS 373 and 082012 have values in excess of these numbers. The outflow toward 082012 is truly exceptional; its high-velocity nature was first reported by @sandell1999, and it is more energetic and has more momentum than the strongest outflows in the @plunkett2013 sample.
The increased collimation and large velocity extent bear resemblance to NGC 1333 IRAS 4A, L1448C [@hirano2011], and IRAS 04166+2706 [@santiago2009]. This outflow has energies and momenta in excess of all the outflows observed by @arce2006 and @plunkett2013, but it is comparable to L1448C [@hirano2011]. The outflow of 082012 is likely even more powerful than we measure it to be, given that our properties are lower limits due to the lack of $^{13}$CO observations to determine the optical depth and because we do not cover the full extent of the outflow. The outflow of 082012 is also likely blended with that of HOPS 372 at low velocities, but at higher velocities it appears to come only from 082012. Even if we are measuring the combined outflow properties, the outflow is very strong relative to those observed in the nearby star-forming regions. The outflows from 090003 and 093005 represent the most compact (i.e., shortest) CO outflows found in our data. The outflows of 093005 and 090003 are not observed to extend further than their apparent envelope sizes observed at 870 $\mu$m. This and the compact 4.5 $\mu$m emission may indicate that the outflows are just beginning to break out from their dense, natal envelopes. These outflows are not particularly powerful either: the outflow forces plotted in Figure \[outflow-lbol\] are on the low end for Class 0 sources, and 090003 lies below the linear relationship from @bontemps1996, above which all Class 0s lie in current data [@yildiz2015 Mottram et al. submitted]. Furthermore, the well-developed outflow from 135003 is also found to lie below the L$_{bol}$ vs. F$_{CO}$ relationship. Alternatively, the outflows could be more powerful, but since their energies and momenta are calculated using entrained material observed in CO ($J=1\rightarrow0$), the outflows only appear weak by these measures.
The deeply embedded sources without 4.5 $\mu$m emission or outflow detections (097002, 091015, 091016, and 082005) may have outflows that are too weak/faint to detect in our observations. However, the lack of outflow detections toward these most embedded sources, and the lack of particularly powerful outflows from 093005 and 090003, could indicate that outflows are weak during the early Class 0 phase, given the apparent youth of the sources and the small spatial extent of the outflows. Thus, it is possible that the outflow momentum/energy/force may initially be small and rise during the early Class 0 phase, such that Class 0 outflows will be systematically more powerful than Class I outflows [e.g., @bontemps1996; @yildiz2015]. Weak initial outflows from protostars are predicted by simulations of the FHSC phase [@tomida2013; @price2012], in which the outflow velocities are $<$15 km s$^{-1}$. If the PBRS have recently transitioned out of the FHSC phase, then they may not yet have reached their full outflow power. This will be further studied using single-dish data by Menenchella et al. (in prep.). The absence of detected outflow activity in CO ($J=1\rightarrow0$) toward the four sources mentioned above cannot be construed as evidence of outflow absence because of our finite resolution and sensitivity. For example, the outflow toward OMC MMS6N (also known as HOPS 87) was only detected when it was observed at the highest resolutions with the SMA [@takahashi2011], due to its very small spatial extent. Thus, the non-detected outflows could be very compact and in the process of breaking out from their envelopes, necessitating higher-resolution data. On the other hand, OMC MMS6N did have strong H$_2$O and CO emission lines in the far-infrared spectrum from *Herschel* [@manoj2013], whereas 091015/091016 had no detected emission lines in their PACS spectra.
In contrast, 091015/091016 are low-luminosity sources (L = 0.65 L$_{\sun}$ and 0.81 L$_{\sun}$) and OMC MMS6N is a higher-luminosity source (L $>$ 30 L$_{\sun}$), making direct comparisons between the sources difficult.

Relationship of Outflows and 2.9 mm Continuum Properties
--------------------------------------------------------

In Paper I, the 2.9 mm continuum luminosities and visibility amplitude profiles were analyzed. We found that most PBRS had 2.9 mm continuum luminosities (median of 1.0$\times$10$^{-5}$ L$_{\sun}$) and L$_{2.9mm}$/L$_{bol}$ ratios (median of 8.8$\times$10$^{-6}$) greater than most nearby Class 0 protostars, which have a median L$_{2.9mm}$ = 3.2$\times$10$^{-6}$ L$_{\sun}$ and a median L$_{2.9mm}$/L$_{bol}$ = 8.5$\times$10$^{-7}$. The nearby Class 0 continuum samples are drawn from @tobin2011, @looney2000, and @arce2006, which are sensitive to comparable spatial scales; L$_{2.9mm}$ is calculated assuming a 4 GHz bandwidth centered at 2.9 mm. The PBRS have a median L$_{2.9mm}$ that is 3$\times$ larger than that of typical Class 0s and a median L$_{2.9mm}$/L$_{bol}$ that is 10$\times$ larger. This means that the nearby Class 0 protostars with high L$_{2.9mm}$ also have a high L$_{bol}$, whereas the PBRS tend to have lower L$_{bol}$. Furthermore, the highest L$_{2.9mm}$ for nearby Class 0 protostars is 2.9$\times$10$^{-5}$ L$_{\sun}$, toward NGC 1333 IRAS 4A, in contrast to the highest L$_{2.9mm}$ of 3.4$\times$10$^{-5}$ L$_{\sun}$ for the PBRS, toward 082012; see Figure 2 of Paper I. Finally, 6 out of 14 PBRS (093005, 090003, 091016, 091015, 097002, and 082005) had flat visibility amplitude profiles (and small 5 k$\lambda$ to 30 k$\lambda$ visibility amplitude ratios), consistent with most emission being emitted from scales $<$ 2000 AU (Figures 3 and 4 of Paper I).
Thus, the PBRS tend to have more massive envelopes relative to their bolometric luminosities as compared to other Class 0 sources, and the flat visibility amplitude ratios indicate high densities in the inner envelopes (Paper I). Here we more closely examine the two PBRS that have apparent inclination angles close to edge-on: 119019, being almost exactly edge-on, and 302002, being near 80$^\circ$ (between 50$^\circ$ and 80$^\circ$). The PBRS 119019 has L$_{2.9mm}$/L$_{bol}$ (1.47$\times$10$^{-6}$) and L$_{2.9mm}$ (2.3$\times$10$^{-6}$ L$_{\sun}$) values consistent with typical Class 0 protostars from the literature. Thus, in addition to having a nearly edge-on outflow, the 2.9 mm continuum emission from 119019 is not consistent with it having a massive, dense envelope like the rest of the PBRS (Table 5). This points to 119019 perhaps being more evolved than the rest of the PBRS, and its very red colors can be attributed to an edge-on inclination. On the other hand, 302002 has values of L$_{2.9mm}$/L$_{bol}$ (1.2$\times$10$^{-5}$) and L$_{2.9mm}$ (1.0$\times$10$^{-5}$ L$_{\sun}$) consistent with the rest of the PBRS. Both of these sources also have declining visibility amplitudes (Paper I). We also find a tendency for the PBRS with flat visibility amplitudes to show either a compact outflow or no detectable outflow in the CO ($J=1\rightarrow0$) line and *Spitzer* 4.5 $\mu$m emission. We suggested in Paper I that the PBRS with flat visibility amplitudes might be less evolved than the PBRS with more rapidly declining visibility amplitudes. The sources with rapidly declining visibility amplitudes tend to have more extended, well-developed outflows (i.e., 082012, HOPS 373, and 119019) than sources with flat visibility amplitudes. We therefore suggest that the flat visibility amplitude sources have outflows that are only beginning to break out of their envelopes. Thus, the PBRS with flat visibility amplitudes may indeed be in the initial stages of the Class 0 protostellar phase.
The change in visibility amplitude profile could be related to the outflows carving out cavities and lowering the overall mass of the inner envelope. On the other hand, if the inner envelope mass is rapidly accreted onto the protostar, then the visibility amplitude profiles would also dramatically decline. Using the example from Paper I, the free-fall time of 2 $M_{\sun}$ confined to a constant density sphere with R = 1500 AU is only $\sim$10,000 yr, quite short on the timescale of protostellar collapse. For the case of inside-out collapse [@shu1977], the rarefaction wave would take $\sim$36,000 yr to propagate out to 1500 AU (assuming a sound speed of 0.2 ); the boundary of the rarefaction wave is where the density profile changes from r$^{-2}$ to r$^{-1.5}$, reflecting free-fall collapse. Moreover, in the case of strong rotation, a portion of the density profile inside of the rarefaction wave can have a density profile of r$^{-0.5}$ [@cassen1981; @tsc1984]. Thus, in either case, the density structure of the inner envelopes can be significantly altered on a timescale shorter than the Class 0 phase [$\sim$150,000 yr, @dunham2014]. Thus, the outflow detections and extents may simply correlate with the decrease in the visibility amplitude profiles and not cause it. Lastly, the only flat visibility amplitude source with detected far-infrared line emission is 093005; only continuum emission was detected toward 091015 and 091016. The remaining sources with line emission had declining or uncertain visibility amplitude profiles. Far-Infrared Diagnostics in the Context of the PBRS --------------------------------------------------- A key finding of our study is that in the absence of other outflow indicators (CO ($J=1\rightarrow0$), *Spitzer* 4.5  scattered light/H$_2$), the PACS line emission (CO, H$_2$O, or \[OI\]) does not independently show evidence for outflows in the form of shocks from the inner envelopes of the protostars.
Thus, we only find far-infrared line emission toward sources that have detected CO ($J=1\rightarrow0$) outflows. This hints at a strong link between the mechanisms that produce the cold CO outflows and the warm/hot component observed in the far-infrared. Furthermore, the \[OI\] 63  transition is only convincingly detected toward 1 PBRS (HOPS 373) out of the 6 PBRS for which we could reliably subtract the background \[OI\] emission from the edge spaxels. We do not consider the detections and non-detections of 135003 and 019004 meaningful because of the strong, extended, and spatially variable \[OI\] emission in the OMC2/3 region. HOPS 373 has one of the more well-developed outflows, has an H$_2$O maser [@haschick1983], and has the brightest line spectrum of all the PBRS. @hollenbach1989 predict strong far-infrared CO and \[OI\] 63  emission for densities $>$ 10$^3$ cm$^{-3}$ for fast, dissociative J-shocks with velocities $>$30 . The \[OI\] luminosity detected toward HOPS 373 is comparable to that of other protostars with similar luminosity [@green2013]. While the tentative detections and non-detections toward the remaining PBRS do not point to anomalously weak \[OI\], we can confirm that the PBRS do not have exceptionally strong \[OI\] emission. Thus, we conclude that the outflows from the PBRS that give rise to the \[OI\] and high-J CO luminosities appear comparable in those tracers to those of other Class 0 protostars. If the PBRS are typical of the youngest protostars, i.e., early Class 0 protostars, then we posit that outflows may be very weak initially. At a minimum, the PACS \[OI\] and CO observations, in addition to CO ($J=1\rightarrow0$), demonstrate that the PBRS are not accompanied by significantly stronger outflows than typical Class 0 protostars. While the PBRS are inconsistent with the expected properties of first hydrostatic cores (FHSC) due to their luminosities and colors (ST13), the outflows predicted from FHSCs are quite weak, $<$15  [@tomida2013; @price2012].
The outflows are expected to increase in velocity as the source evolves, though the simulations did not follow the longer-term evolution. Such slow outflows from the PBRS would be consistent with them having recently transitioned out of a FHSC phase. If the outflow power is directly linked to the mass accretion rate, then the time in which protostars have very low outflow power is likely quite short ($<$10,000 yr), consistent with the apparent youth of the PBRS. Alternatively, at 63  the opacity from the infalling envelopes may be obscuring the \[OI\] emission. Following @kch1993, the optical depth through an envelope with a density profile consistent with free-fall collapse (r$^{-1.5}$) [@ulrich1976] is given by $$\tau_{\lambda} = \frac{\kappa_{\lambda}\dot{M}}{2\pi(2GM_*)^{1/2}}r^{-1/2}$$ where $\kappa_{\lambda}$ is the wavelength-dependent dust opacity, $G$ is the gravitational constant, $\dot{M}$ is the mass infall rate, $M_{*}$ is the protostar mass, and $r$ is the inner radius for which the optical depth is being calculated. M$_{*}$ is taken to be 0.5 M$_{\sun}$, which is adopted to set the envelope density for a given infall rate; the absolute value of the mass is not important, only the envelope density. Under the assumption of free-fall collapse, the infall rate is directly proportional to the envelope density $$\rho_{1000} = 2.378 \times 10^{-18} \left(\frac{\dot{M}_{env}}{10^{-5}\ M_{\sun}\ yr^{-1}}\right) \left(\frac{M_*}{0.5\ M_{\sun}} \right)^{-1/2} g\ cm^{-3}$$ which is the volume density at a radius of 1000 AU, following the notation of @furlan2016. From spectral energy distribution model fitting to the Orion protostars [@furlan2016], the Class 0 protostars in Orion had a median $\rho_{1000}$ of 5.9$\times$10$^{-18}$ g cm$^{-3}$ with lower and upper quartiles of 1.8$\times$10$^{-18}$ g cm$^{-3}$ and 1.8$\times$10$^{-17}$ g cm$^{-3}$.
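As a sanity check on the normalization above, the 2.378$\times$10$^{-18}$ g cm$^{-3}$ coefficient follows from mass continuity for free-fall collapse, $\rho = \dot{M}/(4\pi r^{2} v_{ff})$ with $v_{ff} = \sqrt{2GM_*/r}$. A minimal sketch (our own reconstruction, not code from @furlan2016):

```python
import math

# Sketch: mass continuity for free-fall collapse,
#   rho(r) = Mdot / (4*pi*r^2*v_ff),  v_ff = sqrt(2*G*M_*/r).
# Evaluating at r = 1000 AU with Mdot = 1e-5 M_sun/yr and M_* = 0.5 M_sun
# should reproduce the 2.378e-18 g cm^-3 coefficient quoted in the text.
G = 6.674e-8              # cm^3 g^-1 s^-2
M_SUN = 1.989e33          # g
AU = 1.496e13             # cm
YR = 3.156e7              # s

def rho_freefall(mdot_msun_yr, mstar_msun, r_au):
    """Envelope density (g cm^-3) at radius r for a free-fall infall profile."""
    mdot = mdot_msun_yr * M_SUN / YR                  # g/s
    r = r_au * AU                                     # cm
    v_ff = math.sqrt(2.0 * G * mstar_msun * M_SUN / r)
    return mdot / (4.0 * math.pi * r**2 * v_ff)

print(rho_freefall(1.0e-5, 0.5, 1000.0))   # ~2.38e-18 g cm^-3
```

The M$_*^{-1/2}$ dependence is visible directly in the $v_{ff}$ term.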
The PBRS considered here are modeled by @furlan2016 to have a median $\rho_{1000}$ of 1.8$\times$10$^{-17}$ g cm$^{-3}$, and the SED fits tend to prefer densities 3$\times$ to 10$\times$ higher than those of the typical and lowest-density Class 0 protostars, respectively. This difference in density translates to significantly more opacity at 63  for the PBRS, a factor of 4$\times$ to 13$\times$ higher than for the median and lower-quartile Class 0 densities, respectively; this results in a transmission of only 0.09 for a typical PBRS, versus 0.55 and 0.84 for the Class 0 median and lower quartile, respectively. High opacity may be a particularly important consideration for 093005, which has a clear outflow in CO ($J=1\rightarrow0$), PACS CO, and H$_2$O emission but no \[OI\] emission. The high envelope opacities can also influence the CO rotation temperatures, because the increasing optical depth at shorter wavelengths would cause the rotation temperatures to decrease due to flux attenuation of the line emission. To characterize the magnitude of this effect, we examined the difference in transmission for the PACS CO lines down to a radius of 1000 AU [where much of the PACS CO emission appears to be emitted, @green2013; @manoj2013]. Comparing the typical Class 0 envelope density ($\rho_{1000}$ = 5.9$\times$10$^{-18}$ g cm$^{-3}$) with the typical density of the PBRS envelopes ($\rho_{1000}$ = 1.8$\times$10$^{-17}$ g cm$^{-3}$), and assuming dust opacities from @ossenkopf1994 [Table 1, column 5], we found that the 3$\times$ higher envelope density could decrease the CO rotation temperatures by $\sim$20 K.
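The quoted transmissions can be cross-checked with a back-of-the-envelope scaling; the optical depth below is back-derived from the quoted 0.09 transmission rather than taken from our dust model, so this is an illustrative consistency check only:

```python
import math

# Illustrative cross-check (not the paper's actual radiative transfer):
# since tau at fixed M_* scales linearly with the envelope density rho_1000,
# scaling the implied PBRS optical depth down by the quoted 4x and 13x
# density factors should land near the quoted Class 0 median and
# lower-quartile transmissions (0.55 and 0.84).
tau_pbrs = -math.log(0.09)   # optical depth implied by the 0.09 transmission

for density_factor in (1.0, 4.0, 13.0):
    transmission = math.exp(-tau_pbrs / density_factor)
    print(density_factor, round(transmission, 2))
```

The recovered values agree with the quoted 0.55 and 0.84 to within about 0.01.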
Thus, the CO rotation temperatures of 220 K - 230 K would be higher if corrected for optical depth, making them even more consistent with the WISH/WILL/DIGIT/HOPS samples. Summary and Conclusions ======================= We have presented an observational study of both the cold and warm/hot molecular gas in outflows from the youngest known protostars in the Orion molecular clouds, the PACS Bright Red Sources (PBRS). The cold gas was probed toward 14 out of 19 PBRS using observations of the CO ($J=1\rightarrow0$) transition from CARMA, and the warm/hot gas was examined for 8 out of the 19 PBRS using full spectral scans (55  to 200 ) from the *Herschel* PACS far-infrared spectrometer. Finally, we also examined *Spitzer* 4.5  imaging to look for evidence of both compact and extended outflow activity from both scattered light and shocked H$_2$ emission. The results from the follow-up work done in this study and Paper I demonstrate the critical need for complementary data in determining the nature of protostellar sources that are otherwise only characterized by their SEDs. Our main conclusions are as follows. 1\. We detect clear outflows toward 8 out of 14 PBRS (119019, 090003, 093005, 135003, HOPS 373, 082012, and 019003) in the CO ($J=1\rightarrow0$) molecular transition. There is tentative evidence for outflows toward an additional three PBRS (HOPS 372, 302002, and 061012). We also detect outflows from two non-PBRS: HOPS 223, a FU Ori-like outbursting protostar [@fischer2012], and HOPS 68 [@poteet2011]; the HOPS 68 outflow also appears to be quadrupolar. No detectable outflow activity is found toward the PBRS 097002, 082005, 091015, and 091016 in CO ($J=1\rightarrow0$), 4.5  emission, or far-infrared spectroscopy (only 091015 and 091016). 2\. The outflows toward 090003 and 093005 are the most compact, subtending less than 20 (8400 AU) in total extent, with dynamical ages $\le$2,500 yr.
These outflows are also found to have momenta, energies, and forces that are at the low end for Class 0 protostars. This observation, in addition to the lack of detectable outflows toward several other PBRS, leads us to suggest that outflows may start out weak in protostellar sources and become more energetic with time. These sources are also the only ones with flat visibility amplitudes to have detected outflows, and we find a tentative tendency for the sources with flat visibility amplitudes in the 2.9 mm continuum (see Paper I) to either have no detected outflow activity or the most spatially compact outflows. This is further evidence for the sources with flat visibility amplitudes being among the youngest protostars and the youngest PBRS. 3\. The outflow from 082012 is extremely powerful, with red-shifted emission detected out to $+$40  from line center and an extent greater than the CARMA primary beam. Its total energy is in excess of that of any individual outflow in the NGC 1333 star forming region [@plunkett2013] and comparable to some of the most powerful known outflows from Class 0 protostars [e.g., @hirano2011 L1448C]. 4\. We detect far-infrared CO emission lines toward 6 out of the 8 PBRS observed. H$_2$O lines are detected toward 5 out of 8 PBRS, and OH and \[OI\] are detected toward 1 PBRS. The far-infrared CO, H$_2$O, and \[OI\] lines do not reveal outflows in the absence of outflow detections from other diagnostics. The CO luminosities and \[OI\] detections/upper limits are consistent with the results from larger samples of Class 0 protostars. The CO rotation temperatures tend to be lower than the typically observed 300 K CO rotation temperature for protostars; however, given the uncertainties, the PBRS are consistent with the larger samples.
Nevertheless, with a simple calculation of envelope opacity to a radius of 1000 AU, we find that the observed rotation temperatures of the PBRS could appear $\sim$20 K lower due to envelope opacity, given that the PBRS seem to have denser envelopes than typical Class 0 protostars. We wish to thank the anonymous referee for excellent suggestions which have significantly improved the quality of the manuscript. The authors also wish to acknowledge fruitful discussions with M. Dunham, L. Kristensen, and J. Mottram regarding this work. J.J.T. is currently supported by grant 639.041.439 from the Netherlands Organisation for Scientific Research (NWO). J.J.T. acknowledges past support provided by NASA through Hubble Fellowship grant \#HST-HF-51300.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. The work of A.M.S. was supported by the Deutsche Forschungsgemeinschaft priority program 1573 ('Physics of the Interstellar Medium'). AK acknowledges support from the Foundation for Polish Science (FNP) and the Polish National Science Center grant 2013/11/N/ST9/00400. This work is based in part on observations made with Herschel, a European Space Agency Cornerstone Mission with significant participation by NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. We are very grateful to have had the opportunity to conduct these follow-up observations with the CARMA array in California. The discontinuation of support for this productive facility is a loss that will continue to be felt into the future. Support for CARMA construction was derived from the states of Illinois, California, and Maryland, the James S. McDonnell Foundation, the Gordon and Betty Moore Foundation, the Kenneth T. and Eileen L.
Norris Foundation, the University of Chicago, the Associates of the California Institute of Technology, and the National Science Foundation. Ongoing CARMA development and operations are supported by the National Science Foundation under a cooperative agreement, and by the CARMA partner universities. [*Facilities:*]{} , , , Individual Sources ================== HOPS 373 -------- HOPS 373 is the close neighbor of 093005, located 110  to the south. The dust continuum emission observed in D-configuration only showed some asymmetry, and the combined D and C configuration data resolved a second component, separated by 4 (Paper I). An outflow was previously detected in CO ($J=3\rightarrow2$) observations with the JCMT [@gibb2000] and an associated water maser by @haschick1983. Our observations of CO ($J=1\rightarrow0$) in Figure \[HOPS373\] show that the outflow has quite a wide angle and is extended beyond the primary beam. We also tentatively detected an outflow originating from the secondary source, with blue and red-shifted lobes opposite those of the main outflow. The wide separation of the blue and red-shifted lobes indicates that the source is viewed at an inclination angle between 50 and 80. There is higher-velocity red-shifted emission observed away from the continuum source toward the edge of the primary beam. The far-infrared line emission from this source is quite intense, with \[OI\], OH, CO, and H$_2$O all detected. The line emission from this source is the third brightest of all HOPS protostars, and it is the only PBRS in our sample with confidently detected \[OI\] and OH emission. 093005 ------ The reddest PBRS, 093005, is located at the intersection of three filaments and $\sim$110 north of HOPS 373 (ST13). At wavelengths shorter than 70 , 093005 was only detected in *Spitzer* 3.6  and 4.5  imaging (Figure \[spitzer-a\]). The 4.5  emission could be indicative of shocked H$_2$ emission and/or scattered light in an outflow cavity.
Thus, a detection at 4.5 is indicative of possible outflow activity toward this source. We clearly detect the CO outflow originating from 093005, as shown in Figure \[093005\]. The outflow appears compact, with an offset between the red and blue-shifted lobes of only $\sim$3. The position-velocity diagram of the outflow simply shows high-velocity emission offset from the protostar position, not the increase of velocity with distance typical of many protostellar outflows [@arce2007]. The features could result from a compact bow shock component as the outflow begins to break out from its envelope. However, the resolution of our observations was only $\sim$3 (1200 AU), making clear determinations as to the nature of the high-velocity features difficult. The relative velocities of the red and blue-shifted lobes and their close spatial location indicate that the source is not oriented edge-on and is at an inclination angle of $\sim$30. Compact bow shocks viewed at an intermediate inclination could produce the observed morphology [@arce2007]. Far-infrared CO and H$_2$O line emission is also clearly detected toward this source. 090003 ------ The PBRS 090003 [also called Orion B9 SMM 3; @miettinen2012a] is located in a loose filamentary complex north of NGC 2024, with several protostars and starless cores over a 0.5 pc region [@miettinen2012a]. Much like 093005, the only detection shortward of 24  for this source is at 4.5 , where there is a small feature offset from the location of the protostar. This may be indicative of a knot of shocked H$_2$ emission [@miettinen2012a; @stutz2013]. The CO ($J=1\rightarrow0$) outflow from this source appears similar to that of 093005 and is indicative of $\sim$30 inclination, as shown in Figure \[090003\]; in contrast, however, there is a more spatially extended, low-velocity component. The high velocities near the source and low velocities extended away from the source could be indicative of a wide-angle wind driving this outflow.
Moreover, only $\pm$1  around the systemic velocity is corrupted by $^{12}$CO emission from the cloud, so we are able to see lower-velocity features than in 093005. @miettinen2012a observed $^{13}$CO ($J=2\rightarrow1$) with APEX ( $\sim$ 30 resolution) and did not detect any indication of outflow emission from 090003, suggesting that the outflow is quite compact. 082012 and HOPS 372 ------------------- The outflow from 082012 is the brightest and one of the two most spatially extended outflows in the sample. Moreover, the outflow is visible over the largest velocity range (aside from HOPS 68) as shown by the 3 panels integrated at low, moderate, and high velocities in Figure \[082012\]. @sandell1999 previously reported single-dish CO ($J=3\rightarrow2$) and continuum maps at 450  and 850 toward this region. They resolved the dust emission around both protostars, and found a high-velocity outflow, consistent with our data, but mapped over a larger region, $\pm$150 from the source. The clear separation of the blue and redshifted lobes indicates an intermediate orientation of the source(s). The driving source of the collimated, high-velocity emission seems to be 082012; however, at lower velocities the red-shifted lobe extends back to HOPS 372 and there is blue-shifted emission that appears associated with HOPS 372 as well. Thus, the two outflows are nearly parallel and are perhaps interacting, but at a minimum their emission is clearly overlapping at lower velocities. The highest observed outflow velocities toward 082012 are in excess of $\pm$40  with multiple components being evident in the PV diagram and we can see the characteristic ‘Hubble-flow’ in the PV diagram. Furthermore, there are also red and blue-shifted CO emission clumps nearly orthogonal to the main outflow of 082012 which could be yet another outflow in the region. Furthermore, there are extended H$_2$ knots along the position angle of the outflow from 082012 as shown in Figure \[spitzer-a\]. 
135003 ------ The PBRS 135003 is located in the OMC2/3 region of the Orion A cloud, near OMC2-FIR6. The outflow from 135003 is well-collimated on the blue-shifted side, another source with a characteristic ‘Hubble-flow’ in the PV diagram (see Figure \[135003\]). The red-shifted side, however, does not appear as well-collimated near the source, though there is another red-shifted feature along the position angle, outside the primary beam. An initial outflow detection was reported for this source by @shimajiri2009, consistent with our measured position angle. Moreover, their single-dish CO ($J=3\rightarrow2$) data show that the outflow does extend outside our primary beam. The *Spitzer* 4.5  map in Figure \[spitzer-b\] shows a few knots of emission extending along the blue-shifted side of the outflow. H$_2$ imaging from @stanke2002 (SMZ 1-38) shows emission along both the northern (blue-shifted) and southern (red-shifted) sides of the outflow. This source also shows bright far-infrared CO and H$_2$O features along its outflow, coinciding with the blue-shifted side of the outflow as shown in Figure \[135003-footprint\]. We do not detect an outflow from its neighbor HOPS 59 within our sensitivity limits in low-J CO or PACS far-infrared lines. 019003 ------ The PBRS source 019003 is also located in the OMC 2/3 region, northward of 135003. In Paper I, we detected two continuum sources toward the location of 019003 that were separated by $\sim$10; the source associated with the PBRS is 019003-A and the other appears starless and is denoted 019003-B (Paper I). We detect an apparent outflow from 019003-A, as shown in Figure \[019003\], and the 4.5  emission is also offset from the main outflow axis, similar to 090003, HOPS 373, and 302002. The surface brightness of the outflow is low, thus its detection is not as definitive as some of the others due to the crowded, confused region. Finally, there was no complementary detection in H$_2$ from @stanke2002.
HOPS 68 ------- The Class I protostar HOPS 68 is detected at the edge of the primary beam in the 019003 field. An outflow is well-detected from this source; the red-shifted lobe falls within the half-power point of the primary beam, while the blue-shifted lobe is located just outside the half-power point. The velocity distribution of the outflow indicates that it is located at an inclination angle of 50 from comparison to the models of @cabrit1986. An intermediate outflow inclination was necessary for a model by @poteet2011 to explain the relatively flat SED between 3.6  and 24 , the deep silicate absorption feature, and the crystalline silicate features observed with *Spitzer*; the crystalline dust is postulated to have been formed by shocks in the outflow cavity walls. Furthermore, there are apparently two outflows from this source: a lower-velocity outflow that is more east-west in orientation (PA $\sim$ 230), while at higher velocities there is an apparent shift and the outflow is more north-south in orientation (PA $\sim$ 160). However, the blue-shifted side has both components out to the highest velocities we can measure. We overlay the integrated intensity maps of the high-velocity emission on a Ks-band image in Figure \[HOPS68\], and there are apparent outflow features associated with both the northwest-southeast component and the northeast-southwest component. H$_2$ imaging by @stanke2002 confirms that these knots are H$_2$ emission (SMZ 1-28), and they also suggested that the driving source was FIR2 from @chini1997 (coincident with HOPS 68). 302002 ------ The PBRS 302002 is located at the end of an isolated filamentary structure in NGC 2068, $\sim$20 to the south of a Class I source (HOPS 331). 302002 is undetected at 24 but does seem to have emission at 4.5 , indicative of an outflow cavity or shocked H$_2$ emission in the outflow (see Figure \[spitzer-b\]).
We show in Figure \[302002\] that there appears to be outflow emission associated with this source; however, the blue and red-shifted emission are not located along the same position angle from the source. The blue-shifted emission is narrow and extends out to the edge of the primary beam. The red-shifted emission, on the other hand, is quite compact and located in a single clump offset west of the protostar. The CO outflow direction is marginally consistent with the apparent orientation in the 4.5  imaging. The poorly detected outflow and low velocity of the emission may indicate that this source is close to edge-on. From the comparison to @cabrit1986, the outflow could be between 50 and 80 but is likely closer to 80. 061012 and HOPS 223 ------------------- The PBRS 061012 is located very near the outbursting protostar V2775 Ori (HOPS 223) in the L1641 region [@fischer2012]. The outflow toward 061012 cannot be unambiguously disentangled from that of HOPS 223 in the integrated velocity map shown in Figure \[061012\]. However, looking at the PV diagram of the $^{12}$CO emission centered on the continuum source of 061012, we do see evidence of higher-velocity emission near the protostar. The position angle of the outflow is estimated from the resolved 4.5  emission shown in @stutz2013, and there are H$_2$ emission knots at 4.5  extending almost 0.3 pc from the source (Figure \[spitzer-b\]). Thus, we may be detecting an inner, compact outflow toward this protostar. The outflow from HOPS 223 appears quite wide, bright, and clumpy in the integrated intensity map and PV diagram in Figure \[061012\]. The clumpiness could in part result from spatial filtering and from the source lying toward the edge of the primary beam, where the noise is increased. However, episodic mass ejections could contribute to the clumpiness of the outflow emission, as has been seen in outflow data toward HH 46/47 [@arce2013].
091015 and 091016 ----------------- The PBRS 091015 and 091016 are close neighbors in NGC 2068, 091016 being $\sim$40 east of 091015; these sources are completely undetected at wavelengths shortward of 70 . We do not detect evidence of outflow emission from these sources at any wavelength. Given that a substantial amount of cloud emission is resolved out at line center, there could be lower-velocity outflow emission associated with these sources that we simply cannot detect with the current data. Observations of higher-excitation CO transitions at higher resolution may better distinguish potential outflow emission from these sources. However, we also did not detect any far-infrared line emission from these sources, a further indication that any outflows from these sources may be weak, or completely buried within their optically thick envelopes. 082005 ------ The PBRS 082005 is located about 4 south of 082012, and these sources are connected by a filamentary structure detected at 870  and 160 . This source is also undetected at wavelengths shorter than 70 . No CO outflow emission was detected in our CARMA observations toward this source, and we see no evidence for outflow emission in the *Spitzer* 4.5  maps in Figure \[spitzer-c\]. 097002 ------ The PBRS 097002 is found near a bright 4.5  and 24  source as seen in *Spitzer* data shown by @stutz2013; however, this short-wavelength emission is not from 097002, which is only detected at 70  and longer wavelengths. We do not detect an outflow from this source in our CO ($J=1\rightarrow0$) maps, but the continuum is quite bright (Paper I). However, there is some emission detected near line center at the source position. 119019 ------ The outflow toward 119019 has complete spatial overlap between the red and blue-shifted emission, meaning that this source is viewed almost exactly edge-on. This source was also one of the fainter continuum sources detected in Paper I.
Therefore, unlike the rest of the sample, this source may only have been identified as a PBRS due to an edge-on orientation. The outflow extends outside the CARMA primary beam and the velocity width of the outflow is quite narrow, only $\pm$3 ; however, the outflow may have greater speeds given that we are viewing it in the plane of the sky. Some diffuse emission is detected at 4.5  near the protostar location and along the outflow in Figure \[spitzer-b\]; @davis2009 also detected H$_2$ knots that appear to be part of this outflow (DFS 136). This source also has the faintest far-infrared line emission for which we have a confident detection. ![HOPS 68– Zoom in on the HOPS 68 outflow(s). We show plots of the low velocity CO contours (3.5 - 6.5 ; 16.5 - 19.5 ) overlaid on the 2.9 mm continuum image and we overlaid the high velocity CO (30 - 40 ; -22 - -10 ) contours on the Ks-band (2.15 ) image from Magellan PANIC. Between the high and low velocity ranges, the apparent position angle of the outflow changes from about 225 to 170. The contours in all plots are \[$\pm$6, 9, 12, 15, 20, 30, 40, 50, 60, 70\] $\times$ $\sigma$; $\sigma_{red}$ = 1.1 K and 1.39 K for the low and high velocities. For the blue contours, $\sigma_{blue}$ = 1.0 K and 1.55 K for the low and high velocities. The PV plot contours are \[-6, -3, 3, 5, 7, 9, 12, 15, 18, 21, 24, 27, 30, 35,..., 60\] $\times$ $\sigma$ and $\sigma$ = 0.45 K. The half-power point of the primary beam is plotted as the dashed arc. []{data-label="HOPS68"}](f8a.pdf "fig:") ![[]{data-label="HOPS68"}](f8b.pdf "fig:") ![[]{data-label="HOPS68"}](f8c.pdf "fig:") ![[]{data-label="HOPS68"}](f8d.pdf "fig:") ![Measured outflow forces (F$_{CO}$) versus L$_{bol}$ for all sources with detected outflows. There is no clear relationship between these two source properties in our data; however, there is a general tendency of high luminosity sources having greater outflow forces. We plot the relationships that have been found in the literature for larger samples of objects from @vdmarel2013 (dotted line) and @bontemps1996 (dashed line). The literature relationships utilized single-dish data, while our data are interferometric; thus, missing flux could cause F$_{CO}$ to be systematically underestimated. Furthermore, the Class 0 sources in the literature have F$_{CO}$ and $L_{bol}$ values that are above the @bontemps1996 relation and the plotted relationships are principally fit to the Class I protostars. []{data-label="outflow-lbol"}](f11.pdf) ![*Spitzer* 4.5  images of the PBRS sample also having CO ($J=1\rightarrow0$) observations. This particular *Spitzer* band has a bright H$_2$ feature commonly associated with shock-excited outflow emission but is also sensitive to scattered light in the outflow cavities. The protostar positions are marked with either white crosses or small circles and the outflow position angles are denoted by the red and blueshifted arrows. The PBRS 093005, 090003, HOPS 373, and 302002 have compact 4.5  emission near the location of the protostars and no extended H$_2$ knots. The PBRS 061012, HOPS 223, 082012, and 119019 have indications of H$_2$ emission extended $\ga$0.1 pc from the protostars. Moreover, in the 061012 field the protostar HOPS 221 shows another apparent east-west outflow.
The protostars 091015, 091016, 097002, and 082005 do not show evidence of any emission shortward of 70 . The source near the location of 097002 is another young star and not the PBRS (ST13).[]{data-label="spitzer-a"}](f12a.pdf "fig:") ![[]{data-label="spitzer-a"}](f12b.pdf "fig:") ![[]{data-label="spitzer-a"}](f12c.pdf "fig:") ![[]{data-label="spitzer-a"}](f12d.pdf "fig:") ![[]{data-label="spitzer-b"}](f12e.pdf "fig:") ![[]{data-label="spitzer-b"}](f12f.pdf "fig:") ![[]{data-label="spitzer-b"}](f12g.pdf "fig:") ![[]{data-label="spitzer-b"}](f12h.pdf "fig:") ![[]{data-label="spitzer-b"}](f12i.pdf "fig:") ![[]{data-label="spitzer-b"}](f12j.pdf "fig:") ![[]{data-label="spitzer-c"}](f12k.pdf) ![Continuum-subtracted PACS spectra for the observed sample; the sources are plotted in descending order of line brightness. The wavelengths of common spectral lines are marked with arrows, with labels located to the right of the plot. Negative features are not absorption, but reflect line contamination in the off position.
Only \[OI\] (63  and 143 ) and \[CII\] were found to be contaminated by the off positions for some sources. []{data-label="pacsspectra"}](f13.pdf) ![PACS spectra centered on the CO ($J=14\rightarrow13$) transition without continuum subtraction. The downward pointing arrow marks the wavelength of CO ($J=14\rightarrow13$). We also note the peak line flux density, rms of the continuum, and continuum level in each plot; the peak line flux density is relative to the continuum level. PBRS 119019 only has a 2.9$\sigma$ detection of CO ($J=14\rightarrow13$), but other CO transitions are detected with higher significance; thus we regard this as a robust detection. On the other hand, the PBRS 061012 has only a tentative (2.5$\sigma$) detection of CO ($J=14\rightarrow13$) and no other CO transitions detected; 091015 and 091016 do not have detections. HOPS 347 has a peak at the expected wavelength of CO ($J=14\rightarrow13$), but it is not significant given the noise around the line. []{data-label="co14-13spectra"}](f14a.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14b.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14c.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14d.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14e.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14f.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14g.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14h.pdf "fig:") ![[]{data-label="co14-13spectra"}](f14i.pdf "fig:") ![PACS spectra around the \[OI\] 63.18  transition. The solid line is the foreground/background-subtracted spectrum. The foreground/background is measured from the edge spaxels, and the fine dashed line is the standard deviation of the foreground/background spectrum. This is a good representation of the noise level in the spectral band since the \[OI\] variations will dominate the noise. Only HOPS 373 has a convincing detection in the \[OI\] line; the detection of 019003 is tentative (2.2$\sigma$) given the large variations in the foreground/background spectrum. There are features at the expected wavelength of \[OI\] toward 061012 and 119019, but there are other features that have the same level of peak intensity that do not correspond to an expected spectral line. The foreground/background \[OI\] near 135003 is highly variable, resulting in the negative spectrum.
The peak line flux density, spectrum RMS, and the RMS of the background (BG RMS Peak) emission at the wavelength of the \[OI\] line are noted in each panel. []{data-label="OI-spectra"}](f15a.pdf "fig:") ![[]{data-label="OI-spectra"}](f15b.pdf "fig:") ![[]{data-label="OI-spectra"}](f15c.pdf "fig:") ![[]{data-label="OI-spectra"}](f15d.pdf "fig:") ![[]{data-label="OI-spectra"}](f15e.pdf "fig:") ![[]{data-label="OI-spectra"}](f15f.pdf "fig:") ![[]{data-label="OI-spectra"}](f15g.pdf "fig:") ![[]{data-label="OI-spectra"}](f15h.pdf "fig:") ![[]{data-label="OI-spectra"}](f15i.pdf "fig:") ![PACS Spectrometer footprint observed toward PBRS 135003 and overlaid on the CO ($J=1\rightarrow0$) contours from Figure \[135003\], plotting the positive contours only. The plots of the two long wavelength channels of the spectrometer show remarkable correspondence between the blue-shifted (north) side of the CO outflow and high-J water and CO line emission. The wavelength range from 140  to 190 is shown in (a) and 100  to 140  is shown in (b). There is an apparent lack of similar high-J CO and water emission on the red-shifted side of the outflow (south); however, maps of the far-infrared and submillimeter continuum show that there is extended cold dust emission north of 135003 but not south. Therefore, the blue-shifted outflow is likely impacting ambient material, causing shocks, while the red-shifted outflow is being driven into a less dense medium.
The green cross in the central spaxel marks the location of the 2.9 mm continuum source, where the red and blue-shifted contours meet. []{data-label="135003-footprint"}](f16a.pdf) ![](f16b.pdf) ![Rotation diagrams of PACS CO emission, assuming optically thin emission. The quantity plotted on the y-axis is the natural logarithm of the total number of CO molecules in the $J$th state divided by the degeneracy of that state. []{data-label="rotdiagrams"}](f17a.pdf "fig:") ![[]{data-label="rotdiagrams"}](f17b.pdf "fig:") ![[]{data-label="rotdiagrams"}](f17c.pdf "fig:") ![[]{data-label="rotdiagrams"}](f17d.pdf "fig:") ![[]{data-label="rotdiagrams"}](f17e.pdf "fig:") ![CO luminosity versus L$_{bol}$ (left) and T$_{bol}$ (right) for the PBRS and WISH/WILL/DIGIT/HOPS samples. The CO luminosity for the PBRS and HOPS sources is a summation of all detected CO lines in the PACS spectral range. The WISH and WILL CO luminosities are calculated by extrapolation of the CO ladder, given that not all CO lines were observed. []{data-label="LCO-others"}](f18a.pdf "fig:") ![[]{data-label="LCO-others"}](f18b.pdf "fig:") ![CO rotation temperatures (T$_{rot}$) of the PBRS relative to the WISH/WILL/DIGIT/HOPS samples. The T$_{rot}$ values for the PBRS are among the lowest measured for the luminosity range sampled and are lower than those of most protostars with similar T$_{bol}$ measurements. The source with the lowest L$_{bol}$ is IRAM 04191 from the DIGIT sample [@green2013]. []{data-label="Trot-others"}](f19a.pdf "fig:") ![[]{data-label="Trot-others"}](f19b.pdf "fig:")

[^1]: WISH stands for Water In Star forming regions with *Herschel*, WILL stands for WIlliam Herschel Line Legacy, and DIGIT stands for Dust, Ice, and Gas In Time. WISH and DIGIT were *Herschel* key programmes and WILL was an Open Time 2 programme.

[^2]: The PACS CO data to be published in Karska et al. are a synthesis and updated analysis of the WISH, WILL, and DIGIT data, while Mottram et al. focuses on the \[OI\], HIFI, and ground-based low-J CO observations of the WILL survey only.

[^3]: Stratospheric Observatory For Infrared Astronomy https://www.sofia.usra.edu/
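The rotation-diagram analysis shown in Figure \[rotdiagrams\] amounts to fitting a straight line to $\ln(N_J/g_J)$ versus the upper-level energy: for optically thin LTE emission the slope of that line is $-1/T_{rot}$. A minimal numerical sketch of this fit follows; the upper-level energies and column densities here are illustrative placeholders, not the measured line set from this survey.

```python
import numpy as np

# Illustrative upper-level energies (in K) for mid-J CO lines in the PACS range.
# Placeholder values for demonstration only, not the observed transitions.
E_u = np.array([580.0, 750.0, 1050.0, 1400.0, 1800.0])

# Synthetic ln(N_J / g_J) generated from an assumed rotation temperature.
T_rot_true = 300.0  # K, assumed for this example
ln_N_over_g = 30.0 - E_u / T_rot_true

# Rotation diagram: the slope of the linear fit gives -1/T_rot.
slope, intercept = np.polyfit(E_u, ln_N_over_g, 1)
T_rot = -1.0 / slope
print(f"fitted T_rot = {T_rot:.1f} K")  # recovers ~300 K for this synthetic data
```

Fitting real PACS fluxes additionally requires converting each line flux to an upper-state column density before taking the logarithm; the linear-fit step itself is as above.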
---
abstract: 'The dissipative dynamics of a Josephson junction in Bose-gases is considered within the framework of the tunneling Hamiltonian model. The effective action which describes the dynamics of the phase difference across the junction is derived using the functional integration method. The dynamic equation obtained for the phase difference across the junction is analyzed at finite temperatures in the low frequency limit, including the radiation terms. The asymmetric case of Bose-gases with different order parameters is treated as well.'
author:
- 'R.A. Barankov,$^{1}$ S.N. Burmistrov$^{2}$'
bibliography:
- 'joseph.bib'
title: 'Dissipative Dynamics of a Josephson Junction in the Bose-Gases'
---

Introduction
============

The experimental realization of Bose-Einstein condensation in atomic vapors [@Anderson95; @Davis95; @Mewes96; @Bradley97] has made it possible to observe a great variety of macroscopic quantum effects. In particular, considerable interest has arisen in the study of the Josephson effect in Bose-condensed gases as one of the intriguing possibilities for exploring macroscopic quantum effects related directly to broken symmetry in quantum systems. The dynamics of the Josephson effect is governed by the difference between the phases of the condensates, which plays the role of a macroscopic quantum variable. The theoretical treatment of the Josephson effect includes both the internal effect for atoms of a gas in different hyperfine states and the case of Bose-condensates spatially separated by a potential barrier which acts as a tunneling junction. The latter case, due to its direct analogy with superconductors, seems more attractive to us. A lot of work has already been done in this direction. In [@Dalfovo96] the behavior of the condensate density near the potential boundary has been discussed and the quasiclassical expression for the current through a potential barrier has been obtained.
The articles [@Jack96; @Steel98] are devoted to the applicability of the two-mode approximation in the Josephson junction dynamics. Milburn [*et al*]{} in [@Milburn97] have shown the existence of the self-trapping effect as well as the collapse and revival sequence in the relative population. In [@Smerzi97; @Raghavan99; @Smerzi00] the nonlinear Josephson dynamics and macroscopic fluctuations have been considered, resulting in the optimum conditions [@Williams01] for observing the Josephson oscillations. Zapata [*et al*]{} [@Zapata98] have presented a semiclassical description of the Josephson junction dynamics. A time-dependent variational analysis of the Josephson effect is given in [@Lin00]. One of the most interesting and important aspects of the Josephson junction dynamics, from both the theoretical and the experimental viewpoints, is the dephasing of the Josephson oscillations due to coupling between the macroscopic relative phase variable and the infinite number of microscopic degrees of freedom [@Villain99; @Meier01]. Historically, in the case of superconducting systems such a description of the phase dynamics was developed in the mid-1980s [@Amb82; @Larkin83; @Eckern84]. The most important result was a systematic derivation of the effective action for the relative phase, revealing the key role of the microscopic degrees of freedom in the irreversible dynamics of superconducting Josephson junctions. From the mathematical point of view, the response functions in the effective action, which prove to be nonlocal in time, give full information on the dynamics of a junction. The employment of the low frequency expansion for the response functions allows one to obtain the dissipative dynamics of a superconducting junction, involving the Josephson energy, the renormalization of the junction capacity (inverse effective mass), and the resistance (effective friction) of the junction.
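The superconducting-junction picture just recalled can be summarized schematically: after the low frequency expansion of the response functions, the effective action yields an equation of motion for the phase of the familiar resistively-shunted-junction form. The symbols below are generic placeholders illustrating this structure, not the specific kernels derived in this paper:

```latex
% Schematic low-frequency phase equation (generic placeholders):
%   M    - effective mass (renormalized junction capacity)
%   \eta - effective friction (dissipation)
%   E_J  - Josephson coupling energy
M\,\ddot{\varphi}(t) + \eta\,\dot{\varphi}(t) + E_J \sin\varphi(t) = 0 .
```

For the Bose junction considered below, the gapless sound-like excitation spectrum modifies the dissipative and radiation terms of this picture qualitatively.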
For the system of two Bose-condensates connected by a weakly coupled junction, it is very desirable to trace and explore the dynamics of the relative phase, generalizing the method of deriving the effective action from the superconducting case to the case of Bose-condensed systems. As we will show in the next sections, the gapless sound-like spectrum of low energy excitations in the Bose-condensed gases results in a qualitative change of the irreversible phase dynamics compared with that of superconducting junctions. So, the main aim of the paper is to derive the effective action for the point-like Bose junction within the framework of the functional integration method in order to find explicit expressions for the response functions and analyze the low frequency dynamics of a Bose junction. The plan of the article is as follows. First, we derive the general expression for the effective action depending only on the relative phase for the system of two Bose-condensates connected by a point-like junction. Then we consider the case of zero temperature. As a next step, we investigate the effect of finite temperature on the phase dynamics. In addition, from the low frequency expansion of the response functions we find the Josephson energy, the renormalization of the effective mass, the friction coefficient, and the radiation corrections. The latter can be interpreted as sound emission from the region of a Bose junction. Finally, we present the case of an asymmetric junction in the Appendix and summarize the results in the Conclusion.

Effective action
================

First, it may be useful to make some remarks on the geometry of the Bose junction and condensates. We keep in mind the case of a point-like or weakly coupled junction formed by a large potential barrier between two macroscopic infinite reservoirs containing Bose-condensates.
Thus, we can neglect the feedback effect of the junction on the Bose-condensates and assume that both condensates always remain in thermal equilibrium, with a constant density depending on temperature alone. The traditional picture of such a system is two bulks with one common point through which the transmission of particles is possible with some tunneling amplitude. So, our starting point is the so-called tunneling Hamiltonian ($\hbar=1$, volume $V=1$) $$\label{ham} H=H_{l}+H_{r}+H_{u}+H_{t},$$ where $H_{l,r}$ describes the bulk Bose-gas on the left-hand and right-hand sides, respectively, $$H_{l,r}=\int d^{3}r~\Psi_{l,r}^{+}\left( -\frac{\Delta}{2m}-\mu +\frac{u_{l,r}}{2}\Psi_{l,r}^{+}\Psi_{l,r}\right) \Psi_{l,r}.$$ The coupling constant is $u_{l,r}=4 \pi a_{l,r}/m$, where, as usual, $a_{l,r}$ is the scattering length. The energy $$H_{u}=\frac{U}{2}\left( \frac{N_{l}-N_{r}}{2}\right) ^{2},$$ is analogous to the capacity energy of a junction in the case of superconductors. The constant $U$ can be associated with the second derivative of the total energy $E=E(N_l, N_r)$ with respect to the relative change in the number of particles across the junction $$U= \left(\frac{\partial ^{2} }{\partial N_l^2 }+\frac{\partial ^{2}}{\partial N_r^2 }\right)E,$$ and is usually estimated as $U=(\partial\mu _{l}/\partial N_{l} +\partial\mu _{r}/\partial N_{r})$ [@Zapata98]. In general, it may depend on the specific type of Bose junction and simply reflects the fact that the energy of the system as a whole may depend on the relative number of particles in each bulk. The total number of particles in each bulk is given by $$N_{l,r}=\int d^{3}r~\Psi_{l,r}^{+}\Psi_{l,r}.$$ The term $$H_{t}=-\int\limits_{r\in l,r^{\prime}\in r}d^{3}rd^{3}r^{\prime}~\left[\Psi_{l}^{+}\left( {\bf r}\right) I\left( {\bf r,r^{\prime}}\right)\Psi_{r}\left({\bf r^{\prime}}\right) +h.c.
\right]$$ is responsible for the transitions of particles from the right-hand to the left-hand bulk and [*vice versa*]{}. To study the properties of the system described by (\[ham\]), we calculate the partition function, by analogy with the superconducting junction, $$Z=\int {\cal D}^2 \Psi _{l}{\cal D}^2 \Psi _{r}~\exp \left[ -S_E\right],$$ where the action in Matsubara (imaginary) time reads $$\begin{array}{c} S_E=\int\limits_{-\beta /2}^{\beta /2}d\tau ~L_E,\\ L_E=\int d^{3}r\left\{ \Psi _{l}^{+}~\frac{\partial }{\partial \tau }~\Psi _{l}+\Psi _{r}^{+}~\frac{\partial }{\partial \tau }~\Psi _{r}\right\} + H. \end{array}$$ To eliminate the quartic term in the action which comes from the “capacity" energy $H_{u}$, we use the Hubbard-Stratonovich procedure, introducing an additional gauge field $V(\tau)$ in analogy with the so-called plasmon gauge field in metals $$\begin{array}{c} \exp \left[ -\frac{U}{2}\int d\tau ~\left( \frac{N_{l}-N_{r}}{2}\right) ^{2} \right] = \int {\cal D} V\exp \left\{ -\int d\tau ~\left[ \frac{V^{2}\left( \tau \right) }{2U}+i\frac{N_{l}-N_{r}}{2}V\left( \tau \right) \right] \right\},\\ \int {\cal D}V\exp \left[ -\int d\tau ~\frac{V^{2}\left( \tau \right) }{2U}\right]=1. \end{array}$$ Next, we follow the Bogoliubov method of separating the field operators into the condensate and non-condensate fractions, i.e., $\Psi_{l,r}=c_{l,r}+\Phi_{l,r}$.
Denoting $x=(\tau,{\bf r})$ and introducing convenient Nambu spinor notations for the field operators and, correspondingly, matrices for the Green functions and tunneling amplitudes, we arrive at the following expression for the partition function $$Z=\int {\cal D}V{\cal D}^2C\exp \left[ -S_{0}\right] \int {\cal D}^2\Phi \exp\left[ -S_{\Phi }\right].$$ Thus, we split the initial action into two parts, which correspond to the condensate and noncondensate fields, respectively, $$\label{action} \begin{array}{l} S_{0}=\int d\tau \left( \begin{array}{l} c_{l}^{+}\left( \frac{\partial }{\partial \tau }-\mu +i\frac{V\left( \tau \right) }{2}\right) c_{l}+\frac{u_{l,r}}{2} c_{l}^{+}c_{l}^{+}c_{l}c_{l}+(l\rightarrow r, V\rightarrow -V)\\ -I_{0}\left( c_{l}^{+}c_{r}+c_{r}^{+}c_{l}\right) +\frac{V^{2}\left( \tau \right) }{2U} \end{array} \right),\\ \\ S_{\Phi }=\int dxdx^{\prime }\left\{ \Phi ^{+}\left( \widetilde{G}^{\left( 0\right) -1}-\widetilde{I}\right) \Phi -C^{+}\widetilde{I}\Phi -\Phi ^{+}\widetilde{I}C\right\}. \end{array}$$ For the field operators we used the spinor notations $$\label{notation} \Phi =\frac{1}{\sqrt{2}}\left( \begin{array}{c} \Xi _{l} \\ \Xi _{r} \end{array} \right) , \,\, \Xi _{l,r}=\left( \begin{array}{c} \Phi _{l,r} \\ \Phi _{l,r}^{+} \end{array} \right), \,\, C=\frac{1}{\sqrt{2}}\left( \begin{array}{c} C_{l} \\ C_{r} \end{array} \right), \,\, C_{l,r}=\left( \begin{array}{c} c_{l,r} \\ c_{l,r}^{+} \end{array} \right).$$ In the expression for the condensate part $S_{0}$ we define the amplitude $$I_0 = \int d^3 r\, d^3 r^{\prime}\, I ({\bf r}, {\bf r^{\prime}}),$$ which corresponds to the tunneling of particles from condensate to condensate.
It is straightforward to obtain the following expressions for the matrix Green functions $$\widetilde{G}^{\left( 0\right) -1}=\left( \begin{array}{cc} \widehat{G}_{l}^{\left( 0\right) -1} & 0 \\ 0 & \widehat{G}_{r}^{\left( 0\right) -1} \end{array} \right) \delta \left( x-x^{\prime }\right),$$ $$\widehat{G}_{l,r}^{\left( 0\right) -1}=\left( \begin{array}{cc} G_{l,r}^{\left( 0\right) -1}+\Sigma _{11}^{l,r} & \Sigma _{20}^{l,r} \\ \Sigma _{02}^{l,r} & \overline{G}_{l,r}^{\left( 0\right) -1}+\overline{\Sigma } _{11}^{l,r} \end{array} \right),$$ where the inverse Green functions and self-energy parts are given by the well-known expressions $$\label{Green} \begin{array}{l} G_{l,r}^{\left( 0\right) -1}=\frac{\partial }{\partial \tau }-\frac{\Delta }{ 2m}-\mu \pm i\frac{V\left( \tau \right) }{2},\\ \\ \Sigma _{11}^{l,r}=2u_{l,r}\, c_{l,r}^{+}c_{l,r}, \,\, \Sigma _{20}^{l,r}=u_{l,r}\, c_{l,r}c_{l,r}, \,\, \Sigma _{02}^{l,r}=u_{l,r} \,c_{l,r}^{+}c_{l,r}^{+}. \end{array}$$ Accordingly, the matrix Green function $\widehat{G}^{(0)}_{l,r}$ can be represented as $$\label{GF} \widehat{G}^{(0)}_{l,r} =\left( \begin{array}{cc} G_{l,r} & F_{l,r} \\ F^{+}_{l,r} & \overline{G}_{l,r} \end{array} \right).$$ The transfer matrix here has the form $$\widetilde{I} =\left( \begin{array}{cc} 0 & \widehat{I} \\ \widehat{I}^{\ast } & 0 \end{array} \right),\,\,\, \widehat{I}=\left( \begin{array}{cc} I & 0 \\ 0 & \overline{I} \end{array} \right),\,\,\, I=I\left(x,x^{\prime }\right) =I\left( {\bf r},{\bf r^{\prime }}\right) \delta \left( \tau -\tau ^{\prime }\right).$$ As one can readily see, if we employ a gauge transformation of the field operators $$\Psi_{l,r} \rightarrow \exp \left[i\varphi _{l,r}\left( \tau \right)\right] \Psi_{l,r},$$ and impose the conditions $\dot{\varphi} _{l}=-V/2$, $\dot{\varphi} _{r}=V/2$, i.e., $$\dot{\varphi}=V, \,\,\, \varphi =\varphi _{r} -\varphi_{l}, \label{fidot}$$ both normal $G_{l,r}$ and anomalous Green functions $F_{l,r}$ (\[GF\]) gain additional phase 
factors with respect to the functions in the absence of an external field (notations of [@AGD]). $$\begin{array}{l} G_{l,r}\left( \tau ,\tau ^{\prime }\right) \rightarrow \exp \left( i [\varphi _{l,r}\left( \tau \right) -\varphi _{l,r}\left( \tau ^{\prime }\right)] \right) G_{l,r}\left( \tau -\tau ^{\prime }\right),\\ \\ F_{l,r}\left( \tau ,\tau ^{\prime }\right) \rightarrow \exp \left( i [\varphi _{l,r}\left( \tau \right) +\varphi _{l,r}\left( \tau^{\prime } \right) ]\right) F_{l,r}\left( \tau -\tau ^{\prime }\right). \end{array}$$ The part of the action $S_\Phi$ in (\[action\]) is quadratic in the non-condensate field operators, so we can integrate them out. To perform the integration, we employ the well-known formula $$\int {\cal D}^2 \Phi \exp \left[ -\Phi ^{+}\alpha \Phi +\beta ^{+}\Phi +\Phi ^{+}\beta \right] =\exp \left[ \beta ^{+}\alpha ^{-1}\beta -\mathop{\rm Tr} \left[ \ln \left( \alpha \right) \right] \right],$$ with $\alpha = \widetilde{G}^{\left( 0\right) -1}-\widetilde{I}$ and $\beta =\widetilde{I} C$ to arrive at the partition function $$Z=\int {\cal D}\varphi{\cal D}^2C\exp \left[ -S\right]$$ with the effective action given by $$\label{preciseaction} S=S_0-{\rm Tr} \left[C^{+}\widetilde{I}\left(\widetilde{G}^{\left( 0\right) -1}-\widetilde{I} \right)^{-1}\widetilde{I} C\right] +\mathop{\rm Tr} \left[ \ln \left( \widetilde{G}^{\left( 0\right) -1}-\widetilde{I} \right) \right].$$ To proceed analytically, it is necessary to make the following approximations. First, we expand the second and third terms of (\[preciseaction\]) in powers of the tunneling amplitude $I$ to the first nonvanishing order. Then, as stated above, we consider the simplest case of a point-like junction, putting $I\left( x,x^{\prime }\right) =I_{0}\delta \left( {\bf r}\right) \delta \left( {\bf r^{\prime }}\right) \delta \left( \tau -\tau ^{\prime }\right)$.
The latter also allows us to avoid the problem of summing all higher-order terms in the tunneling amplitude $I$, which is inherent in a junction of planar geometry [@Meier01; @Babich01] with conservation of the tangential components of the momentum of a tunneling particle. The problem in essence becomes one-dimensional [@Babich01; @Gesh01] and results in a strongly dissipative low-frequency dynamics independent of the tunneling amplitude and governed by the bulk relaxation alone. In our consideration this would correspond to an amplitude independent of the $x$ and $y$ coordinates, i.e., $I\propto \delta (z)\delta (z^{\prime})$. Finally, the third approximation we use is a saddle-point approximation for the condensate part of the partition function. Substituting $c_{l,r}=\sqrt{n_{0 l,r}}$, where $n_{0 l,r}$ is the density of particles in the condensate fraction, we obtain the expression for the partition function depending on the phase difference alone $$Z_{\varphi}=\int {\cal D} \varphi ~\exp \left( -S_{eff}\left[ \varphi \right] \right).$$ The corresponding effective action reads $$\label{effaction} \begin{array}{l} S_{eff}\left[ \varphi \right] =\int d\tau ~\left[ \frac{1}{2U}\left( \frac{d\,\varphi }{d\,\tau }\right) ^{2}-2I_{0}\sqrt{n_{0 l}n_{0 r}}\cos \varphi \right] - \\ I_{0}^{2}\int d\tau d\tau ^{\prime }~\left\{ \begin{array}{l} \alpha \left( \tau -\tau ^{\prime }\right) \cos \left[ \varphi \left( \tau \right) -\varphi \left( \tau ^{\prime }\right) \right] \\ +\beta \left( \tau -\tau ^{\prime }\right) \cos \left[ \varphi \left( \tau \right) +\varphi \left( \tau ^{\prime }\right) \right] \end{array} \right\}.
\end{array}$$ Here the response functions can be written using the Green functions $$\label{response} \begin{array}{l} \alpha \left( \tau \right) =n_{0r}g^{+}_l\left( \tau \right)+ n_{0 l}g^{+}_r\left( \tau \right) +G\left(\tau\right),\\ \\ \beta \left( \tau \right) =n_{0 r}f_l\left( \tau \right) +n_{0 l}f_r\left( \tau \right) +F\left(\tau\right), \end{array}$$ where $$\begin{array}{l} g^{\pm}_{l,r}\left( \tau \right) =\int \frac{d^{3}p}{2\left( 2\pi \right) ^{3}} \left[G_{l,r}\left( p,\tau \right)\pm G_{l,r}\left( p,-\tau \right)\right],\, f_{l,r}\left( \tau \right) =\int \frac{d^{3}p}{\left( 2\pi \right) ^{3}} F_{l,r}\left( p,\tau \right),\\ \\ G\left(\tau\right)=2\left[g^{+}_l\left( \tau \right)g^{+}_r\left( \tau \right) -g^{-}_l\left( \tau \right) g^{-}_r\left( \tau \right)\right],\, F\left(\tau\right)=2f_l\left(\tau\right) f_r\left(\tau \right). \end{array}$$ The Fourier components of the Green functions of a weakly interacting Bose gas are given by [@AGD] $$\label{bosegreen} \begin{array}{l} G_{l,r}\left( p,\omega _{n}\right) =\frac{i\omega _{n}+\xi _{p}+\Delta_{l,r}}{\omega _{n}^{2}+\varepsilon_{l,r} ^{2}\left( p\right) },\, F_{l,r}\left( p,\omega _{n}\right) =-\frac{\Delta_{l,r} }{\omega _{n}^{2}+\varepsilon_{l,r} ^{2}\left( p\right) },\\ \\ \varepsilon_{l,r} ^{2}\left( p\right) =\xi _{p}^{2}+2\Delta_{l,r} \xi _{p},\quad \xi _{p}=\frac{ p^{2}}{2m}\,\, , \quad \Delta _{l,r}=u_{l,r}n_{0 l,r}. \end{array}$$ Thus, in order to understand the dynamics of the relative phase difference $\varphi$ across the junction, one should analyze the behavior of the response functions $\alpha$ and $\beta$ as functions of time. The response functions. ======================= The calculation of the response functions in general form is a rather complicated problem. However, since we are primarily interested in the low frequency dynamics of a junction, we can restrict our calculations to the behavior of the response functions on the long-time scale.
This means that we should find the low frequency decomposition of the response functions in the Matsubara frequencies. Next, we will use the procedure of analytical continuation in order to derive the dynamic equation obeyed by the relative phase $\varphi$. Fourier transformation of the $\alpha$-response. ------------------------------------------------ From the analysis of the behavior of the Green functions $g^{\pm}_{l,r}(\tau )$ it follows that the zero Fourier component of the $\alpha$-response function diverges. The simplest way to avoid this obstacle is to deal with the difference $\tilde\alpha\left( \omega_{n} \right) =\alpha \left( \omega_{n} \right)-\alpha (0)$. This corresponds to the substitution $\alpha(\tau)=\tilde\alpha(\tau)+\alpha(0)\delta(\tau)$ into the effective action (\[effaction\]), where the second term $\alpha (0)\delta (\tau )$ yields a physically unimportant time-independent contribution to the action, amounting to a shift of the ground state energy of a junction. Following this procedure, we arrive at explicit expressions for the Fourier components of all the terms in (\[response\]). For the first two terms in the $\alpha$-response function of (\[response\]), we have the simple formula $$\tilde g^{+}\left( \omega _{n}\right) =-\frac{\pi \nu }{\sqrt{2} } \left[ \sqrt{1+\left|\frac{\omega _{n}}{\Delta} \right|}-1\right],$$ and the corresponding expansion to third order in $\omega /\Delta $ is $$\tilde g^{+}\left( \omega _{n}\to 0\right) \approx -\frac{\pi \nu }{2\sqrt{2}}\left|\frac{\omega _{n} }{\Delta }\right|\left[ 1-\frac{1}{4}\left|\frac{\omega _{n}}{\Delta }\right|+\frac{1}{8}\left| \frac{\omega _{n}}{\Delta }\right|^{2}-\ldots \right].$$ Here, $\nu =m\sqrt{ m\Delta}/(\sqrt{2}\pi ^{2})$ is the density of states at energy $\Delta$ in the normal gas. In the case of $\tilde G$ the calculation of the Fourier components is much more complicated.
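The closed form of $\tilde g^{+}$ and its quoted expansion are easy to cross-check numerically. A minimal sketch (with $\nu=\Delta=1$, an illustrative normalization; function names are ours) confirming that the residual is of fourth order in $\omega_n/\Delta$:

```python
import math

# Check that g~+(omega_n) = -(pi*nu/sqrt(2)) [sqrt(1 + |omega_n/Delta|) - 1]
# agrees with its quoted third-order expansion; nu = Delta = 1 for illustration.
PREF = math.pi / math.sqrt(2.0)

def g_exact(x):
    """Closed form, x = |omega_n| / Delta."""
    return -PREF * (math.sqrt(1.0 + x) - 1.0)

def g_series(x):
    """Expansion to third order in x."""
    return -0.5 * PREF * x * (1.0 - x / 4.0 + x * x / 8.0)

for x in (0.05, 0.1):
    print(x, g_exact(x) - g_series(x))   # residuals shrink like x**4
```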
So, we could find explicit expressions only at zero temperature and to the first nonvanishing order in temperature. Note that the nonzero temperature effects are connected with the behavior of $\tilde G(\omega)=\tilde G_0(\omega)+\tilde G_T(\omega)$. For the zero-temperature part of $\tilde G(\omega)$, we obtain $$\label{gzero} \begin{array}{l} \tilde G_0\left( \omega\right) =-\nu _{l}\nu _{r}\Delta _{l}\Delta _{r}\omega ^{2}\int\limits_{0}^{\infty }\int\limits_{0}^{\infty }\frac{dxdy\;\sqrt{ (\sqrt{x^{2}+1}-1)(\sqrt{y^{2}+1}-1)}}{\left[ \left( \Delta _{l}x+\Delta _{r}y\right) ^{2}+\omega ^{2}\right] \left( \Delta _{l}x+\Delta _{r}y\right) }\\ \left( 1-\frac{xy}{\sqrt{\left(x^{2}+1\right)\left(y^{2}+1\right)}}\right). \end{array}$$ Unfortunately, we could not evaluate expression (\[gzero\]) in explicit analytic form. Thus, we report here the Fourier expansion up to third order in $\omega$ $$\tilde G_0\left( \omega\to 0\right) =-\frac{\pi\nu _{l}\nu _{r}\omega ^{2}}{8\sqrt{\Delta_l\Delta_r}}\left[\phi\left(\frac{\Delta_l-\Delta_r}{\Delta_l + \Delta_r}\right)- \frac{1}{3}\frac{|\omega|}{\sqrt{\Delta_l\Delta_r}}+\dots\right].$$ Here we introduced the function $$\label{dilog} \begin{array}{r} \phi(q)=\frac{2\pi}{3}+\frac{1}{q^2}\left(1-3 q^2-\sqrt{1-q^2}\right)-\frac{2}{\pi}\ln^2\left[\sqrt{\frac{1+q}{1-q}}\right]-\\ \frac{2}{\pi}\left( \begin{array}{l} {\rm Li}_2\left[-\sqrt{\frac {1+q}{1-q}}\right] +{\rm Li}_2\left[-\sqrt{\frac {1-q}{1+q}}\right]+\\ {\rm Li}_2\left[1-\sqrt{\frac {1+q}{1-q}}\right]+{\rm Li}_2\left[1-\sqrt{\frac {1-q}{1+q}}\right] \end{array} \right), \end{array}$$ where ${\rm Li}_2(z)=\int^{0}_z d\,t \ln(1-t)/t$ is the dilogarithm function.
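Since Eq. (\[dilog\]) is easy to mis-transcribe, a direct numerical evaluation is a useful cross-check. The sketch below (a pure-Python dilogarithm via Simpson quadrature; the implementation is ours, not from the text) reproduces the small-$q$ value $\phi(0^+)=\pi-5/2$:

```python
import math

def li2(z, n=20000):
    """Dilogarithm Li2(z) = -int_0^z ln(1-t)/t dt via Simpson's rule (z < 1)."""
    def g(t):
        # removable singularity: ln(1-t)/t -> -1 as t -> 0
        return -1.0 if abs(t) < 1e-12 else math.log(1.0 - t) / t
    h = z / n
    s = g(0.0) + g(z)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(k * h)
    return -s * h / 3.0

def phi(q):
    """phi(q) of Eq. (dilog), assembled term by term."""
    u = math.sqrt((1.0 + q) / (1.0 - q))
    dilogs = li2(-u) + li2(-1.0 / u) + li2(1.0 - u) + li2(1.0 - 1.0 / u)
    return (2.0 * math.pi / 3.0
            + (1.0 - 3.0 * q * q - math.sqrt(1.0 - q * q)) / (q * q)
            - (2.0 / math.pi) * math.log(u) ** 2
            - (2.0 / math.pi) * dilogs)

print(phi(0.05), math.pi - 2.5)   # nearly equal for small q
```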
The Taylor expansion of $\phi(q)$ in the case $|q|\ll 1$ reads: $$\phi(q)\approx\pi-\frac{5}{2}+\frac{q^2}{8}+\ldots$$ In the case of equal order parameters $\Delta=\Delta_l=\Delta_r$ we obtain: $$\tilde G_0\left(\omega \to 0\right) \approx -\frac{\pi \nu ^{2} \Delta}{8 }\left|\frac{\omega}{\Delta}\right|^{2} \left[ \pi -\frac{5}{2}-\frac{1}{3}\left|\frac{\omega }{\Delta }\right|+\ldots \right].$$ The other, finite-temperature contribution to $\tilde G$ can readily be evaluated as $$\tilde G_T\left( \omega \right) =\frac{\pi \nu _{l}\nu _{r}\sqrt{\Delta _{l}\Delta _{r}}}{24}\left( \frac{2\pi T}{\sqrt{\Delta _{l}\Delta _{r}}}\right) ^{2}\left[ \sqrt{\frac{\Delta _{l}}{\Delta _{r}}}+\sqrt{\frac{\Delta _{r}}{ \Delta _{l}}}-\sqrt{\frac{\Delta _{l}+|\omega _{n}|}{\Delta _{r}}}-\sqrt{ \frac{\Delta _{r}+|\omega _{n}|}{\Delta _{l}}}\right]+\dots,$$ which in the case of $\Delta=\Delta_l=\Delta_r$ gives: $$\label{gnonzero} \tilde G_T\left( \omega \right) =\frac{\pi \nu ^2\Delta}{12}\left( \frac{2\pi T}{\Delta }\right) ^{2}\left[ 1-\sqrt{1+\left|\frac{\omega_n}{\Delta}\right|}\right]+\dots.$$ Expanding this expression to third order in $\omega _n /\Delta$ and including the zero temperature terms yields $$\label{lzero} \tilde G\left(\omega _{n}\to 0\right) \approx - \frac{\pi \nu^2 \Delta }{24}\left|\frac{\omega _{n}}{\Delta}\right|\left[ \begin{array}{l} \left(\frac{2\pi T}{\Delta }\right) ^{2}+\left|\frac{\omega _{n}}{\Delta}\right|\left[3\pi -\frac{15}{2}-\frac{1}{4}\left(\frac{2\pi T}{\Delta }\right) ^{2}\right]-\\ \left|\frac{\omega_{n}}{\Delta}\right|^2 \left[1-\frac{1}{8}\left(\frac{2\pi T}{\Delta }\right) ^{2}\right]-\ldots \end{array} \right].$$ It is worthwhile to emphasize the appearance of a linear term in $|\omega_{n}|$ in Eq. (\[lzero\]) at finite temperatures. As we will see below, this results in an additional temperature-dependent contribution to the dissipation of the junction.
The effect can be interpreted as tunneling of the normal thermal excitations present in the system at finite temperature. Fourier transformation of the $\beta$-response. ----------------------------------------------- As in the preceding section, we can evaluate the Fourier components and find the low frequency expansion for the anomalous Green functions $f(\tau)$. For the first two terms in the $\beta$-response function (\[response\]), we obtain $$f\left( \omega _{n}\right) =-\frac{\pi \nu }{\sqrt{2}\sqrt{ 1+|\omega _{n}/\Delta |}} \mathrel{\mathop{\approx }\limits_{\omega _{n} \to 0}} -\frac{\pi \nu }{\sqrt{2} }\left( 1-\frac{1}{2}\left|\frac{\omega _{n}}{ \Delta }\right|+\frac{3}{8}\left|\frac{\omega _{n}}{\Delta }\right|^{2}-\frac{5}{16}\left|\frac{ \omega _{n}}{\Delta }\right|^{3}+\ldots \right).$$ The calculation of $F(\omega_n)$ requires special attention. In fact, analyzing the formula for the Fourier transform $$F\left( \omega _{n}\right) =\frac{\pi ^{2}\nu _{l}\nu _{r}\sqrt{\Delta _{l}\Delta _{r}}}{\beta}\sum\limits_{k=-\infty}^{\infty }\frac{1}{ \sqrt{\left(\Delta _{l}+|\omega _{k}|\right)\left(\Delta _{r}+|\omega _{k}-\omega _{n}|\right)}},$$ we see that the zero-temperature expression $$F\left( \omega \right) =\frac{\pi \nu _{l}\nu _{r}\sqrt{\Delta _{l}\Delta _{r}}}{2 }\int\limits_{-\infty }^{\infty }d\omega ^{\prime }\frac{1}{\sqrt{\left(\Delta _{l}+|\omega ^{\prime }|\right)\left(\Delta _{r}+|\omega ^{\prime }-\omega |\right)}},$$ diverges logarithmically at $\omega=0$ due to the behavior of the integrand at large frequencies $\omega^{\prime}\to\infty$. To perform the integration over $\omega^{\prime}$, we first note that so far we could neglect the momentum dependence of the self-energy parts, since all the integrals are dominated by the region of small momenta.
In fact, the self-energy parts do depend on the momentum, and we must use the exact expressions $\Sigma _{20}\left(p\right) =\Sigma_{02}\left( p\right) =n_{0}U\left( p\right) $, where $U\left( p\right) $ is the Fourier component of the interaction between particles, with $\Sigma _{20}\left(0\right)=n_{0}U\left( 0\right) = 4\pi a n_{0}/m$. Here $a$ is the scattering length, $n_{0}$ is the particle density, and $m$ is the mass of a particle. The order of magnitude of the momentum at which $U\left( p\right)$ decays can be estimated roughly as the reciprocal of the scattering length, i.e., $p\simeq 1/a$. So, within logarithmic accuracy, we put the upper limit of integration equal to the cutoff frequency $\omega_{c}\simeq 1/ma^{2}$. Finally, representing $F(\omega _n)$ as a sum of zero-temperature and finite-temperature contributions, $F(\omega_n)=F_0(\omega_n)+F_T(\omega_n)$, we find the zero-temperature term $$F_0\left( \omega _{n}\right) =\frac{\pi \nu _{l}\nu _{r}\sqrt{\Delta _{l}\Delta _{r}}}{2}\left( \begin{array}{c} 2\ln \left[ \frac{4\omega _{c}}{\left( \sqrt{\Delta _{l}}+\sqrt{\Delta _{r}} \right) ^{2}}\right] -2\ln \left[ \frac{\left( \sqrt{\Delta _{l}}+\sqrt{\Delta _{r}+|\omega _{n}|}\right) \left( \sqrt{\Delta _{r}}+\sqrt{\Delta _{l}+|\omega _{n}|}\right) }{\left( \sqrt{\Delta _{l}}+\sqrt{\Delta _{r}} \right) ^{2}}\right] + \\ +\arctan \frac{|\omega _{n}|+\Delta _{l}-\Delta _{r}}{2\sqrt{\left( |\omega _{n}|+\Delta _{l}\right) \Delta _{r}}}+\arctan \frac{|\omega _{n}|+\Delta _{r}-\Delta _{l}}{2\sqrt{\left( |\omega _{n}|+\Delta _{r}\right) \Delta _{l}}} \end{array} \right)$$ and the finite-temperature one, respectively, $$F_T\left( \omega _{n}\right) =\frac{\pi \nu _{l}\nu _{r}\sqrt{\Delta _{l}\Delta _{r}}}{24}\left( \frac{2\pi T}{\sqrt{\Delta _{l}\Delta _{r}}}\right) ^{2}\left( \begin{array}{l} \frac{\Delta _{l}}{\sqrt{\Delta _{r}\left( \Delta _{l}+|\omega _{n}|\right) }}+\frac{\Delta _{r}}{\sqrt{\Delta _{l}\left( \Delta _{r}+|\omega _{n}|\right) }}\\
-\frac{\Delta _{l}+\Delta _{r}}{2\sqrt{\Delta _{l}\Delta _{r}}} \end{array} \right)+\dots.$$ In the case of $\Delta_l=\Delta_r=\Delta$ these functions take the form: $$F_0\left( \omega _{n}\right) =\pi \nu^2\Delta\left( 2\ln \left[ \frac{2\sqrt{\omega _{c}/\Delta}}{1+\sqrt{1+|\omega_n/\Delta|}}\right] +\arctan \frac{|\omega _{n}/\Delta|}{2\sqrt{1+|\omega _{n}/\Delta|}} \right),$$ $$F_T\left( \omega _{n}\right) =\frac{\pi\nu^2\Delta}{12}\left( \frac{2\pi T}{\Delta}\right) ^{2}\left( \frac{1}{\sqrt{1+|\omega _{n}/\Delta| }}-\frac{1}{2}\right)+\dots.$$ Expanding these expressions to third order in $\omega _n /\Delta$ yields $$\label{fzero} F\left(\omega _{n}\to 0\right) \approx \frac{\pi \nu^2\Delta}{24} \left( \begin{array}{l} 24\ln \left[ \frac{\omega _{c}}{\Delta} \right] +\left( \frac{2\pi T}{\Delta}\right) ^{2}-\left|\frac{\omega _{n}}{\Delta }\right|\left( \frac{2\pi T}{\Delta }\right) ^{2}- \\ \left|\frac{\omega _n}{\Delta }\right|^2\left[ \frac{3}{2}-\frac{3 }{4}\left( \frac{2\pi T}{\Delta } \right) ^{2}\right] + \\ \left|\frac{\omega _{n}}{\Delta}\right|^3 \left[ 1-\frac{5 }{8}\left( \frac{2\pi T}{\Delta}\right) ^{2}\right] -\dots \end{array} \right).$$ Here, as in the case of the $\alpha$-response, we emphasize the appearance of a linear term in $|\omega_{n}|$ in Eq. (\[fzero\]) at finite temperatures, which contributes to the dissipation and can be interpreted as tunneling of the normal thermal excitations present in the system at finite temperatures. Functional series of the response functions. -------------------------------------------- The Fourier components of the response functions, as a series in $|\omega_n|$ up to third order, can be written in the form $$\label{abresp} \begin{array}{l} \tilde\alpha \left( \omega_{n} \right) =-\alpha_{1}|\omega_{n}|+\alpha_{2}|\omega_{n}|^{2}-\alpha_{3} |\omega_{n}|^{3}+\dots,\\ \\ \beta \left( \omega_{n} \right) =-\beta_{0}+\beta_{1}|\omega_{n}|-\beta_{2}|\omega_{n}|^{2}+ \beta_{3}|\omega_{n}|^{3}+\dots.
\end{array}$$ Accordingly, the expressions for the response functions in the imaginary time representation read as $$\begin{array}{l} \tilde\alpha (\tau) =\alpha_{1}\frac{1}{\pi}\left(\frac{\pi T}{\sin(\pi T \tau)}\right)^{2}-\alpha_{2}\delta^{\prime\prime}(\tau)-\alpha_{3}\frac{2 (\pi T)^{4}}{\pi}\left[\frac{3}{\sin^{4}(\pi T \tau)}-\frac{2}{\sin^{2}(\pi T \tau)}\right]+\dots,\\ \\ \beta (\tau) =-\beta_{0}\delta(\tau)-\beta_{1}\frac{1}{\pi}\left(\frac{\pi T}{\sin(\pi T \tau)}\right)^{2}+\beta_{2}\delta^{\prime\prime}(\tau)+ \beta_{3}\frac{2 (\pi T)^{4}}{\pi}\left[\frac{3}{\sin^{4}(\pi T \tau)}-\frac{2}{\sin^{2}(\pi T \tau)}\right]+\dots. \end{array}$$ For the sake of brevity, we present expressions for $\alpha_i(T)$ and $\beta_i(T)$ in the symmetric case of $\Delta_l=\Delta_r=\Delta$. The general case of $\Delta_l\ne\Delta_r$ will be considered in the Appendix. $$\begin{array}{l} \alpha_{1}=\frac{\gamma}{2}\left\{1+\frac{1}{3}\sqrt{\frac{na^{3}}{\pi}} \left (\frac{2\pi T}{\Delta}\right)^{2}\right\},\\ \alpha_{2}=\frac{\gamma}{8 \Delta}\left\{1-4\sqrt{\frac{na^{3}}{\pi}}\left[\pi-\frac{5}{2}-\frac{1}{12} \left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\},\\ \alpha_{3}=\frac{\gamma}{16 \Delta^{2}}\left\{1-\frac{8}{3}\sqrt{\frac{na^{3}}{\pi}}\left[1-\frac{1}{8} \left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\},\\ \beta_{0}=\gamma\Delta\left\{1-4\sqrt{\frac{na^{3}}{\pi}}\left[\ln(\frac{1}{ na^{3}})+\frac{1}{24}\left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\},\\ \beta_{1}=\frac{\gamma}{2}\left\{1-\frac{1}{3}\sqrt{\frac{na^{3}}{\pi}}\left (\frac{2\pi T}{\Delta}\right)^{2}\right\},\\ \beta_{2}=\frac{3\gamma}{8 \Delta}\left\{1+\frac{2}{3}\sqrt{\frac{na^{3}}{\pi}}\left[1-\frac{1}{2} \left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\},\\ \beta_{3}=\frac{5\gamma}{16 \Delta^{2}}\left\{1+\frac{8}{15}\sqrt{\frac{na^{3}}{\pi}}\left[1-\frac{5}{8} \left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\}. 
\end{array}$$ Here $T/\Delta\ll 1$, $\gamma=\sqrt{2}\pi\nu n_0/\Delta$, $n$ is the total density of the gas, $n_0$ is the condensate fraction density, and $a$ is the scattering length. Note that the gas parameter $n a^3\ll 1$ naturally enters the equations. First of all, from the Fourier expansions of $g(\omega_n)$, $f(\omega_n)$, $G(\omega_n)$, and $F(\omega_n)$ we conclude that the dissipation in a point-like junction, due to the presence of the linear $|\omega_n|$ term, can be associated with various physical processes. Thus, in the case of the $g$ and $f$ contributions the dissipation can be ascribed to the noncondensate-condensate particle tunneling process and exists down to zero temperature [@Meier01]. On the other hand, inspection of the expressions for $G$ and $F$ shows that these terms, producing no contribution to the dissipation at zero temperature, are responsible for the explicit $T^2$-behavior of the dissipative effects in the junction. The origin of this finite temperature contribution can be attributed to the tunneling of thermal phonon-like excitations across the junction. The meaning of the other dynamical renormalizations and their temperature behavior will be discussed below. Josephson equation. =================== To consider the dynamic behavior of the relative phase $\varphi$ in real time, we now follow the standard procedure of analytical continuation. Accordingly, the substitution $|\omega_n|\rightarrow -i\omega$ in the Fourier transform of the imaginary-time Euler-Lagrange equation $\delta S_{eff}/\delta\varphi (\tau) =0$, followed by the inverse Fourier transformation to the real-time representation, yields the classical equation of motion for the relative phase. In particular, this means that we should replace $|\omega_n|$ with $-i\omega$ in the above expressions for the Fourier transforms of the response functions $\alpha$ and $\beta$.
The effective action $S_{eff}[\varphi (t)]$ in real time, whose variation $\delta S_{eff}/\delta\varphi (t) =0$ yields the real-time equation of motion, is given by the expression $$\begin{array}{l} S_{eff}\left[ \varphi \right] =\int d\,t ~\left[ \frac{1}{2U}\left( \frac{d\,\varphi }{d\,t }\right) ^{2}+2I_{0}\sqrt{n_{0 l}n_{0 r}}\cos \varphi \right] - \\ I_{0}^{2}\int d\,t d\,t ^{\prime }~\left\{ \begin{array}{l} \tilde\alpha \left( t -t ^{\prime }\right) \cos \left[ \varphi \left( t \right) -\varphi \left( t ^{\prime }\right) \right] \\ +\beta \left( t -t ^{\prime }\right) \cos \left[ \varphi \left( t \right) +\varphi \left( t ^{\prime }\right) \right] \end{array} \right\}. \end{array}$$ In the limit of a slowly varying phase, the response functions in the real-time representation can be represented in the form of a functional series. According to Eqs. (\[abresp\]), we have $$\label{dec} \begin{array}{l} \tilde\alpha (t) =\alpha_{1}\delta^{\prime}(t)-\alpha_{2}\delta^{\prime\prime}(t)+\alpha_{ 3}\delta^{\prime\prime\prime}(t)+\dots,\\ \\ \beta (t) =\beta_{0}\delta(t)-\beta_{1}\delta^{\prime}(t)+\beta_{2} \delta^{\prime\prime}(t)-\beta_{3}\delta^{\prime\prime\prime}(t) +\cdots. \end{array}$$ Applying the variational principle to the effective action $S_{eff}[\varphi (t)]$ and using decomposition (\[dec\]), we can derive the Josephson equations, valid for slow variations of the phase provided the typical time of its evolution is longer than $1/\Delta$: $$\label{equation} \stackrel{\cdots}\varphi G_{3}(\varphi)+(3/2)\ddot{\varphi}\dot{\varphi} G_{3}^{\prime}(\varphi) - \dot{\varphi}^{3}G_{3}(\varphi) + \ddot{\varphi}G_{2}(\varphi) +(1/2)\dot{\varphi}^{2}G_{2}^{\prime}(\varphi )+ \dot{\varphi}G(\varphi) +U^{\prime}(\varphi) =0,$$ and, in accordance with Eq. (\[fidot\]), $$\dot{\varphi} =-\delta\mu (t)=\mu_1-\mu_2.$$ Here we have retained the time derivatives of $\varphi (t)$ up to third order, corresponding to radiation corrections.
In the following, for the sake of brevity, we will consider the case $\Delta_l=\Delta_r=\Delta$. The general expressions for the coefficients for $\Delta_l\ne\Delta_r$ are given in the Appendix. The potential energy of a junction is given by the well-known relation $$U(\varphi)=-E_{J}\cos\varphi + (1/2)E_{2J}\cos 2\varphi,$$ with the coefficients $$\begin{aligned} E_{J}=2 n_0 I_0,\nonumber\\ E_{2J}=G_0\Delta\left\{1-4\sqrt{\frac{na^3}{\pi}}\left[\ln\frac{1}{na^3} +\frac{1}{24}\left(\frac{2 \pi T}{\Delta}\right) ^2\right]\right\},\end{aligned}$$ where $$\begin{aligned} \label{conduct} G_0=2\gamma I_0^2=16\pi \sqrt{\frac{na^3}{\pi}}(n_0 I_0/\Delta)^2,\nonumber\\ \Delta=n_0 u.\end{aligned}$$ In the above relations the condensate density $n_0=n_0(T)$ depends explicitly on temperature according to the well-known expression for the depletion of the condensate fraction of a weakly interacting Bose gas [@Fetter]. An increase of temperature obviously leads to a decrease of the Josephson energy. The friction coefficient $G(\varphi)$, determining the Ohmic dissipation, is given by $$\label{friction} G(\varphi)=G_{0}\left[\cos^{2}\varphi+\frac{1}{3} \sqrt{\frac{n a^{3}}{\pi}}\left(\frac{2\pi T}{\Delta}\right)^{2}\sin^{2}\varphi\right].$$ It is natural that nonzero temperature enhances the energy dissipation of a junction, due to the appearance of thermal excitations in the Bose-condensed gas.
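If one keeps only the leading terms of the Josephson equation (\[equation\]) — a constant inertia $G_2\to M$, a constant friction $G\to\eta$, and $U'(\varphi)\approx E_J\sin\varphi$ — the phase obeys a damped-pendulum equation. A minimal numerical sketch (RK4 integration; the dimensionless parameter values are illustrative, not taken from the text):

```python
import math

# Damped-pendulum truncation of the Josephson equation:
#   M * phi'' + eta * phi' + E_J * sin(phi) = 0
M, ETA, E_J = 1.0, 0.2, 1.0   # hypothetical, dimensionless values

def deriv(phi, v):
    return v, -(ETA * v + E_J * math.sin(phi)) / M

def rk4_step(phi, v, dt):
    k1p, k1v = deriv(phi, v)
    k2p, k2v = deriv(phi + 0.5 * dt * k1p, v + 0.5 * dt * k1v)
    k3p, k3v = deriv(phi + 0.5 * dt * k2p, v + 0.5 * dt * k2v)
    k4p, k4v = deriv(phi + dt * k3p, v + dt * k3v)
    return (phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

phi, v = 1.0, 0.0             # initial phase difference, zero initial rate
for _ in range(20000):        # integrate to t = 20
    phi, v = rk4_step(phi, v, 1e-3)
print(phi, v)                 # the oscillation has largely damped out
```

Restoring the $\varphi$-dependence of $G(\varphi)$ and the third-derivative radiation terms of Eq. (\[equation\]) is a straightforward extension of this sketch.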
We may compare $G_0$ of the Bose gas to the analogous conductance of a normal Fermi gas with the same density of states: $$G_N=4\pi I_0^2\nu^2(\Delta),\,\, G_N=8\sqrt{\frac{n a^3}{\pi}}G_0.$$ The inverse effective mass of a junction is determined by $$\begin{aligned} G_{2}(\varphi)= U^{-1} -\alpha _2 - \beta _2 \cos 2\varphi, \\ \alpha _2 =\frac{G_0}{8\Delta}\left\{1-4\sqrt{\frac{na^3}{\pi}}\left[ \pi -\frac{5}{2} -\frac{1}{12}\left(\frac{2\pi T}{\Delta }\right) ^2\right]\right\}, \nonumber \\ \beta _2 = \frac{3G_0}{8\Delta}\left\{ 1+\frac{2}{3}\sqrt{\frac{na^3}{\pi}} \left[ 1-\frac{1}{2}\left(\frac{2 \pi T}{\Delta}\right) ^2\right]\right\}. \nonumber \\\end{aligned}$$ The renormalization of the effective mass results from both the condensate-noncondensate and the noncondensate-noncondensate particle tunneling processes. The coefficient $G_3(\varphi)$, responsible for the radiation effects, reads $$\begin{aligned} G_3(\varphi)=\alpha _3 +\beta _3\cos 2\varphi, \nonumber\\ \alpha_{3}=\frac{G_{0}}{16\Delta^{2}} \left\{1-\frac{8}{3}\sqrt{\frac{na^{3}}{\pi}}\left[1-\frac{1}{8} \left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\},\\ \beta_{3}=\frac{5G_{0}}{16 \Delta^{2}}\left\{1+\frac{8}{15}\sqrt{\frac{na^{3}}{\pi}}\left[1- \frac{5}{8}\left(\frac{2\pi T}{\Delta}\right)^{2}\right]\right\}. \nonumber\end{aligned}$$ These effects can be associated with the emission of sound from the region of the junction during the tunneling of particles across it. On the other hand, the radiation effects can be treated as a frequency dispersion of the effective friction coefficient. We will not enter here into the details of the conditions that should be imposed on the coefficients of the Josephson equation (\[equation\]) in order to observe the well-defined Josephson effect, since this topic is already widely discussed in the literature.
In essence, the necessary condition reduces to the requirement that either quantum zero-point or thermal fluctuations of the phase difference across a junction be small, i.e., that the mean square value $\langle (\Delta\varphi )^2\rangle \ll 1$. Given that $\langle (\Delta\varphi )^2\rangle \sim T/E_{J}$ in the thermal activation region, with a crossover at $T_0\sim E_{J}/\max \{ G, \sqrt{G_{2}E_{J}} \}$ to $\langle (\Delta\varphi )^2 \rangle \sim\hbar /\max\{ G, \sqrt{G{_2}E_{J}} \}$ in the quantum fluctuation regime at lower temperatures, it is desirable to have a sufficiently large Josephson energy $E_J$ or, correspondingly, not too large potential barriers. Another interesting aspect of this kind of experiment is the investigation of effects beyond the mean-field approximation of a very dilute gas $na^3 \ll 1$, in particular, the observation of temperature effects in the dynamics of a junction. As we have seen, the scale of the temperature effects should reach the order of $\sqrt{na^3}$ at $2\pi T\sim\Delta \ll T_c$. This is of much interest since the temperature behavior of the Josephson dynamics is closely related to the properties of elementary excitations in a condensed Bose gas. The gas parameter $na^3$ under typically realized conditions is at most of order 10$^{-4}$, e.g., $n\sim 10^{15}$cm$^{-3}$ and $a=5$nm for $^{87}$Rb. In principle, it is possible to approach $na^3 \sim 1$ by increasing $a\rightarrow \infty$ via appropriate tuning of the magnetic field through a Feshbach resonance. However, too large values of the scattering length can facilitate rapid three-body recombination. On the other hand, according to recent work [@Cornish00], such recombination may not be inevitable as $na^3 \sim 10^{-2}$ is approached. For such values of $na^3$, the effects beyond the mean-field approximation become noticeable, at the level of about 10%. 
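The order-of-magnitude estimates quoted above can be checked directly. A minimal sketch, using the $^{87}$Rb numbers from the text:

```python
import math

# Gas parameter for the quoted 87Rb values: n ~ 1e15 cm^-3, a = 5 nm.
n = 1e15               # density, cm^-3
a = 5e-7               # scattering length in cm (5 nm)

na3 = n * a ** 3
print(f"na^3 = {na3:.2e}")     # ~1.3e-4, i.e. of order 1e-4 as stated

# The beyond-mean-field / temperature corrections enter through sqrt(na^3/pi)
print(f"sqrt(na^3/pi) = {math.sqrt(na3 / math.pi):.1e}")

# Feshbach-enhanced regime discussed in the text: the correction grows
# to the several-percent level as na^3 approaches 1e-2
print(f"at na^3 = 1e-2: sqrt(na^3/pi) = {math.sqrt(1e-2 / math.pi):.2f}")
```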
The second interesting aspect is connected with the analog of the voltage-current characteristic, an important attribute of a superconducting junction. This kind of experiment implies maintaining a constant bias $\delta\mu$ between the chemical potentials across a junction, corresponding to a constant pressure drop $\delta P = mn\delta\mu$ and a time dependence of the relative phase $\varphi (t) = - \delta\mu t +\varphi _0$. One possibility is to use the Earth’s gravitational field for this purpose. As a result, we arrive at the following mean value of the particle current across a Bose junction, averaged over a sufficiently long period of time, as a function of the bias $\delta\mu$ $$\label{current} \langle I \rangle = \frac{\delta\mu}{\hbar} \langle G(\varphi )\rangle \left[ 1- \left( \frac{\delta\mu}{\hbar}\right)^2 \frac{\langle G_3 (\varphi )\rangle}{\langle G(\varphi )\rangle }\right],$$ where $$\langle G(\varphi )\rangle = \frac{1}{2} G_0 \left[ 1+\frac{1}{3} \sqrt{\frac{na^3}{\pi}}\left(\frac{2\pi T}{\Delta }\right) ^2\right],$$ and $\langle G_3 (\varphi )\rangle =\alpha _3$. This experiment may give information on the dissipative properties of a junction and also on the nonlinear effects, which are of order $(\delta\mu /\Delta )^3$. Note that the nonlinear effects do not contain the small factor $(na^3 )^{1/2}$. Conclusion ========== To summarize, in this paper we have used a functional integration approach for the model of a tunneling Hamiltonian in order to analyze the dynamics of a point-like Josephson junction between two weakly non-ideal Bose gases. The effective action and response functions which completely describe the dynamics of a junction are found. Using the low frequency decomposition of the response functions, the quasiclassical Josephson equation obeyed by the time evolution of the phase difference $\varphi$ across the junction is obtained up to terms of third order in time derivatives. 
The corresponding kinetic coefficients are calculated analytically, including the finite-temperature corrections. The temperature corrections to the kinetic coefficients display a $T^2$ behavior. As for a junction of planar geometry [@Meier01; @Babich01; @Gesh01], the dynamics of the point-like junction considered here is of Ohmic dissipative type due to the gapless character of excitations in a Bose-condensed gas. Note only that the scale of the dissipative effect is significantly smaller in the present case. The dependence of the dissipative particle current on the temperature and on the difference of the chemical potentials in Eq. (\[current\]) can be used to investigate the role of quasiparticle excitations in the dynamics of a Bose junction. Since the dissipation is closely related to the tunneling process of condensate particles and depends on the structure of a junction, the effect of dissipation in the junction dynamics can be reduced provided the structure of a junction prevents condensate particles from tunneling across the junction. We believe this question deserves further study. Acknowledgement =============== We wish to thank V.S. Babichenko and Yu. Kagan for discussions. One of us (S.B.) is grateful to the “Statistical Physics Program” and the Russian Foundation for Basic Research for support. Appendix ======== Here we present the general case of $\Delta_l\ne\Delta_r$. To derive expressions for the coefficients in the equation for the phase (\[equation\]), we first find the Taylor expansion of the response functions $\alpha$ and $\beta$. 
Finally, for the coefficients in the functional series (\[abresp\]) we arrive at the expressions: $$\begin{array}{l} \alpha_{1}=\frac{\gamma}{2}\left\{b^{(2)}_{lr}+\frac{1}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4} \left (\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right\},\\ \alpha_{2}=\frac{\gamma}{8 \sqrt{\Delta_l\Delta_r}}\left\{b^{(3)}_{lr}-4\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\phi_{lr} - \frac{\Delta_l+\Delta_r} {24\sqrt{\Delta_l\Delta_r}} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\},\\ \alpha_{3}=\frac{\gamma}{16 \Delta_l\Delta_r}\left\{b^{(4)}_{lr}-\frac{8}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[1 - \frac{\Delta_l^2+\Delta_r^2} {16\Delta_l\Delta_r} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\},\\ \beta_0=\sqrt{\Delta_l\Delta_r}\gamma\left\{b^{(1)}_{lr}-4\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\ln \left[ \frac{4\omega _{c}}{\left( \sqrt{\Delta _{l}}+\sqrt{\Delta _{r}} \right) ^{2}}\right] +\frac{\Delta _{l}+\Delta _{r}}{48\sqrt{\Delta _{l}\Delta _{r}}}\left( \frac{2\pi T}{\sqrt{\Delta _{l}\Delta _{r}}}\right) ^{2}\right]\right\},\\ \beta_{1}=\frac{\gamma}{2}\left\{b^{(2)}_{lr}-\frac{1}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4} \left (\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right\},\\ \beta_{2}=\frac{3\gamma}{8 \sqrt{\Delta_l\Delta_r}}\left\{b^{(3)}_{lr}+\frac{2}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\frac{4\sqrt{\Delta_l\Delta_r}}{\left(\sqrt{\Delta_l} +\sqrt{\Delta_r} \right )^2 } - \frac{\Delta_l+\Delta_r} {4\sqrt{\Delta_l\Delta_r}} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\},\\ \beta_{3}=\frac{5\gamma}{16 \Delta_l\Delta_r}\left\{b^{(4)}_{lr}+\frac{8}{15}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[1 - \frac{5\left(\Delta_l^2+\Delta_r^2\right)} {16\Delta_l\Delta_r} \left(\frac{2\pi 
T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\}.\\ \end{array}$$ Here we use the following notation $$\begin{aligned} \phi_{lr}=\phi\left(\frac{\Delta_l-\Delta_r}{\Delta_l+\Delta_r}\right),\, \gamma=\pi\sqrt{\frac{2\nu_l\nu_r n_{0l} n_{0r}}{\Delta_l \Delta_r}},\\ b^{(n)}_{lr}=\frac{1}{2}\left[\left(\frac{\Delta_r}{\Delta_l}\right)^{n/2}\left( \frac{ n _ l a_l^3}{n_r a_r^3}\right)^{1/4}+\left(\frac{\Delta_l}{\Delta_r}\right)^{n/2}\left(\frac{n_r a_r^3}{n_l a_l^3}\right)^{1/4}\right],\nonumber\end{aligned}$$ where $\phi$ is introduced in (\[dilog\]). Then, for the potential energy of a junction we have $$\begin{array}{l} E_J=2 I_0\sqrt{n_{0l} n_{0r}},\\ E_{2J}=G_0\sqrt{\Delta_l\Delta_r}\left\{b^{(1)}_{lr}-4\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\ln \left[ \frac{4\omega _{c}}{\left( \sqrt{\Delta _{l}}+\sqrt{\Delta _{r}} \right) ^{2}}\right] +\frac{\Delta _{l}+\Delta _{r}}{48\sqrt{\Delta _{l}\Delta _{r}}}\left( \frac{2\pi T}{\sqrt{\Delta _{l}\Delta _{r}}}\right) ^{2}\right]\right\}. \end{array}$$ $G_0$ is given by $$\begin{aligned} G_0=2\gamma I_0^2=16\pi\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left(I_0\sqrt{\frac{n_{0l} n_{0r}}{\Delta_l \Delta_r}}\right)^2,\\ \Delta_{l,r}=n_{0 l,r} u_{l,r}.\nonumber\end{aligned}$$ In the above relations the condensate densities $n_{0l,r}=n_{0l,r}(T)$ depend on temperature according to the well-known expression for the depletion of the condensate fraction of the weakly interacting Bose gas [@Fetter]. The friction coefficient $G(\varphi)$ determining the Ohmic dissipation is represented by $$G(\varphi)=G_{0}\left[b^{(2)}_{lr}\cos^{2}\varphi+\frac{1}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4} \left (\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\sin^{2}\varphi\right].$$ It is natural that nonzero temperature enhances the energy dissipation of a junction due to the appearance of thermal excitations in the Bose-condensed gas. 
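As a consistency check (a numeric sketch with arbitrary illustrative parameter values), one can verify that in the symmetric limit $\Delta_l=\Delta_r$, $n_l a_l^3=n_r a_r^3$ the general friction coefficient above reduces to Eq. (\[friction\]) of the main text, since $b^{(n)}_{lr}\to 1$ and $(n_l a_l^3 n_r a_r^3/\pi^2)^{1/4}\to\sqrt{na^3/\pi}$:

```python
import math

# Check numerically that the asymmetric G(phi) reduces to the symmetric
# Eq. (friction) when Delta_l = Delta_r and n_l a_l^3 = n_r a_r^3.
# Parameter values are arbitrary illustration choices.
na3 = 1.25e-4          # common gas parameter na^3 (assumed)
x = 0.7                # 2*pi*T/sqrt(Delta_l*Delta_r) (assumed)

def b(n, dl=1.0, dr=1.0, gl=na3, gr=na3):
    """b^{(n)}_{lr}; equals 1 in the symmetric limit."""
    return 0.5 * ((dr / dl) ** (n / 2) * (gl / gr) ** 0.25
                  + (dl / dr) ** (n / 2) * (gr / gl) ** 0.25)

def G_general(phi):
    quarter = (na3 * na3 / math.pi ** 2) ** 0.25   # -> sqrt(na3/pi)
    return (b(2) * math.cos(phi) ** 2
            + (1.0 / 3.0) * quarter * x ** 2 * math.sin(phi) ** 2)

def G_symmetric(phi):
    return (math.cos(phi) ** 2
            + (1.0 / 3.0) * math.sqrt(na3 / math.pi) * x ** 2 * math.sin(phi) ** 2)

for phi in (0.0, 0.4, 1.1, math.pi / 2):
    assert math.isclose(G_general(phi), G_symmetric(phi), rel_tol=1e-12)
print("symmetric limit recovered")
```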
Comparing $G_0$ for the Bose gases with the conductance of normal Fermi gases with the same densities of states, we have $$G_N=4\pi I_0^2\nu_l\nu_r,\,G_N=8\left(\frac{n_l a_l^3 n_r a_r^3}{\pi^2}\right)^{1/4}G_0.$$ The inverse effective mass of a junction is determined by $$\begin{aligned} G_{2}(\varphi)= U^{-1} -\alpha _2 - \beta _2 \cos 2\varphi, \\ \alpha _2 =\frac{G_0}{8\sqrt{\Delta_l\Delta_r}}\left\{b^{(3)}_{lr}-4\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\phi_{lr} - \frac{\Delta_l+\Delta_r} {24\sqrt{\Delta_l\Delta_r}} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\}, \nonumber \\ \beta _2 = \frac{3G_0}{8\sqrt{\Delta_l\Delta_r}} \left\{b^{(3)}_{lr}+\frac{2}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[\frac{4\sqrt{\Delta_l\Delta_r}}{\left(\sqrt{\Delta_l} +\sqrt{\Delta_r} \right )^2 } - \frac{\Delta_l+\Delta_r} {4\sqrt{\Delta_l\Delta_r}} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\}. \nonumber \\ \end{aligned}$$ The renormalization of the effective mass results from both the condensate-noncondensate and noncondensate-noncondensate particle tunneling processes. The coefficient $G_3(\varphi)$ responsible for the radiation effects reads $$\begin{aligned} G_3(\varphi)=\alpha _3 +\beta _3\cos 2\varphi, \nonumber\\ \alpha_{3}=\frac{G_{0}}{16\Delta_l\Delta_r} \left\{b^{(4)}_{lr}-\frac{8}{3}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[1 - \frac{\Delta_l^2+\Delta_r^2} {16\Delta_l\Delta_r} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\}, \\ \beta_{3}=\frac{5 G_{0}}{16\Delta_l\Delta_r}\left\{b^{(4)}_{lr}+\frac{8}{15}\left(\frac{n_l a_l^{3}n_r a_r^ { 3 } }{\pi^2}\right)^{1/4}\left[1 - \frac{5\left(\Delta_l^2+\Delta_r^2\right)} {16\Delta_l\Delta_r} \left(\frac{2\pi T}{\sqrt{\Delta_l\Delta_r}}\right)^{2}\right]\right\}. 
\nonumber\end{aligned}$$ The expression derived for the dynamical coefficients in equation (\[equation\]) governing the phase difference across a point-like junction allows us to describe the low frequency dynamics in the asymmetric case of Bose gases with the different order parameters. [23]{} M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, E.A. Cornell, Science [**269**]{}, 198 (1995) K.B. Davis, M.-O. Mewes, M.R. Andrews, N.J. van Druten, D.S. Durfee,D.M. Kurn, W. Ketterle, Phys.Rev.Lett. [**75**]{}, 3969 (1995) M.O. Mewes, M.R. Andrews, N.J. van Druten, D.M. Kurn, D.S. Durfee, W. Ketterle, Phys.Rev.Lett. [**77**]{}, 416 (1996) C.C. Bradley, C.A. Sackett, R.G. Hulet, Phys.Rev.Lett. [ **78**]{},985 (1997) F. Dalfovo, L. Pitaevskii, S. Stringari, Phys.Rev.A [ **54**]{}, 4213 (1996) M.W. Jack, M.J. Collett, and D.F. Walls, Phys.Rev.A [**54**]{}, R4625 (1996) M.J. Steel, M.J. Collett, Phys.Rev.A [**57**]{}, 2920 (1998) G.J. Milburn, J. Corney, E.M. Wright, and D.F. Walls, Phys.Rev.A [**55**]{}, 4318(1997) A. Smerzi, S. Fantoni, S. Giovanazzi, S.R. Shenoy, Phys.Rev.Lett. [**79**]{},4950 (1997) S. Raghavan, A. Smerzi, S. Fantoni, S.R. Shenoy, Phys.Rev.A [**59**]{}, 620(1999) A. Smerzi, S. Raghavan, Phys.Rev.A [**61**]{}, 063601 (2000) J.E. Williams, Phys.Rev.A [**64**]{}, 013610 (2001) I. Zapata, F. Sols, A.J. Leggett, Phys.Rev.A [**57**]{}, R28(1998) Chi-Yong Lin, E.J.V. de Passos, Da-Shin Lee, Phys.Rev.A [ **62**]{}, 055603(2000) H.P. Büchler, V.B. Geshkenbein, G. Blatter, Phys.Rev.Lett. [**87**]{}, 100403 (2001) P. Villain, M. Lewenstein, Phys.Rev.A [**59**]{}, 2250 (1999) F. Meier, W. Zwerger, Phys.Rev.A [**64**]{}, 033610 (2001) V. Ambegaokar, U. Eckern, G. Schön, Phys.Rev.Lett. [**48**]{}, 1745 (1982) A.I. Larkin, Yu.N. Ovchinnikov, Phys.Rev.B [**28**]{}, 6281 (1983) U. Eckern, G. Schön, V. Ambegaokar, Phys.Rev.B [**30**]{}, 6419 (1984) A.A. Abrikosov, L.P. Gorkov, I.E. 
Dzyaloshinski, Methods of Quantum Field Theory in Statistical Mechanics, Dover Publications Inc., New York, 1975 V.S. Babichenko, cond-mat/0109248. A. L. Fetter, J. D. Walecka, “Quantum theory of many-particle systems”, San Francisco, McGraw-Hill, 1971 S.L. Cornish, N.R. Claussen, J.L. Roberts, E.A. Cornell, C.E. Wieman, Phys.Rev.Lett. [**85**]{}, 1795 (2000)
--- abstract: 'We show that there exist smooth, simply connected, four-dimensional [*spin*]{} manifolds which do not admit Einstein metrics, but nevertheless satisfy the strict Hitchin-Thorpe inequality. Our construction makes use of the Bauer/Furuta cohomotopy refinement of the Seiberg-Witten invariant [@baufu; @bauer2], in conjunction with curvature estimates previously proved by the second author [@lric]. These methods also easily allow one to construct examples of topological $4$-manifolds which admit an Einstein metric for one smooth structure, but which have infinitely many other smooth structures for which no Einstein metric can exist.' author: - 'Masashi Ishida and Claude LeBrun[^1]' date: | July 14, 2001\ Revised October 28, 2001 title: | Spin Manifolds, Einstein Metrics,\ and Differential Topology --- Introduction {#sec:intro} ============ A smooth Riemannian metric $g$ is said to be [*Einstein*]{} if its Ricci curvature, considered as a function on the unit tangent bundle, is constant. As recently as the 1960’s, it could have seemed reasonable to hope that every smooth compact simply connected manifold might admit such a metric; and, indeed, it was apparently with this goal in mind that Yamabe [@yamabe] carried out his trail-blazing work on the total scalar curvature. In the late 1960’s, however, Thorpe [@tho] observed, as a parenthetical remark, that a compact oriented $4$-dimensional Einstein manifold $(M^{4},g)$ must satisfy the inequality $$2\chi (M) \geq 3|\tau (M)|, \label{eq:ht}$$ where $\chi$ denotes the Euler characteristic and $\tau$ denotes the signature. This inequality was rediscovered several years later by Hitchin [@hit], who went on to give examples of simply connected $4$-dimensional manifolds which violate inequality (\[eq:ht\]), and thus do not admit Einstein metrics. In light of this, (\[eq:ht\]) has come to be known as the [*Hitchin-Thorpe inequality*]{} [@bes]. 
Perhaps the deepest results in Hitchin’s paper concern the boundary case of inequality (\[eq:ht\]); for it was shown in [@hit] that a simply connected $4$-dimensional Einstein manifold with $2\chi = -3\tau$ must be Ricci-flat, Kähler, and diffeomorphic to the Kummer surface $K3$. In particular, this implies that most simply connected $4$-manifolds satisfying $2\chi = 3 |\tau|$ do not admit Einstein metrics; and this applies not only to examples of the ‘wrong’ homotopy type — e.g. $\CP_{2}\# 9 \overline{\CP}_{2}$ — but also [@poonmail] to an infinite class of manifolds [@frmo; @fsicm] now known to be homeomorphic but not diffeomorphic to $K3$; cf. [@kot]. Since Yau [@yau] showed that $K3$ actually admits Einstein metrics of the kind described in Hitchin’s result, this shows, in particular, that the existence of Einstein metrics on a $4$-manifold is a matter of diffeotype, and not just of homeotype. By contrast, for simply connected $4$-manifolds satisfying the so-called [*strict Hitchin-Thorpe inequality*]{} $$2\chi (M) > 3|\tau (M)|, \label{eq:strict}$$ obstructions to the existence of Einstein metrics remained unknown until 1995, when the second author [@lno] used scalar curvature estimates derived from the Seiberg-Witten equations to display an infinite class of $4$-manifolds which do not admit Einstein metrics, but nonetheless satisfy (\[eq:strict\]). However, the examples given there were non-minimal complex surfaces of general type, and thus, in particular, were all [*non-spin*]{}. Even though these results have been subsequently improved [@lric] via the introduction of Weyl-curvature terms into the estimates, the non-spin character of the examples seemed to reflect an essential feature of the construction. This made it natural to wonder about the following: \[pair\] Are there simply connected $4$-dimensional [**spin**]{} manifolds which do not admit Einstein metrics, but which nonetheless satisfy the strict Hitchin-Thorpe inequality (\[eq:strict\])? 
In Theorem \[spin\] below, we will see that the answer is affirmative. Our construction depends crucially on a recent breakthrough in Seiberg-Witten theory, due to Furuta and Bauer [@baufu; @bauer2], used in conjunction with recent curvature estimates due to the second author [@lric]. Another curious defect of the examples constructed in [@lno; @lebweyl; @lric] is that, when oriented so that the signature is negative, all have the property that $b_{+}$ is odd. This merely reflects the fact that these examples are all almost-complex, and the integrality of the Todd genus implies that $b_{+}$ must be odd for any simply connected almost-complex $4$-manifold. However, the Einstein condition is independent of orientation, and these examples can certainly have $b_{-}$ of either parity. Might it not be reasonable to hope that this geographical curiosity was merely an artifact of the construction? \[geo\] Are there simply connected $4$-manifolds, with $\tau < 0$ and $b_{+}$ [**even**]{}, which do not admit Einstein metrics, but which nonetheless satisfy the strict Hitchin-Thorpe inequality (\[eq:strict\])? In Theorem \[molti\] below, we will see that the present methods allow one to construct such examples in considerable abundance. Finally, we return to the heavy dependence of these questions on the choice of smooth structure. As already noted, while $K3$ admits a smooth structure for which Einstein metrics exist, it also admits infinitely many smooth structures for which Einstein metrics [*do not*]{} exist. Is this typical? \[change\] Let $M$ be a compact simply connected topological $4$-manifold which admits at least one smooth structure. Are there always [**infinitely many**]{} distinct smooth structures on $M$ for which no compatible Einstein metric exists? 
While we are certainly not able to answer this question in full generality, we are, in Theorems \[spin\], \[molti\] and \[lots\], at least able to give an affirmative answer for infinitely many homeotypes satisfying (\[eq:strict\]). Interestingly, many of these examples also have the property that, like $K3$, they admit smooth structures for which compatible Einstein metrics [*do*]{} exist. For some earlier results and ruminations in this direction, see [@kot]. The infinite-fundamental-group analogue of Question \[change\] would still appear to be a topic worthy of investigation. By contrast, however, the infinite-fundamental-group analogues of Questions \[pair\] and \[geo\] are susceptible to the homotopy-theoretic volume estimates pioneered by Gromov, and affirmative answers [@grom; @samba] to the corresponding questions have therefore already been known for some time. Monopole Classes and Curvature {#sec:curv} ============================== If $M$ is a smooth oriented $4$-manifold, we can always find Hermitian line bundles $L\to M$ such that $c_{1}(L)\equiv w_{2}(TM) \bmod 2$. For any such $L$, and any Riemannian metric $g$ on $M$, one can then find rank-$2$ Hermitian vector bundles ${\mathbb V}_\pm$ which formally satisfy $${\mathbb V}_\pm= {\mathbb S}_{\pm}\otimes L^{1/2},$$ where ${\mathbb S}_{\pm}$ are the locally defined left- and right-handed spinor bundles of $(M,g)$. Such a choice of ${\mathbb V}_{\pm}$, up to isomorphism, is called a spin$^{c}$ structure $\frak c$ on $M$, and is determined, modulo the $2$-torsion subgroup of $H_{1}(M,\ZZ)$, by the first Chern class $c_{1}(L)= c_{1}({\mathbb V}_{\pm}) \in H^{2}(M,\ZZ )$ of the spin$^{c}$ structure. 
Because ${\mathbb S}_{+}$ is a quaternionic line bundle, there is a canonical anti-linear isomorphism $$\begin{aligned} {\mathbb V}_{+} & \longrightarrow & {\mathbb V}_{+}\otimes L^{*} \\ \Phi & \mapsto & \bar{\Phi}\end{aligned}$$ called ‘complex conjugation.’ More importantly, every unitary connection $A$ on $L$ induces a Dirac operator $$D_{A}: \Gamma ({\mathbb V}_{+})\to \Gamma ({\mathbb V}_{-}).$$ If $A$ is such a connection, and if $\Phi$ is a section of ${\mathbb V}_{+}$, the pair $(\Phi , A)$ is said to satisfy the [*Seiberg-Witten equations*]{} [@witten] if $$\begin{aligned} D_{A}\Phi &=&0\label{drc}\\ F_{A}^+&=&-\frac{1}{2}\Phi \odot \bar{\Phi}.\label{sd}\end{aligned}$$ Here $F_{A}^+$ is the self-dual part of the curvature of $A$, and we have identified $\Lambda^{+}\otimes \CC$ with $[\odot^{2}{\mathbb V}_{+}]\otimes L^{*} = \odot^{2}{\mathbb S}_{+}$ in the canonical manner. For the $4$-manifolds of primary interest here, there turn out to be certain spin$^{c}$ structures for which there exists a solution of the Seiberg-Witten equations for each metric $g$. This situation is neatly codified by the following terminology [@K]: Let $M$ be a smooth compact oriented $4$-manifold with $b_{+}\geq 2$. An element $a\in H^{2}(M,\ZZ )/ \mbox{\rm torsion}$ will be called a [**monopole class**]{} of $M$ iff there exists a spin$^{c}$ structure $\mathfrak c$ on $M$ with first Chern class $$c_{1}(L)\equiv a ~~~\bmod \mbox{\rm torsion}$$ which has the property that the corresponding Seiberg-Witten equations (\[drc\]–\[sd\]) have a solution for every Riemannian metric $g$ on $M$. Because the Seiberg-Witten equations imply a uniform [*a priori*]{} bound on $|F_{A}^{+}|$ for any given metric $g$ on $M$, it follows [@witten] that the set $\mathfrak C$ of monopole classes of any $4$-manifold $M$ is necessarily finite. 
Also note that $a\in {\mathfrak C}$ $\Longleftrightarrow$ $(-a) \in {\mathfrak C}$, since complex conjugation sends solutions of the Seiberg-Witten equations to solutions of the Seiberg-Witten equations. Finally, notice that, for any $a,b \in {\mathfrak C}$, $a-b$ is automatically divisible by $2$ in the lattice $H^{2}(M,\ZZ )/ \mbox{\rm torsion}$, since the corresponding first Chern classes are both $\equiv w_{2} \bmod 2$. With these observations in mind, we will now introduce a crude but effective diffeomorphism invariant of $4$-manifolds. \[band\] Let $M$ be a smooth compact oriented $4$-manifold with $b_{+}(M)\geq 2$. Let ${\mathfrak C}\subset H^{2}(M, \ZZ ) /\mbox{\rm torsion}$ be the set of monopole classes of $M$. If ${\mathfrak C}$ contains a non-zero element (and hence at least two elements), we define the [**bandwidth**]{} of $M$ to be $$\BW (M ) ~= \max ~ \left\{ n\in \ZZ^{+}~|~ \exists ~a,b \in {\mathfrak C}, ~a\neq b, ~{\rm s.t.}~ 2n|(a-b) ~ \right\} .$$ If, on the other hand, ${\mathfrak C}\subset \{ 0\},$ we define the bandwidth $\BW (M )$ to be $0$. As was first pointed out by Witten [@witten], the existence of a monopole class implies an [*a priori*]{} lower bound on the $L^{2}$ norm of the scalar curvature of Riemannian metrics. More recently, the second author then discovered [@lebweyl; @lric] that (\[drc\]–\[sd\]) also imply a family of analogous estimates involving the self-dual Weyl curvature. For our present purposes, the most useful of these estimates is the following [@lric]: \[est\] Let $M$ be a smooth compact oriented $4$-manifold with monopole class $a\in H^{2}(M,\ZZ)/\mbox{\rm torsion}\subset H^{2}(M,\RR )$. 
Let $g$ be any Riemannian metric on $M$, and let $a^{+}\in H^{2}(M,\RR )$ be the self-dual part of $a$ with respect to the ‘Hodge’ decomposition $$H^{2}(M,\RR )= {\mathcal H}^{+}_{g}\oplus {\mathcal H}^{-}_{g}$$ of second de Rham cohomology, identified with the space of $g$-harmonic $2$-forms, into eigenspaces of the $\star$ operator. Then the scalar curvature $s$ and self-dual Weyl curvature $W_{+}$ of $g$ satisfy $$\frac{1}{4\pi^{2}}\int_{M}\left( \frac{s^{2}}{24} + 2|W_{+}|^{2}\right) d\mu \geq \frac{2}{3} (a^{+})^{2} ,$$ where $d\mu$ denotes the Riemannian volume form of $g$, and where the point-wise norms are calculated with respect to $g$. Moreover, if $a^{+}\neq 0$, and if $a$ is not the first Chern class of a symplectic structure on $M$, then the inequality is necessarily strict. The Bauer-Furuta Invariant {#sec:bauer} ========================== The usual Seiberg-Witten invariant [@taubes3; @ozsz] of a smooth compact oriented $4$-manifold with $b_{+}\geq 2$, equipped with some fixed spin$^{c}$ structure $\mathfrak c$, is obtained by considering the moduli space of solutions of (\[drc\]–\[sd\]), for a generic metric $g$, as a homology class in the configuration space $${\cal B}=\left( [\Gamma ({\mathbb V}_{+})-0]\times \{ \mbox{smooth unitary connections on }L ~\}\right) /{\mathcal G} ,$$ where ${\mathcal G}= \{ u: M\stackrel{C^{\infty}}{\longrightarrow} S^{1}\}$. However, one may instead think of this moduli space as defining a framed bordism class in the space ${\cal D} \subset {\cal B}$ of solutions of (\[drc\]). Pursuing this idea, Furuta and Bauer [@baufu; @bauer2] were independently able to define a refinement of the Seiberg-Witten invariant which takes values in a cohomotopy group; for example, if $b_{1}(M)=0$, the invariant takes values in $\pi^{b_{+}(M)-1}(\CP_{d-1}),$ where $d=[c_{1}^{2}(L)-\tau(M)]/8$. If this Bauer-Furuta stable cohomotopy invariant is non-zero, $c_{1}(L)$ is a monopole class. 
But, remarkably, this invariant is not killed off by the sort of connect sum operation that would eliminate the usual Seiberg-Witten invariant [@bauer2]: Let $X$, $Y$, and $Z$ be compact oriented $4$-manifolds with $b_{1}=0$ and $b_{+}\equiv 3\bmod 4$. Suppose that ${\mathfrak c}_{X}$, ${\mathfrak c}_{Y}$, and ${\mathfrak c}_{Z}$ are spin$^{c}$ structures of almost-complex type on $X$, $Y$, and $Z$ for which the mod-$2$ Seiberg-Witten invariant is non-zero. Then the induced spin$^{c}$ structures ${\mathfrak c}_{X\# Y}$ and ${\mathfrak c}_{X\# Y\# Z}$ on $X\# Y$ and $X\# Y \# Z$ have non-zero Bauer-Furuta stable cohomotopy invariant. Now, recall that a celebrated result of Taubes [@taubes] asserts that the mod-$2$ Seiberg-Witten invariant is non-zero for the canonical spin$^{c}$ structure of any symplectic $4$-manifold. Since the non-vanishing of the Bauer-Furuta invariant forces the existence of solutions of the Seiberg-Witten equations, we immediately obtain the following consequence: \[voila\] Let $X$, $Y$, and $Z$ be compact oriented simply connected symplectic $4$-manifolds with $b_{+}\equiv 3\bmod 4$. Then, with respect to the canonical isomorphisms $$\begin{aligned} H^{2}(X\# Y, \ZZ) & = & H^{2}(X, \ZZ)\oplus H^{2}(Y, \ZZ) \\ H^{2}(X\# Y\# Z, \ZZ) & = & H^{2}(X, \ZZ)\oplus H^{2}(Y, \ZZ) \oplus H^{2}(Z, \ZZ) , \end{aligned}$$ the cohomology classes $$\begin{aligned} \pm c_{1}(X) \pm c_{1}(Y) & \in & H^{2}(X, \ZZ)\oplus H^{2}(Y, \ZZ) \\ \pm c_{1}(X) \pm c_{1}(Y) \pm c_{1}(Z) & \in & H^{2}(X, \ZZ)\oplus H^{2}(Y, \ZZ) \oplus H^{2}(Z, \ZZ) \end{aligned}$$ are monopole classes on $X\# Y$ and $X\# Y\# Z$, respectively. Here the $\pm$ signs are arbitrary, and independent of one another. Obstructions to Einstein Metrics {#sec:nein} ================================ When combined with the ideas of [@lric], Corollary \[voila\] immediately implies a non-existence result for Einstein metrics. 
\[oui\] Let $X$, $Y$, and $Z$ be simply connected symplectic $4$-manifolds with $b_{+}\equiv 3 \bmod 4$. Then $X\# Y\# k\overline{\CP}_{2}$ does not admit Einstein metrics if $$k+4 \geq \frac{c_{1}^{2}(X) + c_{1}^{2}(Y)}{3}.$$ Similarly, $X\# Y \# Z \# k\overline{\CP}_{2}$ does not admit Einstein metrics if $$k+8 \geq \frac{c_{1}^{2}(X) + c_{1}^{2}(Y)+c_{1}^{2}(Z)}{3}.$$ We begin by making the identification $$\begin{aligned} H^{2}(X \# k\overline{\CP}_{2},\ZZ)& = & H^{2}(X, \ZZ)\oplus \bigoplus_{j=1}^{k} H^{2}(\overline{\CP}_{2}, \ZZ) \\ & = & H^{2}(X, \ZZ)\oplus \ZZ^{\oplus k} , \end{aligned}$$ in the process making a choice of generators $E_{j}$, $j=1,\ldots , k$, for the $k$ copies of $H^{2}(\overline{\CP}_{2},\ZZ) \cong \ZZ$. Now recall that there are self-diffeomorphisms of $X \# k\overline{\CP}_{2}$ which act trivially on $H^{2}(X, \ZZ)$, but for which $$E_{j}\mapsto \pm E_{j}$$ for any desired choice of signs. Thinking of $X \# k\overline{\CP}_{2}$ as the $k$-fold symplectic blow-up of $X$, and then moving the symplectic structure via these diffeomorphisms, we thus obtain $2^{k}$ distinct symplectic structures on $X \# k\overline{\CP}_{2}$, with first Chern classes $$c_{1}= c_{1}(X)+\sum_{j=1}^{k}(\pm E_{j})$$ for all possible choices of signs. Applying Corollary \[voila\] to $(X \# k\overline{\CP}_{2})\# Y$ and $(X \# k\overline{\CP}_{2})\# Y\# Z$, we thus conclude that all the classes of the form $\alpha+ \sum_{j=1}^{k} (\pm E_{j})$ are monopole classes, where $$\alpha = \left\{ \begin{array}{ll} c_{1}(X) + c_{1}(Y), & \mbox{ if } M = X\# Y\# k\overline{\CP}_{2}, \\ c_{1}(X) + c_{1}(Y)+ c_{1}(Z), & \mbox{ if } M = X\# Y\# Z\# k\overline{\CP}_{2}. 
\end{array} \right.$$ Now, given any particular metric $g$ on $M$, let us make a new choice $\hat{E}_{j}=\pm E_{j}$ of generators for our $k$ copies of $H^{2}(\overline{\CP}_{2},\ZZ)$ in such a way that $$\hat{E}_{j}\cdot \alpha^{+}\geq 0.$$ The resulting monopole class $$a= \alpha + \sum_{j=1}^{k}\hat{E}_{j}$$ then satisfies $$\begin{aligned} (a^{+})^{2} & = & \left(\alpha^{+}+ \sum \hat{E}_{j}^{+}\right)^{2} \\ & = & (\alpha^{+})^{2}+ 2\sum \alpha^{+} \cdot \hat{E}_{j}^{+} +\left(\sum \hat{E}_{j}^{+} \right)^{2} \\ & \geq & (\alpha^{+})^{2} \\ & \geq & \alpha^{2}, \end{aligned}$$ where the last step uses $\alpha^{2}=(\alpha^{+})^{2}+(\alpha^{-})^{2}\leq (\alpha^{+})^{2}$, the intersection form being negative definite on ${\mathcal H}^{-}_{g}$. Proposition \[est\] therefore tells us that any metric $g$ on $M$ satisfies $$\frac{1}{4\pi^{2}}\int_{M}\left( \frac{s^{2}}{24} + 2|W_{+}|^{2}\right) d\mu > \frac{2}{3} \alpha^{2} . \label{eq:crux}$$ (The inequality is strict because $a^{2}> (2\chi + 3\tau )(M)$, and this guarantees that $a$ is certainly not the first Chern class of a symplectic structure.) Now for any metric $g$ on our compact orientable $4$-manifold $M$, we have the Gauss-Bonnet type formula [@bes; @hit] $$(2\chi + 3\tau ) (M) = \frac{1}{4\pi^{2}}\int_{M}\left( \frac{s^{2}}{24} + 2|W_{+}|^{2}-\frac{|\stackrel{\circ}{r}|^{2}}{2}\right) d\mu,$$ where $\stackrel{\circ}{r}=r-\frac{s}{4}g$ is the traceless Ricci tensor. If $g$ is Einstein, $\stackrel{\circ}{r}=0$, and inequality (\[eq:crux\]) then becomes $$(2\chi + 3\tau ) (M) > \frac{2}{3} \alpha^{2}. 
\label{eq:here}$$ For $M= X\# Y\# k\overline{\CP}_{2}$, we have $$\alpha^{2}= c_{1}^{2}(X) + c_{1}^{2}(Y),$$ and $$(2\chi + 3\tau ) (M)= c_{1}^{2}(X) + c_{1}^{2}(Y)-4-k.$$ Inequality (\[eq:here\]) therefore asserts that a necessary condition for the existence of an Einstein metric on $M$ is that $$c_{1}^{2}(X) + c_{1}^{2}(Y)-4-k > \frac{2}{3} [c_{1}^{2}(X) + c_{1}^{2}(Y)],$$ or, in other words, that $$\frac{c_{1}^{2}(X) + c_{1}^{2}(Y)}{3} > k+ 4.$$ By contraposition, this shows that there [*cannot*]{} be an Einstein metric if $$k+4 \geq \frac{c_{1}^{2}(X) + c_{1}^{2}(Y)}{3} ,$$ exactly as claimed. For $M=X\# Y\# Z\# k\overline{\CP}_{2}$, we instead have $$\alpha^{2}= c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z),$$ and $$(2\chi + 3\tau ) (M)= c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z) -8-k,$$ so that inequality (\[eq:here\]) instead tells us that a necessary condition for the existence of an Einstein metric on $M$ is that $$c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z) -8-k > \frac{2}{3}[c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z)],$$ or in other words that $$\frac{c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z)}{3} > k+ 8.$$ We thus conclude that there [*cannot*]{} be an Einstein metric if $$k+8 \geq \frac{c_{1}^{2}(X) + c_{1}^{2}(Y)+ c_{1}^{2}(Z)}{3} ,$$ and this finishes the proof. In particular, setting $k=0$ yields: \[non\] Let $X$, $Y$, and $Z$ be simply connected symplectic $4$-manifolds with $b_{+}\equiv 3 \bmod 4$. If $c_{1}^{2}(X) + c_{1}^{2}(Y) \leq 12$, then $X\# Y$ does not admit Einstein metrics. Similarly, if $c_{1}^{2}(X) + c_{1}^{2}(Y)+c_{1}^{2}(Z) \leq 24$, then $X\# Y \# Z$ does not admit Einstein metrics. Spin Examples {#sec:even} ============= In this section, we construct a sequence of $4$-dimensional spin manifolds which do not admit Einstein metrics, but nonetheless satisfy (\[eq:strict\]). 
Not only will we construct examples with an infinite number of homeotypes, but we will also show that each of these homeotypes carries an infinite number of distinct smooth structures with the desired property. We begin with a collection of symplectic spin manifolds constructed by Gompf [@gompf]. For arbitrary integers $k \geq 2$ and $m \geq 0$, Gompf showed that one can construct, by symplectic surgery methods, a simply connected symplectic spin manifold with $(\chi, \tau ) = (24 k + 4 m , -16 k )$. In particular, setting $m = 2$, there is a simply connected symplectic spin $4$-manifold $X_{k }$ with $(\chi, \tau ) = (24 k + 8 , -16 k )$. By the Minkowski-Hasse classification of quadratic forms, this manifold thus has intersection form $-2k {\bf E}_{8}\oplus (4k +3){\bf H}$, and so is homeomorphic to $k (K3) \# (k +3)(S^{2}\times S^{2})$ by Freedman’s theorem [@freedman]. Notice that $b_{+}(X_{k }) = 4k +3\equiv 3 \bmod 4$, and that $c_{1}^{2}(X_{k })= (2\chi + 3\tau ) (X_{k })= 16$. It is perhaps worth remarking that most of these symplectic manifolds cannot be taken to be complex surfaces, as most violate the Noether inequality $c_{1}^{2} \geq b_{+}-5$. On the other hand, we are free e.g. to take $X_{4}$ to be a smooth complex hypersurface of tridegree $(4,4,2)$ in $\CP_{1}\times \CP_{1}\times \CP_{1}$. In any case, it merely suffices for what follows that we choose one such symplectic manifold $X_{k}$ for each $k \geq 2$. The other ingredient we will need is a certain sequence of homotopy $K3$ surfaces. Let $Y_{0}$ denote a Kummer surface, equipped with an elliptic fibration $Y_{0}\to \CP_{1}$. Let $Y_{\ell}$ be obtained from $Y_{0}$ by performing a logarithmic transformation of order $2\ell+1$ on one non-singular elliptic fiber of $Y_{0}$. The $Y_{\ell}$ are simply connected spin manifolds with $b_{+}=3$ and $b_{-}=19$, and hence are homeomorphic to $K3$ by the Freedman classification.
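These characteristic-number computations are elementary but easy to get wrong; the following sketch (our own, with ad hoc names, using $\chi = 2 + b_{+} + b_{-}$ and $\tau = b_{+} - b_{-}$ for a simply connected $4$-manifold) machine-checks the invariants just claimed for $X_{k}$ and $Y_{\ell}$:

```python
# Verify the stated invariants of Gompf's spin manifolds X_k and the
# homotopy K3 surfaces Y_l.  For a simply connected 4-manifold,
# chi = 2 + b_+ + b_- and tau = b_+ - b_-.
def betti(chi, tau):
    b_plus = (chi + tau - 2) // 2
    return b_plus, b_plus - tau

for k in range(2, 50):
    chi, tau = 24 * k + 8, -16 * k          # Gompf's (chi, tau) with m = 2
    bp, bm = betti(chi, tau)
    assert 2 * chi + 3 * tau == 16          # c_1^2(X_k) = 16 for every k
    assert bp == 4 * k + 3 and bp % 4 == 3  # b_+(X_k) == 3 (mod 4)
    # rank and signature of -2k E8 + (4k+3) H match (b_+ + b_-, tau):
    assert 8 * (2 * k) + 2 * (4 * k + 3) == bp + bm and -8 * (2 * k) == tau

# Y_l: b_+ = 3, b_- = 19 gives (chi, tau) = (24, -16), the invariants of K3.
assert (2 + 3 + 19, 3 - 19) == (24, -16)
print("invariants check out")
```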
However, $Y_{\ell}$ is a Kähler surface with $p_{g}=1\neq 0$, and so, for each $\ell$, $\pm c_{1}(Y_{\ell})$ are Seiberg-Witten basic classes, with Seiberg-Witten invariant $\pm 1$. Moreover, $c_{1}(Y_{\ell})= 2\ell {\mathfrak f}$, where $\mathfrak f$ is Poincaré dual to the multiple fiber introduced by the logarithmic transformation [@bpv]. With these building blocks in hand, we now proceed to prove the following result: \[spin\] For any integer $n \geq 4$, the topological spin manifold $$n (K3) \# (n+1) (S^{2}\times S^{2})$$ admits infinitely many distinct smooth structures for which there cannot exist compatible Einstein metrics. Moreover, this topological manifold has $$\begin{aligned} \chi & = & 24n +4 , \\ \tau & = & -16n, \end{aligned}$$ and thus satisfies the strict Hitchin-Thorpe inequality $2\chi > 3 |\tau |$. Set $k=n-2$, and consider the simply connected $4$-manifolds $$M_{k,\ell}=X_{k} \# Y_{0} \# Y_{\ell}.$$ Each of these smooth oriented $4$-manifolds is homeomorphic to $$(k +2) (K3) \# (k +3) (S^{2}\times S^{2}).$$ However, since $b_{+}(X_{k})=4k + 3$ and $b_{+}(Y_{0})=b_{+}(Y_{\ell})=3$ all reduce to $3 \bmod 4$, and since $$c_{1}^{2}(X_{k})+c_{1}^{2}(Y_{0})+c_{1}^{2}(Y_{\ell})= 16+0+0 < 24,$$ Corollary \[non\] asserts that none of these smooth manifolds $M_{k,\ell}$ can admit an Einstein metric. Now, for any fixed $k$, observe that the sequence $\{ M_{k,\ell} ~|~ \ell \in {\mathbb N}\}$ must contain infinitely many different diffeotypes. Indeed, Corollary \[voila\] asserts that $a=c_{1}(X_{k})+ c_{1}(Y_{\ell})$ and $b=c_{1}(X_{k}) -c_{1}(Y_{\ell})$ are both monopole classes on $M_{k,\ell} =X_{k} \# Y_{0} \# Y_{\ell}$.
However, the difference $a-b= 2c_{1}(Y_{\ell})$ is divisible by $4\ell$, and the bandwidth of $M_{k,\ell} $, as defined in Definition \[band\], therefore satisfies $$\BW (M_{k,\ell} ) \geq 2\ell.$$ Thus, for any fixed $k$, $$\sup_{\ell\in {\mathbb N}} ~\BW (M_{k,\ell}) =\infty,$$ and it follows that no individual $M_{k,\ell}$ has maximal bandwidth, for any given $k$. Hence $\{ M_{k,\ell }~|~ \ell \in {\mathbb N}\}$ runs through infinitely many different diffeotypes for each $k$, and the claim follows. Non-Spin Examples {#sec:odd} ================= We conclude this article by showing that Theorem \[oui\] also provides a rich source of non-spin $4$-manifolds without Einstein metrics. We will begin with a sequence of simply connected symplectic manifolds constructed by Gompf [@gompf]. Namely, for each integer $i \geq 2$, there is a compact simply connected symplectic $4$-manifold $Z_{i }$ with Todd genus $\frac{1}{4}(\chi + \tau )(Z_{i }) = i $ and $c_{1}^{2}(Z_{i }) = 8i -11$. The easiest thing to do with these examples is to apply the results of [@lric], whereby one gets non-existence of Einstein metrics on $Z_{i }\# k \overline{\CP}_{2}$ for $k\geq (8i -11)/3$. Since the resulting manifolds are simply connected and non-spin, with $b_{+}=2i -1$ and $b_{-}=2i +10 +k$, Freedman’s theorem [@freedman] immediately yields Let $(m,n)$ be a pair of natural numbers, where $m$ is odd. If $n > \frac{7}{3}m+8,$ there is a smooth structure on $m\CP_{2}\# n \overline{\CP}_{2}$ for which there is no compatible Einstein metric. Note that these examples satisfy the strict Hitchin-Thorpe inequality (\[eq:strict\]) provided that we also require that $n < 5m +4$; such examples therefore exist for any $m\neq 1$. Moreover, many of these examples are actually homeomorphic to Einstein manifolds; cf. [@kot; @lebweyl; @lric]. Of course, this result does not pretend to be optimal; e.g. sporadic improvements could certainly be made by exploiting the manifolds constructed by Stipsicz [@stip]. 
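The Betti numbers quoted above for $Z_{i}\# k \overline{\CP}_{2}$ follow from the two stated invariants alone; since the intermediate step is suppressed in the text, here is a sketch of the arithmetic (our own check, with hypothetical function names):

```python
# From the Todd genus (chi + tau)/4 = i and c_1^2 = 2*chi + 3*tau = 8i - 11
# one gets tau = -11 and chi = 4i + 11, whence b_+ = 2i - 1 and
# b_- = 2i + 10; each blow-up (connected sum with CP^2-bar) adds 1 to b_-.
def betti_Z(i, k=0):
    chi, tau = 4 * i + 11, -11
    assert (chi + tau) // 4 == i and 2 * chi + 3 * tau == 8 * i - 11
    b_plus = (chi + tau - 2) // 2      # chi = 2 + b_+ + b_-, tau = b_+ - b_-
    return b_plus, b_plus - tau + k

for i in range(2, 40):
    for k in range(20):
        assert betti_Z(i, k) == (2 * i - 1, 2 * i + 10 + k)
print("b_+ = 2i - 1 and b_- = 2i + 10 + k, as claimed")
```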
A much more interesting result is obtained, however, by applying Theorem \[oui\] to $Z_{2j}\# Y_{\ell}\# k \overline{\CP}_{2}$, where the $Y_{\ell}$ are the homotopy $K3$ surfaces used in the previous section. If $k > \frac{16}{3}j -8$, these manifolds do not admit Einstein metrics. On the other hand, for fixed $j,k$, the same reasoning used in the proof of Theorem \[spin\] shows that $$\lim_{\ell \to \infty}\BW (Z_{2j}\# Y_{\ell}\# k \overline{\CP}_{2}) = \infty,$$ so we obtain infinitely many distinct smooth structures for each topological type. This proves the following: \[molti\] Let $(m,n)$ be a pair of natural numbers with $m\equiv 2 \bmod 4$ and $m\geq 6$. If $n > \frac{7}{3}m+16,$ there are infinitely many distinct smooth structures on $m\CP_{2}\# n \overline{\CP}_{2}$ for which no compatible Einstein metric exists. Again, these manifolds will satisfy the strict Hitchin-Thorpe inequality if $n < 5m +4$, and such examples thus occur for each allowed value of $m$. One can also apply the same method to $Z_{2j}\# R_{2,2}\# Y_{\ell}\# k \overline{\CP}_{2}$, where $R_{2,2}$ is the simply connected symplectic manifold constructed by Gompf [@gompf], with $b_{+}=3$ and $b_{-}=14$, and where $k \geq \frac{16}{3}j -10$. By the same bandwidth argument, we obtain \[lots\] Let $(m,n)$ be a pair of natural numbers with $m\equiv 1 \bmod 4$ and $m\geq 9$. If $n > \frac{7}{3}m+12,$ there are infinitely many distinct smooth structures on $m\CP_{2}\# n \overline{\CP}_{2}$ for which no compatible Einstein metric exists. For each allowed value of $m$, we once again obtain examples satisfying the strict Hitchin-Thorpe inequality. But indeed, many of these topological manifolds are known to admit some [*other*]{} smooth structure for which a compatible Einstein metric [*does*]{} exist. For any integer $p\geq 6$ with $p\equiv 2 \bmod 4$, set $m=p^{2}-3p+3$ and $n=3p^{2}-3p+1$.
We then have $m\equiv 1\bmod 4$, $m \geq 9$, and $n > \frac{7}{3}m+12$; Theorem \[lots\] therefore asserts that there are infinitely many smooth structures on $m\CP_{2}\# n \overline{\CP}_{2}$ for which no Einstein metric exists. However, this homeotype is also realizable by the double branched cover of $\CP_{2}$, ramified over a smooth curve of degree $2p$. This complex surface of general type contains no $(-1)$- or $(-2)$-curves, and so has ample canonical line bundle [@bpv]; the Aubin/Yau theorem [@aubin; @yau] thus predicts that it admits an Einstein metric (which can be constructed so as to also be Kähler). The situation is thus analogous to that of the $K3$ surface; for one smooth structure, there is an Einstein metric, but for infinitely many others, no such metric exists. Of course, the $K3$ surface has the remarkable additional property that it admits [*only one*]{} differentiable structure for which there exists a compatible Einstein metric. In light of [@salvetti], however, no such uniqueness statement holds for many of the topological manifolds under discussion; cf. [@cat; @kot]. Nonetheless, one might speculate that Einstein metrics can exist only for a [*finite number*]{} of smooth structures on any given topological $4$-manifold. Perhaps this difficult question will become a fruitful topic for future research. [**Acknowledgment.**]{} We would like to express our deep gratitude to Professors Stefan Bauer and Mikio Furuta for helping us come to grips with the key features of the Bauer-Furuta invariant. [10]{} , [*Equations du type [M]{}onge-[A]{}mpère sur les variétés [Kä]{}hl[é]{}riennes compactes*]{}, C. R. Acad. Sci. Paris, 283A (1976), pp. 119–121. , [*Compact Complex Surfaces*]{}, Springer-Verlag, 1984. , [*A stable cohomotopy refinement of [S]{}eiberg-[W]{}itten invariants: [II]{}*]{}. preprint, 2001. , [*A stable cohomotopy refinement of [S]{}eiberg-[W]{}itten invariants: I*]{}. preprint, 2001.
, [*[E]{}instein Manifolds*]{}, Springer-Verlag, 1987. , [*On the scalar curvature of [E]{}instein manifolds*]{}, Math. Res. Lett., 4 (1997), pp. 843–854. , [*Constructions of smooth $4$-manifolds*]{}, in Proceedings of the International Congress of Mathematicians, Vol. II (Berlin, 1998), pp. 443–452 (electronic). , [*On the topology of 4-manifolds*]{}, J. Differential Geom., 17 (1982), pp. 357–454. , [*On the diffeomorphism types of certain algebraic surfaces. [I]{}*]{}, J. Differential Geom., 27 (1988), pp. 297–369. , [*A new construction of symplectic manifolds*]{}, Ann. of Math. (2), 142 (1995), pp. 527–595. , [*Volume and bounded cohomology*]{}, Publ. Math. IHES, 56 (1982), pp. 5–99. , [*On compact four-dimensional [E]{}instein manifolds*]{}, J. Differential Geom., 9 (1974), pp. 435–442. , [*[E]{}instein metrics and smooth structures*]{}, Geom. Topol., 2 (1998), pp. 1–10. , [*Minimal genus in ${S}\sp 1\times {M}\sp 3$*]{}, Invent. Math., 135 (1999), pp. 45–61. , [*Four-manifolds without [E]{}instein metrics*]{}, Math. Res. Lett., 3 (1996), pp. 133–147. ———, [*Weyl curvature, [E]{}instein metrics, and [S]{}eiberg-[W]{}itten theory*]{}, Math. Res. Lett., 5 (1998), pp. 423–438. ———, [*Ricci curvature, minimal volumes, and [S]{}eiberg-[W]{}itten theory*]{}, Inv. Math., 145 (2001), pp. 279–316. , [*Higher type adjunction inequalities in [S]{}eiberg-[W]{}itten theory*]{}, J. Differential Geom., to appear (2001). , [*Private communication, 1995*]{}. Posted, by permission of the author, at http://www.math.sunysb.edu/[\~]{}claude/sun. , [*On the number of nonequivalent differentiable structures on $4$-manifolds*]{}, Manuscripta Math., 63 (1989), pp. 157–171. , [*An obstruction to the existence of [E]{}instein metrics on 4-manifolds*]{}, C. R. Acad. Sci. Paris, 322 (1996), pp. 1213–1218. , [*Simply connected $4$-manifolds near the [B]{}ogomolov-[M]{}iyaoka-[Y]{}au line*]{}, Math. Res. Lett., 5 (1998), pp. 
723–730. , [*The [S]{}eiberg-[W]{}itten invariants and symplectic forms*]{}, Math. Res. Lett., 1 (1994), pp. 809–822. ———, [*The [S]{}eiberg-[W]{}itten and [G]{}romov invariants*]{}, Math. Res. Lett., 2 (1995), pp. 221–238. , [*Some remarks on the [G]{}auss-[B]{}onnet formula*]{}, J. Math. Mech., 18 (1969), pp. 779–786. , [*Monopoles and four-manifolds*]{}, Math. Res. Lett., 1 (1994), pp. 769–796. , [*Collected works of [H]{}idehiko [Y]{}amabe*]{}, Gordon and Breach Science Publishers, New York, 1967. , [*[C]{}alabi’s conjecture and some new results in algebraic geometry*]{}, Proc. Nat. Acad. Sci. USA, 74 (1977), pp. 1789–1799. [^1]: Supported in part by NSF grant DMS-0072591.
--- abstract: 'We propose a simple deterministic test for deciding whether or not an element $a \in {\mathbb{F}}_{2^n}^{\times}$ or ${\mathbb{F}}_{3^n}^{\times}$ is a zero of the corresponding Kloosterman sum over these fields, and rigorously analyse its runtime. The test seems to have been overlooked in the literature. The expected cost of the test for binary fields is a single point-halving on an associated elliptic curve, while for ternary fields the expected cost is one half of a point-thirding on an associated elliptic curve. For binary fields of practical interest, this represents an $O(n)$ speedup over the previous fastest test. By repeatedly invoking the test on random elements of ${\mathbb{F}}_{2^n}^{\times}$ we obtain the most efficient probabilistic method to date to find non-trivial Kloosterman sum zeros. The analysis depends on the distribution of Sylow $p$-subgroups in the two families of associated elliptic curves, which we ascertain using a theorem due to Howe.' address: - 'Claude Shannon Institute, UCD CASL, University College Dublin, Ireland' - 'Claude Shannon Institute, Dublin City University, Ireland' author: - Omran Ahmadi - Robert Granger bibliography: - 'KB.bib' title: An efficient deterministic test for Kloosterman sum zeros --- [^1] Introduction {#sec:intro} ============ For a finite field ${\mathbb{F}}_{p^n}$, the Kloosterman sum $\mathcal{K}_{p^n}: {\mathbb{F}}_{p^n} \rightarrow {\mathbb{C}}$ can be defined by $$\mathcal{K}_{p^n}(a) = 1 + \sum_{x \in {\mathbb{F}}_{p^n}^{\times}} \zeta^{\text{Tr}(x^{-1} + ax)},$$ where $\zeta$ is a primitive $p$-th root of unity and Tr denotes the absolute trace map $\text{Tr}:{\mathbb{F}}_{p^n} \rightarrow {\mathbb{F}}_p$, defined by $$\text{Tr}(x) = x + x^p + x^{p^2} + \cdots + x^{p^{n-1}}.$$ Note that in some contexts the Kloosterman sum is defined to be just the summation term without the added ‘$1$’ [@katz]. 
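To make the definition concrete, here is a toy evaluation (ours, not from the text) of $\mathcal{K}_{3^2}$ over ${\mathbb{F}}_9 = {\mathbb{F}}_3[t]/(t^2+1)$; all function names are ad hoc, and an element $c_0 + c_1 t$ is stored as the pair $(c_0, c_1)$:

```python
import cmath

# Toy evaluation of the definition for p = 3, n = 2:
# F_9 = F_3[t]/(t^2 + 1), element c0 + c1*t stored as the pair (c0, c1).
def mul(a, b):
    (a0, a1), (b0, b1) = a, b
    return ((a0 * b0 - a1 * b1) % 3, (a0 * b1 + a1 * b0) % 3)  # t^2 = -1

def power(x, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, x)
    return r

def trace(x):                      # Tr(x) = x + x^3; the result lies in F_3
    s = tuple((u + v) % 3 for u, v in zip(x, power(x, 3)))
    assert s[1] == 0
    return s[0]

units = [(c0, c1) for c0 in range(3) for c1 in range(3) if (c0, c1) != (0, 0)]
zeta = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

def kloosterman(a):                # 1 + sum over x != 0 of zeta^Tr(x^-1 + a*x)
    s = 1 + sum(zeta ** trace(tuple((u + v) % 3 for u, v in
                zip(power(x, 7), mul(a, x)))) for x in units)  # x^-1 = x^7
    assert abs(s.imag) < 1e-9      # the sum is a rational integer
    return round(s.real)

values = [kloosterman(a) for a in units]
assert all(v % 3 == 0 for v in values)   # divisibility by 3, cf. the sequel
print(sorted(set(values)))
```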
As one would expect, a Kloosterman (sum) zero is simply an element $a \in {\mathbb{F}}_{p^n}^{\times}$ for which $\mathcal{K}_{p^n}(a) = 0$. Kloosterman sums have recently become the focus of much research, most notably due to their applications in cryptography and coding theory (see [@gong; @moisiocode] for example). In particular, zeros of $\mathcal{K}_{2^{n}}$ lead to bent functions from ${\mathbb{F}}_{2^{2n}} \rightarrow {\mathbb{F}}_{2}$ [@dillon], and similarly zeros of $\mathcal{K}_{3^{n}}$ give rise to ternary bent functions [@helleseth1]. It was recently shown that zeros of Kloosterman sums only exist in characteristics 2 and 3 [@kononen], and hence these are the only cases we consider. Finding such zeros is regarded as being difficult, and recent research has tended to focus on characterising Kloosterman sums modulo small integers [@charpin; @moisio2; @lisonek; @lisonek2; @lisonek3; @faruk1; @faruk2; @faruk3; @faruk4]. While these results are interesting in their own right, they also provide a sieve which may be used to eliminate elements of a certain form prior to testing whether they are Kloosterman zeros or not, by some method. It has long been known that Kloosterman sums over binary and ternary fields are intimately related to the group orders of members of two families of elliptic curves over these fields [@katz; @wolfmann; @moisio; @geer]. In particular, for $p \in \{2,3\}$ the Kloosterman sum $\mathcal{K}_{p^n}(a)$ is equal to one minus the trace of the Frobenius endomorphism of an associated elliptic curve $E_{p^n}(a)$. As such, one may use $p$-adic methods — originally due to Satoh [@satoh] — to compute the group orders of these elliptic curves, and hence the corresponding Kloosterman sums. The best $p$-adic point counting method asymptotically takes $O(n^2 \log^2{n}\log\log{n})$ bit operations and requires $O(n^2)$ memory; see Vercauteren’s thesis [@frethesis] for contributions and a comprehensive survey. 
Rather than count points, Lisoněk has suggested that if instead one only wants to check whether a given element is a zero, one can do so by testing whether a random point of $E_{p^n}(a)$ has order $p^n$, via point multiplication [@lisonek]. Asymptotically, this has a similar bit complexity to the point counting approach, requires less memory, but is randomised. For fields of practical interest, it is reported that this approach is superior to point counting [@lisonek §3], and using this method Lisoněk was able to find a zero of $\mathcal{K}_{2^n}$ for $n \le 64$ and $\mathcal{K}_{3^n}$ for $n \le 34$, in a matter of days. In this paper we take the elliptic curve connection to a logical conclusion, in terms of proving divisibility results of Kloosterman sums by powers of the characteristic. In particular we give an efficient deterministic algorithm to compute the Sylow $2$- and $3$-subgroups of the associated elliptic curves in characteristics $2$ and $3$ respectively, along with a generator (these subgroups are cyclic in the cases considered). Moreover, the average case runtimes of the two algorithms are rigorously analysed. For binary fields of practical interest, the test gives an $O(n)$ speedup over the point multiplication test. Finding a single Kloosterman zero — which is often all that is needed in applications — is then a matter of testing random field elements until one is found, the success probability of which crucially depends on the number of Kloosterman zeros, see [@katz] and §\[exactformula\]. Our runtime analysis provides a non-trivial upper bound on this number, and consequently finding a Kloosterman zero with this approach still requires time exponential in the size of the field. We note that should one want to find [*all*]{} Kloosterman zeros over ${\mathbb{F}}_{2^n}$, rather than just one, then one can use the fast Walsh-Hadamard transform (see [@fft] for an overview), which requires $O(2^n \cdot n^2)$ bit operations and $O(2^n \cdot n)$ space. 
The sequel is organised as follows. In §\[connection\] we detail the basic connection between Kloosterman sums and two families of elliptic curves. In §\[determine\] we present the main idea behind our algorithm, while §\[binary\] and §\[ternary\] explore its specialisation to binary and ternary fields respectively. In §\[noofits\] we present data on the runtime of the two algorithms, provide a heuristic analysis which attempts to explain the data, and give an exact formula for the average case runtime. In §\[mainresult\] we rigorously prove the expected runtime, while in §\[compare\] we assess the practical efficiency of the tests. We finally make some concluding remarks in §\[conc\]. Connection with elliptic curves {#connection} =============================== Our observations stem from the following three simple lemmas, which connect Kloosterman sums over ${\mathbb{F}}_{2^n}$ and ${\mathbb{F}}_{3^n}$ with the group orders of elliptic curves in two corresponding families. The first is due to Lachaud and Wolfmann [@wolfmann], the second to Moisio [@moisio], while the third was proven by Lisoněk [@lisonek]. \[lis1\] Let $a \in {\mathbb{F}}_{2^n}^{\times}$ and define the elliptic curve $E_{2^n}(a)$ over ${\mathbb{F}}_{2^n}$ by $$E_{2^n}(a): y^2 + xy = x^3 + a.$$ Then $\#E_{2^n}(a) = 2^n + \mathcal{K}_{2^n}(a)$. \[lis2\] Let $a \in {\mathbb{F}}_{3^n}^{\times}$ and define the elliptic curve $E_{3^n}(a)$ over ${\mathbb{F}}_{3^n}$ by $$E_{3^n}(a): y^2 = x^3 + x^2 - a.$$ Then $\#E_{3^n}(a) = 3^n + \mathcal{K}_{3^n}(a)$. \[lis3\] Let $p \in \{2, 3\}$, let $a \in {\mathbb{F}}_{p^n}^{\times}$, and let $1 \leq h \leq n$. Then $p^h \mid \mathcal{K}_{p^n}(a)$ if and only if there exists a point of order $p^h$ on $E_{p^n}(a)$. Lemma \[lis3\] is a simple consequence of the structure theorem for elliptic curves over finite fields. Note that for $p \in \{2,3\}$, by Lemmas \[lis1\] and \[lis2\] we have $\mathcal{K}_{p^n}(a) = 0$ if and only if $E_{p^n}(a)$ has order $p^n$.
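Lemma \[lis1\] is easy to confirm by brute force in a toy field; the following sketch (our own, realising ${\mathbb{F}}_{2^5}$ as ${\mathbb{F}}_2[t]/(t^5+t^2+1)$, a representation chosen purely for illustration) does exactly that:

```python
# Brute-force check of Lemma [lis1] over F_{2^5} = F_2[t]/(t^5 + t^2 + 1);
# field elements are 5-bit integers (coefficient vectors of polynomials).
N, MOD = 5, (1 << 5) | (1 << 2) | 1
Q = 1 << N                              # field size 2^n

def gmul(a, b):                         # carry-less product mod t^5 + t^2 + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def tr(x):                              # Tr(x) = x + x^2 + ... + x^{2^{n-1}}
    s = 0
    for _ in range(N):
        s ^= x
        x = gmul(x, x)
    return s

def kloosterman(a):                     # zeta = -1 in characteristic 2
    return 1 + sum((-1) ** tr(gpow(x, Q - 2) ^ gmul(a, x)) for x in range(1, Q))

def curve_order(a):                     # naive count of y^2 + xy = x^3 + a
    pts = 1                             # the point at infinity
    for x in range(Q):
        for y in range(Q):
            if gmul(y, y) ^ gmul(x, y) == gmul(gmul(x, x), x) ^ a:
                pts += 1
    return pts

for a in range(1, Q):
    K = kloosterman(a)
    assert curve_order(a) == Q + K      # Lemma [lis1]
    assert K % 4 == 0                   # divisibility shown in the next section
print("Lemma [lis1] verified over F_{2^5}")
```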
By Lemma \[lis3\], this is equivalent to $E_{p^n}(a)$ having a point of order $p^n$, and hence finding a point of order $p^n$ proves that $\mathcal{K}_{p^n}(a) = 0$, since $p^n$ is the only element divisible by $p^n$ in the Hasse interval. For the remainder of the paper, when we refer to a prime $p$ we implicitly presume $p \in \{2,3\}$. Determining the Sylow $p$-subgroup of $E_{p^n}(a)$ {#determine} ================================================== It is easy to show that $\mathcal{K}_{2^n}(a) \equiv 0 \pmod{4}$ and $\mathcal{K}_{3^n}(a) \equiv 0 \pmod{3}$ for all $a \in {\mathbb{F}}_{2^n}^{\times}$ and ${\mathbb{F}}_{3^n}^{\times}$ respectively. One way to see this is to observe that $E_{2^n}(a)$ possesses a point of order $4$ (see §\[binary\]) and $E_{3^n}(a)$ possesses a point of order $3$ (see §\[ternary\]), and hence by Lagrange’s theorem, $4 \mid \#E_{2^n}(a)$ and $3 \mid \#E_{3^n}(a)$. For an integer $x$, let ${\operatorname{ord}}_p(x)$ be the exponent of the maximum power of $p$ that divides $x$. For $a \in {\mathbb{F}}_{p^n}^{\times}$, let $h = {\operatorname{ord}}_p(\#E_{p^n}(a))$. By Lemma \[lis3\] the Sylow $p$-subgroup $S_{p}(E_{p^n}(a))$ is cyclic of order $p^h$, and hence has $(p-1)p^{h-1}$ generators. Multiplying these by $p$ results in the $(p-1)p^{h-2}$ generators of the order $p^{h-1}$ subgroup. Continuing this multiplication by $p$ process, after $h-1$ steps one arrives at the $p$-torsion subgroup $E_{p^n}(a)[p]$, consisting of $p-1$ order-$p$ points and the identity element $\mathcal{O}$. These considerations reveal the structure of the $p$-power torsion subgroups $E_{p^n}(a)[p^k]$ for $1 \le k \le h$, which one may view as a tree, with $\mathcal{O}$ as the root node. The root has $p-1$ children which are the non-identity points in $E_{p^n}(a)[p]$. If $h>1$ each of these $p-1$ nodes has $p$ children: the elements of $E_{p^n}(a)[p^2] \setminus E_{p^n}(a)[p]$. 
For $1 < k < h$, each of the $(p-1)p^{k-1}$ depth-$k$ nodes has $p$ children, while at depth $h$ we have $(p-1)p^{h-1}$ leaf nodes. Using a division polynomial approach Lisoněk was able to prove a necessary condition on $a \in {\mathbb{F}}_{2^n}^{\times}$ such that $\mathcal{K}_{2^n}(a)$ is divisible by $16$, and likewise a necessary condition on $a \in {\mathbb{F}}_{3^n}^{\times}$ such that $\mathcal{K}_{3^n}(a)$ is divisible by $9$. While necessary conditions for the divisibility of $\mathcal{K}_{2^n}(a)$ by $2^k$ have since been derived for $k \le 8$ [@faruk3], and for the divisibility of $\mathcal{K}_{3^n}(a)$ by $3^k$ for $k \le 3$ [@faruk4], these use $p$-adic methods; the division polynomial approach seemingly being too cumbersome to progress any further. However, the process outlined above — taking a generator of $S_{p}(E_{p^n}(a))$ and multiplying by $p$ repeatedly until the non-identity elements of the $p$-torsion are obtained — can be reversed, easily and efficiently, using point-halving in even characteristic, and point-thirding in characteristic three, as we demonstrate in the ensuing two sections. Furthermore, due to the cyclic structure of $S_{p}(E_{p^n}(a))$, at each depth, either all points are divisible by $p$, or none are. This means one can determine the height of the tree by using a depth-first search, without any backtracking; in particular, when a point $P$ at a given depth cannot be halved or thirded, this depth is $\log_{p}(|S_{p}(E_{p^n}(a))|)$, and $P$ is a generator. Furthermore, one can do this without ever computing the group order of the curve.
This process has been considered previously by Miret [[*et al.*]{}]{}, for determining the Sylow $2$-subgroup of elliptic curves over arbitrary finite fields of characteristic $ > 2$ [@miret1]; for $p=2$ the algorithm follows easily from the above considerations and point-halving, which is well studied in cryptographic circles [@knudsen; @schroeppel; @omran], and is known to be more than twice as fast as point-doubling in some cases [@handbook]. For primes $l > 2$, Miret [[*et al.*]{}]{} also addressed how to compute the Sylow $l$-subgroup of elliptic curves over arbitrary finite fields provided that $l$ was not the characteristic of the field [@miret2]. Therefore we address here the case $l = p = 3$, for the family of curves $E_{3^n}(a)$. We summarise this process in Algorithm \[sylow\]. Regarding notation, we say that a point $P$ is $p$-divisible if there exists a point $Q$ such that $[p]Q = P$, and write $Q = [1/p]P$.

DETERMINE $S_{p}(E_{p^n}(a))$ \[sylow\]\
Input: $a \in {\mathbb{F}}_{p^n}^{\times}$, $P \in E_{p^n}(a)[p] \setminus \{\mathcal{O}\}$\
Output: $(h,P_h)$ where $h = {\operatorname{ord}}_{p}(\#E_{p^n}(a))$ and $\langle P_h \rangle = S_{p}(E_{p^n}(a))$\
1. $\text{counter} \leftarrow 1$;\
2. While $P$ is $p$-divisible do:\
3. $\qquad P \leftarrow [1/p]P$;\
4. $\qquad$ counter++;\
5. Return $(\text{counter},P)$

Observe that Algorithm \[sylow\] is deterministic, provided that a deterministic method of dividing a $p$-divisible point by $p$ is fixed once and for all, which we do for $p=2$ and $p=3$ in §\[binary\] and §\[ternary\] respectively. For a given field extension under consideration, choosing an appropriate field representation and basis can also be performed deterministically, via sequential search, however we consider this to be part of the setup phase and do not incorporate setup costs when assessing the runtime of Algorithm \[sylow\].
Binary fields {#binary} ============= We now work out the details of Algorithm \[sylow\] for the family of curves $E_{2^n}(a)$. For a fixed $n$, given a point $P =(x,y) \in E_{2^n}(a)$, $[2]P = (\xi,\eta)$ is given by the formula: $$\begin{aligned} \nonumber \lambda &=& x + y/x,\\ \label{half} \xi &=& \lambda^2 + \lambda,\\ \nonumber \eta &=& x^2 + \xi(\lambda+1).\end{aligned}$$ To halve a point, one needs to reverse this process, [i.e., ]{}given $Q=(\xi,\eta)$, find (if possible) a $P=(x,y) \in E_{2^n}(a)$ such that $[2]P = Q$. To do so, one first needs to solve (\[half\]) for $\lambda$, which has a solution in ${\mathbb{F}}_{2^n}$ if and only if $\text{Tr}(\xi) = 0$, since the trace of the right-hand side is zero for every $\lambda \in {\mathbb{F}}_{2^n}$, and one can provide an explicit solution in this case, as detailed in §\[solvequad\]. Observe that if $\lambda$ is a solution to (\[half\]) then so is $\lambda + 1$. Assuming $\lambda$ has been computed, one then has $$\begin{aligned} \nonumber x &=& (\eta + \xi(\lambda+1))^{1/2}, \\ \nonumber y &=& x(x+\lambda),\end{aligned}$$ which for the two choices of $\lambda$ gives both points whose duplication is $Q=(\xi,\eta)$. Aside from the cost of computing $\lambda$, the computation of $P = (x,y)$ as above requires two field multiplications. As detailed in Algorithm \[sylow2\], this can be reduced to just one by using the so-called $\lambda$-representation of a point [@knudsen; @schroeppel], where an affine point $Q = (\xi,\eta)$ is instead represented by $(\xi,\lambda_{Q})$, with $$\lambda_Q = \xi + \frac{\eta}{\xi}.$$ In affine coordinates, there is a unique $2$-torsion point $(0,a^{1/2})$, which halves to the two order $4$ points $P_{4}^{+} = (a^{1/4},a^{1/2})$, $P_{4}^{-} = (a^{1/4},a^{1/2} + a^{1/4})$. The corresponding $\lambda$-representations of each of these are $(a^{1/4},0)$ and $(a^{1/4},1)$ respectively. For simplicity, we choose to use the former as the starting point in Algorithm \[sylow2\]. 
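As a sanity check on these formulae, one halving step can be exercised in a toy field. The sketch below is ours (field ${\mathbb{F}}_2[t]/(t^5+t^2+1)$, chosen for illustration; since $n=5$ is odd, the half-trace solution detailed in §\[solvequad\] applies): it halves the $2$-torsion point and doubles back.

```python
# One halving step on E_{2^n}(a): y^2 + xy = x^3 + a, in the toy field
# F_{2^5} = F_2[t]/(t^5 + t^2 + 1); elements are 5-bit integers.
N, MOD = 5, (1 << 5) | (1 << 2) | 1
Q = 1 << N

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def tr(x):
    s = 0
    for _ in range(N):
        s ^= x
        x = gmul(x, x)
    return s

def half_trace(x):                  # a solution of L^2 + L = x when Tr(x) = 0
    s = 0
    for i in range((N - 1) // 2 + 1):
        s ^= gpow(x, 1 << (2 * i))
    return s

def sqrt2(x):
    return gpow(x, 1 << (N - 1))    # squaring is bijective in characteristic 2

def on_curve(P, a):
    x, y = P
    return gmul(y, y) ^ gmul(x, y) == gmul(gmul(x, x), x) ^ a

def double(P):                      # the duplication formula above
    x, y = P
    lam = x ^ gmul(y, gpow(x, Q - 2))
    xi = gmul(lam, lam) ^ lam
    return (xi, gmul(x, x) ^ gmul(xi, lam ^ 1))

def halve(P, a):                    # reverse it; requires Tr(xi) = 0
    xi, eta = P
    assert tr(xi) == 0
    lam = half_trace(xi)            # the other solution is lam ^ 1
    x = sqrt2(eta ^ gmul(xi, lam ^ 1))
    return (x, gmul(x, x ^ lam))

for a in range(1, Q):
    T2 = (0, sqrt2(a))              # the unique affine 2-torsion point
    P4 = halve(T2, a)               # one of the two order-4 points
    assert on_curve(P4, a) and double(P4) == T2
print("halving inverts doubling for every a in F_{2^5}^x")
```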
DETERMINE $S_{2}(E_{2^n}(a))$ \[sylow2\]\
Input: $a \in {\mathbb{F}}_{2^n}^{\times}$, $(x=a^{1/4}, \lambda = 0)$\
Output: $(h,P_h)$ where $h = {\operatorname{ord}}_{2}(\#E_{2^n}(a))$ and $\langle P_h \rangle = S_{2}(E_{2^n}(a))$\
1. $\text{counter} \leftarrow 2$;\
2. While $\text{Tr}(x) = 0$ do:\
3. $\qquad$ Solve $\widehat{\lambda}^2 + \widehat{\lambda} + x = 0$;\
4. $\qquad t \leftarrow x(x + \lambda + \widehat{\lambda})$;\
5. $\qquad x \leftarrow \sqrt{t}$;\
6. $\qquad \lambda \leftarrow \widehat{\lambda} + 1$;\
7. $\qquad$ counter++;\
8. Return $(\text{counter},P = (x,x(x + \lambda)))$

Observe that if the $x$-coordinate $a^{1/4}$ of $P_{4}^{\pm}$ satisfies $\text{Tr}(a^{1/4}) = \text{Tr}(a) = 0$, then there exist four points of order $8$, and hence $8 \mid \mathcal{K}_{2^n}(a)$, which was first observed by van der Geer and van der Vlugt [@geer], and later by several others [@helleseth2; @charpin2; @lisonek]. Solving $\widehat{\lambda}^2 + \widehat{\lambda} + x = 0$ {#solvequad} --------------------------------------------------------- For odd $n$, let $\widehat{\lambda}$ be given by the following function, which is known as the [*half trace*]{}: $$\label{halftrace} \widehat{\lambda}(x) = \sum_{i=0}^{(n-1)/2} x^{2^{2i}}.$$ One can easily verify that this $\widehat{\lambda}$ satisfies the stated equation. When $n$ is even, the half trace approach will not work, essentially because $\text{Tr}_{{\mathbb{F}}_{2^{n}}/{\mathbb{F}}_2}(1) = 0$. Hence fix an element $\delta \in {\mathbb{F}}_{2^n}$ with $\text{Tr}_{{\mathbb{F}}_{2^{n}}/{\mathbb{F}}_2}(\delta) = 1$. Such a $\delta$ can be found during the setup phase via the sequential search of the trace of the polynomial basis elements, or by using the methods of [@omran]. A solution to equation (\[half\]) is then given by [@ECC1 Chapter II]: $$\label{fasthalf} \widehat{\lambda}(x) = \sum_{i=0}^{n-2} \bigg( \sum_{j=i+1}^{n-1} \delta^{2^j} \bigg) x^{2^i},$$ as may be verified.
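A toy transcription of Algorithm \[sylow2\] can be checked against a naive point count; the sketch below is ours (odd $n=5$, field ${\mathbb{F}}_2[t]/(t^5+t^2+1)$, quadratics solved by the half trace (\[halftrace\])), and is not intended as an efficient implementation:

```python
# Toy transcription of Algorithm [sylow2] over F_{2^5} = F_2[t]/(t^5+t^2+1):
# the returned counter should equal ord_2(#E_{2^n}(a)).
N, MOD = 5, (1 << 5) | (1 << 2) | 1
Q = 1 << N

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def tr(x):
    s = 0
    for _ in range(N):
        s ^= x
        x = gmul(x, x)
    return s

def half_trace(x):                        # solves L^2 + L = x when Tr(x) = 0
    s = 0
    for i in range((N - 1) // 2 + 1):
        s ^= gpow(x, 1 << (2 * i))
    return s

def sqrt2(x):
    return gpow(x, 1 << (N - 1))

def sylow2(a):                            # returns h = ord_2(#E_{2^n}(a))
    x, lam, counter = sqrt2(sqrt2(a)), 0, 2   # start at (a^(1/4), lambda = 0)
    while tr(x) == 0:
        hat = half_trace(x)               # solve hat^2 + hat + x = 0
        x = sqrt2(gmul(x, x ^ lam ^ hat))      # t <- x(x+lam+hat), x <- sqrt(t)
        lam = hat ^ 1
        counter += 1
    return counter

def curve_order(a):                       # naive count of y^2 + xy = x^3 + a
    pts = 1
    for x in range(Q):
        for y in range(Q):
            if gmul(y, y) ^ gmul(x, y) == gmul(gmul(x, x), x) ^ a:
                pts += 1
    return pts

for a in range(1, Q):
    h, order = sylow2(a), curve_order(a)
    assert order % (1 << h) == 0 and order % (1 << (h + 1)) != 0
print("counter = ord_2(#E) for every a in F_{2^5}^x")
```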
Note that for odd $n$, $\delta = 1$ suffices and so (\[fasthalf\]) simplifies to (\[halftrace\]). The inner sums of equation (\[fasthalf\]) can be precomputed, and for a general $\delta \in {\mathbb{F}}_{2^n}$ the computation of $\widehat{\lambda}(x)$ would require $n-1$ multiplications in ${\mathbb{F}}_{2^n}$, which together with the multiplication coming from line 4 of Algorithm \[sylow2\], gives a total of $n$ full ${\mathbb{F}}_{2^n}$-multiplications. However, should ${\mathbb{F}}_{2^n}$ contain a subfield of odd index, then one can reduce this cost as follows. Let $n = 2^m n'$ with $m \ge 1$ and $n'$ odd. Constructing ${\mathbb{F}}_{2^n}$ as a degree $n'$ extension of ${\mathbb{F}}_{2^{2^m}}$, fix a $\delta \in {\mathbb{F}}_{2^{2^m}}$ with $\text{Tr}_{{\mathbb{F}}_{2^{2^m}}/{\mathbb{F}}_{2}}(\delta)=1$. Then $$\text{Tr}_{{\mathbb{F}}_{2^{2^m\cdot n'}}/{\mathbb{F}}_2}(\delta) = n' \cdot \text{Tr}_{{\mathbb{F}}_{2^{2^m}}/{\mathbb{F}}_2}(\delta) = 1.$$ Hence this $\delta$ can be used in (\[fasthalf\]). As $\delta^{2^{2^m}} = \delta$, upon expanding (\[fasthalf\]) in terms of $\{\delta^{2^0},\delta^{2^1},\ldots,\delta^{2^{2^m - 1}}\}$, we see that at most $2^m$ multiplications of elements of ${\mathbb{F}}_{2^{2^m}}$ by elements of ${\mathbb{F}}_{2^n}$ are required. So the smaller the largest power of $2$ dividing $n$ is, the faster one can compute $\widehat{\lambda}(x)$. However, since the expressions for $\widehat{\lambda}(x)$ in (\[halftrace\]) and (\[fasthalf\]) are linear maps, in practice it is far more efficient for both odd and even $n$ to precompute and store $\{\widehat{\lambda}(t^i)\}_{i=0,\ldots,n-1}$ during setup, where ${\mathbb{F}}_{2^n} = {\mathbb{F}}_{2}(t)$ and $x = \sum_{i=0}^{n-1} x_it^i$. One then has $$\widehat{\lambda}(x) = \sum_{i=0}^{n-1} x_i \widehat{\lambda}(t^i).$$ On average just $n/2$ additions in ${\mathbb{F}}_{2^n}$ are required for each point-halving.
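The precompute-and-add evaluation just described can be sketched as follows (toy field and function names ours):

```python
# Precompute hat-lambda on a polynomial basis once, then solve each
# quadratic by XORing the table rows selected by the bits of x; sketched
# for the toy field F_{2^5} = F_2[t]/(t^5 + t^2 + 1), n = 5 odd.
N, MOD = 5, (1 << 5) | (1 << 2) | 1
Q = 1 << N

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & Q:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        e >>= 1
    return r

def half_trace(x):                       # direct evaluation of (halftrace)
    s = 0
    for i in range((N - 1) // 2 + 1):
        s ^= gpow(x, 1 << (2 * i))
    return s

TABLE = [half_trace(1 << i) for i in range(N)]   # hat-lambda(t^i), setup phase

def half_trace_table(x):                 # hat-lambda is linear over F_2
    s = 0
    for i in range(N):
        if (x >> i) & 1:
            s ^= TABLE[i]
    return s

for x in range(Q):
    assert half_trace_table(x) == half_trace(x)
print("table lookup agrees with the direct formula")
```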
Both the storage required and execution time can be further reduced [@handbook]. We defer consideration of the practical efficiency of Algorithm \[sylow2\] until §\[compare2\]. Ternary fields {#ternary} ============== Let $Q=(\xi,\eta) \in E_{3^n}(a)$. To find $P=(x,y)$ such that $[3]P = Q$, when possible, we do the following. As in [@miret2 §4], we have $$x([3]P) = x(P) - \frac{\Psi_{2}(x,y)\Psi_{4}(x,y)}{\Psi_{3}^{2}(x,y)},$$ or $$(x - \xi)\Psi_{3}^{2}(x,y) - \Psi_{2}(x,y)\Psi_{4}(x,y) = 0,$$ where $\Psi_l$ is the $l$-th division polynomial. Working modulo the equation of $E_{3^n}(a)$, this becomes $$x^9 - \xi x^6 + a(1-\xi)x^3 - a^2(a + \xi) = 0,$$ whereupon substituting $X = x^3$ gives $$\label{3div} f(X) = X^3 - \xi X^2 + a(1-\xi)X - a^2(a + \xi) = 0.$$ To solve (\[3div\]), we make the transformation $$g(X) = X^3 f\bigg(\frac{1}{X} - \frac{a(1-\xi)}{\xi}\bigg) = \frac{a^2 \eta^2}{\xi^3} X^3 -\xi X + 1.$$ Hence we must solve $$X^3 - \frac{\xi^4}{a^2 \eta^2} X + \frac{\xi^3}{a^2 \eta^2} = 0.$$ Writing $X = \frac{\xi^2}{a \eta}\widehat{X}$ this becomes $$\label{3tri} \widehat{X}^3 - \widehat{X} + \frac{a \eta}{\xi^3} = 0.$$ Our thirding condition is then simply $\text{Tr}(a \eta/\xi^3) = 0$, since as in the binary case, for every element $\widehat{X} \in {\mathbb{F}}_{3^n}$ we have $\text{Tr}(\widehat{X}^3 - \widehat{X}) = 0$, and if so then one can provide an explicit solution, as detailed in §\[solvecube\]. Observe that if $\widehat{X}$ is a solution to (\[3tri\]) then so is $\widehat{X} \pm 1$. Unrolling the transformations leads to the following algorithm, with input the $3$-torsion point $P_3 = (a^{1/3},a^{1/3})$. 
[DETERMINE $S_{3}(E_{3^n}(a))$]{} Input: [$a \in {\mathbb{F}}_{3^n}^{\times}$, $(x=a^{1/3}, y = a^{1/3})$]{} Output: [$(h,P_h)$ where $h = {\operatorname{ord}}_3(\#E_{3^n}(a))$ and $\langle P_h \rangle = S_{3}(E_{3^n}(a))$]{} \[sylow3\] 1. $\text{counter} \leftarrow 1$;\ 2. While $\text{Tr}(ay/x^3) = 0$ do:\ 3. Solve $\widehat{X}^3 - \widehat{X} + \frac{ay}{x^3} = 0$;\ 4. $x \leftarrow \bigg(\frac{ay}{x^2\widehat{X}} - \frac{a(1-x)}{x} \bigg)^{1/3}$;\ 5. $y \leftarrow \big(x^3 + x^2 - a\big)^{1/2}$;\ 6. counter++;\ 7. Return $(\text{counter},P = (x,y))$ Observe that as with Algorithm \[sylow2\], if the point $P_3$ satisfies $\text{Tr}(a \cdot a^{1/3}/(a^{1/3})^3) = \text{Tr}(a^{1/3}) = \text{Tr}(a) = 0$, then there is a point of order $9$, and hence $9 \mid \mathcal{K}_{3^n}(a)$, which again was first proven in [@geer], and later by others [@lisonek; @faruk1]. Solving $\widehat{X}^3 - \widehat{X} + \frac{a y}{x^3} = 0$ {#solvecube} ----------------------------------------------------------- Let $\beta = \frac{a y}{x^3}$, and let $\delta \in {\mathbb{F}}_{3^n}$ be an element with $\text{Tr}_{{\mathbb{F}}_{3^{n}}/{\mathbb{F}}_3}(\delta) = 1$, which can be found deterministically during the setup phase. It is then a simple matter to verify that $$\label{fastthird} \widehat{X}(\beta) = \sum_{i=0}^{n-2} \bigg( \sum_{j=i+1}^{n-1} \delta^{3^j} \bigg) \beta^{3^i}$$ is a solution to equation (\[3tri\]).
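The claim that (\[fastthird\]) solves (\[3tri\]) is easy to check exhaustively in a toy field. The sketch below (the field ${\mathbb{F}}_{27} = {\mathbb{F}}_3[x]/(x^3 - x - 1)$ and the coefficient-triple representation are our own illustrative choices) finds a $\delta$ of trace $1$ by search and verifies $\widehat{X}^3 - \widehat{X} + \beta = 0$ for every trace-zero $\beta$.

```python
N = 3  # GF(27) = GF(3)[x]/(x^3 - x - 1); elements are coefficient triples mod 3

def f3_add(a, b):
    return tuple((u + v) % 3 for u, v in zip(a, b))

def f3_sub(a, b):
    return tuple((u - v) % 3 for u, v in zip(a, b))

def f3_mul(a, b):
    c = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % 3
    for k in range(2 * N - 2, N - 1, -1):  # reduce using x^3 = x + 1
        ck, c[k] = c[k], 0
        c[k - 2] = (c[k - 2] + ck) % 3
        c[k - 3] = (c[k - 3] + ck) % 3
    return tuple(c[:N])

def f3_cube(a):          # the Frobenius map a -> a^3
    return f3_mul(f3_mul(a, a), a)

def f3_trace(a):         # Tr(a) = a + a^3 + a^9, an element of GF(3)
    t, x = (0,) * N, a
    for _ in range(N):
        t = f3_add(t, x)
        x = f3_cube(x)
    return t

def solve_cubic(beta, delta):
    """Equation (fastthird): X(beta) = sum_i ( sum_{j>i} delta^(3^j) ) * beta^(3^i)."""
    frob = [delta]
    for _ in range(N - 1):
        frob.append(f3_cube(frob[-1]))
    sol, bp = (0,) * N, beta
    for i in range(N - 1):
        ci = (0,) * N
        for j in range(i + 1, N):
            ci = f3_add(ci, frob[j])
        sol = f3_add(sol, f3_mul(ci, bp))
        bp = f3_cube(bp)
    return sol

elems = [(i % 3, i // 3 % 3, i // 9 % 3) for i in range(27)]
delta = next(e for e in elems if f3_trace(e) == (1, 0, 0))
trace_zero = [b for b in elems if f3_trace(b) == (0, 0, 0)]
# X^3 - X + beta should vanish for every trace-zero beta
solved = all(f3_add(f3_sub(f3_cube(x), x), b) == (0, 0, 0)
             for b in trace_zero
             for x in [solve_cubic(b, delta)])
```

As noted above, $\widehat{X} \pm 1$ are the other two roots, so the solver returns one representative of the three.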
For $n \equiv 1 \pmod{3}$, one may choose $\delta = 1$ and the expression for $\widehat{X}(\beta)$ in equation (\[fastthird\]) simplifies to $$\widehat{X}(\beta) = \sum_{i=1}^{(n-1)/3} \left(\beta^{3^{3i-1}} - \beta^{3^{3i-2}}\right).$$ For $n \equiv 2 \pmod{3}$, one may choose $\delta = -1$ and the expression for $\widehat{X}(\beta)$ in equation (\[fastthird\]) simplifies to $$\widehat{X}(\beta) = -\beta + \sum_{i=1}^{(n-2)/3} \left(\beta^{3^{3i-1}} - \beta^{3^{3i}}\right).$$ For $n \equiv 0 \pmod{3}$, one can use the approach described in §\[solvequad\] to pick $\delta$ from the smallest subfield of ${\mathbb{F}}_{3^n}$ of index coprime to $3$, in order to reduce the cost and the number of multiplications required to solve (\[3tri\]). As in the binary case, one can also exploit the linearity of $\widehat{X}(\beta)$ and precompute and store $\{\widehat{X}(t^i)\}_{i=0,\ldots,n-1}$ during setup, where ${\mathbb{F}}_{3^n} = {\mathbb{F}}_{3}(t)$ and $\beta = \sum_{i=0}^{n-1} \beta_it^i$, in order to reduce the cost of solving (\[3tri\]) to an average of $2n/3$ additions. We defer consideration of the practical efficiency of Algorithm \[sylow3\] until §\[compare3\]. Heuristic analysis of the expected number of iterations {#noofits} ======================================================= For any input $a \in {\mathbb{F}}_{p^n}^{\times}$, the runtime of Algorithm \[sylow\] is proportional to the number of loop iterations performed, which is precisely the height of the corresponding Sylow $p$-subgroup tree, $h = \log_{p}(|S_p(E_{p^n}(a))|)$. In this section we present experimental data for the distribution of these heights for $p \in \{2,3\}$, provide a heuristic argument to explain them, and give an exact formula for the average case runtime. Since we are interested in the average number of loop iterations[^2], we consider the arithmetic mean of the heights of the Sylow $p$-subgroup trees, or equivalently the logarithm of the geometric mean of their orders. 
Experimental data ----------------- In order to gain an idea of how $\{\log_{p}(|S_p(E_{p^n}(a))|)\}_{a \in {\mathbb{F}}_{p^n}^{\times}}$ is distributed, we computed them all for several small extensions of ${\mathbb{F}}_p$. Tables \[dist2\] and \[dist3\] give the results for $p=2$ and $p=3$ respectively. Observe that for $p=2$, the first two columns are simply $2^n - 1 = |{\mathbb{F}}_{2^n}^{\times}|$, reflecting the fact that all of the curves $\{E_{2^n}(a)\}_{a \in {\mathbb{F}}_{2^n}^{\times}}$ have order divisible by $4$. Similarly for $p=3$, the first column is given by $3^n-1 = |{\mathbb{F}}_{3^n}^{\times}|$, reflecting the fact that all the curves $\{E_{3^n}(a)\}_{a \in {\mathbb{F}}_{3^n}^{\times}}$ have order divisible by $3$. Furthermore, since exactly half of the elements of ${\mathbb{F}}_{2^n}$ have zero trace, the third column for $p=2$ is given by $2^{n-1}-1$. Likewise for $p=3$, the second column is given by $3^{n-1} - 1$, since exactly one third of the elements of ${\mathbb{F}}_{3^n}$ have zero trace. For $p=2$ there is an elegant result due to Lisoněk and Moisio which gives a closed formula for the $n$-th entry of column $4$ of Table \[dist2\] [@lisonek3 Theorem 3.6], which includes the $a=0$ case, namely: $$\label{column4} (2^n - (-1 + i)^n - (-1 - i)^n)/4.$$ Beyond these already-explained columns, it appears that as one successively moves one column to the right, the number of such $a$ decreases by an approximate factor of $2$ or $3$ respectively, until the number of Kloosterman zeros is reached, which by the Hasse bound occurs as soon as $p^k > 1 + 2p^{n/2}$, or $k > n/2 + \log_{p}2$.
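The closed formula (\[column4\]) is straightforward to check against the fourth column of Table \[dist2\]; since the formula includes the $a=0$ case, one is subtracted before comparing with the counts over $a \in {\mathbb{F}}_{2^n}^{\times}$ (a quick numerical sketch, with the table entries transcribed by hand):

```python
def lm_count(n):
    """Closed form (2^n - (-1+i)^n - (-1-i)^n)/4; counts a in F_{2^n} including a = 0."""
    v = (2**n - (-1 + 1j)**n - (-1 - 1j)**n) / 4
    return round(v.real)

# Column k = 4 of Table [dist2] counts a in F_{2^n}^x only, so subtract the a = 0 case.
col4 = {4: 5, 5: 5, 6: 15, 7: 35, 8: 55, 10: 255, 12: 1055, 13: 2015}
matches = all(lm_count(n) - 1 == cnt for n, cnt in col4.items())
```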
$n \backslash k$   $1$      $2$      $3$     $4$     $5$     $6$    $7$    $8$   $9$   $10$   $11$   $12$   $13$
------------------ -------- -------- ------- ------- ------- ------ ------ ----- ----- ------ ------ ------ ------
1                  1
2                  3        3
3                  7        7        3
4                  15       15       7       5
5                  31       31       15      5       5
6                  63       63       31      15      12      12
7                  127      127      63      35      14      14     14
8                  255      255      127     55      21      16     16     16
9                  511      511      255     135     63      18     18     18    18
10                 1023     1023     511     255     125     65     60     60    60    60
11                 2047     2047     1023    495     253     132    55     55    55    55     55
12                 4095     4095     2047    1055    495     252    84     72    72    72     72     72
13                 8191     8191     4095    2015    1027    481    247    52    52    52     52     52     52

: $\# \{E_{2^n}(a)\}_{a \in {\mathbb{F}}_{2^n}^{\times}}$ whose group order is divisible by $2^{k}$[]{data-label="dist2"}

$n \backslash k$   $1$       $2$      $3$     $4$     $5$     $6$    $7$    $8$   $9$   $10$   $11$
------------------ --------- -------- ------- ------- ------- ------ ------ ----- ----- ------ ------
1                  2
2                  8         2
3                  26        8        3
4                  80        26       4       4
5                  242       80       35      15      15
6                  728       242      83      24      24      24
7                  2186      728      266     77      21      21     21
8                  6560      2186     692     252     48      48     48     48
9                  19682     6560     2168    741     270     108    108    108   108
10                 59048     19682    6605    2065    575     100    100    100   100   100
11                 177146    59048    19547   6369    2596    924    264    264   264   264    264

: $\# \{E_{3^n}(a)\}_{a \in {\mathbb{F}}_{3^n}^{\times}}$ whose group order is divisible by $3^k$[]{data-label="dist3"}

A heuristic for the expected number of iterations {#heuristic} ------------------------------------------------- To explain the data in Tables \[dist2\] and \[dist3\], we propose the following simple heuristic (and prove the validity of its consequences in §\[mainresult\]): \[heur\] Over all $a \in {\mathbb{F}}_{p^n}^{\times}$, on any occurrence of [*line 2*]{} of the loop in Algorithms \[sylow2\] and \[sylow3\], regardless of the height of the tree at that point, the argument of the ${\mathbb{F}}_{p^n}$ trace is uniformly distributed over ${\mathbb{F}}_{p^n}$, and hence is zero with probability $1/p$. While this assumption is clearly false at depths $> n/2 + \log_{p}2$, the data in Tables \[dist2\] and \[dist3\] does support it (up to relatively small error terms).
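For the smallest ternary case the table entries can be reproduced by brute force. The sketch below counts points on the Weierstrass model $y^2 = x^3 + x^2 - a$, read off from the update $y \leftarrow (x^3+x^2-a)^{1/2}$ in Algorithm \[sylow3\], over ${\mathbb{F}}_{27} = {\mathbb{F}}_3[x]/(x^3 - x - 1)$ (this concrete field representation is our own choice) and recovers the row $n = 3$ of Table \[dist3\]:

```python
N = 3  # GF(27) = GF(3)[x]/(x^3 - x - 1); elements are coefficient triples mod 3

def f3_add(a, b):
    return tuple((u + v) % 3 for u, v in zip(a, b))

def f3_sub(a, b):
    return tuple((u - v) % 3 for u, v in zip(a, b))

def f3_mul(a, b):
    c = [0] * (2 * N - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % 3
    for k in range(2 * N - 2, N - 1, -1):  # reduce using x^3 = x + 1
        ck, c[k] = c[k], 0
        c[k - 2] = (c[k - 2] + ck) % 3
        c[k - 3] = (c[k - 3] + ck) % 3
    return tuple(c[:N])

elems = [(i % 3, i // 3 % 3, i // 9 % 3) for i in range(27)]

def curve_order(a):
    """#E_{27}(a) for y^2 = x^3 + x^2 - a, including the point at infinity."""
    rhs = {x: f3_sub(f3_add(f3_mul(f3_mul(x, x), x), f3_mul(x, x)), a) for x in elems}
    return 1 + sum(1 for x in elems for y in elems if f3_mul(y, y) == rhs[x])

orders = [curve_order(a) for a in elems if a != (0, 0, 0)]
row_n3 = [sum(1 for o in orders if o % 3**k == 0) for k in (1, 2, 3)]
```

Note that $\#E \equiv 0 \pmod{27}$ forces $\#E = 27$ by the Hasse bound, so the last entry counts exactly the Kloosterman zeros of ${\mathbb{F}}_{27}$.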
In order to calculate the expected value of $\log_{p}(|S_p(E_{p^n}(a))|)$, we think of Algorithms \[sylow2\] and \[sylow3\] as running on all $p^n-1$ elements of ${\mathbb{F}}_{p^n}^{\times}$ in parallel; we then sum the number of elements which survive the first loop, then the second loop, then the third, and so on, and divide this sum by $p^n-1$ to give the average. We now explore the consequences of Heuristic \[heur\], treating the two characteristics in turn. For Algorithm \[sylow2\], on the first occurrence of line 2, $2^{n-1} - 1$ elements of ${\mathbb{F}}_{2^n}^{\times}$ have zero trace and hence $2^{n-1}-1$ elements require an initial loop iteration. On the second occurrence of line 2, by Heuristic \[heur\], approximately $2^{n-1}/2 = 2^{n-2}$ of the inputs have zero trace and so this number of loop iterations is required. Continuing in this manner and summing over all loop iterations at each depth, one obtains a total of $$2^{n-1} + 2^{n-2} + \cdots + 2 + 1 \approx 2^n,$$ for the number of iterations that need to be performed for all $a \in {\mathbb{F}}_{2^n}^{\times}$. Thus on average this is approximately one loop iteration per initial element $a$. Incorporating the divisibility by $4$ of all curve orders, the expected value as $n \rightarrow \infty$ of $\log_{2}(|S_2(E_{2^n}(a))|)$ is $3$, and hence the geometric mean of $\{|S_2(E_{2^n}(a))|\}_{a \in {\mathbb{F}}_{2^n}^{\times}}$ as $n \rightarrow \infty$ is $2^3 = 8$.
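This prediction can also be read off the tables directly: summing a row of Table \[dist2\] adds $1$ for every pair $(a,k)$ with $2^k \mid \#E_{2^n}(a)$, i.e. it sums ${\operatorname{ord}}_2(\#E_{2^n}(a))$ over $a$, so the row sum divided by $2^n-1$ is the mean height. A quick check (row $n=13$ transcribed by hand) already lands close to the limiting value $3$:

```python
# Row n = 13 of Table [dist2]: T_{2^13}(k) for k = 1,...,13.
row = [8191, 8191, 4095, 2015, 1027, 481, 247, 52, 52, 52, 52, 52, 52]
# Summing the row sums ord_2(#E) over a, so the mean height is the sum over |F_{2^13}^x|.
avg_height = sum(row) / (2**13 - 1)
```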
For Algorithm \[sylow3\], applying Heuristic \[heur\] and the same reasoning as before, the total number of iterations required for all $a \in {\mathbb{F}}_{3^n}^{\times}$ is $$3^{n-1} + 3^{n-2} + \cdots + 3 + 1 \approx 3^n/2.$$ Thus on average this is approximately $1/2$ an iteration per initial element $a$, and incorporating the divisibility by $3$ of all curve orders, the expected value as $n \rightarrow \infty$ of $\log_{3}(|S_3(E_{3^n}(a))|)$ is $3/2$, and hence the geometric mean of $\{|S_3(E_{3^n}(a))|\}_{a \in {\mathbb{F}}_{3^n}^{\times}}$ as $n \rightarrow \infty$ is $3^{3/2} = 3\sqrt{3}$. Exact formula for the average height of Sylow $p$-subgroup trees {#exactformula} ---------------------------------------------------------------- Let $p^n + t$ be an integer in the Hasse interval $I_{p^n} = [p^n + 1 - 2p^{n/2},p^n + 1 + 2p^{n/2}]$, which is assumed to be divisible by $4$ if $p=2$ and divisible by $3$ if $p=3$. Let $N(t)$ be the number of solutions in ${\mathbb{F}}_{p^n}^{\times}$ to $\mathcal{K}_{p^n}(a) = t$. The sum of the heights of the Sylow $p$-subgroup trees, over all $a \in {\mathbb{F}}_{p^n}^{\times}$, is $$\label{exactav} T_{p^n} = \sum_{(p^n+t) \in I_{p^n}} N(t) \cdot \text{ord}_p(p^n+t),$$ and thus the expected value of $\log_{p}(|S_p(E_{p^n}(a))|)$ is $T_{p^n}/(p^n-1)$. The crucial function $N(t)$ in (\[exactav\]) has been evaluated by Katz and Livné in terms of class numbers [@katz]. In particular, let $\alpha = (t-1 + \sqrt{(t-1)^2 - 4p^n})/2$ for $t$ as above. Then $$N(t) = \sum_{\text{orders} \ \mathcal{O}} h(\mathcal{O}),$$ where the sum is over all orders $\mathcal{O} \subset {\mathbb{Q}}(\alpha)$ which contain ${\mathbb{Z}}[\alpha]$. It seems difficult to prove Heuristic \[heur\] or our implied estimates for $T_{p^n}$ using the Katz-Livné result directly. 
However, using a natural decomposition of $T_{p^n}$ and a theorem due to Howe [@howe], in the following section we show that the consequences of Heuristic \[heur\] as derived in §\[heuristic\] are correct. Main result {#mainresult} =========== We now present and prove our main result, which states that the expected value of $\{\log_{p}(|S_p(E_{p^n}(a))|)\}_{a \in {\mathbb{F}}_{p^n}^{\times}}$ is precisely as we derived heuristically in §\[heuristic\]. To facilitate our analysis, for $1 \le k \le n$, we partition $T_{p^n}$ into the counting functions $$\label{igusaprimer} T_{p^n}(k) = \sum_{(p^n + t) \in I_{p^n}, p^k|(p^n + t)} N(t),$$ so that by (\[exactav\]) we have $$\label{partition} T_{p^n} = \sum_{k = 1}^{n} T_{p^n}(k).$$ Indeed, the integers $T_{p^n}(k)$ are simply the $(n,k)$-th entries of Tables \[dist2\] and \[dist3\] for $p=2$ and $3$ respectively, and thus $T_{p^n}$ is the sum of the $n$-th row terms. Hence we already have $T_{2^n}(1) = T_{2^n}(2) = 2^n-1$, $T_{2^n}(3) = 2^{n-1}-1$ and $T_{2^n}(4) = (2^n - (-1 + i)^n - (-1 - i)^n)/4$ by (\[column4\]), and similarly $T_{3^n}(1) = 3^n -1$ and $T_{3^n}(2) = 3^{n-1}-1$. Estimating $T_{p^n}(k)$ ----------------------- For $k \ge 2$, let $\mathcal{T}_{2^n}(k)$ be the set of ${\mathbb{F}}_{2^n}$-isomorphism classes of elliptic curves $E/{\mathbb{F}}_{2^n}$ such that $\#E({\mathbb{F}}_{2^n}) \equiv 0 \pmod{2^k}$. Similarly for $k \ge 1$, let $\mathcal{T}_{3^n}(k)$ be the set of ${\mathbb{F}}_{3^n}$-isomorphism classes of elliptic curves $E/{\mathbb{F}}_{3^n}$ such that $\#E({\mathbb{F}}_{3^n}) \equiv 0 \pmod{3^k}$. Observe that the elliptic curves $E_{2^n}(a)$ and $E_{3^n}(a)$ both have $j$-invariant $1/a$ [@Silverman Appendix A], and hence cover all the $\overline{{\mathbb{F}}}_{2^n}$- and $\overline{{\mathbb{F}}}_{3^n}$-isomorphism classes of elliptic curves over ${\mathbb{F}}_{2^n}$ and ${\mathbb{F}}_{3^n}$ respectively, except for $j=0$. We have the following lemma. 
\[wouter\][@castryck Lemma 6] Let $E/{\mathbb{F}}_q$ be an elliptic curve and let $[E]_{{\mathbb{F}}_q}$ be the set of ${\mathbb{F}}_q$-isomorphism classes of elliptic curves that are $\overline{{\mathbb{F}}}_q$-isomorphic to $E$. Then for $j \ne 0,1728$ we have $\#[E]_{{\mathbb{F}}_q} = 2$, and $[E]_{{\mathbb{F}}_q}$ consists of the ${\mathbb{F}}_q$-isomorphism class of $E$ and the ${\mathbb{F}}_q$-isomorphism class of its quadratic twist $E^t$. Let $\#E_{2^n}(a) = 2^n + 1 - t_a$, with $t_a$ the trace of Frobenius. Since $j \ne 0$, by Lemma \[wouter\] the only other ${\mathbb{F}}_{2^n}$-isomorphism class with $j$-invariant $1/a$ is that of the quadratic twist $E_{2^n}^{t}(a)$, which has order $2^n + 1 + t_a$. Since $t_a \equiv 1 \pmod{4}$, we have $\#E_{2^n}^{t}(a) \equiv 2 \pmod{4}$ and hence none of the ${\mathbb{F}}_{2^n}$-isomorphism classes of the quadratic twists of $E_{2^n}(a)$ for $a \in {\mathbb{F}}_{2^n}^{\times}$ are in $\mathcal{T}_{2^n}(k)$, for $k \ge 2$. By an analogous argument, only the ${\mathbb{F}}_{3^n}$-isomorphism classes of $E_{3^n}(a)$ for $a \in {\mathbb{F}}_{3^n}^{\times}$ are in $\mathcal{T}_{3^n}(k)$, for $k \ge 1$. Furthermore, all curves $E/{\mathbb{F}}_{2^n}$ and $E/{\mathbb{F}}_{3^n}$ with $j=0$ are supersingular [@washington §3.1], and therefore have group orders $\equiv 1 \pmod{4}$ and $\equiv 1 \pmod{3}$ respectively. Hence no ${\mathbb{F}}_{p^n}$-isomorphism classes of curves with $j=0$ are in $\mathcal{T}_{p^n}(k)$ for $p \in \{2,3\}$. As a result, for $2 \le k \le n$ we have $$\label{Tequal} |\mathcal{T}_{2^n}(k)| = T_{2^n}(k),$$ and similarly, for $1 \le k \le n$ we have $$\nonumber |\mathcal{T}_{3^n}(k)| = T_{3^n}(k).$$ Therefore in both cases, a good estimate for $|\mathcal{T}_{p^n}(k)|$ is all we need to estimate $T_{p^n}(k)$. 
The cardinality of $\mathcal{T}_{p^n}(k)$ is naturally related to the study of modular curves; in particular, considering the number of ${\mathbb{F}}_{p^n}$-rational points on the Igusa curve of level $p^k$ allows one to prove Theorem \[maintheorem\] below [@Igusa; @Amilcar]. However, for simplicity (and generality) we use a result due to Howe on the group orders of elliptic curves over finite fields [@howe]. Consider the set $$V({\mathbb{F}}_q;N) = \{ E/{\mathbb{F}}_q: N \mid \#E({\mathbb{F}}_q)\}\big/\cong_{{\mathbb{F}}_q}$$ of equivalence classes of ${\mathbb{F}}_q$-isomorphic curves whose group orders are divisible by $N$. Following Lenstra [@lenstra], rather than estimate $V({\mathbb{F}}_q;N)$ directly, Howe considers the weighted cardinality of $V({\mathbb{F}}_q;N)$, where for a set $S$ of ${\mathbb{F}}_q$-isomorphism classes of elliptic curves over ${\mathbb{F}}_q$, this is defined to be: $$\#'S = \sum_{[E] \in S} \frac{1}{\#\text{Aut}_{{\mathbb{F}}_q}(E)}.$$ For $j \ne 0$ we have $\#\text{Aut}_{\overline{{\mathbb{F}}}_q}(E) = 2$ [@Silverman §[III.10]{}] and since $\{\pm 1\} \subset \text{Aut}_{{\mathbb{F}}_q}(E)$ we have $\#\text{Aut}_{{\mathbb{F}}_q}(E) = 2$ also. Therefore, by the above discussion, for $p=2, k \ge 2$ and $p=3, k \ge 1$ we have $$\label{estimate} |\mathcal{T}_{p^n}(k)| = 2 \cdot \#'V({\mathbb{F}}_{p^n};p^k).$$ We now present Howe’s result. \[howesthm\][@howe Theorem 1.1] There is a constant $C \le 1/12 + 5\sqrt{2}/6 \approx 1.262$ such that the following statement is true: Given a prime power $q$, let $r$ be the multiplicative arithmetic function such that for all primes $l$ and positive integers $a$ $$r(l^a) = \begin{cases} \dfrac{1}{l^{a-1}(l-1)}, & \mbox{if } q \not\equiv 1 \pmod{l^c};\\ \\ \dfrac{l^{b+1}+ l^b-1}{l^{a+b-1}(l^2-1)}, & \mbox{if } q \equiv 1 \pmod{l^c}, \end{cases}$$ where $b = \lfloor a/2 \rfloor$ and $c = \lceil a/2 \rceil$.
Then for all positive integers $N$ one has $$\label{howeformula} \bigg| \frac{\#'V({\mathbb{F}}_q;N)}{q} - r(N) \bigg| \le \frac{C N \rho(N)2^{\nu(N)}}{\sqrt{q}},$$ where $\rho(N) = \prod_{p \mid N}((p+1)/(p-1))$ and $\nu(N)$ denotes the number of distinct prime divisors of $N$. Equipped with Theorem \[howesthm\], we now present and prove our main theorem. \[maintheorem\] Let $p \in \{2,3\}$ and let $T_{p^n}(k)$ be defined as above. Then - For $3 \le k < n/4$ we have $T_{2^n}(k) = 2^{n - k + 2} + O(2^{k+n/2})$, - For $2 \le k < n/4$ we have $T_{3^n}(k) = 3^{n - k + 1} + O(3^{k+n/2})$, - $T_{2^n} = 3 \cdot 2^n + O(n \cdot 2^{3n/4})$, - $T_{3^n} = 3^{n+1}/2 + O(n \cdot 3^{3n/4})$, - $\lim_{n \to \infty} T_{p^n}/(p^n-1) = \begin{cases} 3 \hspace{8mm} \text{if} \ p = 2,\\ 3/2 \hspace{4.5mm} \text{if} \ p=3. \end{cases}$ Furthermore, in $(i)-(iv)$ the implied constants in the $O$-notation are absolute and effectively computable. By equations (\[Tequal\]) and (\[estimate\]), and Theorem \[howesthm\] with $l=p$, for $3 \le k \le n$ we have $$\bigg| \frac{T_{2^n}(k)}{2^{n+1}} - \frac{1}{2^{k-1}} \bigg| \le \frac{C \cdot 2^k\cdot 3 \cdot 2}{2^{n/2}},$$ from which (i) follows immediately. Similarly for $2 \le k \le n$ we have $$\bigg| \frac{T_{3^n}(k)}{2 \cdot 3^{n}} - \frac{1}{3^{k-1}\cdot 2} \bigg| \le \frac{C \cdot 3^k \cdot (4/2) \cdot 2}{3^{n/2}},$$ from which (ii) follows. 
For (iii) we write equation (\[partition\]) as follows: $$\nonumber T_{2^n} = \sum_{k = 1}^{n} T_{2^n}(k) = \sum_{k = 1}^{\lfloor n/4 \rfloor -1} T_{2^n}(k) + \sum_{k =\lfloor n/4 \rfloor}^{n} T_{2^n}(k).$$ Freely applying (i), the first of these two sums equals $$\begin{aligned} \nonumber & & 2^n + (2^n + 2^{n-1} + \cdots + 2^{n - \lfloor n/4 \rfloor + 3}) + O(2^{n/2 + 3} + 2^{n/2 + 4} + \cdots + 2^{n/2 + \lfloor n/4 \rfloor - 1})\\ \nonumber &=& 2^n + 2^{n+1} - 2^{n - \lfloor n/4 \rfloor + 3} + O(2^{n/2 + \lfloor n/4 \rfloor})\\ \nonumber &=& 2^n + 2^{n+1} + O(2^{3n/4}) = 3\cdot 2^n + O(2^{3n/4}).\end{aligned}$$ For the second sum, observe that $p^{k+1} \mid (p^n+t) \Longrightarrow p^k \mid (p^n+t)$ and so $T_{2^n}(k+1) \le T_{2^n}(k)$, which gives $$\sum_{k =\lfloor n/4 \rfloor}^{n} T_{2^n}(k) \le (3n/4 + 2) \cdot T_{2^n}(\lfloor n/4 \rfloor) = O(n \cdot 2^{3n/4}).$$ Combining these two sums one obtains (iii). Part (iv) follows [*mutatis mutandis*]{}, which together with (iii) proves (v). Theorem \[maintheorem\] proves that for $k < n/4$, the distribution of the height function $\log_{p}(|S_p(E_{p^n}(a))|)$ over $a \in {\mathbb{F}}_{p^n}^{\times}$ is approximately geometric. Hence using an argument similar to the above one can prove that asymptotically, the variance is $2$ for $p=2$, and $3/4$ for $p=3$. Our proof also gives an upper bound on the number of Kloosterman zeros. In particular, parts (i) and (ii) imply that $T_{p^n}(k)$ decreases as $k$ increases for $k < n/4$, and hence the number of Kloosterman zeros is $O(p^{3n/4})$. Shparlinski has remarked [@shpar] that this upper bound follows from a result of Niederreiter [@nied], which refines an earlier result due to Katz [@katzM]. The Weil bound intrinsic to Howe’s estimate fails to give any tighter bounds on $|T_{p^n}(k)|$ for $n/4 \le k \le n/2$.
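Part (i) of Theorem \[maintheorem\] can be sanity-checked numerically: for $n = 13$ the main term $2^{n-k+2}$ already tracks the tabulated values $T_{2^{13}}(k)$ from Table \[dist2\] to within a few percent (entries transcribed by hand):

```python
# Main term of Theorem (i): T_{2^n}(k) ~ 2^(n-k+2), against row n = 13 of Table [dist2].
n = 13
table = {3: 4095, 4: 2015, 5: 1027, 6: 481}   # T_{2^13}(k) for k = 3,...,6
rel_err = max(abs(2**(n - k + 2) - t) / t for k, t in table.items())
```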
Finding improved bounds on $|T_{p^n}(k)|$ for $k$ in this interval is an interesting problem, since they would immediately give a better upper bound on the number of Kloosterman zeros. While our proof only required the $l=p$ part of Howe’s result (when we could have used tighter bounds arising from an Igusa curve argument), the more general form, when combined with our approach, allows one to compute the expected height of the Sylow $l$-subgroup trees for $l \ne p$ as well, should this be of interest. Test Efficiency {#compare} =============== We now address the expected efficiency of Algorithms \[sylow2\] and \[sylow3\] when applied to random elements of ${\mathbb{F}}_{2^n}^{\times}$ and ${\mathbb{F}}_{3^n}^{\times}$ respectively. Since the number of Kloosterman zeros is $O(p^{3n/4})$, by choosing random $a \in {\mathbb{F}}_{p^n}^{\times}$ and applying our algorithms, one only has an exponentially small probability of finding a zero. Hence we focus on those $n$ for which such computations are currently practical and do not consider the asymptotic complexity of operations. For comparative purposes we first recall Lisoněk’s randomised Kloosterman zero test [@lisonek]. Lisoněk’s Kloosterman zero test ------------------------------- For a given $a \in {\mathbb{F}}_{p^n}^{\times}$, Lisoněk’s test consists of taking a random point $P \in E_{p^n}(a)$, and computing $[p^n]P$ to see if it is the identity element $\mathcal{O} \in E_{p^n}(a)$. If it is not, then by Lemmas \[lis1\] and \[lis2\] one has certified that the group order is not $p^n$ and thus $a$ is not a Kloosterman zero. If $[p^n]P = \mathcal{O}$ and $[p^{n-1}]P \neq \mathcal{O}$, then $\langle P \rangle = E_{p^n}(a)$ and $a$ is a Kloosterman zero. In this case the probability that a randomly chosen point on the curve is a generator is $1/2$ and $2/3$ for $p=2$ and $p=3$ respectively. The test thus requires $O(n)$ point-doublings/triplings in $E_{2^n}(a)$ and $E_{3^n}(a)$ respectively. 
Algorithm \[sylow2\] for $E_{2^n}(a)$ {#compare2} ------------------------------------- By Theorem \[maintheorem\](v), only one loop iteration of Algorithm \[sylow2\] is required on average. Each such iteration requires computing: a trace; solving (\[half\]); a multiplication; a square root; two additions; and a bit-flip. This process has been extensively studied and optimised for point-halving in characteristic $2$ [@handbook]. In particular, for $n=163$ and $n=233$, point-halving is reported to be over twice as fast as point-doubling [@handbook Table 3]. Hence in this range of $n$, with a state-of-the-art implementation, Algorithm \[sylow2\] is expected to be $\approx 2n$ times faster than Lisoněk’s algorithm (or $\approx n$ times faster if for the latter one checks whether or not $\text{Tr}(a) = 0$ before initiating the point multiplication). For the field ${\mathbb{F}}_{2^{75}} = {\mathbb{F}}_2[t]/(t^{75} + t^6 + t^3 + t + 1)$, using a basic MAGMA V2.16-12 [@magma] implementation of Algorithm \[sylow2\], we found the Kloosterman zero: $$\begin{aligned} a & = & t^{74} + t^{73} + t^{68} + t^{67} + t^{66} + t^{65} + t^{63} + t^{62} + t^{59} + t^{57} + t^{56} + t^{55} + t^{52}\\ & + & t^{44} + t^{43} + t^{41} + t^{40} + t^{39} + t^{38} + t^{37} + t^{36} + t^{35} + t^{34} + t^{31} + t^{30} + t^{29}\\ & + & t^{28} + t^{25} + t^{24} + t^{23} + t^{22} + t^{19} + t^{16} + t^{15} + t^{14} + t^{13} + t^{12} + t^{11} + t^{8}\\ & + & t^{7} + t^{6} + t^5 + t^4 + t^3 + t^2 + t,\end{aligned}$$ in 18 hours using eight AMD Opteron 6128 processors each running at 2.0 GHz. Because MAGMA is general-purpose and lacks a built-in point-halving function, the above implementation has efficiency comparable to a full point multiplication by $2^{75}$ on $E_{p^n}(a)$, i.e., Lisoněk’s algorithm.
However, using a dedicated implementation as in [@handbook] for both point-doubling and point-halving, one would expect Algorithm \[sylow2\] to be more than $150$ times faster than Lisoněk’s algorithm (or more than $75$ times faster with an initial trace check). Since point-doubling for the dedicated implementation is naturally much faster than MAGMA’s, the above time could be reduced significantly, and Kloosterman zeros for larger fields could also be found, if required. The $O(n)$ factor speedup is due to the fundamental difference between Lisoněk’s algorithm and our approach; while Lisoněk’s algorithm traverses the hypothetically-of-order-$p^n$ Sylow $p$-subgroup tree from leaf to root, we instead calculate its exact height from root to leaf, which on average is $3$ and thus requires an expected single point-halving. Algorithm \[sylow3\] for $E_{3^n}(a)$ {#compare3} ------------------------------------- Due to the presence of inversions and square-root computations, one expects each loop iteration of Algorithm \[sylow3\] to be slower than each loop iteration of Algorithm \[sylow2\]. Indeed our basic MAGMA implementation of Algorithm \[sylow3\] for curves defined over ${\mathbb{F}}_{3^{47}}$ runs $\approx 3.5$ times slower than our implementation of Algorithm \[sylow2\] for curves defined over ${\mathbb{F}}_{2^{75}}$. However the MAGMA implementation is $\approx 15$ times faster than Lisoněk’s algorithm in this case (or $5$ times faster if a trace check is first performed).
For the field ${\mathbb{F}}_{3^{47}} = {\mathbb{F}}_3[t]/(t^{47} -t^4 - t^2 - t + 1)$, using our MAGMA implementation of Algorithm \[sylow3\], we found the Kloosterman zero: $$\begin{aligned} a & = & t^{46} + t^{45} - t^{44} - t^{42} + t^{39} - t^{38} - t^{36} - t^{35} - t^{33} - t^{31} - t^{30} + t^{29} + t^{28}\\ & + & t^{26} + t^{25} - t^{24} - t^{22} - t^{21} + t^{20} - t^{19} - t^{17} + t^{16} - t^{15} + t^{14} + t^{13} - t^{11}\\ & + & t^{10} - t^9 - t^7 + t^6 + t^5 + t^4 -t^2 + 1,\end{aligned}$$ in 126 hours, again using eight AMD Opteron 6128 processors running at 2.0 GHz. Improving our basic approach will require research into representational, algorithmic and implementation optimisations. It may be possible for instance to improve the underlying point-thirding algorithm by using alternative representations of the curve, or the points, or both. For example, one may instead use the Hessian form [@chudnovsky] of $E_{3^n}(a)$: $$H_{3^n}(\bar{a}): \bar{x}^3 + \bar{y}^3 + 1 = \bar{a}\bar{x}\bar{y},$$ where $\bar{a} = a^{-1/3}$, $\bar{x} = -a^{1/3}(x+y)$ and $\bar{y} = a^{1/3}(y-x)$, and an associated tripling formula, see for example [@hisil §3]. Could point-thirding in this form be faster than that described for the Weierstrass form in Algorithm \[sylow3\]? Also, is there an analogue of the $\lambda$-representation of a point [@knudsen; @schroeppel] that permits more efficient point-tripling, and hence point-thirding? We leave as an interesting practical problem the development of efficient point-thirding algorithms and implementations for ternary field elliptic curves with non-zero $j$-invariant. Concluding remarks {#conc} ================== We have presented an efficient deterministic algorithm which tests whether or not an element of ${\mathbb{F}}_{2^n}^{\times}$ or ${\mathbb{F}}_{3^n}^{\times}$ is a Kloosterman zero, and have rigorously analysed its expected runtime. Our analysis also gives an upper bound on the number of Kloosterman zeros.
By repeatedly applying our algorithm on random field elements, we obtain the fastest probabilistic method to date for finding Kloosterman zeros, which for ${\mathbb{F}}_{2^n}$ is $O(n)$ times faster than the previous best method, for $n$ in the practical range. Since this method of finding a Kloosterman zero is still exponential in $n$, it remains an important open problem to compute Kloosterman zeros efficiently. Acknowledgements {#acknowledgements .unnumbered} ================ The authors wish to thank Faruk Göloğlu and Alexey Zaytsev for useful discussions, and the reviewers for their comments. [^1]: Both authors are supported by the Claude Shannon Institute, Science Foundation Ireland Grant No. 06/MI/006. [^2]: The worst case being $n$ iterations, which of course is the best case when searching for a Kloosterman zero.
--- abstract: 'We prove under suitable hypotheses that convergence of integral varifolds implies convergence of associated mod $2$ flat chains and subsequential convergence of associated integer-multiplicity rectifiable currents. The convergence results imply restrictions on the kinds of singularities that can occur in mean curvature flow.' author: - Brian White date: 'May 13, 2008. Revised November 8, 2008' title: | Currents and Flat Chains\ Associated to Varifolds, with an\ Application to Mean Curvature Flow --- [^1] Introduction {#section:intro} ============ Let $U$ be an open subset of ${\mathbf{R}}^N$. Let ${{\mathcal L}_{\text{$m$-rec}}}(U, {\mathbf{Z}}^+)$ denote the space of functions on $U$ that take values in the nonnegative integers, that are locally ${\mathcal L}^1$ with respect to Hausdorff $m$-dimensional measure on $U$, and that vanish except on a countable disjoint union of $m$-dimensional $C^1$ submanifolds of $U$. We identify functions that agree except on a set of Hausdorff $m$-dimensional measure zero. Let ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}_2)$ be the corresponding space with the nonnegative integers ${\mathbf{Z}}^+$ replaced by ${\mathbf{Z}}_2$, the integers mod $2$. The space of $m$-dimensional integral varifolds in $U$ is naturally isomorphic to ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}^+)$: given any such varifold $V$, the corresponding function is the density function $\Theta(V,\cdot)$ given by $$\Theta(V,x) = \lim_{r\to 0}\frac{\mu_V({\mathbf{B}}(x,r))}{\omega_mr^m}$$ where $\mu_V$ is the Radon measure on $U$ determined by $V$ and $\omega_m$ is the volume of the unit ball in ${\mathbf{R}}^m$. In particular, this limit exists and is a nonnegative integer for ${\mathcal H}^m$-almost every $x\in U$.
Similarly, the space of $m$-dimensional rectifiable mod $2$ flat chains in $U$ is naturally isomorphic to ${{\mathcal L}_{\text{$m$-rec}}}(U, {\mathbf{Z}}_2)$: given any such flat chain $A$, the corresponding function is the density function $\Theta(A,\cdot)$ given by $$\Theta(A,x) = \lim_{r\to 0}\frac{\mu_A({\mathbf{B}}(x,r))}{\omega_mr^m} = \lim_{r\to 0}\frac{M(A\cap {\mathbf{B}}(x,r))}{\omega_mr^m}$$ where $\mu_A$ is the radon measure on $U$ determined by $A$. In particular, this limit exists and is $0$ or $1$ for ${\mathcal H}^m$-almost every $x\in U$. The surjective homomorphism $$\begin{aligned} [\cdot]: \, &{\mathbf{Z}}^+ \to {\mathbf{Z}}_2 \\ &k \mapsto [k]\end{aligned}$$ determines a homomorphism from ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}^+)$ to ${{\mathcal L}_{\text{$m$-rec}}}(U, {\mathbf{Z}}_2)$, and thus also a homomorphism from the additive semigroup of integral varifolds in $U$ to the additive group of rectifiable mod $2$ flat chains in $U$. If $V$ is such a varifold, we let $[V]$ denote the corresponding rectifiable mod $2$ flat chain. Thus $[V]$ is the unique rectifiable mod $2$ flat chain in $U$ such that $$\Theta([V], x) = [ \Theta(V,x)]$$ for ${\mathcal H}^m$-almost every $x\in U$. Although in some ways integral varifolds and rectifiable mod $2$ flat chains are similar, the notions of convergence are quite different. Typically (and throughout this paper) convergence of varifolds means weak convergence as radon measures on $U\times {\rm G}_m({\mathbf{R}}^N)$ (where ${\rm G}_m({\mathbf{R}}^N)$ is the set of $m$-dimensional linear subspaces of ${\mathbf{R}}^N$), and convergence of flat chains means convergence with respect to the flat topology (see Section \[s:Appendix\]). A sequence $V(i)$ of integral varifolds may converge even though the associated flat chains $[V(i)]$ do not converge. Similarly, the flat chains $[V(i)]$ may converge even though the varifolds $V(i)$ do not. 
Furthermore, the $V(i)$ and $[V(i)]$ may converge to limits $V$ and $A$, respectively, with $A\ne [V]$. See Section \[s:examples\] for examples. This paper identifies an important situation in which convergence of integral varifolds implies convergence of the corresponding mod $2$ flat chains to the expected limit. In practice, one often proves existence of convergent sequences of integral varifolds by appealing to Allard’s compactness theorem (described in Section \[s:main\] below). Here we prove that if a sequence of integral varifolds with limit $V$ satisfies the hypotheses of Allard’s compactness theorem plus one additional hypothesis, then the corresponding mod $2$ flat chains converge to $[V]$: \[Intro:Mod2Theorem\] Let $V(i)$ be a sequence of $m$-dimensional integral varifolds in an open set $U$ of ${\mathbf{R}}^N$ that converges to a limit $V$. Suppose that 1. The $V(i)$ satisfy the hypotheses of Allard’s compactness theorem for integral varifolds, and 2. The boundaries $\partial [V(i)]$ of the mod $2$ flat chains $[V(i)]$ converge in the flat topology. Then the chains $[V(i)]$ converge in the flat topology to $[V]$. I do not know whether hypothesis (2) is really necessary. There is an analogous theorem with rectifiable currents in place of mod $2$ flat chains. Suppose $A$ is an $m$-dimensional integer-multiplicity rectifiable current in $U$ and that $V$ is an $m$-dimensional integral varifold in $U$. Recall that $A$ determines an integral varifold ${\bold v}(A)$ by forgetting orientations [@SimonBook]\*[§27]{}. We say that $A$ and $V$ are [**compatible**]{} provided $$V = {\bold v}(A) + 2 W$$ for some integral varifold $W$ in $U$. Thus $A$ and $V$ are compatible if and only if they determine the same mod $2$ rectifiable chain. Equivalently, $A$ and $V$ are compatible provided $$\Theta(V,x) - \Theta(A,x)$$ is a nonnegative, even integer for ${\mathcal H}^m$-almost every $x\in U$. 
The analog of Theorem \[Intro:Mod2Theorem\] for integer-multiplicity currents is the following: \[Intro:IntegerTheorem\] Let $V(i)$ and $A(i)$ be sequences of $m$-dimensional integral varifolds and integer-multiplicity currents, respectively, in $U$, such that $V(i)$ and $A(i)$ are compatible for each $i$. Suppose the $V(i)$ satisfy the hypotheses of Allard’s compactness theorem for integral varifolds. Suppose also that the boundaries $\partial A(i)$ converge (in the integral flat topology) to a limit current. Then there is a subsequence $i(k)$ such that the $V(i(k))$ converge to an integral varifold $V$, the $A(i(k))$ converge to a limit integer-multiplicity current $A$, and such that $A$ and $V$ are compatible. The existence of a subsequence for which the limits $V$ and $A$ exist follows immediately from Allard’s compactness theorem for integral varifolds and from the Federer-Fleming compactness theorem for integer-multiplicity currents. What is new here is the compatibility of the limits $A$ and $V$. Preliminaries ============= Terminology {#s:terminology} ----------- For mod $2$ flat chains, see Fleming’s original paper [@Fleming] or, for a different approach, Federer’s book [@FedererBook]\*[§4.2.26]{}. Unfortunately (for the purposes of this paper), a multiplicity $[1]$ plane does not qualify as a mod $2$ flat chain under either definition[^2]. By contrast, a multiplicity $1$ plane does qualify as an integral varifold. Thus in order for the map $V\mapsto [V]$ (as described in the introduction) to be a homomorphism from integral varifolds to mod $2$ flat chains, one must either restrict the class of varifolds or enlarge the class of flat chains. If one prefers to restrict, then one should (throughout this paper) replace “varifold” by “compactly supported varifold" and “flat chain" by “compactly supported flat chain". (Federer’s flat chains are automatically compactly supported, but Fleming’s need not be.) 
Likewise ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}^+)$ and ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}_2)$ should be replaced by the subsets consisting of compactly supported functions. In particular, the main theorem, Theorem \[Mod2CompatibilityTheorem\], remains true if one makes those replacements. However, in this paper we have chosen instead to enlarge the class of flat chains. Fortunately, only a slight modification in Fleming’s definition (or Federer’s) is required to produce the “correct” class of flat chains. (Flat chains so defined would, in the terminology of [@FedererBook], be called “locally flat chains." However, although locally flat chains over the integers are briefly mentioned in [@FedererBook] (in Section 4.1.24), the mod $2$ versions are not.) See Section \[s:Appendix\] for the required modification. When the coefficient group is the integers (with the standard metric), the “correct” class of flat chains is defined in [@SimonBook], and the rectifiability and compactness theorems are proved there. Notation {#s:notation} -------- Suppose $M$ is a Borel subset of a properly embedded $m$-dimensional $C^1$ submanifold of $U$, or of a countable union of such manifolds. If $M$ has locally finite ${\mathcal H}^m$ measure, we let $[M]$ denote the mod $2$ flat chain associated to $M$ and ${\bold v}(M)$ denote the integral varifold associated to $M$. More generally, if $f: M\to {\mathbf{Z}}^+$ is a function such that the extension $$\begin{aligned} &F: U\to {\mathbf{Z}}^+ \\ &F(x) = \begin{cases} f(x) &\text{if $x\in M$} \\ 0 &\text{if $x\in U\setminus M$} \end{cases}\end{aligned}$$ is in ${{\mathcal L}_{\text{$m$-rec}}}(U,{\mathbf{Z}}^+)$, then we let ${\bold v}(M,f)$ be the integral varifold in $U$ corresponding to $F$. Push-forwards ------------- Suppose that $V$ is an integral varifold in $U$ and that $\phi:U\to W$ is a $C^1$ map that is proper on $U\cap{\operatorname{spt}}(\mu_V)$.
Then the push-forward $\phi_\#V$ is also an integral varifold in $W$ and it satisfies $$\label{e:VarifoldPushForward} \Theta(\phi_\#V,y) = \sum_{\phi(x)=y} \Theta(V,x)$$ for ${\mathcal H}^m$-almost every $y\in W$. Similarly, if $A$ is a rectifiable mod $2$ flat chain in $U$ and if $\phi:U\to W$ is locally lipschitz on $U\cap{\operatorname{spt}}{\mu_A}$, then the image chain $\phi_\#A$ satisfies $$\label{e:ChainPushForward} [\Theta(\phi_\#A,y)] = \sum_{\phi(x)=y}{}[\Theta(A,x)] $$ for ${\mathcal H}^m$-almost every $y\in W$. Note that this determines $\Theta(\phi_\#A,y)$ for ${\mathcal H}^m$-almost every $y$ since its value is $0$ or $1$ almost everywhere. In other words, for ${\mathcal H}^m$-almost every $y\in W$, $$\label{e:ChainPushForwardCases} \Theta(\phi_\#A,y) = \begin{cases} 1, &\text{if $\sum_{\phi(x)=y}\Theta(A,x)$ is odd, and} \\ 0, &\text{if the sum is even.} \end{cases}$$ Together \[e:VarifoldPushForward\] and \[e:ChainPushForward\] imply that $$\phi_\#[V] = [\phi_\#V].$$ We shall need push-forwards only in the special cases where $\phi$ is a dilation or an affine projection. Examples {#s:examples} -------- Although they are not needed in this paper, some examples illustrating the differences between flat chain convergence and varifold convergence may be instructive. First, consider a sequence of smooth, simple closed curves $C_i$ lying in a compact region of ${\mathbf{R}}^2$ such that the lengths tend to infinity but the enclosed areas tend to $0$. Let $V_i={\bold v}(C_i)$ be the corresponding one-dimensional integral varifolds. Then the varifolds $V_i$ do not converge, but the corresponding mod $2$ flat chains $[V_i]$ converge to $0$. Next let $$\label{e:OddIntervals} J_n = \cup \left\{ \left[ \frac{k}{2n}, \frac{k+1}{2n} \right] : \text{$k$ odd, $0<k<2n$} \right\}$$ and let $$S_n = J_n \times \{0, 1/(2n)\} \subset {\mathbf{R}}^2.$$ Thus $S_n$ consists of $2n$ horizontal intervals, each of length $1/(2n)$. Let $V_n={\bold v}(S_n)$ be the corresponding integral varifold.
Then the $V_n$ converge to ${\bold v}(I)$, where $$\label{e:interval} I = \{ (x,0): 0\le x \le 1\}.$$ However, the corresponding mod $2$ flat chains $[V_n]$ do not converge. To see this, suppose to the contrary that the $[V_n]$ converge to a limit chain $T$. Let $f, g:{\mathbf{R}}^2\to{\mathbf{R}}$ be the projections given by $f(x,y)=x$ and $g(x,y)=x-y$. Then $f_\#[V_n]=0$ and $g_\#[V_n]=[[0,1]]$. Passing to the limit, we get $$\label{e:projections} f_\#T=0, \qquad g_\#T = [[0,1]].$$ However, $T$ is clearly supported in $I$ and $f|I=g|I$, so $f_\#T=g_\#T$ (by \[e:ChainPushForward\]), contradicting \[e:projections\]. This proves that the $[V_n]$ do not converge. For a final example, let $$Q_n = J_n \times [0, (1/n^2)]$$ where $J_n$ is given by \[e:OddIntervals\]. Thus $Q_n$ is the union of $n$ closed rectangles, each with base $1/(2n)$ and height $1/(n^2)$. Let $V_n$ be the one-dimensional varifold associated to the set-theoretic boundary of $Q_n$: $V_n={\bold v}(\partial Q_n)$. Then the $V_n$ converge to $V={\bold v}(I)$, where $I$ is given by \[e:interval\], but the flat chains $[V_n]$ converge to $0$ since the area of $Q_n$ tends to $0$. Thus the varifolds $V_n$ converge to $V$ and the chains $[V_n]$ converge to $0$, but $[V]\ne 0$. Proofs of the Main Results {#s:main} ========================== Let $V(i)$ be a sequence of $m$-dimensional varifolds in an open subset $U$ of ${\mathbf{R}}^N$. If the $V(i)$ converge to a varifold $V$, then of course $$\label{e:uniformlyboundedmasses} \text{$ \limsup \mu_{V(i)} W < \infty$ for all $W\subset\subset U$.}$$ Conversely, if \[e:uniformlyboundedmasses\] holds, then the $V(i)$ have a convergent subsequence (by the compactness theorem for radon measures). \[NiceDefinition\] Suppose that $V(i)$, $i=1,2,3,\dots$, and $V$ are $m$-dimensional varifolds in an open subset $U$ of ${\mathbf{R}}^N$.
In this paper, we will say that $V(i)$ converges [**with locally bounded first variation**]{} to $V$ provided $V(i)\to V$ as varifolds and $$\label{e:NicenessBound} \limsup_{i\to\infty} \| \delta V(i)\| (W) < \infty$$ for every $W\subset\subset U$. To understand the definition, the reader may find it helpful to recall that if $V$ is the multiplicity $1$ varifold associated to a smooth, embedded manifold-with-boundary $M$, then $$\| \delta V \| (W) = {\mathcal H}^{m-1}(W\cap \partial M) + \int_{M\cap W} |H(x)|\,d{\mathcal H}^mx$$ where $H(x)$ is the mean curvature vector of $M$ at $x$. Thus for a sequence $V(i)$ of such integral varifolds, the condition \[e:NicenessBound\] means that the areas of the boundaries and the $L^1$ norms of the mean curvature are uniformly bounded on compact subsets of $U$. (See [@AllardFirstVariation] or [@SimonBook]\*[§39]{} for the general definition of $\|\delta V\|$.) The following closure theorem of Allard ([@AllardFirstVariation]\*[6.4]{} or [@SimonBook]\*[§42.8]{}) is one of the key results in the theory of varifolds: \[AllardClosureTheorem\] If $V(i)$ is a sequence of integral varifolds that converges with locally bounded first variation to $V$, then $V$ is also an integral varifold. Here we prove: \[Mod2CompatibilityTheorem\] Suppose $V(i)$ is a sequence of integral varifolds that converge with locally bounded first variation to an integral varifold $V$. If the boundaries $\partial [V(i)]$ converge (as mod $2$ flat chains) to a limit chain $\Gamma$, then $$[V(i)] \to [V]$$ and therefore $\partial [V] = \Gamma$. The last assertion ($\partial [V]=\Gamma$) follows because the boundary operator is continuous with respect to flat convergence. The result is already interesting in the case where $\partial [V(i)] =0$ for all $i$.
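To fix ideas about the quantity $\|\delta V\|$ appearing above, here are two standard special cases (routine computations, included only for orientation). For the flat unit disk $D=\{x^2+y^2\le 1\}\times\{0\}\subset{\mathbf{R}}^3$ one has $H\equiv 0$, so $$\|\delta\,{\bold v}(D)\|({\mathbf{R}}^3) = {\mathcal H}^{1}(\partial D) = 2\pi,$$ while the round sphere $S_r\subset{\mathbf{R}}^3$ of radius $r$ has no boundary and $|H|\equiv 2/r$, so $$\|\delta\,{\bold v}(S_r)\|({\mathbf{R}}^3) = \frac{2}{r}\,{\mathcal H}^2(S_r) = \frac{2}{r}\cdot 4\pi r^2 = 8\pi r.$$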
Since $V$ is rectifiable, there is a countable union $\cup{\mathcal M}$ of $m$-dimensional $C^1$ embedded manifolds such that $$\mu_V(U\setminus \cup{\mathcal M}) = 0.$$ Without loss of generality, we may assume that the manifolds in ${\mathcal M}$ are disjoint. By the compactness theorem for flat chains of locally finite mass (see Theorem \[CompactnessTheorem\]), a subsequence of the $[V(i)]$ will converge to such a flat chain $A$. (Here and throughout the proof, “flat chain” means “mod $2$ flat chain".) Using the rectifiability theorem (see Theorem \[RectifiabilityTheorem\]), we can conclude that $A$ is rectifiable. We remark that here one may prove rectifiability of $A$ directly (without invoking Theorem \[RectifiabilityTheorem\]). One sees this as follows. By the lower-semicontinuity of mass with respect to flat convergence, the inequality $$\mu_{[V(i)]} \le \mu_{V(i)}$$ implies that $$\label{e:MeasureInequality} \mu_A \le \mu_V$$ and therefore that $$\label{e:ConcentrationOnM} \mu_A({\mathbf{R}}^N\setminus \cup{\mathcal M}) \le \mu_V({\mathbf{R}}^N\setminus \cup{\mathcal M}) = 0.$$ Hence $A$ is rectifiable. To show that $A=[V]$, it suffices by \[e:MeasureInequality\] to show that $$\label{e:DensityInequality} \text{$\Theta(\mu_V,x) - \Theta(\mu_A,x)$ is an even integer}$$ for $\mu_V$-almost every $x\in U$. By \[e:ConcentrationOnM\], it suffices to show that \[e:DensityInequality\] holds for $\mu_V$-almost every $x\in \cup{\mathcal M}$. For ${\mathcal H}^m$-almost every $x\in \cup{\mathcal M}$ (and therefore in particular for $\mu_V$-almost every $x\in\cup{\mathcal M}$) we have: $$\label{e:FirstBlowUp} \begin{split} &\eta_{x,\lambda\#}V \to \Theta(V,x) {\bold v}(P), \\ &\eta_{x,\lambda\#}A \to \Theta(A,x) [P] \end{split}$$ as $\lambda\to 0$, where $P$ is the tangent plane at $x$ to the unique $M\in {\mathcal M}$ that contains $x$.
Here $\eta_{x,\lambda}:{\mathbf{R}}^N\to{\mathbf{R}}^N$ is translation by $-x$ followed by dilation by $1/\lambda$: $$\eta_{x,\lambda}(y) = \frac1{\lambda}(y-x).$$ The proof of Lemma 42.9 in [@SimonBook] shows that $\mu_V$-almost every $x$ has an additional property, namely $$\label{e:ControlledBoundaryDensity} \text{ $\liminf_i \| \delta V(i)\| {\mathbf{B}}(x,r) \le cr^m$ for all $r\in (0,1)$}$$ where $c=c(x)<\infty$. We will complete the proof by showing that if $x$ has properties \[e:FirstBlowUp\] and \[e:ControlledBoundaryDensity\], then $\Theta(V,x)$ and $\Theta(A,x)$ differ by an even integer. For each fixed $\lambda$, $$\label{e:FixedLambda} \begin{split} &\eta_{x,\lambda\#}V(i) \to \eta_{x,\lambda\#}V \\ &\eta_{x,\lambda\#}[V(i)] \to \eta_{x,\lambda\#}A. \end{split}$$ Thus a standard diagonal argument (applied to \[e:FirstBlowUp\] and \[e:FixedLambda\]) shows that there is a sequence $\lambda(i)\to 0$ such that $$\label{e:PairConvergence} \begin{split} &\tilde V(i) \to \Theta(V,x) {\bold v}(P) \\ &[\tilde V(i)] \to \Theta(A,x) [P] \end{split}$$ where $$\tilde V(i) = \eta_{x, \lambda(i)\#}V(i).$$ (One does not need to pass to a subsequence to achieve this. Rather, one simply chooses the $\lambda(i)$’s to go to $0$ sufficiently slowly.) Note that $\|\delta V(i)\|{\mathbf{B}}(0,r)$ scales like $r^{m-1}$. Thus \[e:ControlledBoundaryDensity\] implies that for each $r$, $$\liminf_{i\to \infty} \| \delta \tilde V(i) \| {\mathbf{B}}(0,r) = 0.$$ By passing to a further subsequence, we can assume that the liminf is in fact a limit, so that $$\label{e:VanishingBoundary} \| \delta \tilde V(i) \| \to 0$$ as radon measures. (For example, one can choose $i_1 < i_2 < i_3 <\dots$ so that $$\| \delta \tilde V(i_k) \| {\mathbf{B}}(0,k) < 1/k$$ for each $k$.) Thus we will be done if we can show that \[e:PairConvergence\] and \[e:VanishingBoundary\] imply that $\Theta(V,x)-\Theta(A,x)$ is an even integer. That is, we have reduced the theorem to the special case described in the following lemma. \[Lemma\] Suppose 1.
\[LemmaHypothesis:VarifoldConvergence\] A sequence $V(i)$ of integral varifolds converges to the varifold $V=n{\bold v}(P)$, where $n$ is a nonnegative integer and $P$ is an $m$-dimensional linear subspace of ${\mathbf{R}}^N$. 2. \[LemmaHypothesis:VanishingBoundary\] The radon measures $\| \delta V(i) \|$ converge to $0$. 3. \[LemmaHypothesis:FlatConvergence\] The associated mod $2$ flat chains $[V(i)]$ converge to $A= a [P]$, where $a\in {\mathbf{Z}}_2$. Then $a=[n]$. We may assume that $P={\mathbf{R}}^m\times (0)^{N-m}\subset {\mathbf{R}}^N$. Let $$\pi: {\mathbf{R}}^N \cong {\mathbf{R}}^m\times {\mathbf{R}}^{N-m} \to {\mathbf{R}}^m$$ be the orthogonal projection map. Hypothesis (\[LemmaHypothesis:FlatConvergence\]) implies that for almost every $R>0$, $$\label{e:alimit1} [V(i)] \llcorner {\mathbf{B}}^N(0,R) \to a[P] \cap {\mathbf{B}}^N(0,R) = a[P\cap {\mathbf{B}}^N(0,R)].$$ We can assume that this is the case for $R=1$. (Otherwise dilate by $1/R$.) We write ${\mathbf{B}}$ for ${\mathbf{B}}^N(0,R)={\mathbf{B}}^N(0,1)$. Let $W(i)=V(i)\llcorner {\mathbf{B}}$. By \[e:alimit1\], $$\label{e:alimit2} [\pi_\#W(i)] = \pi_\#[W(i)] \to a[{\mathbf{B}}^m],$$ where ${\mathbf{B}}^m={\mathbf{B}}^m(0,1)$. Also, $$W(i) \to V \llcorner {\mathbf{B}}= n {\bold v}(P\cap {\mathbf{B}})$$ and therefore $$\label{e:piW} \pi_\#W(i) \to n {\bold v}({\mathbf{B}}^m).$$ Note that $$\label{e:piWandtheta} \pi_\#W(i) = {\bold v}({\mathbf{B}}^m,\theta_i)$$ where $$\theta_i(x) = \sum_{y\in {\mathbf{B}}\cap \pi^{-1}x} \Theta(W(i),y).$$ From hypotheses \[LemmaHypothesis:VarifoldConvergence\] and \[LemmaHypothesis:VanishingBoundary\], it follows that $$\label{e:Qsmall} {\mathcal L}^m Q_i \to 0$$ where $$\label{e:Qdef} Q_i = \{ x\in {\mathbf{B}}^m: \theta_i(x) \ne n \}.$$ (This is a very nontrivial fact. Indeed, it is a key part of the proof given in [@SimonBook] of the closure theorem for integral varifolds. See Remark \[remark\] below for a more detailed discussion.)
Now $$[\pi_\#W(i)] = [ \{x\in {\mathbf{B}}^m: \text{$ \theta_i(x)$ is odd} \}].$$ Thus $$[\pi_\#W(i)] - [ n {\bold v}({\mathbf{B}}^m) ] = [ \{x\in {\mathbf{B}}^m: \text{$ \theta_i(x) - n$ is odd} \}],$$ and so (by \[e:Qsmall\] and \[e:Qdef\]) $$M( [\pi_\#W(i)] - [ n{\bold v}({\mathbf{B}}^m)] ) \le {\mathcal L}^m(Q_i) \to 0.$$ Consequently, $$[\pi_\#W(i)] \to [ n{\bold v}({\mathbf{B}}^m)].$$ This together with \[e:alimit2\] implies that $a[{\mathbf{B}}^m] = [ n {\bold v}({\mathbf{B}}^m)]$ and thus that $a=[n]$. \[remark\] Here we elaborate on statement \[e:Qsmall\] of the proof above, because it may not be immediately apparent to one who reads [@SimonBook] that the lemma we cite (Lemma 42.9) does actually justify that step. Note that $$\label{e:thetaintegral} \int_{{\mathbf{B}}^m}\theta_i \to n {\mathcal L}^m({\mathbf{B}}^m)$$ by \[e:piW\] and \[e:piWandtheta\]. Let ${\epsilon}>0$. Write $$\label{e:FplusG} \theta_i(x) = F_{i,{\epsilon}}(x) + G_{i,{\epsilon}}(x)$$ where $$F_{i,{\epsilon}}(x) = \sum \left\{ \Theta(W(i),y): y\in {\mathbf{B}}\cap \pi^{-1}(x),\, \operatorname{dist}(y,P)<{\epsilon}\right\}$$ and $$G_{i,{\epsilon}}(x) = \sum \left\{ \Theta(W(i),y) : y\in {\mathbf{B}}\cap \pi^{-1}(x), \, \operatorname{dist}(y,P)\ge {\epsilon}\right\}.$$ Now $$\label{e:Gto0} \int G_{i,{\epsilon}} \to 0$$ since $W(i)\to n {\bold v}(P)$. This together with \[e:thetaintegral\] implies that $$\label{e:Fintegral} \int F_{i,{\epsilon}} \to n {\mathcal L}^m({\mathbf{B}}^m).$$ According to [@SimonBook]\*[Lemma 42.9]{}, $$\label{e:Slemma} \limsup_{i\to\infty} \int_{{\mathbf{B}}^m} (F_{i,{\epsilon}} - n)^+\,d{\mathcal L}^m \le \omega({\epsilon})$$ for some function $\omega(\cdot)$ such that $\omega({\epsilon})\to 0$ as ${\epsilon}\to 0$. (Note: there is a mistake in the statement of [@SimonBook]\*[Lemma 42.9]{}: instead of \[e:Slemma\], it asserts the weaker inequality $$\limsup_{i\to\infty} {\mathcal L}^m\{ x\in {\mathbf{B}}^m: F_{i,{\epsilon}}(x)>n \} \le \omega({\epsilon}).$$ However, the proof of [@SimonBook]\*[Lemma 42.9]{} establishes the stronger statement \[e:Slemma\].
Indeed the stronger statement is essential in the proof of Allard’s integrality theorem [@SimonBook]\*[§42.8]{}. In particular, the stronger statement is used in line (8) of that proof.) From \[e:Fintegral\] and \[e:Slemma\], we see that $$\limsup_{i\to\infty} \int_{{\mathbf{B}}^m}|F_{i,{\epsilon}} - n |\,d{\mathcal L}^m \le \omega({\epsilon}).$$ This together with \[e:FplusG\] and \[e:Gto0\] implies that $$\label{e:penultimate} \limsup_{i\to\infty}\int_{{\mathbf{B}}^m} | \theta_i - n |\,d{\mathcal L}^m \le \omega({\epsilon}).$$ Letting ${\epsilon}\to 0$ gives $$\limsup_{i\to\infty} \int_{{\mathbf{B}}^m} | \theta_i - n|\,d{\mathcal L}^m = 0$$ and thus (since $\theta_i$ is integer-valued) $$\lim_{i\to\infty} {\mathcal L}^m \{ x\in {\mathbf{B}}^m : \theta_i(x)\ne n \} = 0.$$ \[IntegerCompatibilityTheorem\] Suppose $V(i)$ is a sequence of integral varifolds that converge with locally bounded first variation to an integral varifold $V$. Suppose $A(i)$ is a sequence of integer-multiplicity rectifiable currents such that $V(i)$ and $A(i)$ are compatible. If the boundaries $\partial A(i)$ converge (in the integral flat topology) to a limit integral flat chain $\Gamma$, then there is a subsequence $i(k)$ such that the $A(i(k))$ converge to an integer-multiplicity rectifiable current $A$. Furthermore, $V$ and $A$ must then be compatible, and $\partial A$ must equal $\Gamma$. The proof is exactly analogous to the proof of Theorem \[Mod2CompatibilityTheorem\]. Alternatively, one can argue as follows. The existence of a subsequence $A(i(k))$ that converges to an integer-multiplicity rectifiable current $A$ follows from the compactness theorem for such currents (see Theorems \[CompactnessTheorem\] and \[RectifiabilityTheorem\]). The “furthermore” statement then follows immediately from Theorem \[Mod2CompatibilityTheorem\], together with the observation that an integral varifold and an integer-multiplicity rectifiable current are compatible if and only if they determine the same mod $2$ rectifiable flat chain.
Application to Mean Curvature Flow ================================== Here we show how the results of this paper rule out certain kinds of singularities in mean curvature flows. In another paper, we will use similar arguments to prove, under mild hypotheses, boundary regularity at all times for hypersurfaces moving by mean curvature. On both theoretical and experimental grounds, grain boundaries in certain annealing metals are believed to move by mean curvature flow [@BrakkeBook]\*[Appendix A]{}. In such metals, one typically sees triple junctions where three smooth surfaces come together at equal angles along a smooth curve. Of course one also sees such triple junctions in soap films, which are equilibrium solutions to mean curvature flow. Consider the following question: can an initially smooth surface evolve under mean curvature flow so as later to develop triple junction type singularities? More generally, can such a surface have as a blow-up flow (i.e., a limit of parabolic blow-ups) a static configuration of $k$ half-planes (counting multiplicity) meeting along a common edge? Using Theorem \[Mod2CompatibilityTheorem\], we can (for a suitable formulation of mean curvature flow) prove that the answer is “no” if $k$ is odd. (Suppose ${\mathcal M}$ is a Brakke flow, $X_i$ is a sequence of spacetime points converging to $X=(x,t)$ with $t>0$, and $\lambda_i$ is a sequence of numbers tending to infinity. Translate ${\mathcal M}$ in spacetime by $-X_i$ and then dilate parabolically by $\lambda_i$ to get a flow ${\mathcal M}_i$. A [**blow-up flow**]{} of ${\mathcal M}$ is any Brakke flow that can be obtained as a subsequential limit of such a sequence.) Let $I\subset {\mathbf{R}}$ be an interval, typically either $[0,\infty)$ or all of ${\mathbf{R}}$. Recall that a Brakke flow $t\in I \mapsto V(t)$ of varifolds is called an [**integral Brakke flow**]{} provided $V(t)$ is an integral varifold for almost all $t\in I$. 
(See [@BrakkeBook]\*[§3]{} or [@IlmanenBook]\*[§6]{} for the definition of Brakke flow.) Let $t\mapsto V(t),\, t\in I$ be an integral Brakke flow in $U\subset {\mathbf{R}}^N$. We say that $V(\cdot)$ is [**cyclic mod $2$**]{} (or [**cyclic**]{} for short) provided $\partial [V(t)] =0$ for almost every $t\in I$. More generally, suppose $W$ is an open subset of $U$ and $J$ is a subinterval of $I$. We say that the Brakke flow $V(\cdot)$ is [**cyclic mod $2$ in $W\times J$**]{} if for almost all $t\in J$, $[V(t)]$ has no boundary in $W$. We have: \[theorem:LimitOfCyclicFlows\] Suppose $t\mapsto V_i(t)$ is a sequence of integral Brakke flows that converge as Brakke flows to an integral Brakke flow $t\mapsto V(t)$. If the flows $V_i(\cdot)$ are cyclic mod $2$, then so is the flow $V(\cdot)$. If the flows $V_i(\cdot)$ are cyclic mod $2$ in $W\times J$, then so is the flow $V(\cdot)$. Here convergence as Brakke flows means that for almost all $t$: $$\begin{aligned} &\text{$\mu_{V_i(t)} \to \mu_{V(t)}$, and} \\ \label{e:nice2}&\text{there is a subsequence $i(k)$ (depending on $t$) such that $V_{i(k)}(t) \to V(t)$.}\end{aligned}$$ (This definition may seem peculiar, but this is precisely the convergence that occurs in Ilmanen’s compactness theorem for integral Brakke flows [@IlmanenBook]\*[§7]{}.) Theorem \[theorem:LimitOfCyclicFlows\] follows immediately from Theorem \[Mod2CompatibilityTheorem\] and the following lemma (which is implicit in [@IlmanenBook], but is not actually stated there): Suppose $t\in I\mapsto V_i(t)$ is a sequence of Brakke flows in $U\subset {\mathbf{R}}^N$ that converges to a Brakke flow $t\mapsto V(t)$. Then for almost every $t\in I$, there is a subsequence $i(k)$ such that $V_{i(k)}(t)$ converges with locally bounded first variation to $V(t)$. 
Indeed, we can choose the subsequence so that $\delta V_{i(k)}$ is absolutely continuous with respect to $\mu_{V_{i(k)}}$ and so that $$\sup_{i(k)} \int_{x\in W} |H(V_{i(k)}(t),x)|^2\, d\mu_{V_{i(k)}(t)}x < \infty$$ for every $W\subset \subset U$, where $H(V_{i(k)}(t),\cdot)$ is the generalized mean curvature of $V_{i(k)}(t)$. For simplicity, let us assume that $I=[0,\infty)$. Recall that for almost all $t$, the varifold $V_i(t)$ has bounded first variation, and the singular part of the first variation measure is $0$. Thus (for such $t$) $$\label{e:CauchySchwartz} \| \delta V_i(t)\| (W) = \int_W |H_{i,t}|\,d\mu_{i,t} \le \left( \int_W |H_{i,t}|^2\,d\mu_{i,t}\right)^{1/2} \left( \vphantom{\int} \mu_{i,t}(W) \right)^{1/2},$$ where $H_{i,t}$ is the generalized mean curvature of $V_i(t)$ and $\mu_{i,t}=\mu_{V_i(t)}$. Consider first the case that the varifolds $V_i(t)$ are all supported in some compact set. Then the initial total masses ${\operatorname{M}}(V_i(0))=\mu_{V_i(0)}({\mathbf{R}}^N)$ are bounded above by some $C<\infty$. Since mass decreases under mean curvature flow, the same bound holds for all $t>0$. By definition of Brakke flow, $$\overline{D}_t {\operatorname{M}}(V_i(t)) \le - \int |H(V_i(t),\cdot)|^2\,d\mu_{V_i(t)},$$ so $$\label{e:L2boundonH} \int_{t\in I} \int |H(V_i(t), \cdot)|^2 \,d\mu_{V_i(t)}\,dt \le C.$$ Thus by Fatou’s lemma, $$\int_{t\in I}\left( \liminf_i \int |H(V_i(t),\cdot)|^2\,d\mu_{V_i(t)} \right) \,dt \le C.$$ In particular, $$\liminf_i \int |H(V_i(t),\cdot)|^2\,d\mu_{V_i(t)} < \infty$$ for almost every $t$. For each such $t$, there is a subsequence $i(k)$ such that $$\sup_{k} \int |H(V_{i(k)}(t),\cdot)|^2\,d\mu_{V_{i(k)}(t)} < \infty.$$ This together with \[e:CauchySchwartz\] implies that the $V_{i(k)}(t)$ converge with locally bounded first variation to $V(t)$ (in the sense of Definition \[NiceDefinition\]).
The general case (noncompactly supported varifolds) is essentially the same, except that instead of \[e:L2boundonH\] one uses the local bound: $$\sup_i \int_{t\in J} \int_{x\in W} |H(V_i(t),x)|^2\,d\mu_{V_i(t)}\,dt < \infty$$ together with the mass bound $$\sup_i \sup_{t\in J} \mu_{V_i(t)}(W) < \infty,$$ both of which hold for all intervals $J\subset\subset I$ and open subsets $W\subset\subset U$ [@EckerBook]\*[Proposition 4.9]{}. \[StrongerRemark\] The lemma and Allard’s closure theorem (Theorem \[AllardClosureTheorem\]) imply that a limit of integral Brakke flows is also integral. \[corollary:NoOddJunctions\] Suppose $k$ is an odd integer. A static configuration of $k$ half-planes (counting multiplicity) meeting along a common edge cannot occur as a blow-up flow of an integral Brakke flow that is cyclic mod $2$. Let $V$ be the varifold corresponding to $k$ half-planes (counting multiplicity) meeting along an edge $E$. If the static flow $t\mapsto V$ is a blow-up flow of an integral Brakke flow that is cyclic mod $2$, then this static flow is also cyclic mod $2$ and thus $\partial [V]=0$. But $\partial [V]$ is the common edge $E$ with multiplicity $[k]$, so $k$ must then be even. The following theorem shows that for rather arbitrary initial surfaces, there exist nontrivial integral Brakke flows that are cyclic mod $2$. \[theorem:Mod2FlowExistence\] Let $A_0$ be any compactly supported rectifiable mod $2$ cycle in ${\mathbf{R}}^N$. (For example, $A_0$ could be the mod $2$ rectifiable flat chain associated to a $C^1$ compact, embedded submanifold.) Then there is an integral Brakke flow $t\in [0,\infty)\mapsto V(t)$ and a one-parameter family $t\in [0,\infty)\mapsto A(t)$ of rectifiable mod $2$ flat chains with the following properties: 1. \[FirstConclusion\] $A(0)=A_0$ and $\mu_{V(0)} = \mu_{A(0)}$. 2. $\partial A(t)=0$ for all $t$. 3. $t\mapsto A(t)$ is continuous with respect to the flat topology. 4. \[PenultimateConclusion\] $\mu_{A(t)} \le \mu_{V(t)}$ for all $t$. 5.
\[AandVcompatible\] $A(t)=[V(t)]$ for almost every $t$. In particular, the flow is cyclic mod $2$, and thus triple (or more generally odd-multiplicity) junctions cannot occur in $V(\cdot)$ by Corollary \[corollary:NoOddJunctions\]. (Remark about assertion \[AandVcompatible\]: Since $V(\cdot)$ is an integral Brakke flow, $V(t)$ is an integral varifold for almost all $t$ and thus $[V(t)]$ is well-defined for almost all $t$.) Except for assertion \[AandVcompatible\], this was proved by Ilmanen [@IlmanenBook]\*[8.1 and 8.3]{}. He used integer-multiplicity currents rather than mod $2$ flat chains, but his proof works equally well in either context. (The $A(t)$ here is the slice $T_t$ in Ilmanen’s notation.) The flat continuity (3) is not stated there, but it follows immediately from [@IlmanenBook]\*[8.3]{}. Roughly speaking, Ilmanen constructs $V(\cdot)$ and $A(\cdot)$ as limits of “nice" examples $V_i(\cdot)$ and $A_i(\cdot)$ for which $$\mu_{V_i(t)} = \mu_{A_i(t)}$$ for all $t$. Now his $A_i(t)$ are not quite cycles. However, $A_i(t)$ moves by translation, and it moves very fast if $i$ is large. In particular, if $U\subset\subset {\mathbf{R}}^N$ and $I\subset\subset (0,\infty)$, then for sufficiently large $i$ and for all $t\in I$, $\partial A_i(t)$ lies outside $U$. Thus (exactly as in the proof of Theorem \[theorem:LimitOfCyclicFlows\], or by Remark \[StrongerRemark\] and Theorem \[Mod2CompatibilityTheorem\]), we deduce (for almost every $t\in I$) that $A(t)\llcorner U = [V(t)]\llcorner U$ and that $\partial[V(t)]$ lies outside $U$. Since $U$ is arbitrary, this gives \[AandVcompatible\]. The description just given is a slightly simplified account of Ilmanen’s proof. Actually he does not quite get the pair $(V(\cdot), A(\cdot))$ as limits of nice examples. Rather he gets a pair of flows $(\mu^*(\cdot), A^*(\cdot))$ of one higher dimension as such a limit. The argument given above shows that $(\mu^*(\cdot), A^*(\cdot))$ has the property corresponding to property \[AandVcompatible\] above (and Ilmanen in his proof shows that it has properties \[FirstConclusion\]–\[PenultimateConclusion\]).
Now the pair $(\mu^*(\cdot), A^*(\cdot))$ is translation invariant in one spatial direction. By slicing, Ilmanen gets the desired pair $(\mu(\cdot), A(\cdot))$. Translational invariance implies (in a straightforward way) that properties \[FirstConclusion\]–\[PenultimateConclusion\] for $(\mu(\cdot), A(\cdot))$ are equivalent to the corresponding properties for $(\mu^*(\cdot), A^*(\cdot))$. Theorem \[theorem:Mod2FlowExistence\] has an analog for integer-multiplicity currents in place of mod $2$ flat chains: \[theorem:IntegralFlowExistence\] Let $A_0$ be any compactly supported integer-multiplicity cycle (i.e., integer-multiplicity current with $\partial A_0=0$). Then there is an integral Brakke flow $t\in [0,\infty)\mapsto V(t)$ and a one-parameter family $t\in [0,\infty)\mapsto A(t)$ of integer-multiplicity currents with the following properties: 1. $A(0)=A_0$ and $\mu_{V(0)} = \mu_{A(0)}$. 2. $\partial A(t)=0$ for all $t$. 3. $t\mapsto A(t)$ is continuous with respect to the flat topology. 4. $\mu_{A(t)} \le \mu_{V(t)}$ for all $t$. 5. $A(t)$ and $V(t)$ are compatible for almost every $t$. We omit the proof since it is almost identical to the proof of the mod $2$ case, Theorem \[theorem:Mod2FlowExistence\]. Note that if an integer-multiplicity current $A$ is compatible with an integral varifold $V$, then $[V]$ is the flat chain mod $2$ corresponding to $A$. It follows that the Brakke flow $V(\cdot)$ in Theorem \[theorem:IntegralFlowExistence\] is cyclic mod $2$. In particular, triple (or more generally odd-multiplicity) junctions cannot occur in $V(\cdot)$ by Corollary \[corollary:NoOddJunctions\]. Ruling out even-multiplicity junctions is more subtle. In particular, limits of smooth Brakke flows can have quadruple junctions. For example, recall that Scherk constructed a complete, embedded, singly periodic minimal surface in ${\mathbf{R}}^3$ that is, away from the $z$-axis, asymptotic to the union of the planes $x=0$ and $y=0$. We may regard that surface as an equilibrium solution to mean curvature flow.
Now dilate by $1/n$ and let $n\to \infty$. The limit surface is a pair of orthogonal planes and thus has a quadruple junction. Appendix: Flat Chains {#s:Appendix} ===================== Let $G$ be a metric abelian coefficient group, i.e., an abelian group with a translation invariant metric $d(\cdot,\cdot)$. The norm $|g|$ of a group element $g$ is defined to be its distance from $0$. The groups relevant for this paper are ${\mathbf{Z}}_2$ and ${\mathbf{Z}}$, both with the standard metrics. If $U$ is an open subset of ${\mathbf{R}}^N$, let ${\mathcal{F}}_c(U;G)$ be the space of flat chains with coefficients in $G$ and with compact support in $U$, as defined in [@Fleming]. We let ${\mathcal{F}}_{m,c}(U;G)$ denote the space of $m$-dimensional chains in ${\mathcal{F}}_c(U;G)$. If $W$ is an open subset of ${\mathbf{R}}^N$ and $A\in {\mathcal{F}}_c({\mathbf{R}}^N;G)$, we let ${\operatorname{M}}_W(A)$ be the minimum of $$\label{e:MassSeminorm} \liminf \mu_{A(i)}(W)$$ among all sequences of compactly supported, finite-mass flat chains $A(i)$ such that $A(i)$ converges in the flat topology to $A$. By lower-semicontinuity of mass, ${\operatorname{M}}_W(A)=\mu_A(W)$ for any chain $A$ of finite mass. We define the flat seminorm ${\mathcal{F}}_W$ by $${\mathcal{F}}_W(A) = \inf \{ M_W(A-\partial Q) + M_W(Q) \},$$ where the infimum is over all $Q\in {\mathcal{F}}_{c}({\mathbf{R}}^N;G)$. Let $U$ be an open subset of ${\mathbf{R}}^N$. Choose a countable collection $\mathcal{W}$ of nested open sets whose union is $U$ and each of whose closures is a compact subset of $U$. We define the space ${\mathcal{F}}_m(U;G)$ of flat $m$-chains in $U$ with coefficients in $G$ to be the completion of ${\mathcal{F}}_{m,c}(U;G)$ with respect to the seminorms ${\mathcal{F}}_W$ for $W\in \mathcal{W}$. (It is straightforward to show that the resulting space is independent of the choice of $\mathcal{W}$.) By continuity, the seminorms ${\mathcal{F}}_W$ extend to all of ${\mathcal{F}}_m(U;G)$. 
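As a sanity check on the seminorms just defined (a routine computation with ad hoc notation, not taken from the text): let $I_h = [0,1]\times\{h\}\subset{\mathbf{R}}^2$, regarded as a mod $2$ flat $1$-chain, and let $W$ be an open set containing $[0,1]\times[0,1]$. Taking $Q$ to be the mod $2$ chain of the rectangle $[0,1]\times[0,h]$ in the definition of ${\mathcal{F}}_W$, the boundary $\partial Q$ is the mod $2$ sum of $[I_0]$, $[I_h]$, and the two vertical sides of the rectangle, so $$ {\mathcal{F}}_W([I_0]-[I_h]) \le {\operatorname{M}}_W([I_0]-[I_h]-\partial Q) + {\operatorname{M}}_W(Q) \le 2h + h = 3h, $$ which tends to $0$ as $h\to 0$: nearby parallel segments are flat-close, even though $[I_0]-[I_h]$ has mass $2$ for every $h>0$.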
We also define the mass seminorms ${\operatorname{M}}_W$ on all of ${\mathcal{F}}_m(U;G)$ exactly as above. Convergence of flat chains means flat convergence, i.e., convergence with respect to the seminorms ${\mathcal{F}}_W$ for all open $W\subset\subset U$ or, equivalently, for all $W\in \mathcal{W}$ for a collection $\mathcal{W}$ of nested open sets as above. We define the support of a flat chain $A\in {\mathcal{F}}_m(U;G)$ as follows: $x\notin {\operatorname{spt}}A$ if and only if there is a sequence $A_i\in {\mathcal{F}}_{m,c}(U;G)$ and a ball ${\mathbf{B}}(x,r)$ such that $A_i\to A$ and such that ${\operatorname{spt}}A_i$ is disjoint from ${\mathbf{B}}(x,r)$ for every $i$. In the proof of the main results, Theorems \[Mod2CompatibilityTheorem\] and \[IntegerCompatibilityTheorem\], we used the following version of the compactness theorem for flat chains. It is valid for any coefficient group $G$ in which all sets of the form $\{ g\in G: |g| \le r\}$ are compact. In particular, it is valid for the integers with the usual norm and for the integers mod $2$. \[CompactnessTheorem\] Let $A_i$ be a sequence of flat $m$-chains in $U$ such that the boundaries $\partial A_i$ converge to a limit chain $\Gamma$, and such that $$\label{MassBound} \limsup_i {\operatorname{M}}_W(A_i) < \infty$$ for every open $W\subset\subset U$. Then $A_i$ has a convergent subsequence. We first prove the version for compact supports: \[CompactnessLemma\] If $A_i, \Gamma\in {\mathcal{F}}_c({\mathbf{R}}^N;G)$ are supported in a fixed compact subset $X$ of ${\mathbf{R}}^N$, if $\sup_i{\operatorname{M}}(A_i)<\infty$, and if ${\mathcal{F}}(\partial A_i - \Gamma)\to 0$, then there is a subsequence $A_{i(k)}$ and a chain $A$ such that ${\mathcal{F}}(A_{i(k)}-A)\to 0$. We may assume $X$ is convex (otherwise replace it by its convex hull). Since $\partial A_i\to \Gamma$, we have $\partial \Gamma=0$. It follows that $\Gamma=\partial R$ for some chain $R$ of finite mass. 
By hypothesis, $${\mathcal{F}}( \partial A_i - \partial R) \to 0.$$ Thus there are chains $Q_i$ such that $$\label{zapped} {\operatorname{M}}(Q_i) + {\operatorname{M}}( \partial Q_i + \partial A_i - \partial R) \to 0.$$ We may assume that $R$ and the $Q_i$ are supported in $X$. (Otherwise map them into $X$ by the nearest point retraction of ${\mathbf{R}}^N$ to $X$.) Now let $$A_i^* = Q_i + A_i - R.$$ Note that $$\limsup_i {\operatorname{M}}(A_i^*) \le \sup_i{\operatorname{M}}(A_i) + {\operatorname{M}}(R)<\infty$$ since ${\operatorname{M}}(Q_i)\to 0$ by . From  we also see that ${\operatorname{M}}(\partial A_i^*)\to 0$, so in particular $\sup_i {\operatorname{M}}(\partial A_i^*)<\infty$. Thus by the standard compactness theorem (see for example [@Fleming]\*[7.4]{}), we may, by passing to a subsequence, assume that the $A_i^*$ converge to a limit $A^*$. Hence $$\begin{aligned} {\mathcal{F}}(A_i - (A^*+R)) &= {\mathcal{F}}( A_i^* - A^* - Q_i) \\ &\le {\mathcal{F}}(A_i^*-A^*) + {\mathcal{F}}(Q_i) \\ &\le {\mathcal{F}}(A_i^*-A^*) + {\operatorname{M}}(Q_i) \\ &\to 0\end{aligned}$$ since $A_i^*\to A^*$ and ${\operatorname{M}}(Q_i) \to 0$. Thus the $A_i$ converge to $A^*+R$. Let $W$ be an open set whose closure is a compact subset of $U$. Choose an open set $V$ whose interior contains the closure of $W$ and whose closure is a compact subset of $U$. The idea of the proof is to work in a one-point compactification of $V$ so that we can apply Lemma \[CompactnessLemma\]. Let $u: {\mathbf{R}}^N\to [0,1]$ be a smooth function that is $1$ on $W$, that is strictly positive on $V$, and that vanishes on ${\mathbf{R}}^N\setminus V$. Define $f: {\mathbf{R}}^N \to {\mathbf{R}}^{N+1}$ by $$f(x) = u(x) (x,1).$$ Note that $f$ is Lipschitz and that $f$ maps the complement of $V$ to a point. (Indeed, $f({\mathbf{R}}^N)$ may be regarded as a one-point compactification of $V$.) 
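Concretely, with $\pi(x,y):=x/y$ the radial projection used in the next step, the composition $\pi\circ f$ is the identity wherever $u>0$, and in particular on $W$:

```latex
\pi\bigl(f(x)\bigr) \;=\; \pi\bigl(u(x)\,x,\;u(x)\bigr)
\;=\; \frac{u(x)\,x}{u(x)} \;=\; x
\qquad \text{whenever } u(x)>0,\ \text{in particular on } W,\ \text{where } u\equiv 1.
```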
It follows that ${\operatorname{M}}(f_\#S)$ and ${\mathcal{F}}(f_\#S)$ can be bounded by a constant times ${\operatorname{M}}_V(S)$ and ${\mathcal{F}}_V(S)$, respectively. Let $A_i^*= f_\#A_i$. Then the hypotheses of Lemma \[CompactnessLemma\] are satisfied for the $A_i^*$. Thus by passing to a subsequence we may assume that the $A_i^*$ converge in the ${\mathcal{F}}$ metric. By passing to a further subsequence, we may assume that $$\label{SumOfFlatNorms} \sum_i {\mathcal{F}}(A_i^* - A_{i+1}^*) < \infty.$$ Let $H=H_\zeta$ be a halfspace of the form ${\mathbf{R}}^N\times [\zeta,\infty)$. From  it follows that $$\label{SlicedFlatSum} \sum_i {\mathcal{F}}( A_i^*\llcorner H - A_{i+1}^*\llcorner H) < \infty$$ for almost every $\zeta$ (see [@Fleming]\*[Lemma 2.1]{}). Fix such a $\zeta\in (0,1)$ and the corresponding $H$. The radial projection map $$\begin{aligned} &\pi: H\to {\mathbf{R}}^N \\ &\pi(x,y) = x/y\end{aligned}$$ is Lipschitz, so by  the chains $A_i^\dag:=\pi_\#(A_i^*\llcorner H)$ are ${\mathcal{F}}$-convergent. It follows that the $A_i^\dag$ are also ${\mathcal{F}}_W$ convergent (since ${\mathcal{F}}_W\le {\mathcal{F}}$). But $\pi\circ f$ is the identity on $W$. Hence $A_i^\dag$ and $A_i$ coincide in $W$. (In other words, $A_i - A_i^\dag$ is supported in $W^c$.) Thus the $A_i$ are also ${\mathcal{F}}_W$ convergent. We have shown that for every open $W\subset\subset U$, there is an ${\mathcal{F}}_W$-convergent subsequence of the $A_i$. Now apply the diagonal argument to a nested sequence of such $W$’s that exhaust $U$. Suppose $A_i$ are flat chains in $U$ such that $$\limsup_i ({\operatorname{M}}_W(A_i) + {\operatorname{M}}_W(\partial A_i)) < \infty$$ for every $W\subset\subset U$. Then $A_i$ has a subsequence that converges in the flat topology. By Theorem \[CompactnessTheorem\] applied to the $\partial A_i$, there is a subsequence $i(k)$ for which the boundaries $\partial A_{i(k)}$ converge. 
Consequently the $A_{i(k)}$ satisfy the hypotheses of Theorem \[CompactnessTheorem\]. \[RectifiabilityTheorem\] Suppose $A$ is a flat $m$-chain in $U$ with locally finite mass. Then $A$ is rectifiable. Of course “$A$ has locally finite mass” means “${\operatorname{M}}_W(A)<\infty$ for every open $W\subset\subset U$”. The theorem was proved in the case $G={\mathbf{Z}}$ by Federer and Fleming [@FedererFleming]. The proof is also presented in [@FedererBook] and in [@SimonBook]. Rather different proofs are given in [@SolomonClosure] and [@WhiteCompactness]. Fleming proved the rectifiability theorem for all finite coefficient groups [@Fleming]. For the most general result, see [@WhiteRectifiability], which gives a simple necessary and sufficient condition on the coefficient group in order for the rectifiability theorem to hold. [^1]: This research was supported by the NSF under grants DMS-0406209 and DMS-0707126 [^2]: Federer’s definition requires that a flat chain have compact support, and Fleming’s definition requires that a flat chain have finite flat norm.
--- abstract: 'Combining deep neural networks with the concepts of continuous logic is desirable to reduce uninterpretability of neural models. Nilpotent logical systems offer an appropriate mathematical framework to obtain continuous logic based neural networks (CL neural networks). We suggest using a differentiable approximation of the cutting function in the nodes of the input layer as well as in the logical operators in the hidden layers. The first experimental results point towards a promising new approach of machine learning.' author: - József Dombi - Orsolya Csiszár - Gábor Csiszár title: Semantic Interpretation of Deep Neural Networks Based on Continuous Logic --- Introduction ============ In recent times, deep learning has been applied to a variety of machine learning problems such as image and speech recognition, natural language processing, machine translation and so forth. Artificial neural networks (ANNs) are one type of model for machine learning, and they have become competitive with conventional regression and statistical models. They are effective, efficient and successful in providing a high level of capability in handling complex problems in extensive applications, e.g. in medical science, engineering, finance, management or security. One of the greatest research challenges is the increasing need to address the problem of interpretability and to find a general mathematical framework to improve model transparency and performance and to provide a deeper understanding of the model. Combining deep neural networks with structured logic rules contributes to the achievement of flexibility and to the reduction of uninterpretability of the neural models. Although Boolean units and multilayer perceptrons have a long history, to the best of our knowledge there has been little attempt to combine neural networks with continuous logical systems so far. 
The basic idea of continuous logic is the replacement of the space of truth values $\{T,F\}$ by a compact interval such as $[0, 1]$. Quantifiers $\forall x$ and $\exists x$ are replaced by $\sup_x$ and $\inf_x$, and logical connectives are continuous functions. Among other families of many-valued logics, T-norm fuzzy logics are broadly used in applied fuzzy logic and fuzzy set theory as a theoretical basis for approximate reasoning. In fuzzy logic, the membership function of a fuzzy set represents the degree of truth as a generalization of the indicator function in classical sets. Both propositional and first-order (or higher-order) t-norm fuzzy logics, as well as their expansions by modal and other operators, have been studied thoroughly. Important examples of t-norm fuzzy logics are monoidal t-norm logic MTL of all left-continuous t-norms, basic logic BL of all continuous t-norms, product fuzzy logic of the product t-norm, or the nilpotent minimum logic of the nilpotent minimum t-norm. Some independently motivated logics belong among t-norm fuzzy logics, too, for example [Ł]{}ukasiewicz logic (which is the logic of the [Ł]{}ukasiewicz t-norm) or Gödel-Dummett logic (which is the logic of the minimum t-norm). In this work, we suggest combining nilpotent logical systems and neural architecture. Among other preferable properties, the fulfillment of the law of contradiction and the excluded middle, and the coincidence of the residual and the S-implication [@Dubois; @Trillasimpl] make the application of nilpotent operators in logical systems promising. In their pioneer work [@bounded], Dombi and Csiszár examined connective systems instead of operators themselves. In the last few years, the most important operators of general nilpotent systems have been thoroughly examined. In [@boundedimpl] and in [@boundedeq], Dombi and Csiszár examined the implications and equivalence operators in bounded systems. 
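The replacement of $\{T,F\}$ by $[0,1]$ and of the quantifiers by $\inf$ and $\sup$ can be illustrated with a toy computation; the membership function `is_large` and the sample below are hypothetical choices for illustration only:

```python
def forall(truth_values):
    # continuous analogue of the universal quantifier: inf over the domain
    return min(truth_values)

def exists(truth_values):
    # continuous analogue of the existential quantifier: sup over the domain
    return max(truth_values)

def is_large(x):
    # a hypothetical [0,1]-valued membership function for "x is large"
    return min(max(x / 10.0, 0.0), 1.0)

sample = [2.0, 5.0, 9.0]
assert forall(is_large(x) for x in sample) == 0.2  # weakest instance
assert exists(is_large(x) for x in sample) == 0.9  # strongest instance
```

The degree to which "all elements are large" holds is governed by the worst case, while "some element is large" is governed by the best case, exactly as in the two-valued setting.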
In [@aggr], a parametric form of the generated operator $o_{\nu}$ was given by using a shifting transformation of the generator function. Here, the parameter has an important semantic meaning as a threshold of expectancy (decision level). This means that nilpotent conjunctive, disjunctive, aggregative (where a high input can compensate for a lower one) and negation operators can be obtained by changing this parameter. Negation operators were also studied thoroughly in [@bounded], as they play a significant role in logical systems by building connections between the main operators (De Morgan law) and characterising their basic properties. Dombi and Csiszár introduced possibility and necessity operators in [@ijcci] by repeating the arguments of many-variable operators and in [@iwobi] using double negations. Moreover, as it was shown in [@ijcci], membership functions, which play a substantial role in the overall performance of fuzzy representation, can also be defined by means of a generator function. In this work we propose a general framework capable of enhancing various types of neural networks with nilpotent logic. The article is organized as follows. After reviewing some related work in Section \[relwork\] and the most relevant results on nilpotent logical systems in Section \[nilpot\], we introduce continuous logic based (CL) neural models in Section \[sec3\], and finally show some experimental results in Section \[exp\] to demonstrate the promising performance of the squashing function as an activation function. In Section \[concl\], the main results are summarized. Related Work {#relwork} ============ Combination of neural networks and logic rules has been considered in different contexts. Neuro-fuzzy systems [@neurofuzzy] were examined thoroughly in the literature. 
These hybrid intelligent systems synergize the human-like reasoning style of fuzzy systems with the learning structure of neural networks through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules. These models were the first attempts to combine continuous logical elements and neural computation. KBANN [@Towell], Neural-symbolic systems [@Garcez], such as CILP++ [@Franca], construct network architectures from given rules to perform knowledge acquisition. Kulkarni et al. [@Kulkarni] used a specialized training procedure to obtain an interpretable neural layer of an image network. In [@harness], Hu et al. proposed a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, they developed an iterative distillation method that transfers the structured information of logic rules into the weights of neural networks. With a few highly intuitive rules, they obtained substantial improvements and achieved state-of-the-art or comparable results to previous best-performing systems. In [@xu], Xu et al. developed a novel methodology for using symbolic knowledge in deep learning by deriving a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. In [@dl2], Fischer et al. presented DL2, a system for training and querying neural networks with logical constraints. Using DL2, one can declaratively specify domain knowledge constraints to be enforced during training, as well as pose queries on the model to find inputs that satisfy a set of constraints. DL2 works by translating logical constraints into a loss function with desirable mathematical properties. The loss is then minimized with standard gradient-based methods. 
All these promising approaches point towards the desirable mathematical framework that nilpotent logical systems can offer. Our aspiration in this paper is to provide a general mathematical framework in order to benefit from a tight integration of deep learning and continuous logical methods. Nilpotent Logical Systems {#nilpot} ========================= Next, we recall the basic concept of nilpotent operator systems. The triple $(c,d,n),$ where $c$ is a t-norm, $d$ is a t-conorm and $n$ is a strong negation, is called a connective system. A connective system is nilpotent if the conjunction $c$ is a nilpotent t-norm, and the disjunction $d$ is a nilpotent t-conorm. Two connective systems $(c_1,d_1,n_1)$ and $(c_2,d_2,n_2)$ are isomorphic if there exists a bijection $\phi:[0,1]\rightarrow[0,1]$ such that $$\phi^{-1}\left(c_1\left(\phi(x),\phi(y)\right)\right)=c_2(x,y)$$ $$\phi^{-1}\left(d_1\left(\phi(x),\phi(y)\right)\right)=d_2(x,y)$$ $$\phi^{-1}\left(n_1\left(\phi(x)\right)\right)=n_2(x).$$ In the nilpotent case, the generator functions of the disjunction and the conjunction, being determined only up to a multiplicative constant, can be normalized in the following way: $$f_{c}(x):=\frac{t(x)}{t(0)},\quad \quad f_{d}(x):=\frac{s(x)}{s(1)}.$$ Note that the normalized generator functions are uniquely defined. We will use normalized generator functions for conjunctions and disjunctions as well. This means that the normalized generator functions of conjunctions, disjunctions and negations are $$f_{c},f_{d},f_{n}:[0,1]\rightarrow [0,1].$$ We will suppose that $f_c$, $f_d$ and $f_n$ are continuous and strictly monotonic functions. Two special negations can be generated by the normalized additive generators of the conjunction and the disjunction. \[ncnd\] The negations $n_{c}$ and $n_{d}$ generated by $f_{c}$ and $f_{d}$ respectively, $$n_{c}(x)=f_{c}^{-1}(1-f_{c}(x))$$ and $$n_{d}(x)=f_{d}^{-1}(1-f_{d}(x))$$ are called natural negations of $c$ and $d$. 
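For a quick sanity check of the natural negations, take the linear generators $f_d(x)=x$ and $f_c(x)=1-x$ (one admissible choice, used here only for illustration); both natural negations then reduce to the standard negation $1-x$:

```python
def f_d(x): return x          # increasing normalized generator of the disjunction
def f_d_inv(y): return y

def f_c(x): return 1.0 - x    # decreasing normalized generator of the conjunction
def f_c_inv(y): return 1.0 - y

def n_d(x):
    # natural negation of d: f_d^{-1}(1 - f_d(x))
    return f_d_inv(1.0 - f_d(x))

def n_c(x):
    # natural negation of c: f_c^{-1}(1 - f_c(x))
    return f_c_inv(1.0 - f_c(x))

for x in (0.0, 0.25, 0.5, 1.0):
    assert abs(n_d(x) - (1.0 - x)) < 1e-12
    assert abs(n_c(x) - (1.0 - x)) < 1e-12
```

With other generator pairs the two natural negations differ, which is exactly what the bounded-system condition below quantifies.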
$ \begin{array}{cccc} \includegraphics[width=0.2\textwidth]{conj}& \includegraphics[width=0.2\textwidth]{conjsq}& \includegraphics[width=0.2\textwidth]{disj}& \includegraphics[width=0.2\textwidth]{disjsq} \end{array}$ This means that for a connective system with normalized generator functions $f_{c},f_{d} \text{ and } f_{n}$ we can associate three negations, $n_{c},n_{d} \text{ and } n$. In a consistent system, $f_{c}(x)+f_{d}(x)\geq1$ always holds. A nilpotent connective system is called a bounded system if $$f_c(x)+f_d(x)>1, \text{ or equivalently } n_d(x)<n(x)<n_c(x)$$\[dnc\] holds for all $x\in(0,1),$ where $f_c$ and $f_d$ are the normalized generator functions of the conjunction and disjunction, and $n_c,n_d$ are the natural negations. For examples of consistent bounded systems, see [@bounded]. Let us define the cutting operation $[\phantom{x}]$ by $$[x]=\left\{ \begin{array} [c]{ccc} 0 & \text{if} & x<0\\ x & \text{if} & 0\leq x\leq 1\\ 1 & \text{if} & 1<x\\ \end{array} \right.$$ \[cutth\] With the help of the cutting operator, we can write the conjunction and disjunction in the following form, where $f_c$ and $f_d$ are decreasing and increasing normalized generator functions, respectively. $$\label {[]} c(x,y)=f_{c}^{-1}[f_{c}(x)+f_{c}(y)],$$ $$\label{[]2} d(x,y)=f_{d}^{-1}[f_{d}(x)+f_{d}(y)].$$ All basic operators discussed so far can be handled in a common framework, since they all can be described by the following parametric form. Let $x, y \in [0,1]$, $\alpha, \beta, \gamma \in \mathbb{R}$ and let $f:[0,1]\rightarrow[0,1]$ be a strictly increasing bijection. Let the general parametric operator be $$o_{\alpha, \beta, \gamma}(x,y):=f^{-1}[\alpha f(x)+\beta f(y)+\gamma].$$ The most commonly used operators for special values of $\alpha, \beta $ and $\gamma$, also for $f(x)=x$, are listed in Table \[abc\]. 
\[table1\]

|                      | $\alpha$ | $\beta$ | $\gamma$ | $o_{\alpha, \beta, \gamma}(x,y)$           | for $f(x)=x$                   | Notation |
|----------------------|----------|---------|----------|--------------------------------------------|--------------------------------|----------|
| disjunction          | $1$      | $1$     | $0$      | $f^{-1}[f(x)+f(y)]$                        | $[x+y]$                        | $d(x,y)$ |
| conjunction          | $1$      | $1$     | $-1$     | $f^{-1}[f(x)+f(y)-1]$                      | $[x+y-1]$                      | $c(x,y)$ |
| implication          | $-1$     | $1$     | $1$      | $f^{-1}[f(y)-f(x)+1]$                      | $[y-x+1]$                      | $i(x,y)$ |
| arithmetic mean      | $0.5$    | $0.5$   | $0$      | $f^{-1}\left[\frac{f(x)+f(y)}{2}\right]$   | $\frac{x+y}{2}$                | $m(x,y)$ |
| preference           | $-0.5$   | $0.5$   | $0.5$    | $f^{-1}\left[\frac{f(y)-f(x)+1}{2}\right]$ | $\frac{y-x+1}{2}$              | $p(x,y)$ |
| aggregative operator | $1$      | $1$     | $-0.5$   | $f^{-1}\left[f(x)+f(y)-\frac{1}{2}\right]$ | $\left[x+y-\frac{1}{2}\right]$ | $a(x,y)$ |

Now let us focus on the unary (1-variable) case. The unary operators are mainly used to construct modifiers and membership functions from the generator function. The membership functions can be interpreted as modelling an inequality [@memeva]. Note that non-symmetrical membership functions can also be constructed by connecting two unary operators with a conjunction [@iwobi; @ijcci]. Let $x \in [0,1]$, $\alpha, \gamma \in \mathbb{R}$ and let $f:[0,1]\rightarrow[0,1]$ be a strictly increasing bijection. Then $$o_{\alpha, \gamma}(x):=f^{-1}[\alpha f(x)+\gamma].$$ For special $\gamma$ values, see Table \[ac\]. 
|             | $\alpha$ | $\gamma$             | $o_{\alpha, \gamma}(x)$                    | for $f(x)=x$                      | Notation      |
|-------------|----------|----------------------|--------------------------------------------|-----------------------------------|---------------|
| possibility | $\alpha$ | $0$                  | $f^{-1}[\alpha f(x)]$                      | $[\alpha x]$                      | $\tau_{P}(x)$ |
| necessity   | $\alpha$ | $1-\alpha$           | $f^{-1}[\alpha f(x)-(\alpha-1)]$           | $[\alpha x-(\alpha-1)]$           | $\tau_N(x)$   |
| sharpness   | $\alpha$ | $\frac{1-\alpha}{2}$ | $f^{-1}[\alpha f(x)-\frac{(\alpha-1)}{2}]$ | $[\alpha x-\frac{(\alpha-1)}{2}]$ | $\tau_S(x)$   |

The main drawback of the [Ł]{}ukasiewicz operator family is the lack of differentiability, which would be necessary for numerous practical applications. Although most fuzzy applications (e.g. embedded fuzzy control) use piecewise linear membership functions owing to their easy handling, there are areas where the parameters are learned by a gradient-based optimization method. In this case, the lack of continuous derivatives makes the application impossible. For example, the membership functions have to be differentiable for each input in order to fine-tune a fuzzy control system by a simple gradient-based technique. This problem could be easily solved by using the so-called squashing function (see Dombi and Gera [@Gera]), which provides a solution to the above-mentioned problem by a continuously differentiable approximation of the cut function. The squashing function defined below is a continuously differentiable approximation of the generalized cutting function by means of sigmoid functions (see Figure \[fig:squash\]). 
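For reference, the crisp (non-smooth) operators of Tables \[abc\] and \[ac\] with $f(x)=x$ can be written directly in terms of the cutting operation; a minimal sketch (the sample inputs and tolerance checks are illustrative only):

```python
def cut(x):
    # the cutting operation [x]
    return min(max(x, 0.0), 1.0)

def o2(alpha, beta, gamma, x, y):
    # binary parametric operator for f(x) = x: [alpha*x + beta*y + gamma]
    return cut(alpha * x + beta * y + gamma)

def o1(alpha, gamma, x):
    # unary parametric operator for f(x) = x: [alpha*x + gamma]
    return cut(alpha * x + gamma)

x, y = 0.8, 0.4
assert abs(o2(1, 1, 0, x, y) - cut(x + y)) < 1e-12       # disjunction d(x,y)
assert abs(o2(1, 1, -1, x, y) - cut(x + y - 1)) < 1e-12  # conjunction c(x,y)
assert abs(o2(-1, 1, 1, x, y) - cut(y - x + 1)) < 1e-12  # implication i(x,y)
assert abs(o2(0.5, 0.5, 0, x, y) - (x + y) / 2) < 1e-12  # arithmetic mean m(x,y)

a = 2.0  # alpha > 1
assert abs(o1(a, 0.0, 0.3) - 0.6) < 1e-12                # possibility tau_P
assert abs(o1(a, 1.0 - a, 0.3) - 0.0) < 1e-12            # necessity tau_N
assert abs(o1(a, (1.0 - a) / 2.0, 0.5) - 0.5) < 1e-12    # sharpness fixes x = 1/2
```

Every operator is the cut of an affine expression, which is precisely why a single smooth replacement for the cut function (introduced next) makes the whole family differentiable.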
![Squashing functions for $a=0.5$, $\lambda=1,$ for different $\beta$ values ($\beta_1=1,$ $\beta_2=2$ and $\beta_3=5$) []{data-label="fig:squash"}](squash){width="35.00000%"} The squashing function is defined as (see [@Gera] and [@ijcci]) $$S^{(\beta)}_{a,\lambda}(x) = \frac1{\lambda\beta}\ln\frac{1+e^{\beta\left(x-(a-\lambda/2)\right)}}{1+e^{\beta\left(x-(a+\lambda/2)\right)}} = \frac1{\lambda\beta}\ln\frac{\sigma_{a+\lambda/2}^{(-\beta)}(x)}{\sigma_{a-\lambda/2}^{(-\beta)}(x)},$$ where $x,a,\lambda,\beta\in\mathbb{R}$ and $\sigma_d^{(\beta)}(x)$ denotes the logistic function: $$\begin{aligned} \sigma_d^{(\beta)}(x) = \frac1{1+e^{-\beta\cdot (x-d)}}. \end{aligned}$$ \[squashdef\] By increasing the value of $\beta$, the squashing function approaches the generalized cut function. The parameters $a$ and $\lambda$ determine its center and width. The error of the approximation can be upper bounded by $c/\beta$, which means that by increasing the parameter $\beta$, the error decreases by the same order of magnitude. The derivatives of the squashing function are easy to calculate and can be expressed in terms of sigmoid functions and of the squashing function itself: $$\begin{aligned} \frac{\partial S^{(\beta)}_{a,\lambda}(x)}{\partial x} &= \frac1\lambda\left(\sigma_{a-\lambda/2}^{(\beta)}(x)-\sigma_{a+\lambda/2}^{(\beta)}(x)\right) \\ \frac{\partial S^{(\beta)}_{a,\lambda}(x)}{\partial a} &= \frac1\lambda\left(\sigma_{a+\lambda/2}^{(\beta)}(x)-\sigma_{a-\lambda/2}^{(\beta)}(x)\right) \\ \frac{\partial S^{(\beta)}_{a,\lambda}(x)}{\partial \lambda} &= -\frac1\lambda{}S^{(\beta)}_{a,\lambda}(x)+\frac1{2\lambda}\left(\sigma_{a+\lambda/2}^{(\beta)}(x)+\sigma_{a-\lambda/2}^{(\beta)}(x)\right)\end{aligned}$$ By using squashing functions one can approximate the [Ł]{}ukasiewicz operators and piecewise linear membership functions (i.e. trapezoidal or triangular) by substituting the squashing function for the cut function. 
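The approximation and derivative claims above can be checked numerically; this is a minimal sketch (the numerically stable `softplus` formulation, the sample grid, and the $\beta$ values are our own choices):

```python
import math

def softplus(z):
    # numerically stable log(1 + exp(z)), avoids overflow for large beta
    return max(z, 0.0) + math.log1p(math.exp(-abs(z)))

def squash(x, a=0.5, lam=1.0, beta=10.0):
    # squashing function S^(beta)_{a,lambda}(x), using
    # ln((1+e^{z1}) / (1+e^{z2})) = softplus(z1) - softplus(z2)
    z1 = beta * (x - (a - lam / 2.0))
    z2 = beta * (x - (a + lam / 2.0))
    return (softplus(z1) - softplus(z2)) / (lam * beta)

def cut(x):
    # generalized cut function for a = 1/2, lambda = 1, i.e. the unit cut [x]
    return min(max(x, 0.0), 1.0)

def sigma(x, d, beta):
    # logistic function sigma_d^(beta)(x)
    return 1.0 / (1.0 + math.exp(-beta * (x - d)))

# As beta grows, the squashing function approaches the cut function
# (error bounded by c / beta)
grid = [i / 50.0 - 0.5 for i in range(101)]          # x in [-0.5, 1.5]
errs = {b: max(abs(squash(x, beta=b) - cut(x)) for x in grid)
        for b in (10.0, 100.0, 1000.0)}
assert errs[1000.0] < errs[100.0] < errs[10.0]

# Derivative identity dS/dx = (sigma_{a-lam/2} - sigma_{a+lam/2}) / lam,
# checked against a central finite difference
x0, h = 0.3, 1e-6
fd = (squash(x0 + h) - squash(x0 - h)) / (2 * h)
exact = sigma(x0, 0.0, 10.0) - sigma(x0, 1.0, 10.0)
assert abs(fd - exact) < 1e-6
```

The maximal error is attained near the corners of the cut function, where the smooth curve rounds them off at a scale of roughly $\ln 2/\beta$.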
The approximated membership functions are called soft and defined as: $$ATR(x;\beta,a_1,\lambda_1,a_2,\lambda_2) = S^{(\beta)}_{a_1,\lambda_1}(x) - S^{(\beta)}_{a_2,\lambda_2}(x).$$ The derivatives of a soft trapezoidal are simply the derivatives of the proper squashing function. In this framework it becomes possible to define all the operators by a single generator function and a few parameters. Examples of the main operators and their soft approximations using the squashing function with different parameter values are shown in Figure \[fig:conj\]. Note that asymmetrical membership functions can also be easily defined by connecting two unary operators with a conjunction. Neural Networks Based on Nilpotent Logic {#sec3} ======================================== The results on nilpotent logical systems recalled in Section \[nilpot\] offer a new approach to construct neural networks using continuous logic. $ \begin{array}{c} \includegraphics[width=0.3\textwidth]{neuralmodel} \end{array}$ Neural nets are composed of networks of computational models of neurons called perceptrons. As illustrated in Figure \[fig:fig1b\], the threshold can be added as an additional input with weight $w_T=-1$ to simplify the computation. Note that the perceptron can be interpreted as a linear classifier modelling the inequality $$\sum_{i=1}^{n} w_{i}x_{i}-T>0,\label{ineq}$$ where $x_i$ are the input values, $w_i$ are the weights and $T$ is the threshold. Also note that as we mentioned above, the membership functions can also be interpreted as modelling an inequality. It is well known that any Boolean function can be composed using a multi-layer perceptron. As examples, the conjunction, the disjunction and the implication are illustrated in Figures \[fig:cdfig\] and \[fig:impl\]. Note that for the XOR gate, an additional hidden layer is also needed. 
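The composition of Boolean gates from threshold units can be sketched with crisp perceptrons; the weights and thresholds below are one standard choice, not taken from the paper:

```python
def perceptron(inputs, weights, threshold):
    # fires iff sum_i w_i * x_i - T > 0: a linear classifier
    s = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return 1 if s > 0 else 0

def AND(x, y):
    return perceptron((x, y), (1, 1), 1.5)

def OR(x, y):
    return perceptron((x, y), (1, 1), 0.5)

def XOR(x, y):
    # XOR is not linearly separable: it needs a hidden layer,
    # here x XOR y = (x OR y) AND NOT (x AND y)
    return perceptron((OR(x, y), AND(x, y)), (1, -1), 0.5)

for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
        assert XOR(x, y) == (x ^ y)
```

The single hidden layer for XOR mirrors the remark above: one layer of linear separators, one layer of logic.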
It can be shown that a network of linear classifiers that fires if the input is in a given area with arbitrary complex decision boundaries can be constructed with only one hidden layer and a single output. This means that if a neural network learns to separate different regions in the $n$-dimensional space of the $n$ input values, each node in the first layer can separate the space into two half-spaces by drawing one hyperplane, while the nodes in the hidden layers can combine them using logical operators. In Figure \[fig:mit\], some basic types of neural networks are shown with two input values, finding different regions of the plane. Generally speaking, each node in the neural net represents one threshold and therefore can draw one line into the picture. The line can be diagonal if that node receives both of the inputs $i_1$ and $i_2$. The line has to be horizontal or vertical if the node only receives one of the inputs. The deeper hidden levels are responsible for the logical operations. Here, we suggest applying the nilpotent logical concept in the neural architecture to get nilpotent logic based neural networks (NL model) in the following way. In the first layer, the activation functions in the nodes are membership functions, representing the truth value of . The nilpotent logical operators (see Table \[table1\]) work in the hidden layers. To ensure differentiability, the cutting function should be approximated by the squashing function from Definition \[squashdef\]. As an example, output values for a triangular domain using nilpotent logic and its continuous approximation for different parameter values are illustrated in Figure \[fig:regions\]. Furthermore, taking into account that the area inside or outside a circle is described by an inequality containing the squares of the input values, it is also possible to construct a novel type of unit by adding the square of each input into the input layer (see Figure \[fig:circle\]). 
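A minimal sketch of such an NL network: three first-layer nodes encode the half-planes bounding a triangle, a hidden node applies a soft Łukasiewicz conjunction, and squared inputs turn a single node into a circle detector. All weights, thresholds, and the values of $\beta$ and $\lambda$ below are illustrative choices, not parameters from the paper:

```python
import math

def squash(x, a, lam, beta):
    # smooth approximation of the generalized cut function
    sp = lambda z: max(z, 0.0) + math.log1p(math.exp(-abs(z)))
    return (sp(beta * (x - (a - lam / 2.0))) -
            sp(beta * (x - (a + lam / 2.0)))) / (lam * beta)

def membership(x, y, w1, w2, T, beta=50.0):
    # first-layer node: soft truth value of the inequality w1*x + w2*y - T > 0
    return squash(w1 * x + w2 * y - T, a=0.0, lam=0.1, beta=beta)

def soft_and3(p, q, r, beta=50.0):
    # hidden node: soft Lukasiewicz conjunction [p + q + r - 2]
    return squash(p + q + r - 2.0, a=0.5, lam=1.0, beta=beta)

def triangle(x, y):
    # region {x > 0, y > 0, x + y < 1}
    return soft_and3(membership(x, y, 1, 0, 0),      # x > 0
                     membership(x, y, 0, 1, 0),      # y > 0
                     membership(x, y, -1, -1, -1))   # x + y < 1

assert triangle(0.3, 0.3) > 0.9    # inside
assert triangle(0.8, 0.8) < 0.1    # outside
assert triangle(-0.2, 0.3) < 0.1   # outside

def circle(x, y, r=1.0, beta=50.0):
    # one node on inputs (x, y, x^2, y^2): truth value of r^2 - x^2 - y^2 > 0,
    # no polygon approximation of the boundary is needed
    return squash(r * r - x * x - y * y, a=0.0, lam=0.1, beta=beta)

assert circle(0.2, 0.3) > 0.9
assert circle(1.2, 0.3) < 0.1
```

Note the division of labor: first-layer nodes each draw one (soft) boundary, and the hidden node performs the logic.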
This way, the polygon approximation of the circle can be eliminated. For an illustration see Figure \[fig:circle3d\]. Note that by changing the weights an arbitrary conic section can also be described. ![Perceptron model of the conjunction and disjunction[]{data-label="fig:cdfig"}](cd){width="40.00000%"} ![Perceptron model of the implication[]{data-label="fig:impl"}](impl){width="40.00000%"} ![Basic types of neural networks with two input values using logical operators in the hidden layer to find different regions of the plane[]{data-label="fig:mit"}](mit){width="48.00000%"} ![**Perceptron Model of a Circle with radius $r$**[]{data-label="fig:circle"}](circle){width="49.00000%"} $ \begin{array}{cc} \includegraphics[width=0.2\textwidth]{circle3dup}& \includegraphics[width=0.2\textwidth]{circle3dupsq} \end{array}$ $ \begin{array}{cccc} \includegraphics[width=0.2\textwidth]{regions}& \includegraphics[width=0.2\textwidth]{3and}& \includegraphics[width=0.2\textwidth]{3andsq}& \includegraphics[width=0.2\textwidth]{3andsqsmall} \end{array}$ Experimental Results: Activation Function Performance {#exp} ===================================================== Choosing the right activation function for each layer is crucial and may have a significant impact on metric scores and the training speed of the neural model. In the NL model introduced in Section \[sec3\], the smooth approximation of the cutting function is a natural choice for the activation function in the first layer as well as in the hidden layers, where the logical operators work. Although there are a vast number of activation functions (e.g. linear, sigmoid, $\tanh$, or the recently introduced Rectified Linear Unit (ReLU) [@relu], exponential linear unit (ELU) [@elu], sigmoid-weighted linear unit (SiLU) [@silu]) considered in the literature, most of them are introduced based on some desired properties, without any theoretical background. The parameters are usually fitted only on the basis of experimental results. 
The squashing function (also soft cutting or soft clipping function) introduced above stands out from the other candidates by having a theoretical background thanks to the nilpotent logic which lies behind the scenes. In [@softclip], Klimek and Perelstein presented a Neural Network (NN) algorithm optimized to perform Monte Carlo integration, which is widely used in particle physics to integrate and sample probability distributions on multi-dimensional phase spaces. The algorithm has been applied to several examples of direct relevance for particle physics, including situations with non-trivial features such as sharp resonances and soft/collinear enhancements. In this algorithm, each node in a hidden layer of the NN takes a linear combination of the outputs of the nodes in the previous layer and applies an activation function. The nodes in the final layer again take a linear combination of the values in the next-to-last layer, but then apply another function, the output function, which is chosen to map onto the set of possible outcomes for the given situation. For their purpose, an output function that maps onto the unit interval was needed, since phase space is described as a unit hypercube. Sigmoids approach the boundary values of 0 and 1 very slowly, which makes it difficult for the NN to populate the edges of the hypercube. Therefore, the exponential linear unit (ELU) was used as the activation function; since the ELU does not generate exponentially large values, a special case of the above defined (see Definition \[squashdef\]) squashing function (soft cutting or alternatively soft clipping function) was introduced and used as the output function. This soft clipping function is approximately linear within $x\in(0,1)$ and asymptotes very quickly outside that range. It is parameterized by a parameter that determines how close to linear the central region is and how sharply the linear region turns to the asymptotic values. 
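The near-linearity inside $(0,1)$ and the fast saturation outside can be verified directly; a small sketch (the parameter value $\beta=50$ and the test points are illustrative):

```python
import math

def soft_clip(x, beta=50.0):
    # squashing function with a = 1/2, lambda = 1, used as an output function
    sp = lambda z: max(z, 0.0) + math.log1p(math.exp(-abs(z)))
    return (sp(beta * x) - sp(beta * (x - 1.0))) / beta

# approximately linear inside (0, 1) ...
assert abs(soft_clip(0.5) - 0.5) < 1e-6
assert abs(soft_clip(0.3) - 0.3) < 1e-3
# ... and asymptotes quickly outside that range
assert soft_clip(1.3) > 0.99
assert soft_clip(-0.3) < 0.01
```

Increasing $\beta$ makes the central region more exactly linear and the turn to the asymptotes sharper, which is the single-parameter trade-off described above.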
Excellent performance has been demonstrated in all examples in favor of the squashing function (see Figures 8 and 9 in [@softclip]). In our experiments, we compared the performance of the squashing function (SQ) with the sigmoid function and the sigmoid-weighted linear unit (SiLU) using the Fashion MNIST dataset, which is a dataset consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255. The training and test data sets have 785 columns. The first column consists of the class labels and represents the article of clothing. The rest of the columns contain the pixel-values of the associated image. The results shown in Figure \[fig:expresult\] demonstrate the excellent performance of the squashing function. ![Performance of the squashing function compared to the sigmoid and SiLU activation function tested on FASHION MNIST[]{data-label="fig:expresult"}](SiLUvsSQvsSigm500.pdf){width="40.00000%"} Conclusion {#concl} ========== In this work we suggested combining deep neural networks with the concepts of nilpotent logical systems to reduce uninterpretability of neural models by proposing a general mathematical framework. In our model, the activation functions in the nodes of the first layer are membership functions representing inequalities, and the nilpotent logical operators work in the hidden layers. To ensure differentiability, the cutting function (in the membership functions as well as in the logical operators) is approximated by the squashing function. All the operators in the model can be defined by a single generator function and a few parameters. 
A novel type of neural unit was also introduced by adding the square of each input into the input layer (see Figure \[fig:circle\]) to describe the inside/outside of a circle without polygon approximation. Finally, we showed that the squashing function as an activation function not only stands out from the other candidates considered in the literature by having a theoretical background, but also performs well in first experiments on the Fashion MNIST dataset. Acknowledgment ============== This research was partially supported by grant TUDFO/47138-1/2019-ITM of the Ministry for Innovation and Technology, Hungary. [00]{} D. Clevert, T. Unterthiner, S. Hochreiter, Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), arXiv:1511.07289, 2015. O. Csiszár, J. Dombi, Generator-based Modifiers and Membership Functions in Nilpotent Operator Systems, IEEE International Work Conference on Bioinspired Intelligence (IWOBI 2019), July 3-5, 2019, Budapest, Hungary, 2019. J. Dombi, Membership function as an evaluation, Fuzzy Sets Syst., 35, 1-21, 1990. J. Dombi, O. Csiszár, The general nilpotent operator system, Fuzzy Sets Syst., 261, 1-19, 2015. J. Dombi, O. Csiszár, Implications in bounded systems, Inform. Sciences, 283, 229-240, 2014. J. Dombi, O. Csiszár, Equivalence operators in nilpotent systems, Fuzzy Sets Syst., doi:10.1016/j.fss.2015.08.012, available online, 2015. J. Dombi, O. Csiszár, Self-dual operators and a general framework for weighted nilpotent operators, Int J Approx Reason, 81, 115-127, 2017. J. Dombi, O. Csiszár, Operator-dependent Modifiers in Nilpotent Logical Systems, in Proceedings of the 10th International Joint Conference on Computational Intelligence (IJCCI 2018), 126-134, 2018. D. Dubois and H. Prade, Fuzzy sets in approximate reasoning. Part 1: Inference with possibility distributions, Fuzzy Sets and Syst., 40, 143-202, 1991. S. Elfwing, E. Uchibe, K.
Doya, Sigmoid-weighted linear units for neural network function approximation in reinforcement learning, Neural Networks, 107, 3-11, 2018. M. Fisher, M. Balunovic, D. Drachsler-Cohen, T. Gehr, C. Zhang and M. Vechev, DL2: Training and Querying Neural Networks with Logic, Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. M. V. Franca, G. Zaverucha and A. S. d. Garcez, Fast relational learning using bottom clause propositionalization with artificial neural networks, Machine Learning, 94(1), 81-104, 2014. A. S. d. Garcez, K. Broda, and D. M. Gabbay, Neural-symbolic learning systems: foundations and applications, Springer Science & Business Media, 2012. J. Dombi, Zs. Gera, Fuzzy rule based classifier construction using squashing functions, J. Intell. Fuzzy Syst., 19, 3-8, 2008. J. Dombi, Zs. Gera, The approximation of piecewise linear membership functions and [Ł]{}ukasiewicz operators, Fuzzy Sets Syst., 154, 275-286, 2005. Z. Hu, X. Ma, Z. Liu, E. Hovy, E. P. Xing, Harnessing Deep Neural Networks with Logic Rules, arXiv:1603.06318v5. M. D. Klimek, M. Perelstein, Neural Network-Based Approach to Phase Space Integration, arXiv:1810.11509v1. T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, Deep convolutional inverse graphics network, in Proc. of NIPS, 2530-2538, 2015. C. T. Lin, C. S. G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems, Upper Saddle River, NJ: Prentice Hall, 1996. A. L. Maas, A. Y. Hannun, A. Y. Ng, Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2014. G. G. Towell, J. W. Shavlik and M. O. Noordewier, Refinement of approximate domain theories by knowledge-based neural networks, in Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, 861-866, 1990. E. Trillas and L. Valverde, On some functionally expressable implications for fuzzy set theory, Proc.
of the 3rd International Seminar on Fuzzy Set Theory, Linz, Austria, 173-190, 1981. J. Xu, Z. Zhang, T. Friedman, Y. Liang, and G. V. den Broeck, A semantic loss function for deep learning with symbolic knowledge, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Volume 80, 5498–5507, 2018.
--- abstract: 'In this paper, we propose a novel approach named *Discriminative Principal Component Analysis*, abbreviated as *Discriminative PCA*, which enhances the separability of PCA by means of Linear Discriminant Analysis (LDA). The proposed method performs feature extraction by determining a linear projection that captures the most scattered discriminative information. The main innovation of *Discriminative PCA* is that PCA is performed on a discriminative matrix rather than on the original sample matrix. To calculate the required discriminative matrix at low complexity, we apply LDA to a converted matrix and obtain its within-class and between-class matrices. During the computation, we use direct linear discriminant analysis (DLDA) to solve the small sample size (SSS) problem that arises. To evaluate the performance of *Discriminative PCA* in face recognition, we compare it analytically with DLDA and PCA on four well-known facial databases: PIE, FERET, YALE and ORL. Accuracy and running times obtained with a nearest neighbour classifier are compared for different numbers of training images per person. *Discriminative PCA* not only shows superior and outstanding recognition rates, but also comparable running times.' author: - | Hanli Qiao\ School of Science, Guilin University of Technology, China\ ` hanlinqiao77@gmail.com` bibliography: - 'DiscriminativePCA-PR.bib' title: 'Discriminative Principal Component Analysis: <span style="font-variant:small-caps;">A Reverse Thinking</span>[^1] ' --- [**Keywords:** Discriminative PCA, DLDA, PCA, discriminative matrix, face recognition]{} Introduction ============ Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most popular linear dimensionality reduction approaches.
Due to their effectiveness in feature extraction, PCA, LDA and their variants have been continuously developed and applied to numerous applications in various areas, including pattern recognition, computer vision, industrial engineering and data analysis. For illustration, a novel variant of PCA, the adaptive block sparse PCA based on penalized SVD, is proposed in [@Seghouane18] to derive a new multiple-set canonical correlation analysis (mCCA) method, which is applied to the analysis of multi-subject fMRI data sets. Two multilinear extensions of PCA are investigated in [@Pacella18] with application to an emission control system. LDA, in turn, is often used for classification: an application to text classification based on self-training and LDA topic models is introduced in [@Pavlinek17], and facial expression classification using LDA and a threshold SVM is studied in [@Shah17]. To accelerate the convergence rate of the incremental LDA algorithm, paper [@Ghassabeh15] derives new algorithms by optimizing the step size in each iteration using steepest descent and conjugate direction methods. Beyond these applications, [@Wu17; @Stuhlsatz12] root deep learning techniques in LDA to learn non-linear transformations. Generally speaking, PCA seeks an orthogonal transformation that converts a set of observations of possibly correlated variables into a set of maximally scattered values of linearly uncorrelated variables. A large body of theoretical analysis and real applications shows that PCA is simple and efficient; however, PCA considers only the overall sample data, a drawback that limits its classification rate. LDA, by contrast, achieves excellent separability by maximising the between-class distance while simultaneously minimising the within-class distance, but at a much higher computational cost.
Owing to the importance of feature extraction in numerous areas, designing a more effective approach that exploits the advantages of existing methodologies while overcoming their shortcomings is promising. In this sense, we aim to enhance the separability of PCA by LDA while preserving its outstanding property of low computational complexity. To achieve this goal, we perform PCA on a discriminative matrix that is calculated by LDA. However, regular LDA suffers from two severe problems. One is computational difficulty; the other appears when the number of observations is smaller than their dimension, the so-called *small sample size* (SSS) problem. To use LDA effectively, we therefore need a reasonable solution to both problems. Thanks to the extensive applications of LDA, many researchers have focused on the SSS problem. One popular solution is *PCA plus LDA*, which applies PCA as a pre-processing step; the relevant theoretical foundation and its applications can be found in [@PCA-LDA03; @KPCA-LDA04; @Sun18]. But incompatibility is a potential problem of *PCA plus LDA*: PCA may discard dimensions that contain important discriminative information for LDA. A more feasible and effective approach, named direct linear discriminant analysis (DLDA), was therefore designed in [@DLDA01; @Lu03]. The proposed DLDA algorithms accept high-dimensional data input and optimize Fisher’s criterion directly, without any prior feature extraction operation, by discarding the null space of the between-class matrix while keeping the null space of the within-class matrix, thus tackling the incompatibility problem of *PCA plus LDA*. Due to its effectiveness, researchers have adopted DLDA in diverse applications with outperforming results [@Portillo17; @Meshgini13].
Coming back to our original intention, in this paper we adopt DLDA to solve the SSS problem during the implementation process. Related Works ------------- The essential problem of *Discriminative PCA* is how to build the computational framework so that the derived subspace possesses the superiorities of both LDA and PCA while overcoming their limitations. Classifier fusion is a promising research direction in the fields of pattern recognition and computer vision, and it seems reasonable to expect that a better performance can be obtained by combining the resulting classifiers; the idea of fusing PCA and LDA hence becomes feasible. Based on this consideration, several combining techniques have been designed for applications in diverse areas, including face recognition, verification, re-identification, classification and fault detection [@Oh13; @Marcialis02; @Sadeghi10; @Borade16; @Sharma06; @Deng17]. The authors of [@Oh13] fuse PCA and LDA in the data preprocessing part for the extraction of facial features by integrating two covariance matrices into a single covariance matrix. The fused subspace is expected to preserve the nature of both the PCA and the LDA subfaces and hence to improve performance. However, during the computation the two covariance matrices are calculated directly on the original data matrix, which is constructed from high-dimensional vectors. This is an intractable task, especially for the calculation of the corresponding eigenvalues. Similar problems occur in the other literature: despite the outstanding results of the fusion strategies, two serious drawbacks remain. One is the expensive computational cost, and the other is the easy occurrence of the SSS problem during the LDA procedure. Take the ORL facial database [@ORL] as an illustration: there are 40 people in total and 10 facial images per person with a size of $112\times 92$ pixels, so each image can be seen as a point in a 10304-dimensional space.
If we choose 5 training images per individual for face recognition, the size of the training matrix is $10304\times 200$. In this case, the fusion of PCA and LDA is conducted directly on two $10304\times 10304$ covariance matrices, so the computational cost becomes very expensive, which leads to application difficulties; the SSS problem that comes with it causes further inefficiency. Therefore, fusing PCA and LDA directly cannot meet our purpose, and other ways of enhancing the discriminant information of PCA need to be developed. Unlike the fusion classifiers, novel methods incorporating discriminant constraints into non-negative matrix factorization (NMF) and kernel NMF (KNMF) are proposed in [@Zafeiriou06; @Liang10], which inspires us to discover discriminant projections for the sample data after projection onto the obtained low-dimensional subspace. To guarantee outperforming properties with low computational cost and to solve the potential SSS problem, we enhance the discriminant information inside the PCA procedure by adopting the DLDA strategy on a converted matrix of small size derived from the original sample matrix. To help the reader understand *Discriminative PCA*, the next section introduces the feature extraction processes in face recognition using PCA and LDA. PCA and LDA =========== From the introduction, it is now clear that *Discriminative PCA* is the process of performing PCA on a discriminative matrix computed by LDA. To approach our novel method properly, we first briefly introduce the schemes of PCA and LDA for feature extraction in face recognition. The symbols and their descriptions are summarised in Table \[symbols\].
  Symbol                     Description                                                                                              Dimensions
  -------------------------- -------------------------------------------------------------------------------------------------------- ----------------
  $\Omega=\{\omega_{ij}\}$   training matrix consisting of the set of input training images; one column represents one facial image    $MN\times cl$
  $\omega_{ij}$              $j$th face of the $i$th individual                                                                       $MN\times 1$
  $MN$                       dimension of $\omega_{ij}$                                                                               scalar
  $c$                        number of persons                                                                                        scalar
  $l$                        number of training images per person (each individual is assumed to have the same number)                scalar
  $cl$                       total number of training images                                                                          scalar
  ${\bf C}$                  covariance matrix                                                                                        $MN \times MN$
  $\bar{\omega}$             mean of all training samples                                                                             $MN \times 1$
  $\bar{\omega_i}$           mean of the $i$th-class samples                                                                          $MN \times 1$
  ${\bf{S_b}}$               between-class matrix                                                                                     $MN \times MN$
  ${\bf{S_w}}$               within-class matrix                                                                                      $MN \times MN$

The input training samples form a set of $c$ classes defined by $${\bf{\Omega}}=\{ \Omega_1, \Omega_2, \dots, \Omega_c\},\ \Omega_i=\{\omega_{i1}, \omega_{i2},\cdots, \omega_{il}\},\ i=1,\cdots, c$$ where $c$ and $l$ have the meanings described in Table \[symbols\]. The training sample matrix can then be represented as $$\label{training matrix} {\bf{\Omega}}=\left[\omega_{11},\dots,\omega_{1l}, \cdots, \omega_{c1},\dots, \omega_{cl}\right]$$ a real matrix of size $MN\times cl$ whose columns are the $\omega_{ij}$.
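As a minimal sketch (assuming numpy and already-cropped grayscale images; the helper name is ours), the training matrix of can be assembled column by column:

```python
import numpy as np

def build_training_matrix(images):
    """Stack flattened M*N grayscale images as the columns of Omega.

    `images` is a list of c sub-lists (one per person), each holding l
    images of shape (M, N); the result has shape (M*N, c*l), with one
    column per facial image."""
    cols = [img.reshape(-1) for person in images for img in person]
    return np.stack(cols, axis=1).astype(float)
```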
PCA implementation by the covariance method {#PCA} ------------------------------------------- PCA [@PCA91; @PCA10] can be used to reduce dimensionality by mapping the original sample matrix into a $p$-dimensional feature subspace, where $p \ll MN.$ The algorithm is based on the Karhunen-Loève transform, a common orthogonal transform that chooses a dimensionality-reducing linear projection maximizing the scatter of all projected samples. The orthogonal basis functions used in this representation are determined by the covariance function of the process: in detail, the transform expands the process with respect to the basis spanned by the eigenvectors of the covariance function. The purpose of PCA is hence to find a linear transformation projecting the original sample matrix onto a $p$-dimensional feature subspace $V$, defined by the linear transformation $${\bf{Y}}=V^T{\bf{\Omega}}$$ where $V\in {\mathbb{R}}^{MN\times p}$ is a matrix with orthonormal columns $v_k,\ k=1,\dots,p$. To make sure the projected samples are maximally scattered and uncorrelated, the process is implemented on the centred sample matrix, as the forthcoming paragraph explains. Each column of the matrix $\Omega$ in Table \[symbols\] represents an observation, which can be seen as a point in $MN$-dimensional space. In the covariance method, the essential step of PCA is seeking a set of $p$ orthonormal vectors $v_k$ that best represent the distribution of all samples.
This can be achieved by diagonalizing the covariance matrix of the centred sample data, $$\label{covariance matrix} {\bf{C}}=\frac{1}{cl}\sum_{i=1}^c\sum_{j=1}^l(\omega_{ij}-\bar{\omega})(\omega_{ij}-\bar{\omega})^T$$ We know from Table \[symbols\] that $\bar{\omega}$ is the average of all observations, defined by $\bar{\omega}=\displaystyle{\frac{1}{cl}}\sum\limits_{i=1}^c\sum\limits_{j=1}^l\omega_{ij}.$ The required feature vectors are the orthonormal eigenvectors of $\bf{C}$ corresponding to the $p$ largest eigenvalues. Set $A=\displaystyle{\frac{1}{\sqrt{MN}}}[\omega_{11}-\bar{\omega},\cdots, \omega_{1l}-\bar{\omega},\cdots, \omega_{cl}-\bar{\omega}]$; then the $v_k$ are chosen so that the diagonal elements of $$\Lambda= diag \left(\lambda_1,\cdots,\lambda_{MN} \right)=V^T( AA^T)V$$ attain the largest values. However, the covariance matrix is an $MN\times MN$ real symmetric matrix, which is so large that it easily causes computational difficulties in the calculation of eigenvalues and the corresponding eigenvectors. To overcome this shortcoming, a more feasible method is to decompose the matrix $$\widetilde{\bf{C}}=A^TA$$ into its eigenvectors: its size is $cl \times cl$, which is much smaller than that of $\bf{C}$. If $u_k$ are the eigenvectors of $\widetilde{\bf{C}}$ corresponding to the eigenvalues $\lambda_k$, then the feature vectors $v_k$ can be calculated as $Au_k$, since $$A^TAu_k=\lambda_ku_k\rightarrow AA^T(Au_k)=\lambda_k(Au_k)$$ Now the feature matrix can be calculated as $V=AU$, where $U =[u_1,\dots,u_p]$ is composed of the eigenvectors corresponding to the top $p$ largest eigenvalues. PCA has a well-known successful application in face recognition: *eigenfaces*.
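The small-matrix trick above can be sketched in numpy (a minimal illustration of the computation, not the author's code; `pca_small_trick` is a hypothetical helper name):

```python
import numpy as np

def pca_small_trick(Omega, p):
    """PCA feature matrix via the cl x cl matrix A^T A instead of the
    MN x MN covariance; returns V with the top-p components as columns."""
    MN, cl = Omega.shape
    mean = Omega.mean(axis=1, keepdims=True)
    A = (Omega - mean) / np.sqrt(MN)
    # eigen-decompose the small symmetric matrix C~ = A^T A (cl x cl)
    lam, U = np.linalg.eigh(A.T @ A)
    order = np.argsort(lam)[::-1][:p]          # top-p eigenvalues
    V = A @ U[:, order]                        # lift each u_k -> A u_k
    V /= np.linalg.norm(V, axis=0)             # orthonormal columns
    return V
```

Since $A^T A u_k = \lambda_k u_k$ implies $AA^T(Au_k) = \lambda_k (Au_k)$, the lifted columns are (after normalization) the sought eigenvectors of the large covariance matrix.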
LDA-fisher method {#LDA} ----------------- Although the feature space yielded by PCA contains maximal scatter information without correlations among samples, PCA is sensitive to unexpected variations such as illumination, expression and pose in face recognition, because it lacks discriminant information. In these situations, variations between images of the same person are larger than image variations caused by a change of identity, so the PCA projections may not be optimal from a discriminant viewpoint. The limited separability of *eigenfaces* was overcome by [@LDA97] with the so-called LDA algorithm, also known as *fisherface* in face recognition. LDA is a supervised method, so it makes sense to use the label information of the training samples to build a more reliable method for dimensionality reduction of the feature space. As in PCA, the feature matrix $W$ of *fisher* LDA can be defined by the following linear transformation $${\bf{Y}}=W^T{\bf{\Omega}}$$ where $W\in {\mathbb{R}}^{MN\times m}$ is computed in such a way that the ratio of the between-class matrix to the within-class matrix is maximized over all training samples. We use ${\bf{S_b}}$ and ${\bf{S_w}}$ to denote the between-class and within-class matrices respectively; their detailed forms are listed in . $$\label{discriminant matrices} \begin{aligned} {\bf{S_b}}&=\sum_{i=1}^c(\bar{\omega_i}-\bar{\omega})(\bar{\omega_i}-\bar{\omega})^T\\ {\bf{S_w}}&=\sum_{i=1}^c\sum_{j=1}^l(\omega_{ij}-\bar{\omega_i})(\omega_{ij}-\bar{\omega_i})^T \end{aligned}$$ where $\bar{\omega_i}=\displaystyle{\frac{1}{l}\sum_{j=1}^l}\omega_{ij}$; the feature matrix $W$ of *fisher* LDA is then obtained by $$\label{fisher} \begin{aligned} W&=\mathop{\arg\max}_{W} \frac{W^T{\bf{S_b}}W}{W^T{\bf{S_w}}W}\\ &=[w_1, w_2, \cdots, w_m] \end{aligned}$$ where $w_i,\ i=1,\cdots,m$ are the eigenvectors of $\bf{S_b}$ and $\bf{S_w}$ corresponding to the top $m$ largest eigenvalues, i.e.
$${\bf{S_w^{-1}}}{\bf{S_b}}w_i=\lambda_iw_i,\ i=1,2,\dots,m$$ Obviously, the feature vectors $w_i$ can be obtained by an eigenvalue decomposition of ${\bf{S_w^{-1}}}{\bf{S_b}}$, whose size is $MN\times MN$. As in PCA, this is an intractable task. In addition to the computational difficulty, a situation that is easily encountered is that ${\bf{S_w}}$ is singular, especially in the face recognition field. This stems from the fact that the number of observations is much smaller than their dimension, i.e. the number of pixels: this is the so-called SSS problem. Conclusions of PCA and LDA -------------------------- To illustrate the properties of PCA and LDA, we summarise their superiorities and shortcomings in this subsection to motivate the original intention of *Discriminative PCA*. The key advantages of PCA in face recognition are its low noise sensitivity, low computational cost and high recognition accuracy on ideal facial databases. Compared with LDA, PCA works better when the number of classes is small. However, PCA only considers the most scattered information among all samples, whereas LDA works better with large datasets having multiple classes, since class separability is an important factor while reducing dimensionality. On the other hand, LDA has a much higher computational cost than PCA and usually faces the SSS problem. Based on these considerations, constructing a novel technique that preserves the superiorities of both PCA and LDA while overcoming their weaknesses is promising and necessary. *Discriminative PCA* derives from this original intention; the next section explains it in detail. Discriminative PCA ================== The core idea of *Discriminative PCA* is to construct an algorithm that finds a subspace of discriminative principal components, i.e. an *eigen*-subspace that possesses discriminant information.
To fulfil this purpose, the essential implementation is to perform PCA on discriminative matrices; the main problem *Discriminative PCA* needs to solve is therefore how to compute these discriminative matrices. In a word, as in PCA and LDA, we aim to find a feature matrix $\bf{\Xi}$ that builds the following linear transformation $$\label{projection_dpca} {\bf{Y}}={\bf{\Xi}}^T\Omega$$ where ${\bf{\Xi}}\in {\mathbb{R}}^{MN\times p}$ is composed of a set of feature vectors $\tilde{v}_k,\ k=1,\dots,p$, the top $p$ principal components containing discriminant information. The idea of *Discriminative PCA* leads to a natural and feasible thought: apply PCA to a matrix that contains discriminative information instead of to the original sample data. This is a process that uses LDA to enhance the separability of PCA. However, if we adopted LDA directly, the problems of computational cost and of the SSS problem would stop us. From subsection \[LDA\], the size of ${\bf{S_b}}$ and ${\bf{S_w}}$ is clearly $MN \times MN$, which is too large for calculating the relevant discriminative matrices. The primary issue is therefore to design a novel matrix of small size, so that performing LDA on it has much lower complexity than on the original sample matrix. The original training sample matrix is hence converted into $$\label{New_sample} \widetilde{\mathcal{S}} \coloneqq {\bf{\Omega}}^T{\bf{\Omega}},$$ which is a $cl\times cl$ matrix, $cl$ being the total number of training images as described in Table \[symbols\]; the size of $\widetilde{\mathcal{S}}$ is now much smaller, so the computational crisis is solved successfully. In order to obtain the discriminant information of $\Omega$ rather than of $\widetilde{\mathcal{S}}$, we must find the relationship between them.
We start with the calculation of ${\bf{\widetilde{S}_b}}$ and ${\bf{\widetilde{S}_w}}$ for $\widetilde{\mathcal{S}}$: $$\begin{aligned} {\bf{\widetilde{S}_b}}&=\sum_{i=1}^{c} ({\bf{\Omega}}^T\bar{\omega_i}-{\bf{\Omega}}^T\bar{\omega})({\bf{\Omega}}^T\bar{\omega_i}-{\bf{\Omega}}^T\bar{\omega})^T\\ &={\bf{\Omega}}^T \left[\sum_{i=1}^{c} (\bar{\omega_i}-\bar{\omega})(\bar{\omega_i}-\bar{\omega})^T \right]{\bf{\Omega}}\\ \end{aligned}$$ similarly, we have $$\begin{aligned} {\bf{\widetilde{S}_w}}&=\sum_{i=1} ^{c}\sum_{j=1}^{l}({\bf{\Omega}}^T\omega_{ij}-{\bf{\Omega}}^T\bar{\omega_i})({\bf{\Omega}}^T\omega_{ij}-{\bf{\Omega}}^T\bar{\omega_i})^T\\ &={\bf{\Omega}}^T \left[ \sum_{i=1} ^{c}\sum_{j=1}^{l}(\omega_{ij}-\bar{\omega_i})(\omega_{ij}-\bar{\omega_i})^T\right] {\bf{\Omega}}. \end{aligned}$$ where ${{\bf{\widetilde{S}_b}}}$ and ${{\bf{\widetilde{S}_w}}}$ denote the between-class and within-class matrices of $\widetilde{\mathcal{S}}$ respectively, while $\bf{S_b}$ and $\bf{S_w}$ are the corresponding discriminative matrices of $\Omega$. The relationship between them can then be described as $$\begin{aligned} {\bf{\widetilde{S}_b}}&={\bf{\Omega}}^T {\bf{S_b}} {\bf{\Omega}} \\ {\bf{\widetilde{S}_w}}&={\bf{\Omega}}^T {\bf{S_w}} {\bf{\Omega}} \end{aligned}$$ The feature space $\bf{\Xi}$ we are committed to find should possess discriminant information, which means that $\bf{\Xi}$ can be obtained from the optimal subspace $W$ described in formulation . Because of the computational difficulty, we first calculate $\widetilde{W}$ of $\widetilde{\mathcal{S}}$ in order to deduce $W$. We first give a lemma explaining the relationship between $W$ and $\widetilde{W}$. \[novel\_w\] Suppose $X,\ A$ are invertible matrices, if $\widetilde{w}$ is an eigenvector of $\widetilde{A}^{-1}\widetilde{B}$, and $\widetilde{A}=X^TAX,\ \widetilde{B}=X^TBX$, then $X\widetilde{w}$ is an eigenvector of $A^{-1}B$.
$$\begin{aligned} &\widetilde{B}\widetilde{w}=\lambda \widetilde{A} \widetilde{w}\\ &\Rightarrow (X^TBX)\widetilde{w}=\lambda X^TAX \widetilde{w}\\ &\Rightarrow (X^TA)^{-1}(X^TB)X\widetilde{w}=\lambda X \widetilde{w}\\ &\Rightarrow A^{-1}(X^T)^{-1}(X^T)B(X\widetilde{w})=\lambda (X \widetilde{w})\\ &\Rightarrow A^{-1}B(X\widetilde{w})=\lambda (X \widetilde{w}) \end{aligned}$$ According to Lemma \[novel\_w\], $W$ can be obtained by $$\label{discriminant_novel} W={\bf{\Omega}}\widetilde{W}$$ where $\widetilde{W}\in {\mathbb{R}}^{cl\times m}$ is constructed from the eigenvectors $\widetilde{w}_k,\ k=1,\dots,m$ corresponding to the top $m$ largest eigenvalues, derived from $\displaystyle{ {\bf{\widetilde{S}_w}}^{-1}{\bf{\widetilde{S}_b}}\widetilde{W}}=\lambda \widetilde{W}$. However, non-singularity is a necessary requirement for all the computations mentioned above, and particularly in face recognition it is very difficult to guarantee that the sample matrix and the within-class matrix are non-singular, i.e. the SSS problem arises; we therefore need an appropriate approach to this task. In addition to the SSS problem, we find that the elemental values of ${\bf{\widetilde{S}_w}}$ and ${\bf{\widetilde{S}_b}}$ are too large to obtain numerically reliable results. Before settling the SSS problem, we first design regularization strategies that keep the elemental values in an appropriate range.
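Lemma \[novel\_w\] can be checked numerically; a minimal sketch with random invertible matrices (numpy, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# strictly diagonally dominant matrices are invertible
X = rng.random((n, n)) + n * np.eye(n)
A = rng.random((n, n)) + n * np.eye(n)
B = rng.random((n, n))
At = X.T @ A @ X                       # A~ = X^T A X
Bt = X.T @ B @ X                       # B~ = X^T B X
lam, Wt = np.linalg.eig(np.linalg.inv(At) @ Bt)
w = X @ Wt[:, 0]                       # lemma: X w~ is an eigenvector of A^{-1} B
resid = np.linalg.inv(A) @ B @ w - lam[0] * w
print(np.linalg.norm(resid))           # ~ 0
```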
There are two ways to carry out this goal, described by the formulas below: $$\label{meanvalue_rule} \left \{ \begin{array}{l} {\bf{\widetilde{S}_w}}={\bf{\widetilde{S}_w}}./\overline{{\bf{\widetilde{S}}}}_w\\ {\bf{\widetilde{S}_b}}={\bf{\widetilde{S}_b}}./\overline{{\bf{\widetilde{S}}}}_b \end{array} \right.$$ and $$\label{maxvalue_rule} \left \{ \begin{array}{l} {\bf{\widetilde{S}_w}}={\bf{\widetilde{S}_w}}./\max ({\bf{\widetilde{S}_w}})\\ {\bf{\widetilde{S}_b}}={\bf{\widetilde{S}_b}}./\max ({\bf{\widetilde{S}_b}}) \end{array} \right.$$ where the symbol $./$ means that each element of the matrix in the numerator is divided by the denominator, $\overline{\bullet}$ is the mean value of all elements of the matrix $\bullet$, and $\max (\bullet)$ is its maximal element. We call the regularization shown in \[meanvalue\_rule\] the mean value rule and that in \[maxvalue\_rule\] the maximum rule. Now it is time to face the SSS problem occurring during the computation of $\widetilde{W}$. The SSS problem still exists because $$\mathop{rank}({\bf{\widetilde{S}_w}})\leq \min \{\mathop{rank}({\bf{\Omega}}),\ \mathop{rank}({\bf{S_w}}) \}$$ As mentioned in the introduction, DLDA is used in this paper to solve the SSS problem and to avoid the loss of important discriminant information caused by *PCA plus LDA*. The most important innovation of DLDA is to discard the null space of $\bf{\widetilde{S}_b}$ rather than the null space of $\bf{\widetilde{S}_w}$. The benefit is that the most discriminative information is preserved in the subspace ${\bf{B}}' \cap {\bf{A}}$, where ${\bf{B}}'$ is the complementary space of **B**, the null space spanned by the eigenvectors corresponding to zero eigenvalues of $\bf{\widetilde{S}_b}$, and **A** is the space spanned by the eigenvectors corresponding to the relevant smaller eigenvalues of $\bf{\widetilde{S}_w}$.
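Before turning to the DLDA implementation, the two regularization rules \[meanvalue\_rule\] and \[maxvalue\_rule\] can be sketched with a single hypothetical helper (assuming numpy):

```python
import numpy as np

def regularize(S, rule="mean"):
    """Scale scatter-matrix entries into a moderate range:
    'mean' divides every entry by the mean of all entries (mean value
    rule), 'max' divides by the largest entry (maximum rule)."""
    S = np.asarray(S, dtype=float)
    scale = S.mean() if rule == "mean" else S.max()
    return S / scale
```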
Before specifically explaining the implementation process of DLDA, the relevant notation is given here: $\bf{E_b}$ is the space spanned by the eigenvectors of $\bf{\widetilde{S}_b}$ corresponding to all its eigenvalues, which are used to construct the diagonal matrix ${\bf{\Lambda}}_b$. First we discard the eigenvectors of zero eigenvalues from $\bf{E_b}$; then $\hat{\bf{E}}_b$ stands for the space spanned by the remaining eigenvectors and $\hat{\bf{\Lambda}}_b$ is the diagonal matrix whose diagonal elements are the corresponding remaining eigenvalues. Thereby ${\bf{B}}'$ can be calculated through the following formulation; in this way we have ${\bf{B}}'^T{\bf{\widetilde{S}_b}}{\bf{B}}'=\bf{I}_n$, where $\bf{I}_n$ is an identity matrix of size $n\times n$ and $n$ is the number of the remaining eigenvectors. $$\label{B} {\bf{B}}'=\hat{\bf{E}}_b\hat{\bf{\Lambda}}_b^{-1/2}$$ Based on formula , the intersection subspace ${\bf{B}}' \cap {\bf{A}}$ can be obtained by the following steps. First, diagonalise ${\bf{B}}'^T{\bf{\widetilde{S}_w}}{\bf{B}}'$ to get the eigenvector space ${\bf{E}}_w$ and the corresponding diagonal matrix ${\bf{\Lambda}}_w$ composed of the corresponding eigenvalues. The second step is to discard the eigenvectors corresponding to the largest eigenvalues; we use $\hat{\bf{E}}_w$ and $\hat{\bf{\Lambda}}_w$ to denote the spaces spanned by the remaining eigenvectors and eigenvalues after discarding. Now $\widetilde{W}$ can be calculated by $$\widetilde{W}={\bf{B}'}\hat{\bf{E}}_w\hat{\bf{\Lambda}}_w^{-1/2}$$ Eventually, according to , we get the discriminative matrix of $\bf{\Omega}$ by $$\label{W} W={\bf{\Omega}}\widetilde{W}$$ **Input:** training facial image set ${\bf{\Omega}}$. **Output:** feature subspace $\Xi$ used for the linear transformation. Step 1. Calculate $\bf{\widetilde{S}_w}$ and $\bf{\widetilde{S}_b}$ of ${\bf{\Omega}}^T{\bf{\Omega}}$; Step 2. Regularize $\bf{\widetilde{S}_w}$ and $\bf{\widetilde{S}_b}$; Step 3.
Calculate the eigenvectors of $\bf{\widetilde{S}_b}$ corresponding to non-zero eigenvalues: ${\bf{\hat{E}}}_b=[{e_b}_1,\dots,{e_b}_n],\ {\bf{\hat{\Lambda}}}_b^{-1/2}=[{\lambda_b}_1^{-1/2},\dots,{\lambda_b}_n^{-1/2}]$; Step 4. Let ${\bf{B'=\hat{E}}}_b{\bf{\hat{\Lambda}}}_b^{-1/2}$, then calculate the eigenvectors ${\bf{E}}_w$ of ${\bf{B'}}^T{\bf{\widetilde{S}}}_w{\bf{B'}}$; Step 5. Discard the eigenvectors of ${\bf{E}}_w$ with respect to the largest eigenvalues to obtain ${\bf{\hat{E}}}_w=[{e_w}_1,\dots,{e_w}_m]$; \[step\_5\] Step 6. Calculate the feature subspace $\widetilde{W}$ of ${\bf{\Omega}}^T{\bf{\Omega}}$ by $\widetilde{W}={\bf{B}}'{\bf{\hat{E}}}_w{\bf{\hat{\Lambda}}}_w^{-1/2},\ {\bf{\hat{\Lambda}}}_w^{-1/2}=[{\lambda_w}_1^{-1/2},\dots, {\lambda_w}_m^{-1/2}]$; Step 7. Calculate the discriminative matrix $W$ of ${\bf{\Omega}}$ by $W={\bf{\Omega}}\widetilde{W}$; Step 8. Compute the covariance of the centred discriminative matrix $W$ through ${\bf{C}_W}=\displaystyle{\frac{1}{m}}(W-\bar{W})^T(W-\bar{W})$; Step 9. Select the eigenvectors $[{e_{\bf{C}}}_1,\dots,{e_{\bf{C}}}_p]$ with the top $p$ largest eigenvalues of ${\bf{C}_W}$; \[step\_9\] Step 10. Normalise $[{e_{\bf{C}}}_1,\dots,{e_{\bf{C}}}_p]$ to obtain the feature subspace ${\bf{\Xi}}$ composed of $\left[\displaystyle{\frac{{e_{\bf{C}}}_1}{\Vert {e_{\bf{C}}}_1 \Vert}},\dots,\displaystyle{\frac{{e_{\bf{C}}}_p}{\Vert {e_{\bf{C}}}_p \Vert}} \right]$. The subsequent task of *Discriminative PCA* is to perform PCA on the discriminative matrix $W$ obtained by to get the feature space $\bf{\Xi}$. As explained in subsection \[PCA\], we first construct the covariance matrix ${\bf{C}}_W$ from the centred $W$ and then select the orthonormal eigenvectors with the top $p$ largest eigenvalues of ${\bf{C}}_W$ to construct the feature space $\bf{\Xi}$. Then we can extract the discriminative principal features by mapping all training samples onto $\bf{\Xi}$ through .
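The steps above can be sketched end to end in numpy. This is a minimal illustration under stated assumptions, not the author's reference implementation: scatter entries are scaled by their mean absolute value (a robust variant of the mean value rule), near-zero within-class eigenvalues are clipped for stability, and in Steps 9-10 the small-matrix eigenvectors are lifted back through the centred $W$, in line with the covariance trick of subsection \[PCA\]:

```python
import numpy as np

def discriminative_pca(Omega, labels, m, p, eps=1e-8):
    """Sketch of the Discriminative PCA pipeline (Steps 1-10)."""
    MN, n = Omega.shape
    G = Omega.T @ Omega                          # converted cl x cl matrix
    # Step 1: between/within-class scatter of the converted samples
    mu = G.mean(axis=1, keepdims=True)
    Sb = np.zeros((n, n)); Sw = np.zeros((n, n))
    for c in np.unique(labels):
        Gc = G[:, labels == c]
        mc = Gc.mean(axis=1, keepdims=True)
        Sb += (mc - mu) @ (mc - mu).T
        Sw += (Gc - mc) @ (Gc - mc).T
    # Step 2: regularize entry magnitudes (robust variant, see lead-in)
    Sb /= np.abs(Sb).mean(); Sw /= np.abs(Sw).mean()
    # Step 3: keep eigenvectors of Sb with non-zero eigenvalues
    lb, Eb = np.linalg.eigh(Sb)
    keep = lb > eps
    B = Eb[:, keep] * lb[keep] ** -0.5           # B' = E_b Lambda_b^{-1/2}
    # Steps 4-5: diagonalise B'^T Sw B', keep the m smallest eigenvalues
    lw, Ew = np.linalg.eigh(B.T @ Sw @ B)        # ascending order
    lw = np.maximum(lw, eps)                     # clip near-zero values
    # Steps 6-7: W~ = B' E_w Lambda_w^{-1/2}, then W = Omega W~
    Wt = B @ (Ew[:, :m] * lw[:m] ** -0.5)
    W = Omega @ Wt
    # Steps 8-10: PCA on the centred discriminative matrix W
    Wc = W - W.mean(axis=1, keepdims=True)
    lc, U = np.linalg.eigh(Wc.T @ Wc / m)
    top = np.argsort(lc)[::-1][:p]
    Xi = Wc @ U[:, top]                          # lift, as in the PCA trick
    return Xi / np.linalg.norm(Xi, axis=0)
```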
The detailed process is described in the pseudocode of the *Discriminative PCA* algorithm \[alg\_DPCA\]. Experimental Results ==================== To evaluate the effectiveness of *Discriminative PCA*, in this section we compare its recognition accuracy and running time with those of DLDA and PCA in face recognition on four facial databases: CMU PIE [@PIE], FERET [@FERET], YALE [@YALE] and ORL. We use the mean-value rule to complete the regularization step of *Discriminative PCA* on all four databases. All face images used in the experiments are gray-scale and aligned by the locations of the eyes, and only the facial region is retained after cropping. The four databases have rather different properties. For illustration, CMU PIE contains variation in pose, illumination and expression: over 40,000 facial images of 68 individuals are collected, with each person imaged across 13 different poses, under 43 different illumination conditions, and with 4 different expressions. Figure \[PIE\] shows some of the CMU PIE images with a brief description. The FERET database derives from the Face Recognition Technology (FERET) program; it is a large database of facial images, divided into development and sequestered portions. There are color and gray versions of the FERET database, containing more than 10,000 images under various conditions involving pose, age, expression, etc. In our experiments we choose 50 people, with 7 images per individual, from the gray-version FERET database. These images are in TIFF format at $80\times 80$ pixels, and some of them are displayed in Figure \[FERET\]. ![Partial images of Gray FERET face database.[]{data-label="FERET"}](FERET_showfaces.jpg){width="7cm"} The YALE face database contains 165 images of 15 individuals.
There are 11 images per person, one for each facial expression or configuration, as shown in figure \[YALE\]: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink. We use all facial images of the YALE database in this part of the experiments. ![Partial images of YALE face database, in which all individuals are chosen for experiments and the size of each image is $80\times 80$ pixels.[]{data-label="YALE"}](YALE80_showfaces.jpg){width="7cm"} Figure \[ORL\] shows some images of the ORL facial database, which was collected under relatively ideal conditions, so its images contain fewer variations than those of PIE, FERET and YALE. ![Partial images of ORL; this database is composed of 40 individuals and 10 images per person with a size of $112\times 92$ pixels.[]{data-label="ORL"}](ORL_showfaces.jpg){width="7cm"} A nearest-neighbour classifier is used for the recognition step in our experiments, and the reported running time is the average over 20 runs. More reliable results are obtained by varying the number of training images per person, which allows a more comprehensive comparison of *Discriminative PCA*, DLDA and PCA. The subscript in the notation ‘Property$\bf{_{Training \ number}}$’ shown in tables \[tab\_PIE\]-\[tab\_ORL\] denotes this number. We design three cases for the experiments on the PIE database. Table \[tab\_PIE\] clearly shows that whether the training number is small or large (from 5 to 20 images per person), *Discriminative PCA* is far superior to PCA and DLDA in recognition accuracy; the advantage is especially significant when the training number is small. In running time, *Discriminative PCA* is comparable with PCA and in some cases even faster.
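The recognition step described above is plain 1-nearest-neighbour matching in the projected feature space; a minimal NumPy sketch (ours, with hypothetical argument names) is:

```python
import numpy as np

def nn_classify(train_feats, train_labels, test_feats):
    """1-nearest-neighbour classification by Euclidean distance.
    train_feats: (k, d) projected training samples, train_labels: (k,),
    test_feats: (m, d) projected test samples; returns (m,) predictions."""
    # Pairwise squared distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    d2 = (
        (test_feats ** 2).sum(1)[:, None]
        + (train_feats ** 2).sum(1)[None, :]
        - 2.0 * test_feats @ train_feats.T
    )
    # Each test sample gets the label of its closest training sample.
    return train_labels[d2.argmin(axis=1)]
```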
Among the three algorithms, the results of DLDA and PCA are consistent with our analysis: DLDA generally performs better in recognition rate, whereas PCA is dominant in computational cost.

  Property$\bf{_{Training \ number}}$   PCA        DLDA       *Discriminative PCA*
  ------------------------------------- ---------- ---------- ----------------------
  Accuracy$\bf{_5}$                     31.36%     26.82%
  Accuracy$\bf{_{15}}$                  53.53%     93.53%
  Accuracy$\bf{_{20}}$                  69.66%     93.33%
  Running time$\bf{_5}$                 0.668472   0.913645
  Running time$\bf{_{15}}$              0.961559   0.723554
  Running time$\bf{_{20}}$              1.005333   0.757531

Since the PIE images we choose for each person are taken under very different illumination conditions for the same pose, the results in table \[tab\_PIE\] indicate that *Discriminative PCA* resolves PCA's sensitivity to illumination through discriminant enhancement while at the same time retaining low computational complexity. For an intuitive view, we display the first ten basis images of each feature subspace in figure \[basis\_PIE\]. We find that the basis images of PCA contain much more illumination information than those of *Discriminative PCA* and DLDA. Unlike PIE, the FERET images we select mainly vary in pose and expression. Under these factors, the results in table \[tab\_FERET\] verify a similar conclusion: *Discriminative PCA* is far ahead of DLDA and PCA in recognition rate. Especially worth mentioning is the stable performance of *Discriminative PCA* even when the training number is very small (2 images per person). As the number of training images increases, the recognition rates of DLDA and *Discriminative PCA* rise, a phenomenon not shared by PCA. Even when the training number is large, PCA cannot perform well when the face images vary greatly in illumination, pose, expression, etc.; it only has good properties under ideal conditions, as table \[tab\_ORL\] shows.
  Property$\bf{_{Training \ number}}$   PCA      DLDA       *Discriminative PCA*
  ------------------------------------- -------- ---------- ----------------------
  Accuracy$\bf{_2}$                     55.20%   28.00%
  Accuracy$\bf{_3}$                     45.00%   63.00%
  Accuracy$\bf{_4}$                     52.00%   76.00%
  Accuracy$\bf{_5}$                     40.00%   80.00%
  Running time$\bf{_2}$                 0.798677 0.366499
  Running time$\bf{_3}$                 0.848357 0.243254
  Running time$\bf{_4}$                 0.824079 0.275457
  Running time$\bf{_5}$                 0.828137 0.273732

Similar results follow from figure \[basis\_FERET\], which shows the first ten basis images of the three feature subspaces: the basis images of PCA contain more pose and expression variation than those of *Discriminative PCA* and DLDA. Unlike the other facial databases we used, the YALE images contain occlusion changes, such as wearing glasses or not, and their expression changes are more pronounced. Without exception, *Discriminative PCA* still outperforms the others in recognition accuracy, and in this case also in running time. When more images are used for training, PCA performs better than DLDA; details are displayed in table \[tab\_YALE\].

  Property$\bf{_{Training \ number}}$   PCA        DLDA       *Discriminative PCA*
  ------------------------------------- ---------- ---------- ----------------------
  Accuracy$\bf{_3}$                     62.86%     66.67%
  Accuracy$\bf{_4}$                     57.14%     72.38%
  Accuracy$\bf{_5}$                     81.11%     77.78%
  Accuracy$\bf{_7}$                     91.11%     80.00%
  Running time$\bf{_3}$                 0.221750   0.769583
  Running time$\bf{_4}$                 0.246141   0.847130
  Running time$\bf{_5}$                 0.781730   0.245448
  Running time$\bf{_7}$                 0.228120   0.764449

Among the four databases, only on the ideal ORL database does PCA show stable performance, with recognition rate increasing as the training number grows. In this case, when the training number is small *Discriminative PCA* is still far superior to PCA, whereas when it is large the three approaches perform equally well, as table \[tab\_ORL\] shows.
  Property$\bf{_{Training \ number}}$   PCA        DLDA       *Discriminative PCA*
  ------------------------------------- ---------- ---------- ----------------------
  Accuracy$\bf{_3}$                     85.36%     77.86%
  Accuracy$\bf{_5}$                     93.50%     89.00%
  Accuracy$\bf{_7}$                     95.83%     95.83%     95.83%
  Running time$\bf{_3}$                 0.795025   2.389738
  Running time$\bf{_5}$                 2.351540   0.947311
  Running time$\bf{_7}$                 2.437706   0.906449

Conclusions and Future Work =========================== We propose a novel feature extraction approach, *Discriminative PCA*, in this paper. Its main purpose is to find a feature subspace containing discriminant principal components. To achieve this goal, LDA is used to enhance the separability of PCA; the core idea of *Discriminative PCA* is to perform PCA on the discriminative matrix. During implementation, we adopt the DLDA strategy to solve the SSS problem when computing ${\bf{\widetilde{S}}_b}$ and ${\bf{\widetilde{S}}_w}$ of the converted training sample matrix $\bf{\Omega^T\Omega}$, a trick that simultaneously reduces the computational complexity and solves the SSS problem in the phase of calculating the discriminative matrix. The advantages of *Discriminative PCA* in recognition rate and average running time have been demonstrated by experimental results on four popular facial databases. We remark that the number $m$ of discarded eigenvectors with the largest eigenvalues in step \[step\_5\] and the number $p$ of basis images in step \[step\_9\] of algorithm \[alg\_DPCA\] are important to the face recognition rate; how to select appropriate $m$ and $p$ therefore remains to be addressed. Another point is that in the experimental part we regularize the discriminant matrices by the mean-value rule.
If the maximum rule is used instead, the running time is faster than with the mean-value rule, but the recognition accuracy is slightly lower, so the face recognition performance of *Discriminative PCA* is less outstanding relative to the compared approaches PCA and DLDA. On the other hand, the proposed *Discriminative PCA* is a linear pattern recognition approach, yet many pattern samples lie on non-linear manifolds, so that linear models cannot extract and represent their non-linear information well. Adding discriminant information to non-linear approaches, such as kernel principal component analysis, in order to improve their performance is therefore our next work. Acknowledgement {#acknowledgement .unnumbered} =============== This work is jointly supported by the grants from the Guangxi Science and Technology Base and Talent Specialized Project No. 2018AD19038 and the Doctoral Scientific Research Foundation No. GLUTQD2017142 of Guilin University of Technology. References {#references .unnumbered} ========== [^1]: *Discriminative PCA*
--- address: - | Department of Mathematics\ University of British Columbia\ Room 121, 1984 Mathematics Road\ Vancouver, BC V6T 1Z2\ Canada - | Mathematics Department\ Dartmouth College\ Hanover, NH 03755-3551\ U.S.A. author: - Greg Martin - Carl Pomerance title: 'The iterated Carmichael $\lambda$-function and the number of cycles of the power generator' --- [^1] introduction ============ A common pseudorandom number generator is the power generator: $x\mapsto x^\ell{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$. Here, $\ell,n$ are fixed integers at least 2, and one constructs a pseudorandom sequence by starting at some residue mod $n$ and iterating this $\ell$th power map. (Because it is the easiest to compute, one often takes $\ell=2$; this case is known as the BBS generator, for Blum, Blum, and Shub.) To be a good generator, the period should be large. Of course, the period depends somewhat on the number chosen for the initial value. However, a universal upper bound for this period is $\lambda(\lambda(n))$ where $\lambda$ is Carmichael’s function. Here, $\lambda(m)$ is defined as the order of the largest cyclic subgroup of the multiplicative group $({{\mathbb Z}}/m{{\mathbb Z}})^\times$. It may be computed via the identity $\lambda({\mathop{\rm{lcm}}}\{a,b\})={\mathop{\rm{lcm}}}\{\lambda(a),\lambda(b)\}$ and its values at prime powers: with $\phi$ being Euler’s function, $\lambda(p^a)=\phi(p^a)=(p-1)p^{a-1}$ for every odd prime power $p^a$ and for 2 and 4, and $\lambda(2^a)=\phi(2^a)/2=2^{a-2}$ for $a\ge 3$. Statistical properties of $\lambda(n)$ were studied by Erdős, Schmutz, and the second author in [@EPS], and in particular, they showed that $\lambda(n)=n/\exp((1+o(1))\log\log n\log\log\log n)$ as $n\to\infty$ through a certain set of integers of asymptotic density 1. 
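For concreteness, the definition of $\lambda$ just given translates directly into code. The following trial-division sketch (ours, adequate only for small $n$) computes $\lambda(n)$ from the prime-power rules and the lcm identity:

```python
from math import gcd

def factor(n):
    """Trial-division factorization: returns {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def carmichael(n):
    """lambda(n): order of the largest cyclic subgroup of (Z/nZ)^*."""
    result = 1
    for p, a in factor(n).items():
        if p == 2 and a >= 3:
            lam = 2 ** (a - 2)             # lambda(2^a) = phi(2^a)/2 for a >= 3
        else:
            lam = (p - 1) * p ** (a - 1)   # lambda(p^a) = phi(p^a) otherwise
        result = result * lam // gcd(result, lam)   # lcm identity
    return result
```

The universal period bound for the power generator modulo $n$ is then simply `carmichael(carmichael(n))`.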
This does not quite pinpoint the normal order of $\lambda(n)$ (even the sharper version of this theorem from [@EPS] falls short in this regard), but it is certainly a step in this direction, and does give the normal order of the function $\log(n/\lambda(n))$. In this paper we prove a result of similar quality for the function $\lambda(\lambda(n))$, which we have seen arises in connection with the period of the power generator. We obtain the same expression as with $\lambda(n)$, except that the $\log\log n$ is squared. That is, $\lambda(\lambda(n))=n/\exp((1+o(1))(\log\log n)^2\log\log\log n)$ almost always. We are able to use this result to say something nontrivial about the number of cycles for the power generator. This problem has been considered in several papers, including [@BHM], [@BG], and [@R]. We show that for almost all integers $n$, the number of cycles for the $\ell$th power map modulo $n$ is at least $\exp((1+o(1))(\log\log n)^2\log\log\log n)$, and we conjecture that this lower bound is actually the truth. Under the assumption of the Generalized Riemann Hypothesis (GRH), and using a new result of Kurlberg and the second author [@KP], we prove our conjecture. (By the GRH, we mean the Riemann Hypothesis for Kummerian fields as used by Hooley in his celebrated conditional proof of the Artin conjecture.) For an arithmetic function $f(n)$ whose values are in the natural numbers, let $f_k(n)$ denote the $k$th iterate of $f$ evaluated at $n$. One might ask about the normal behavior of $\lambda_k(n)$ for $k\ge 3$. Here we make a conjecture for each fixed $k$. We also briefly consider the function $L(n)$ defined as the least $k$ such that $\lambda_k(n)=1$. A similar undertaking was made by Erdős, Granville, Spiro, and the second author in [@EGPS] for the function $F(n)$ defined as the least $k$ with $\phi_k(n)=1$. Though $\lambda$ is very similar to $\phi$, the behavior of $L(n)$ and $F(n)$ seem markedly different. 
We know that $F(n)$ is always of order of magnitude $\log n$, and it is shown in [@EGPS], assuming the Elliott–Halberstam conjecture on the average distribution of primes in arithmetic progressions with large moduli, that in fact $F(n)\sim\alpha\log n$ on a set of asymptotic density 1 for a particular positive constant $\alpha$. We know far less about $L(n)$, not even its typical order of magnitude. We raise the possibility that it is normally of order $\log\log n$ and show that it is bounded by this order infinitely often. A more formal statement of our results follows. \[lambda.lambda.normal.order.theorem\] The normal order of $\log\big( n/\lambda(\lambda(n)) \big)$ is $(\log\log n)^2 \log\log\log n$. That is, $$\lambda(\lambda(n))=n\exp\left(-(1+o(1))(\log\log n)^2\log\log\log n\right)$$ as $n\to\infty$ through a set of integers of asymptotic density $1$. We actually prove the slightly stronger result: given any function $\psi(n)$ going to infinity arbitrarily slowly, we have $$\lambda(\lambda(n)) = n\exp\!\big( {-(\log\log n)^2} ( \log\log\log n + O(\psi(n)) ) \big)$$ for almost all $n$. Given integers $\ell,n\ge2$, let $C(\ell,n)$ denote the number of cycles when iterating the modular power map $x\mapsto x^\ell{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$. \[number.of.cycles.theorem\] Given any fixed integer $\ell\ge2$, there is a set of integers of asymptotic density $1$ such that as $n\to\infty$ through this set, $$\label{numberofcycles} C(\ell,n) \ge \exp\!\big((1+o(1)) (\log\log n)^2 \log\log\log n \big).$$ Further, if ${\varepsilon}(n)$ tends to $0$ arbitrarily slowly, we have $C(\ell,n)\le n^{1/2-{\varepsilon}(n)}$ for almost all $n$. Moreover, for a positive proportion of integers $n$ we have $C(\ell,n)\le n^{.409}$. Finally, if the Generalized Riemann Hypothesis (GRH) is true, we have equality in on a set of integers $n$ of asymptotic density $1$. 
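For small moduli one can compute $C(\ell,n)$ by brute force, which makes a useful sanity check on the definition (this is only an illustration, not part of the proofs): every residue is eventually periodic under $x\mapsto x^\ell \pmod n$, and the cycles are exactly the periodic orbits.

```python
def power_map_cycles(ell, n):
    """Count the cycles of x -> x^ell (mod n) on {0, 1, ..., n-1}."""
    succ = [pow(x, ell, n) for x in range(n)]
    # x lies on a cycle iff some iterate of the map returns to x;
    # any cycle has length at most n, so n steps suffice to detect this.
    periodic = []
    for x in range(n):
        y = succ[x]
        for _ in range(n):
            if y == x:
                periodic.append(x)
                break
            y = succ[y]
    # Group the periodic points into orbits and count them.
    seen, cycles = set(), 0
    for x in periodic:
        if x not in seen:
            cycles += 1
            y = x
            while y not in seen:
                seen.add(y)
                y = succ[y]
    return cycles
```

For example, under squaring modulo $7$ the cycles are $\{0\}$, $\{1\}$ and $\{2,4\}$, so $C(2,7)=3$; the quadratic growth of this enumeration of course makes it useless at the scales the theorem concerns.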
\[lambdak.normal.order.conj\] The normal order of $\log(n/\lambda_k(n))$ is $(1/(k-1)!)(\log\log n)^k\log\log\log n$. That is, for each fixed integer $k\ge1$, $$\lambda_k(n) = n\exp\!\left({-\left(\frac1{(k-1)!}+o(1)\right)(\log\log n)^k}(\log\log\log n)\right)$$ for almost all $n$. Define $L(n)$ to be the number of iterations of $\lambda$ required to take $n$ to 1, that is, $L(n)$ equals the smallest nonnegative integer $k$ such that $\lambda_k(n)=1$. \[log.log.iterates.theorem\] There are infinitely many integers $n$ such that $L(n) < (1/\log 2+o(1))\log\log n$. Notation, strategy, and preliminaries ===================================== The proof of Theorem \[lambda.lambda.normal.order.theorem\], our principal result, proceeds by comparing the prime divisors of $\lambda(\lambda(n))$ with those of $\phi(\phi(n))$. The primes dividing $\phi(m)$ and $\lambda(m)$ are always the same. However, this is not always true for $\phi(\phi(m))$ and $\lambda(\lambda(m))$. The prime 2 clearly causes problems; for example, we have $\phi(\phi(8))=2$ but $\lambda(\lambda(8))=1$. However this problem also arises from the interaction between different primes, for example, $\phi(\phi(91))=24$ but $\lambda(\lambda(91))=2$. We shall use the following notation throughout the paper. The letters $p,q,r$ will always denote primes. Let $v_q(n)$ denote the exponent on $q$ in the prime factorization of $n$, so that $$n~=~\prod_q q^{v_q(n)}$$ for every positive integer $n$. We let ${\mathcal P_{n}} = \{p\colon p\equiv1{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}\}$. We let $x>e^{e^e}$ be a real number and $y=y(x)=\log\log x$. By $\psi(x)$ we denote a function tending to infinity but more slowly than $\log\log\log x=\log y$. In Sections 2–5, the phrase “for almost all $n$” always means “for all but $O(x/\psi(x))$ integers $n\le x$”. 
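Both the stopping time $L(n)$ and small examples such as $\lambda(\lambda(91))=2$ are easy to check by direct computation. The sketch below (ours, repeating a compact trial-division $\lambda$ so that the snippet is self-contained, and again only sensible for small $n$) follows the definitions directly:

```python
from math import lcm  # Python 3.9+

def carmichael(n):
    """lambda(n) via trial division and the prime-power rules."""
    lam, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            a = 0
            while n % d == 0:
                n //= d
                a += 1
            pp = 2 ** (a - 2) if d == 2 and a >= 3 else (d - 1) * d ** (a - 1)
            lam = lcm(lam, pp)
        d += 1
    if n > 1:                  # leftover prime factor
        lam = lcm(lam, n - 1)
    return lam

def L(n):
    """Least k with lambda_k(n) = 1 (so L(1) = 0)."""
    k = 0
    while n > 1:
        n = carmichael(n)
        k += 1
    return k
```

For instance the chain $16 \to 4 \to 2 \to 1$ gives $L(16)=3$.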
First we argue that the “large” prime divisors typically do not contribute significantly: For almost all $n\le x$, the prime divisors of $\phi(\phi(n))$ and $\lambda(\lambda(n))$ that exceed $y^2$ are identical. \[same.large.prime.divisors.prop\] For almost all $n\le x$, $$\sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} v_q(\phi(\phi(n))) \log q~\ll~y^2\psi(x). \label{large.primes.in.phi.phi.eq}$$ \[large.primes.in.phi.phi.prop\] Next we argue that the contribution of “small” primes to $\lambda(\lambda(n))$ is typically small: For almost all $n\le x$, we have $$\sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q~\ll~y^2\psi(x).$$ \[small.primes.in.lambda.lambda.prop\] Finally, we develop an understanding of the typical contribution of small primes to $\phi(\phi(n))$ by comparing it to the additive function $h(n)$ defined by $$h(n)~=~\sum_{p\mid n} \sum_{r\mid p-1} \sum_{q\le y^2} v_q(r-1) \log q. \label{h.definition}$$ For almost all $n\le x$, $$\sum_{q\le y^2} v_q(\phi(\phi(n))) \log q ~=~h(n)+O(y\log y\cdot\psi(x)).$$ \[h.good.approximation.prop\] For almost all $n\le x$, we have $h(n) = y^2\log y + O(y^2)$. \[h.normal.order.prop\] Let $x$ be a sufficiently large real number. For any positive integer $n\le x$ we may write $$\log \frac n{\lambda(\lambda(n))}~=~\log \frac n{\phi(n)} + \log \frac{\phi(n)}{\phi(\phi(n))} + \log \frac{\phi(\phi(n))}{\lambda(\lambda(n))}.$$ Recall that $n/\phi(n) \ll \log\log n$, and so the first two terms are both $O(\log\log\log x)$. Thus, it suffices to show that $$\log \frac{\phi(\phi(n))}{\lambda(\lambda(n))}~=~(\log\log x)^2 (\log\log\log x+O(\psi(x)))~=~y^2\log y + O(y^2\psi(x)) \label{goal.equation}$$ for almost all $n\le x$. 
We write $$\begin{aligned} \log\frac{\phi(\phi(n))}{\lambda(\lambda(n))}~ &=~\sum_{q} \big( v_q(\phi(\phi(n))) - v_q(\lambda(\lambda(n))) \big) \log q \notag \\ &=~\sum_{q\le y^2} v_q(\phi(\phi(n))) \log q - \sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q \label{split.into.large.and.small.primes} \\ &\qquad+ \sum_{q>y^2} \big( v_q(\phi(\phi(n))) - v_q(\lambda(\lambda(n))) \big) \log q. \notag\end{aligned}$$ Since $\lambda(\lambda(n))$ always divides $\phi(\phi(n))$, the coefficients of $\log q$ in this last sum are all nonnegative. On the other hand, Proposition \[same.large.prime.divisors.prop\] tells us that for almost all $n\le x$, whenever $v_q(\phi(\phi(n))) > 0$ we have $v_q(\lambda(\lambda(n))) > 0$ as well. Therefore the primes $q$ for which $v_q(\phi(\phi(n))) \le 1$ do not contribute to this last sum at all, that is, $$\begin{aligned} 0~&\le~\sum_{q>y^2} \big( v_q(\phi(\phi(n))) - v_q(\lambda(\lambda(n))) \big) \log q \\ &=~\sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} \big( v_q(\phi(\phi(n))) - v_q(\lambda(\lambda(n))) \big) \log q \\ &\le~\sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} v_q(\phi(\phi(n))) \log q \ll y^2\psi(x)\end{aligned}$$ for almost all $n\le x$ by Propositions \[same.large.prime.divisors.prop\] and \[large.primes.in.phi.phi.prop\]. Moreover, Proposition \[small.primes.in.lambda.lambda.prop\] tells us that the second sum on the right-hand side of equation is $O(y^2\psi(x))$ for almost all $n\le x$. Therefore equation becomes $$\log\frac{\phi(\phi(n))}{\lambda(\lambda(n))}~=~\sum_{q\le y^2} v_q(\phi(\phi(n))) \log q + O(y^2\psi(x))$$ for almost all $n\le x$. By Proposition \[h.good.approximation.prop\], the sum on the right-hand side can be replaced by $h(n)$ for almost all $n\le x$, the error $O(y\log y\cdot \psi(x))$ in that proposition being absorbed into the existing error $O(y^2\psi(x))$. Finally, Proposition \[h.normal.order.prop\] tells us that $h(n) = y^2\log y + O(y^2)$ for almost all $n\le x$. 
We conclude that equation is satisfied for almost all $n\le x$, which establishes the theorem. Given integers $a$ and $n$, recall that $\pi(t;n,a)$ denotes the number of primes up to $t$ that are congruent to $a{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$. The Brun–Titchmarsh inequality (see [@HR Theorem 3.7]) states that $$\pi(t;n,a)~\ll~\frac t{\phi(n)\log(t/n)} \label{real.BT}$$ for all $t>n$. We use repeatedly a weak form of this inequality, valid for all $t>e^e$, $$\sum_{\substack{p\le t\\ p\in{\mathcal P_{n}}}}\frac1p~\ll~\frac{\log\log t}{\phi(n)}, \label{primesum}$$ which follows from the estimate with $a=1$ by partial summation. When $n/\phi(n)$ is bounded, this estimate simplifies to $$\sum_{\substack{p\le t\\ p\in{\mathcal P_{n}}}} \frac1p ~\ll ~\frac {\log\log t}n. \label{BT.prime.power}$$ For example, we shall employ this last estimate when $n$ is a prime or a prime power and when $n$ is the product of two primes or prime powers; in these cases we have $n/\phi(n)\le3$. We also quote the fact (see Norton [@N] or the paper [@P] of the second author) that $$\sum_{\substack{p\in{\mathcal P_{n}} \\ p\le t}} \frac1{p}~ =~ \frac{\log\log t}{\phi(n)} + O\Big( \frac{\log n}{\phi(n)} \Big). 
\label{lose.the.minus.one}$$ This readily implies that $$\sum_{\substack{p\in{\mathcal P_{n}} \\ p\le t}} \frac1{p-1}~ =~ \frac{\log\log t}{\phi(n)} + O\Big( \frac{\log n}{\phi(n)} \Big) \label{second.author}$$ as well, since (noting that the smallest possible term in the sum is $p=n+1$) the difference equals $$\sum_{\substack{p\in{\mathcal P_{n}} \\ p\le t}} \frac1{(p-1)p} ~\le ~\sum_{i=1}^\infty \frac1{in(in+1)} \ll \frac1{n^2}.$$ We occasionally use the Chebyshev upper bound $$\sum_{p\le z} \log p~ \le~ \sum_{n\le z} \Lambda(n) ~\ll~ z, \label{Chebyshev.bound}$$ where $\Lambda(n)$ is the von Mangoldt function, as well as the weaker versions $$\sum_{p\le z} \frac{\log p}p~ \ll ~\log z,\qquad \sum_{p\le z} \frac{\log^2p}p~ \ll ~\log^2z \label{weaker.Chebyshev.bound}$$ and the tail estimates $$\sum_{p>z} \frac{\log p}{p^2} ~\ll ~\frac1z,\qquad \sum_{p>z} \frac1{p^2} \ll \frac1{z\log z}, \label{tail.estimates}$$ each of which can be derived from the estimate by partial summation. We shall also need at one point a weak form of the asymptotic formula of Mertens, $$\sum_{p\le z} \frac{\log p}p~ =~ \log z+O(1). \label{Mertens}$$ For any polynomial $P(x)$, we also note the series estimate $$\sum_{a=0}^\infty \frac{P(a)}{m^a} ~\ll_P~ 1$$ uniformly for $m\ge2$, valid since the series $\sum_{a=0}^\infty P(a)z^a$ converges uniformly for $|z|\le\frac12$. The estimates $$\sum_{a\in{{\mathbb N}}} \frac{P(a)}{m^a}~ \ll_P~ \frac1m, \qquad \sum_{\substack{a\in{{\mathbb N}}\\ m^a>z}} \frac{P(a)}{m^a} ~\ll_P~ \frac1z, \label{geometric.series}$$ valid uniformly for any integer $m\ge2$, follow easily by factoring out the first denominator occurring in each sum. 
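The leading behaviour in these estimates already shows up numerically. The sketch below (ours, purely illustrative) sieves the primes and compares $\sum_{p\le t,\, p\in{\mathcal P_{5}}} 1/p$ with $\log\log t/\phi(5)$; note that at computationally accessible $t$ the error term $O(\log n/\phi(n))$ is of size comparable to the main term, so only rough agreement should be expected.

```python
from math import log

def primes_up_to(t):
    """Sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (t + 1)
    is_prime[0] = is_prime[1] = 0
    for i in range(2, int(t ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = bytearray(len(range(i * i, t + 1, i)))
    return [i for i in range(2, t + 1) if is_prime[i]]

def reciprocal_sum(t, n):
    """Sum of 1/p over primes p <= t with p congruent to 1 (mod n)."""
    return sum(1.0 / p for p in primes_up_to(t) if p % n == 1)

t, n = 10 ** 5, 5
s = reciprocal_sum(t, n)
main_term = log(log(t)) / 4   # loglog t / phi(5); the O(log 5/4) error
                              # term is comparable in size at this t
```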
Large primes dividing $\phi(\phi(n))$ and $\lambda(\lambda(n))$ =============================================================== If $q$ is any prime, then $q$ divides $\phi(\phi(n))$ if and only if at least one of the following criteria holds: - $q^3\mid n$, - there exists $p\in{\mathcal P_{q^2}}$ with $p\mid n$, - there exists $p\in{\mathcal P_{q}}$ with $p^2\mid n$, - there exist $r\in{\mathcal P_{q}}$ and $p\in{\mathcal P_{r}}$ with $p\mid n$, - $q^2\mid n$ and there exists $p\in{\mathcal P_{q}}$ with $p\mid n$, - there exist distinct $p_1,p_2\in{\mathcal P_{q}}$ with $p_1p_2\mid n$. In the first four of these six cases, it is easily checked that $q\mid\lambda(\lambda(n))$ as well. (This is not quite true for $q=2$, but in this proof we shall only consider primes $q>y^2$.) Therefore we can estimate the number of integers $n\le x$ for which $q$ divides $\phi(\phi(n))$ but not $\lambda(\lambda(n))$ as follows: $$\sum_{\substack{n\le x \\ q\mid\phi(\phi(n)) \\ q\dnd\lambda(\lambda(n))}} 1 ~\le~ \sum_{p\in{\mathcal P_{q}}} \sum_{\substack{n\le x \\ q^2p\mid n}} 1 + \sum_{p_1\in{\mathcal P_{q}}} \sum_{\substack{p_2\in{\mathcal P_{q}} \\ p_2\ne p_1}} \sum_{\substack{n\le x \\ p_1p_2\mid n}} 1 ~\le~ \sum_{p\in{\mathcal P_{q}}} \frac x{q^2p} + \sum_{p_1\in{\mathcal P_{q}}} \sum_{p_2\in{\mathcal P_{q}}} \frac x{p_1p_2}.$$ Using three applications of the Brun–Titchmarsh inequality , we conclude that for any odd prime $q$, $$\sum_{\substack{n\le x \\ q\mid\phi(\phi(n)) \\ q\dnd\lambda(\lambda(n))}} 1~ \ll ~\frac{xy}{q^3} + \frac{xy^2}{q^2} ~\ll ~\frac{xy^2}{q^2}.$$ Consequently, by the tail estimate and the condition $\psi(x)=o(\log y)$, $$\begin{aligned} \sum_{q>y^2} \sum_{\substack{n\le x \\ q\mid\phi(\phi(n)) \\ q\dnd\lambda(\lambda(n))}} 1 ~\ll~ xy^2 \sum_{q>y^2} \frac1{q^2} ~\ll ~\frac{xy^2}{y^2\log y^2}~ <~ \frac x{\log y}~ \ll~ \frac x{\psi(x)}.\end{aligned}$$ Therefore for almost all $n\le x$, every prime $q>y^2$ dividing $\phi(\phi(n))$ also divides 
$\lambda(\lambda(n))$, as asserted. Given a real number $x\ge3$ and a prime $q>y^2$, define $S_q=S_q(x)$ to be the set of all integers $n\le x$ for which at least one of the following criteria holds: - $q^2\mid n$, - there exists $p\in{\mathcal P_{q^2}}$ with $p\mid n$, - there exist $r\in{\mathcal P_{q^2}}$ and $p\in{\mathcal P_{r}}$ with $p\mid n$, - there exist distinct $r_1,r_2,r_3\in{\mathcal P_{q}}$ and $p\in{\mathcal P_{r_1r_2r_3}}$ with $p\mid n$, - there exist distinct $r_1,r_2,r_3,r_4\in{\mathcal P_{q}}$, $p_1\in{\mathcal P_{r_1r_2}}$, and $p_2\in{\mathcal P_{r_3r_4}}$ with $p_1p_2\mid n$. Then the cardinality of $S_q$ is $O(xy^2/q^2)$. \[less.squarefree.cases.lemma\] The number of integers up to $x$ for which any particular one of the five criteria holds is easily shown to be $O(xy^2/q^2)$. For the sake of conciseness, we show the details of this calculation only for the last criterion, which is the most complicated. The number of integers $n$ up to $x$ for which there exist distinct $r_1,r_2,r_3,r_4\in{\mathcal P_{q}}$, $p_1\in{\mathcal P_{r_1r_2}}$, and $p_2\in{\mathcal P_{r_3r_4}}$ with $p_1p_2\mid n$ is at most $$\sum_{r_1,r_2,r_3,r_4\in{\mathcal P_{q}}} \sum_{\substack{p_1\in{\mathcal P_{r_1r_2}} \\ p_2\in{\mathcal P_{r_3r_4}}}} \sum_{\substack{n\le x \\ p_1p_2\mid n}} 1 ~\le ~\sum_{r_1,r_2,r_3,r_4\in{\mathcal P_{q}}} \sum_{\substack{p_1\in{\mathcal P_{r_1r_2}} \\ p_2\in{\mathcal P_{r_3r_4}}}} \frac x{p_1p_2}.$$ Using six applications of the Brun–Titchmarsh estimate , we have $$\sum_{r_1,r_2,r_3,r_4\in{\mathcal P_{q}}} \sum_{\substack{p_1\in{\mathcal P_{r_1r_2}} \\ p_2\in{\mathcal P_{r_3r_4}}}} \frac x{p_1p_2} \ll \sum_{r_1,r_2,r_3,r_4\in{\mathcal P_{q}}} \frac{xy^2}{r_1r_2r_3r_4}~ \ll~ \frac{xy^6}{q^4} < \frac{xy^2}{q^2},$$ the last inequality being valid due to the hypothesis $q>y^2$. Define $S=S(x)$ to be the union of $S_q$ over all primes $q>y^2$, where $S_q$ is defined as in the statement of Lemma \[less.squarefree.cases.lemma\]. 
Using $\#A$ to denote the cardinality of a set $A$, Lemma \[less.squarefree.cases.lemma\] implies that $$\#S \le \sum_{q>y^2} \#S_q ~\ll~ \sum_{q>y^2} \frac{xy^2}{q^2}~ \ll ~\frac{xy^2}{y^2\log y^2} ~\ll~ \frac x{\psi(x)}$$ by the tail estimate and the condition $\psi(x) = o(\log y)$. Therefore to prove that the estimate holds for almost all integers $n\le x$, it suffices to prove that it holds for almost all integers $n\le x$ that are not in the set $S$. This in turn is implied by the upper bound $$\sum_{\substack{n\le x \\ n\notin S}} \sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} v_q(\phi(\phi(n)))\log q ~\ll ~xy^2, \label{not.counting.S.eq}$$ which we proceed now to establish. Fix a prime $q>y^2$ and an integer $a\ge2$ for the moment. In general, there are many ways in which $q^a$ could divide $\phi(\phi(n))$, depending on the power to which $q$ divides $n$ itself, the power to which $q$ divides numbers of the form $p-1$ with $p\mid n$, and so forth. However, for integers $n\notin S$, most of these various possibilities are ruled out by one of the five criteria defining the sets $S_q$. In fact, for $n\notin S$, there are only two ways for $q^a$ to divide $\phi(\phi(n))$: - there are distinct $r_1,\dots,r_a\subset{\mathcal P_{q}}$ and distinct $p_1\in{\mathcal P_{r_1}}$, …, $p_a\in{\mathcal P_{r_a}}$ with $p_1\dots p_a|n$, - there are distinct $r_1,\dots,r_a\subset{\mathcal P_{q}}$, distinct $p_1\in{\mathcal P_{r_1}}$, …, $p_{a-2}\in{\mathcal P_{r_{a-2}}}$, and $p\in{\mathcal P_{r_{a-1}r_a}}$ with $p_1\dots p_a|n$. (We refer to the former case as the “supersquarefree” case.) 
Still considering $q$ and $a$ fixed, the number of integers $n$ up to $x$ satisfying each of these two conditions is at most $$\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{a!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_a\in{\mathcal P_{r_a}}}} \sum_{\substack{n\le x \\ p_1\dots p_a\mid n}} 1~\le~\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{a!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_a\in{\mathcal P_{r_a}}}} \frac x{p_1\dots p_a}$$ and $$\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{2!(a-2)!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_{a-2}\in{\mathcal P_{r_{a-2}}} \\ p\in{\mathcal P_{r_{a-1}r_a}}}} \sum_{\substack{n\le x \\ p_1\dots p_{a-2}p\mid n}} 1~\le~\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{(a-2)!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_{a-2}\in{\mathcal P_{r_{a-2}}} \\ p\in{\mathcal P_{r_{a-1}r_a}}}} \frac x{p_1\dots p_{a-2}p},$$ respectively, the factors $1/a!$ and $1/2!(a-2)!$ coming from the various possible permutations of the primes $r_i$. 
Letting $c\ge1$ be the constant implied in the Brun–Titchmarsh inequality as applied to moduli $n$ that are divisible by at most two distinct primes, we see that $$\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{a!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_a\in{\mathcal P_{r_a}}}} \frac x{p_1\dots p_a}~\le ~\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{a!} \frac{x(cy)^a}{r_1\dots r_a}~\le ~\frac{x(cy)^{2a}}{a!q^a}$$ and $$\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{(a-2)!} \sum_{\substack{p_1\in{\mathcal P_{r_1}} \\ \dots \\ p_{a-2}\in{\mathcal P_{r_{a-2}}} \\ p\in{\mathcal P_{r_{a-1}r_a}}}} \frac x{p_1\dots p_{a-2}p}~\le ~\sum_{r_1,\dots,r_a\in{\mathcal P_{q}}} \frac1{(a-2)!} \frac{x(cy)^{a-1}}{r_1\dots r_a}~\le~\frac{x(cy)^{2a-1}}{(a-2)!q^a}.$$ Therefore the number of integers $n\le x$ such that $n\notin S$ and $q^a\mid\phi(\phi(n))$ is $$\le~\frac{x(cy)^{2a}}{a!q^a} + \frac{x(cy)^{2a-1}}{(a-2)!q^a}~< ~\frac{c^{2a}xy^4}{(a-2)!q^2}, \label{two.case.bound}$$ where we have used the assumption $q>y^2$. We now establish the estimate . Note that $$\begin{aligned} \sum_{\substack{n\le x \\ n\notin S}} \sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} v_q(\phi(\phi(n)))\log q ~&\le~ 2\sum_{\substack{n\le x \\ n\notin S}} \sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} \big( v_q(\phi(\phi(n))) -1\big) \log q \\ &=~2\sum_{q>y^2} \log q \sum_{a\ge2} \sum_{\substack{n\le x \\ n\notin S \\ q^a\mid\phi(\phi(n))}} 1.\end{aligned}$$ Therefore, using the bound for each pair $q$ and $a$, $$\begin{aligned} \sum_{\substack{n\le x \\ n\notin S}} \sum_{\substack{q>y^2 \\ v_q(\phi(\phi(n)))\ge2}} v_q(\phi(\phi(n)))\log q~&\le~2\sum_{q>y^2} \log q \sum_{a\ge2} \frac{c^{2a}xy^4}{(a-2)!q^2} \\ &=~2c^4e^{c^2}xy^4 \sum_{q>y^2} \frac{\log q}{q^2}~\ll~\frac{xy^4}{y^2}~=~ xy^2\end{aligned}$$ by the tail estimate . This establishes the estimate and hence the proposition. 
Small primes and the reduction to $h(n)$ ======================================== For any prime power $q^a$, the number of positive integers $n\le x$ for which $q^a$ divides $\lambda(\lambda(n))$ is $O(xy^2/q^a)$. \[prime.powers.in.lambda.lambda.lemma\] The prime power $q^a$ divides $\lambda(\lambda(n))$ only if at least one of the following criteria holds: - $q^{a+2}\mid n$, - there exists $p\in{\mathcal P_{q^a}}$ with $p^2\mid n$, - there exists $p\in{\mathcal P_{q^{a+1}}}$ with $p\mid n$, - there exist $r\in{\mathcal P_{q^a}}$ and $p\in{\mathcal P_{r}}$ with $p\mid n$. Thus $$\begin{aligned} \sum_{\substack{n\le x \\ q^a\mid\lambda(\lambda(n))}} 1~ &\le ~\sum_{\substack{n\le x \\ q^{a+2}\mid n}} 1 + \sum_{p\in{\mathcal P_{q^a}}} \sum_{\substack{n\le x \\ p^2\mid n}} 1 + \sum_{p\in{\mathcal P_{q^{a+1}}}} \sum_{\substack{n\le x \\ p\mid n}} 1 + \sum_{r\in{\mathcal P_{q^a}}} \sum_{p\in{\mathcal P_{r}}} \sum_{\substack{n\le x \\ p\mid n}} 1 \notag \\ &\le~ \frac x{q^{a+2}} + \sum_{\substack{p\in{\mathcal P_{q^a}} \\ p\le\sqrt x}} \frac x{p^2} + \sum_{\substack{p\in{\mathcal P_{q^{a+1}}} \\ p\le x}} \frac xp + \sum_{r\in{\mathcal P_{q^a}}} \sum_{\substack{p\in{\mathcal P_{r}}\\p\le x}} \frac xp. 
\label{four.criteria.split}\end{aligned}$$ In the first of these three sums, it is sufficient to notice that any $p\in{\mathcal P_{q^a}}$ must exceed $q^a$, which leads to the estimate $$\sum_{\substack{p\in{\mathcal P_{q^a}} \\ p\le\sqrt x}} \frac x{p^2}~ <~ \sum_{m>q^a} \frac x{m^2}~ <~ \frac x{q^a}.$$ To bound the second and third sums, we invoke the Brun–Titchmarsh estimate a total of three times: $$\begin{aligned} \sum_{\substack{p\in{\mathcal P_{q^{a+1}}} \\ p\le x}} \frac xp ~&\ll~ \frac{xy}{q^{a+1}} \\ \sum_{r\in{\mathcal P_{q^a}}} \sum_{\substack{p\in{\mathcal P_{r}}\\p\le x}} \frac xp~ &\ll ~\sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le x}} \frac{xy}r ~\ll ~\frac{xy^2}{q^a}.\end{aligned}$$ Using these three estimates, gives $$\sum_{\substack{n\le x \\ q^a\mid\lambda(\lambda(n))}} 1 ~\ll~ \frac x{q^{a+2}} + \frac x{q^a} + \frac{xy}{q^{a+1}} +\frac{xy^2}{q^a} ~\ll~ \frac{xy^2}{q^a},$$ which establishes the lemma. We have $$\sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q~ =~ \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\mid\lambda(\lambda(n))}} 1 ~\le ~\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le y^2}} 1 + \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2 \\ q^a\mid\lambda(\lambda(n))}} 1.$$ Since the first sum is simply $$\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le y^2}} 1~ =~ \sum_{m\le y^2} \Lambda(m)~ \ll ~y^2$$ by the Chebyshev estimate , we have uniformly for $n\le x$, $$\sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q ~\ll~ y^2 + \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2 \\ q^a\mid\lambda(\lambda(n))}} 1. 
\label{not.over.n.yet}$$ To show that this quantity is usually small, we sum this last double sum over $n$ and apply Lemma \[prime.powers.in.lambda.lambda.lemma\], yielding $$\sum_{n\le x} \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2 \\ q^a\mid\lambda(\lambda(n))}} 1~ =~ \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2}} \sum_{\substack{n\le x \\ q^a\mid\lambda(\lambda(n))}} 1 ~\ll ~\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2}} \frac{xy^2}{q^a}.$$ Using the geometric series sum and the Chebyshev estimate , this becomes $$\sum_{n\le x} \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>y^2 \\ q^a\mid\lambda(\lambda(n))}} 1 ~\ll ~\sum_{q\le y^2} \log q \cdot \frac{xy^2}{y^2}~ \ll~ xy^2.$$ Therefore if we sum both sides of over $n$, we obtain $$\begin{aligned} \sum_{n\le x} \sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q ~\ll ~xy^2.\end{aligned}$$ This implies that for almost all $n\le x$, we have $$\sum_{q\le y^2} v_q(\lambda(\lambda(n))) \log q ~\ll~ y^2\psi(x),$$ as desired. Fix a prime $q$ for the moment. For any positive integer $m$, the usual formula for $\phi(m)$ readily implies $$v_q(\phi(m))~ =~ \max\{0,v_q(m)-1\} + \sum_{p\mid m} v_q(p-1),$$ which we use in the form $$\sum_{p\mid m} v_q(p-1) ~\le~ v_q(\phi(m)) ~\le~ \sum_{p\mid m} v_q(p-1) + v_q(m).$$ Using these inequalities twice, first with $m=\phi(n)$ and then with $m=n$, we see that $$\begin{aligned} \sum_{p\mid\phi(n)} v_q(p-1)~ \le~ v_q(\phi(\phi(n)))~ &\le ~\sum_{p\mid\phi(n)} v_q(p-1) + v_q(\phi(n)) \notag \\ &\le~ \sum_{p\mid\phi(n)} v_q(p-1) + \sum_{p\mid n} v_q(p-1) + v_q(n). \label{two.phi.inequalities}\end{aligned}$$ Now a prime $r$ divides $\phi(n)$ if and only if either $r^2\mid n$ or there exists a prime $p\mid n$ such that $r\mid p-1$. 
Therefore $$\sum_{p\mid n} \sum_{r\mid p-1} v_q(r-1) ~\le~ \sum_{r\mid\phi(n)} v_q(r-1)~ \le~ \sum_{p\mid n} \sum_{r\mid p-1} v_q(r-1) + \sum_{r\colon r^2\mid n} v_q(r-1),$$ the latter inequality accounting for the possibility that both criteria hold for some prime $r$. When we combine these inequalities with those in equation and subtract the double sum over $p$ and $r$ throughout, we obtain $$\begin{aligned} 0~ \le~ v_q(\phi(\phi(n))) - \sum_{p\mid n} \sum_{r\mid p-1} v_q(r-1) ~&\le~ \sum_{r\colon r^2\mid n} v_q(r-1) + \sum_{p\mid n} v_q(p-1) + v_q(n) \\ &\le~ 2\sum_{p\mid n} v_q(p-1) + v_q(n).\end{aligned}$$ Now we multiply through by $\log q$ and sum over all primes $q\le y^2$ to conclude that for any positive integer $n$, $$0~ \le~ \sum_{q\le y^2} v_q(\phi(\phi(n))) \log q - h(n) ~\le~ 2 \sum_{q\le y^2} \sum_{p\mid n} v_q(p-1) \log q + \sum_{q\le y^2} v_q(n) \log q.$$ It remains to show that the right-hand side of this last inequality is $O(y\log y\cdot\psi(x))$ for almost all $n\le x$, which we accomplish by establishing the estimate $$\sum_{n\le x} \sum_{q\le y^2} \sum_{p\mid n} v_q(p-1) \log q + \sum_{n\le x} \sum_{q\le y^2} v_q(n) \log q ~\ll~ xy\log y. 
\label{hn.prop.to.show}$$ We may rewrite the first term on the left-hand side as $$\begin{aligned} \sum_{n\le x} \sum_{q\le y^2} \sum_{p\mid n} v_q(p-1) \log q~ &= ~\sum_{n\le x} \sum_{q\le y^2} \sum_{p\mid n} \sum_{\substack{a\in{{\mathbb N}}\\ q^a\mid p-1}} \log q \\ &=~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{p\in{\mathcal P_{q^a}}} \sum_{\substack{n\le x \\ p\mid n}} 1 ~\le~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{p\in{\mathcal P_{q^a}}} \frac xp.\end{aligned}$$ Using the Brun–Titchmarsh inequality and the geometric series estimate , we obtain $$\sum_{n\le x} \sum_{q\le y^2} \sum_{p\mid n} v_q(p-1) \log q ~\ll~ x \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \frac y{q^a}~ \ll~ xy \sum_{q\le y^2} \frac{\log q}q~ \ll~ xy\log y^2.$$ The second term on the left-hand side of is even simpler: we have $$\sum_{n\le x} \sum_{q\le y^2} v_q(n) \log q~ =~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{\substack{n\le x \\ q^a\mid n}} 1 ~\le~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \frac x{q^a},$$ and using the geometric series bound and the weak Chebyshev estimate yields $$\sum_{n\le x} \sum_{q\le y^2} v_q(n) \log q~ \ll~ x \sum_{q\le y^2} \frac{\log q}q ~\ll~ x\log y^2.$$ The last two estimates therefore establish and hence the proposition. The normal order of $h(n)$ {#h.normal.order.section} ========================== Recall the definition : $h(n)~ =~ \sum_{p\mid n} \sum_{r\mid p-1} \sum_{q\le y^2} v_q(r-1) \log q$. We now calculate the normal order of the additive function $h(n)$ via the Turán–Kubilius inequality (see [@K], Lemma 3.1). If we define $$M_1(x)~=~\sum_{p\le x}\frac{h(p)}p,\qquad M_2(x)~=~\sum_{p\le x}\frac{h(p)^2}p,$$ then the Turán-Kubilius inequality asserts that $$\sum_{n\le x}(h(n)-M_1(x))^2~\ll ~xM_2(x). \label{TK}$$ We have $M_1(x) = y^2\log y + O(y^2)$ for all $x>e^{e^e}$. \[M1.prop\] We have $M_2(x) \ll y^3\log^2y$ for all $x>e^{e^e}$. 
\[M2.prop\] Let $N$ denote the number of $n\le x$ for which $|h(n)-M_1(x)|>y^2$. The contribution of such $n$ to the sum in is at least $y^4N$. Thus, Proposition \[M2.prop\] implies that $N\ll x(\log y)^2/y$. Hence, Proposition \[M1.prop\] implies that $h(n)=y^2\log y+O(y^2)$ for all $n\le x$ but for a set of size $O(x(\log y)^2/y)$. This proves Proposition \[h.normal.order.prop\]. To calculate $M_1(x)$ and $M_2(x)$ we shall first calculate $\sum_{p\le t} h(p)$ and $\sum_{p\le t} h(p)^2$ and then account for the weights $1/p$ using partial summation. We begin the evaluation of $\sum_{p\le t} h(p)$ with a lemma. Let $b$ be a positive integer and $t>e^e$ a real number. (a) If $b>t^{1/4}$ then $$\sum_{r\in{\mathcal P_{b}}} \pi(t;r,1)~ \ll~ \frac{t\log t}b.$$ (b) If $b\le t^{1/4}$ then $$\sum_{\substack{r\in{\mathcal P_{b}} \\ r>t^{1/3}}} \pi(t;r,1) ~\ll~ \frac {bt}{\phi(b)^2\log t}$$ and $$\sum_{r\in{\mathcal P_{b}}} \pi(t;r,1) ~\ll~ \frac{t\log\log t}{\phi(b)\log t}.$$ \[tricky.to.get.right.lemma\] [*Remark.*]{} The exponents $\frac14$ and $\frac13$ are rather arbitrary and chosen only for simplicity; any two exponents $0<\alpha<\beta<\frac12$ would do equally well. Notice that in all three sums, the only contributing terms are those with $r>b$ and $r<t$. If $b>t^{1/4}$, then the trivial bound $\pi(t;r,1)\le t/r$ gives $$\sum_{r\in{\mathcal P_{b}}}\pi(t;r,1) ~\le~ \sum_{\substack{r\in{\mathcal P_{b}} \\ t^{1/4}<r\le t}} \frac tr ~\le ~\sum_{\substack{m\equiv1{{\ifmmode\text{\rm\ (mod~$b$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$b$)\fi}}\\ t^{1/4}<m\le t}}\frac tm ~\ll~\frac{t\log t}{b},$$ proving part (a) of the lemma. We now assume $b\le t^{1/4}$.
We have $$\begin{aligned} \sum_{\substack{r\in{\mathcal P_{b}} \\ r>t^{1/3}}} \pi(t;r,1)~ &=~ \#\{ (m,r)\colon r\equiv1{{\ifmmode\text{\rm\ (mod~$b$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$b$)\fi}},\, r>t^{1/3},\, mr+1\le t,\, \text{$mr+1$ and $r$ both prime} \} \\ &\le~ \sum_{m<t^{2/3}} \#\{ r<\tfrac tm\colon r\equiv1{{\ifmmode\text{\rm\ (mod~$b$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$b$)\fi}},\, \text{$mr+1$ and $r$ both prime} \} \\ &\ll~ \sum_{m<t^{2/3}} \frac{bt}{\phi(mb)\phi(b)\log^2 \frac t{mb}}\end{aligned}$$ by Brun’s sieve method (see [@HR Corollary 2.4.1]). We have $\frac t{mb} \ge t^{1/12}$ and so $\log \frac t{mb} \gg \log t$. We also have $\phi(mb) \ge \phi(m)\phi(b)$ and the standard estimate $$\sum_{m\le z} \frac1{\phi(m)} ~\ll~ \log z. \label{phi.reciprocal.sum}$$ Therefore $$\sum_{\substack{r\in{\mathcal P_{b}} \\ r>t^{1/3}}} \pi(t;r,1) ~\ll~ \sum_{m<t^{2/3}} \frac {bt}{\phi(m)\phi(b)^2\log^2 t} ~\ll~ \frac{bt\log t^{2/3}}{\phi(b)^2\log^2t} ~\le~ \frac {bt}{\phi(b)^2\log t},$$ establishing the first estimate in part (b). Finally, by the Brun–Titchmarsh inequalities and , $$\sum_{\substack{r\in{\mathcal P_{b}} \\ r\le t^{1/3}}} \pi(t;r,1) ~\ll~ \sum_{\substack{r\in{\mathcal P_{b}} \\ r\le t^{1/3}}} \frac t{\phi(r)\log\frac tr} ~\ll~ \sum_{\substack{r\in{\mathcal P_{b}} \\ r\le t^{1/3}}} \frac t{r\log t}~ \ll~ \frac {t\log\log t}{\phi(b)\log t}.$$ Combining this estimate with the first half of part (b) and the standard estimate $b/\phi(b)\ll\log\log b$ establishes the second half. For all real numbers $x>e^{e^e}$ and $t>e^e$, we have $$\sum_{p\le t}h(p)~ = ~\frac{2t\log\log t\log y}{\log t} + O\Big( \frac{t\log\log t}{\log t} + \frac{t\log^2y}{\log t} + t^{3/4}\log t\cdot y^2 \Big).$$ \[hp.sum.lemma\] [*Remark.*]{} In particular, we have $\sum_{p\le x}h(p) \ll x\log\log x\log y/\log x = xy\log y/\log x$.
We may rewrite $$\begin{gathered} \sum_{p\le t} h(p)~ =~ \sum_{p\le t} \sum_{r\mid p-1} \sum_{q\le y^2} v_q(r-1) \log q~ =~ \sum_{p\le t} \sum_{r\mid p-1} \sum_{q\le y^2} \sum_{\substack{a\in{{\mathbb N}}\\ q^a \mid r-1}} \log q \\ ~=~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{r\colon q^a \mid r-1} \sum_{\substack{p\le t \\ r\mid p-1}} 1 ~=~ \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q^a}}} \pi(t;r,1). \label{hp.sum.before.split}\end{gathered}$$ The main contribution to this triple sum comes from the terms with $q^a\le t^{1/4}$ and $r\le t^{1/3}$. In fact, using Lemma \[tricky.to.get.right.lemma\](a) we can bound the contribution from the terms with $q^a$ large by $$\begin{aligned} \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>t^{1/4}}} \sum_{r\in{\mathcal P_{q^a}}} \pi(t;r,1) ~&\ll~ \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>t^{1/4}}} \frac{t\log t}{q^a} \\ &\ll~ t\log t \sum_{q\le y^2} \frac{\log q}{t^{1/4}}~ \ll~ t^{3/4}\log t\cdot y^2,\end{aligned}$$ where the last two estimates are due to the geometric series bound and the Chebyshev bound . Similarly, using the first half of Lemma \[tricky.to.get.right.lemma\](b) we can bound the contribution from the terms with $q^a$ small and $r$ large by $$\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r>t^{1/3}}} \pi(t;r,1)~\ll~ \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \frac t{q^a\log t} ~\ll~\frac t{\log t}\sum_{q\le y^2}\frac{\log q}q~\ll~\frac{t\log y}{\log t},$$ where again the last two estimates are due to the geometric series bound and the weak Chebyshev bound .
In light of these two estimates, equation becomes $$\sum_{p\le t} h(p) ~=~ \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \pi(t;r,1) + O \Big( t^{3/4}\log t\cdot y^2 + \frac{t\log y}{\log t} \Big). \label{hp.sum.after.split}$$ Define $E(t;r,1) = \pi(t;r,1) - {\mathop{\rm li}}(t)/(r-1)$. We have $$\begin{gathered} \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \pi(t;r,1)~=~\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \Big( \frac{{\mathop{\rm li}}(t)}{r-1} + E(t;r,1) \Big) \\ =~\sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \frac{{\mathop{\rm li}}(t)}{r-1} + O\bigg( \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} |E(t;r,1)| \bigg). \label{set.up.BV.application}\end{gathered}$$ Let $\Omega(m)$ denote the number of divisors of $m$ that are primes or prime powers. Using the estimate $\Omega(m) \ll \log m$ together with $\log q\le\log y^2=2\log y$, we quickly dispose of $$\begin{aligned} \sum_{q\le y^2} \log q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} |E(t;r,1)|~&\le~2\log y \sum_{r\le t^{1/3}} |E(t;r,1)| \sum_{q\le y^2} \sum_{\substack{a\in{{\mathbb N}}\\ q^a\mid r-1}} 1 \\ &\le~2\log y \sum_{r\le t^{1/3}} |E(t;r,1)|\, \Omega(r-1) \\ &\ll~\log y\log t\sum_{r\le t^{1/3}}|E(t;r,1)|~\ll~\frac{t\log y}{\log t}\end{aligned}$$ by the Bombieri–Vinogradov theorem (we could equally well put any power of $\log t$ in the denominator of the final expression if we needed).
Inserting this estimate into equation , we see that equation becomes $$\sum_{p\le t} h(p)~=~{\mathop{\rm li}}(t) \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \frac1{r-1} + O \Big( t^{3/4}\log t\cdot y^2 + \frac{t\log y}{\log t} \Big). \label{hp.sum.after.BV}$$ We have by equation $$\begin{aligned} \sum_{q\le y^2} \log q&\sum_{a\in{{\mathbb N}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \frac1{r-1}~=~\sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \Big( \frac{\log\log t^{1/3}}{\phi(q^a)} + O\Big( \frac{\log q^a}{q^a} \Big) \Big) \\ &=~(\log\log t + O(1)) \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \Big( \frac1{q^a} + O\Big( \frac1{q^{a+1}} \Big) \Big) + O\bigg( \sum_{q\le y^2} \log^2 q \sum_{a\in{{\mathbb N}}} \frac{a}{q^a} \bigg) \\ &=~(\log\log t + O(1)) \sum_{q\le y^2} \Big( \frac{\log q}{q} + O\Big( \frac{\log q}{q^2} \Big) \Big) + O\bigg( \sum_{q\le y^2} \frac{\log^2 q}q \bigg),\end{aligned}$$ using the geometric series estimate . Using the Mertens formula to evaluate the main term and the weak Chebyshev estimates to bound the error terms, we see that $$\sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \sum_{\substack{r\in{\mathcal P_{q^a}} \\ r\le t^{1/3}}} \frac1{r-1}~=~\log\log t\log y^2 + O(\log y + \log\log t + \log^2y).$$ We conclude from equation and the fact that ${\mathop{\rm li}}(t) = t/\log t + O( t/\log^2t)$ that $$\begin{aligned} \sum_{p\le t} h(p)~&=~{\mathop{\rm li}}(t) \big( \log\log t\log y^2 + O(\log y + \log\log t + \log^2y) \big) \\ &\qquad{}+ O \Big( t^{3/4}\log t\cdot y^2 + \frac{t\log y}{\log t} \Big) \\ &=~\frac{2t\log\log t\log y}{\log t} + O\Big( \frac{t\log\log t}{\log t} + \frac{t\log^2y}{\log t} + t^{3/4}\log t\cdot y^2 \Big),\end{aligned}$$ as asserted. 
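The additive function $h$ analyzed here is straightforward to evaluate numerically from its definition; the Python sketch below (illustrative only) takes the cutoff as an explicit parameter in place of $y^2$ and factors by trial division.

```python
from math import log

def prime_factors(n):
    """Distinct prime factors of n, by trial division."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def v(q, m):
    """q-adic valuation of m."""
    a = 0
    while m % q == 0:
        m //= q
        a += 1
    return a

def h(n, cutoff):
    """h(n) = sum over p | n, r | p-1, primes q <= cutoff of v_q(r-1) log q.
    The parameter cutoff plays the role of y^2 = (log log x)^2."""
    total = 0.0
    for p in prime_factors(n):
        for r in prime_factors(p - 1):
            for q in prime_factors(r - 1):
                if q <= cutoff:
                    total += v(q, r - 1) * log(q)
    return total

# Example: for the prime 23, p - 1 = 22 = 2 * 11 and 11 - 1 = 10 = 2 * 5,
# so h(23) = log 2 + log 5 = log 10 (for any cutoff >= 5).
```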
In an explicit example of the technique of partial summation, we write $$\begin{aligned} M_1(x)~=~\sum_{p\le x} \frac{h(p)}p~&=~\sum_{p\le e^e} \frac{h(p)}p + \sum_{e^e<p\le x} h(p) \bigg( \frac1x + \int_p^x \frac{dt}{t^2} \bigg) \\ &=~O(1) + \frac1x \sum_{e^e<p\le x} h(p) + \int_{e^e}^x \frac{dt}{t^2} \sum_{e^e<p\le t} h(p).\end{aligned}$$ The quantity $\sum_{p\le t} h(p)$ has been evaluated asymptotically in Lemma \[hp.sum.lemma\], and the quantity $\sum_{e^e<p\le t} h(p)$ differs by only $O(1)$. Therefore we may use Lemma \[hp.sum.lemma\] and the remark following its statement to write $$\begin{aligned} M_1(x)~&=~O(1) + \frac1x O\Big( \frac{xy\log y}{\log x} \Big) \\ &\qquad{}+ \int_{e^e}^x \frac{dt}{t^2} \Big( \frac{2t\log\log t\log y}{\log t} + O\Big( \frac{t\log\log t}{\log t} + \frac{t\log^2y}{\log t} + t^{3/4}\log t\cdot y^2 \Big) \Big) \\ &=~O\Big( \frac{y\log y}{\log x} \Big) +\log y \int_{e^e}^x \frac{2\log\log t}{t\log t} \,dt \\ &\qquad{}+ O\bigg( \int_{e^e}^x \frac{\log\log t}{t\log t} \,dt + \log^2y \int_{e^e}^x \frac{dt}{t\log t} + y^2 \int_{e^e}^x \frac{dt}{t^{5/4}} \bigg).\end{aligned}$$ Each of these integrals can be explicitly evaluated, resulting in the asymptotic formula $$\begin{aligned} M_1(x)~&=~\log y \big( (\log\log x)^2-1 \big)+ O\Big( \frac{y\log y}{\log x} + (\log\log x)^2 + \log^2y\cdot \log\log x + y^2 \Big) \\ &=~y^2\log y + O(y^2),\end{aligned}$$ as claimed. Now we turn our attention to $M_2(x)$, beginning with some preliminary lemmas. 
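The partial-summation device used in this proof can be checked numerically: for a step function $A(t)=\sum_{p\le t}a_p$ one has $\sum_{p\le x}a_p/p = A(x)/x+\int A(t)t^{-2}\,dt$, and since $A$ is constant between consecutive primes the integral can be evaluated exactly piece by piece. A small illustrative Python sketch:

```python
def primes_up_to(x):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, x + 1) if sieve[i]]

def sum_by_parts(a, x):
    """Compute sum_{p <= x} a(p)/p as A(x)/x + the exact integral of A(t)/t^2."""
    ps = primes_up_to(x)
    A, integral = 0.0, 0.0
    for i, p in enumerate(ps):
        A += a(p)
        right = ps[i + 1] if i + 1 < len(ps) else x   # A is constant on [p, right)
        integral += A * (1.0 / p - 1.0 / right)       # exact value of int A/t^2
    return A / x + integral

# The identity is exact, so the two computations agree up to rounding error:
direct = sum(1.0 / p for p in primes_up_to(1000))
assert abs(sum_by_parts(lambda p: 1.0, 1000) - direct) < 1e-9
```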
For all real numbers $x>e^{e^e}$ and $t>e^e$, we have $$\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q_1^{a_1}}} \cap {\mathcal P_{q_2^{a_2}}}} \sum_{\substack{p\le t \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r$)\fi}}}} 1~\ll~t^{7/8}\log t\cdot y^2\log y + \frac{t\log\log t\cdot \log^2y}{\log t}.$$ \[hp2.sum.error.lemma\] Since the exact form of ${\mathcal P_{q_1^{a_1}}} \cap {\mathcal P_{q_2^{a_2}}}$ depends on whether or not $q_1=q_2$, we split the expression in question into two separate sums: $$\begin{aligned} \sum_{q_1,q_2\le y^2}&\log q_1 \log q_2\sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q_1^{a_1}}} \cap {\mathcal P_{q_2^{a_2}}}} \sum_{\substack{p\le t \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r$)\fi}}}} 1 \label{qs.equal.or.not} \\ &=~\sum_{q\le y^2} \log^2q \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q^{\max\{a_1,a_2\}}}}} \pi(t;r,1) + \sum_{\substack{q_1,q_2\le y^2 \\ q_1\ne q_2}} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q_1^{a_1}q_2^{a_2}}}} \pi(t;r,1). \notag\end{aligned}$$ Noting that there are exactly $2a-1$ ordered pairs $(a_1,a_2)$ for which $\max\{a_1,a_2\}=a$, we have $$\begin{aligned} \sum_{q\le y^2} \log^2q&\sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q^{\max\{a_1,a_2\}}}}} \pi(t;r,1)~=~\sum_{q\le y^2} \log^2q \sum_{a\in{{\mathbb N}}} (2a-1) \sum_{r\in{\mathcal P_{q^a}}} \pi(t;r,1) \\ &\ll~\sum_{q\le y^2} \log^2q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>t^{1/4}}}\frac{at\log t}{q^a} + \sum_{q\le y^2} \log^2q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}}\frac{at\log\log t}{q^a\log t}\end{aligned}$$ by Lemma \[tricky.to.get.right.lemma\]. 
Since $$\sum_{q\le y^2} \log^2q \sum_{\substack{a\in{{\mathbb N}}\\ q^a>t^{1/4}}}\frac{at\log t}{q^a}~\ll~t\log t \log y^2\sum_{q\le y^2} \frac{\log q}{t^{1/4}}~\ll~t^{3/4}\log t\cdot y^2\log y$$ by the Chebyshev bound , and $$\sum_{q\le y^2} \log^2q \sum_{\substack{a\in{{\mathbb N}}\\ q^a\le t^{1/4}}}\frac{at\log\log t}{q^a\log t}~\ll~\frac{t\log\log t}{\log t} \sum_{q\le y^2} \frac{\log^2q}q~\ll~\frac{t\log\log t\cdot\log^2y}{\log t}$$ by and its weaker version , the first term on the right-hand side of equation is bounded by the estimate asserted in the statement of the lemma. It remains to satisfactorily bound the second term on the right-hand side of equation . Again dividing the sum so that Lemma \[tricky.to.get.right.lemma\] can be applied, we have $$\begin{aligned} \sum_{\substack{q_1,q_2\le y^2 \\ q_1\ne q_2}} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{r\in{\mathcal P_{q_1^{a_1}q_2^{a_2}}}} & \pi(t;r,1)~\ll~ \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1}q_2^{a_2} > t^{1/4}}} \frac{t\log t}{q_1^{a_1}q_2^{a_2}} \\ &\qquad{}+ \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1}q_2^{a_2}\le t^{1/4}}} \frac{t\log\log t}{q_1^{a_1}q_2^{a_2}\log t}.\end{aligned}$$ In the first of these two terms, at least one of the $q_i^{a_i}$ must exceed $t^{1/8}$, and so using the estimates , , and we see that $$\begin{aligned} \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1}q_2^{a_2} > t^{1/4}}} \frac{t\log t}{q_1^{a_1}q_2^{a_2}}~&\le~ 2t\log t \sum_{q_1\le y^2} \log q_1 \sum_{\substack{a_1\in{{\mathbb N}}\\ q_1^{a_1} > t^{1/8}}} \frac1{q_1^{a_1}} \sum_{q_2\le y^2} \log q_2 \sum_{a_2\in{{\mathbb N}}} \frac1{q_2^{a_2}} \\ &\ll~t\log t \sum_{q_1\le y^2} \frac{\log q_1}{t^{1/8}} \sum_{q_2\le y^2} \frac{\log q_2}{q_2} \\ &\ll~t^{7/8}\log t\cdot y^2\log y.\end{aligned}$$ In the second, we simply ignore the restriction $q_1^{a_1}q_2^{a_2}\le 
t^{1/4}$ and use the estimates and , obtaining $$\begin{aligned} \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \frac{t\log\log t}{q_1^{a_1}q_2^{a_2}\log t} &=~\frac{t\log\log t}{\log t} \bigg( \sum_{q\le y^2} \log q \sum_{a\in{{\mathbb N}}} \frac1{q^a} \bigg)^2 \\ &\ll~\frac{t\log\log t}{\log t} \bigg( \sum_{q\le y^2} \frac{\log q}q \bigg)^2 \\ &\ll~\frac{t\log\log t\cdot\log^2y}{\log t}.\end{aligned}$$ This concludes the proof of the lemma. The following lemma is similar in spirit to Lemma \[tricky.to.get.right.lemma\] but is a bit more complicated to state and prove. Let $b_1$ and $b_2$ be positive integers and $t>e^e$ a real number. (a) If $b_1>t^{1/8}$ or $b_2>t^{1/8}$ then $$\sum_{r_1\in{\mathcal P_{b_1}}} \sum_{r_2\in{\mathcal P_{b_2}}} \pi(t;r_1r_2,1)~\ll~ \frac{t\log^2t}{b_1b_2}.$$ (b) If neither $b_1$ nor $b_2$ exceeds $t^{1/8}$ then $$\sum_{r_1\in{\mathcal P_{b_1}}} \sum_{\substack{r_2\in{\mathcal P_{b_2}} \\ r_1r_2>t^{1/3}}} \pi(t;r_1r_2,1)~\ll~\frac{b_2t\log\log t}{\phi(b_1)\phi(b_2)^2\log t}$$ and $$\sum_{r_1\in{\mathcal P_{b_1}}} \sum_{r_2\in{\mathcal P_{b_2}}} \pi(t;r_1r_2,1)~\ll~ \frac{t(\log\log t)^2}{\phi(b_1)\phi(b_2)\log t}.$$ \[double.tricky.lemma\] [*Remark.*]{} Again, the values $1/8$ and $1/3$ for the exponents are rather arbitrary. The bound in part (a) follows from the trivial estimate $\pi(t;r_1r_2,1) \ll t/r_1r_2$, just as in the proof of Lemma \[tricky.to.get.right.lemma\](a). For the first estimate in part (b), we may assume that $r_1\le r_2$ by symmetry.
We use Brun’s method again: $$\begin{aligned} \sum_{r_1\in{\mathcal P_{b_1}}} \sum_{\substack{r_2\in{\mathcal P_{b_2}} \\ r_1\le r_2 \\ r_1r_2>t^{1/3}}} &\pi(t;r_1r_2,1) \\ &=~\#\{(m,r_1,r_2)\colon r_1\equiv1{{\ifmmode\text{\rm\ (mod~$b_1$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$b_1$)\fi}},\, r_2\equiv1{{\ifmmode\text{\rm\ (mod~$b_2$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$b_2$)\fi}},\, r_1\le r_2,\, r_1r_2>t^{1/3}, \\ &\qquad\quad mr_1r_2+1\le t,\, \text{and $r_1$, $r_2$, and $mr_1r_2+1$ are all prime} \} \\ &\le~\sum_{m<t^{2/3}} \sum_{\substack{r_1<\sqrt{t/m}\\ r_1\in{\mathcal P_{b_1}} }} \sum_{\substack{r_2<t/mr_1\\ r_2\in{\mathcal P_{b_2}}\\ mr_1r_2+1~{\rm prime}}}1\\ &\ll~\sum_{m<t^{2/3}} \sum_{\substack{r_1<\sqrt{t/m}\\ r_1\in{\mathcal P_{b_1}} }} \frac{mr_1b_2}{\phi(b_2)\phi(mr_1b_2)}\cdot\frac{t/mr_1}{\log^2(t/mr_1b_2)}.\end{aligned}$$ Notice that $t/mr_1b_2 > (\sqrt{t/m})/b_2 > t^{1/6}/t^{1/8} = t^{1/24}$, and so $$\begin{aligned} \sum_{r_1\in{\mathcal P_{b_1}}} \sum_{\substack{r_2\in{\mathcal P_{b_2}}\\ r_1\le r_2\\ r_1r_2>t^{1/3}}} \pi(t;r_1r_2,1) &\ll~\frac t{\log^2t}\sum_{m<t^{2/3}}\sum_{\substack{r_1<\sqrt{t/m}\\ r_1\in{\mathcal P_{b_1}}}} \frac{b_2}{\phi(b_2)^2\phi(m)\phi(r_1)}\\ &\ll~\frac{b_2t\log\log t}{\phi(b_1)\phi(b_2)^2\log^2t}\sum_{m<t^{2/3}}\frac1{\phi(m)}\\ &\ll~\frac{b_2t\log\log t}{\phi(b_1)\phi(b_2)^2\log t}\end{aligned}$$ by the estimates and , as desired. The second estimate of part (b) is a consequence of the first estimate and $$\sum_{r_1\in{\mathcal P_{b_1}}} \sum_{\substack{r_2\in{\mathcal P_{b_2}} \\ r_1r_2\le t^{1/3}}} \pi(t;r_1r_2,1)~\ll~\frac{t(\log\log t)^2}{\phi(b_1)\phi(b_2)\log t},$$ which follows from the Brun–Titchmarsh inequality just as in the proof of Lemma \[tricky.to.get.right.lemma\](b).
We may rewrite $$\begin{aligned} \sum_{p\le t} h(p)^2~&=~\sum_{p\le t} \bigg( \sum_{r\mid p-1} \sum_{q\le y^2} \sum_{\substack{a\in{{\mathbb N}}\\ q^a\mid r-1}} \log q \bigg)^2 \\ &=~\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{\substack{r_1\in{\mathcal P_{q_1^{a_1}}} \\ r_2\in{\mathcal P_{q_2^{a_2}}}}} \sum_{\substack{p\le t \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r_1$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r_1$)\fi}} \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r_2$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r_2$)\fi}}}} 1 \\ &=~\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{\substack{r_1\in{\mathcal P_{q_1^{a_1}}} \\ r_2\in{\mathcal P_{q_2^{a_2}}} \\ r_1\ne r_2}} \sum_{\substack{p\le t \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r_1$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r_1$)\fi}} \\ p\equiv1{{\ifmmode\text{\rm\ (mod~$r_2$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$r_2$)\fi}}}} 1 \\ &\qquad{}+ O\Big( t^{7/8}\log t\cdot y^2\log y + \frac{t\log\log t\cdot \log^2y}{\log t} \Big),\end{aligned}$$ the last step due to Lemma \[hp2.sum.error.lemma\]. Since $r_1$ and $r_2$ are distinct primes, the innermost sum is simply $\pi(t;r_1r_2,1)$, and thus $$\begin{gathered} \sum_{p\le t} h(p)^2~\le~\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{a_1,a_2\in{{\mathbb N}}} \sum_{\substack{r_1\in{\mathcal P_{q_1^{a_1}}} \\ r_2\in{\mathcal P_{q_2^{a_2}}}}} \pi(t;r_1r_2,1) \\ + O\Big( t^{7/8}\log t\cdot y^2\log y + \frac{t\log\log t\cdot \log^2y}{\log t} \Big). 
\label{unhelpful.name}\end{gathered}$$ The contribution to the sum on the right-hand side of equation from those terms for which $q_1^{a_1}>t^{1/8}$ is $$\begin{aligned} \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1}>t^{1/8}}} \sum_{\substack{r_1\in{\mathcal P_{q_1^{a_1}}} \\ r_2\in{\mathcal P_{q_2^{a_2}}}}} & \pi(t;r_1r_2,1) \\ &\ll~\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1}>t^{1/8}}} \frac{t\log^2t}{q_1^{a_1}q_2^{a_2}} \\ &\ll~t\log^2t \sum_{q_1\le y^2} \sum_{\substack{a_1\in{{\mathbb N}}\\ q_1^{a_1}>t^{1/8}}} \frac{\log q_1}{q_1^{a_1}} \sum_{q_2\le y^2} \sum_{a_2\in{{\mathbb N}}} \frac{\log q_2}{q_2^{a_2}} \\ &\ll~t\log^2t \sum_{q_1\le y^2} \frac{\log q_1}{t^{1/8}} \sum_{q_2\le y^2} \frac{\log q_2}{q_2} \\ &\ll~t^{7/8}\log^2t \cdot y^2\log y\end{aligned}$$ by Lemma \[double.tricky.lemma\](a) and the estimates , , and ; the contribution from the terms for which $q_2^{a_2}>t^{1/8}$ is bounded likewise. The remaining contribution is $$\begin{aligned} \sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1},q_2^{a_2}\le t^{1/8}}} \sum_{\substack{r_1\in{\mathcal P_{q_1^{a_1}}} \\ r_2\in{\mathcal P_{q_2^{a_2}}}}} & \pi(t;r_1r_2,1) \\ &\ll~\sum_{q_1,q_2\le y^2} \log q_1 \log q_2 \sum_{\substack{a_1,a_2\in{{\mathbb N}}\\ q_1^{a_1},q_2^{a_2}\le t^{1/8}}} \frac{t(\log\log t)^2}{q_1^{a_1}q_2^{a_2}\log t} \\ &\ll~\frac{t(\log\log t)^2}{\log t} \bigg( \sum_{q\le y^2} \sum_{a\in{{\mathbb N}}} \frac{\log q}{q^a} \bigg)^2 \\ &\ll~\frac{t(\log\log t)^2\log^2 y}{\log t}\end{aligned}$$ by Lemma \[double.tricky.lemma\](b) and the estimates and . Using both these bounds in equation , we conclude that $$\sum_{p\le t} h(p)^2~\ll~t^{7/8}\log t\cdot y^2\log y + \frac{t(\log\log t)^2 \log^2y}{\log t}.$$ We now evaluate $M_2(x)$ using partial summation.
We have $$\begin{aligned} M_2(x)~=~\sum_{p\le x} \frac{h(p)^2}p~&=~\sum_{p\le e^e} \frac{h(p)^2}p + \frac1x \sum_{e^e<p\le x} h(p)^2 + \int_{e^e}^x \frac{dt}{t^2} \sum_{e^e<p\le t} h(p)^2 \\ &\ll~1 + \frac1x \cdot \frac{x(\log\log x)^2\log^2y}{\log x} \\ &\qquad{}+ \int_{e^e}^x \frac{dt}{t^2} \Big( t^{7/8}\log t\cdot y^2\log y + \frac{t(\log\log t)^2 \log^2y}{\log t} \Big) \\ &\ll~\frac{y^2\log^2y}{\log x} + y^2\log y \int_{e^e}^x \frac{\log t\, dt}{t^{9/8}} + \log^2y \int_{e^e}^x \frac{(\log\log t)^2}{t\log t}\,dt.\end{aligned}$$ Evaluating these two integrals explicitly, we obtain $$M_2(x)~\ll~\frac{y^2\log^2y}{\log x} + y^2\log y + \log^2 y\cdot (\log\log x)^3 \ll y^3\log^2y$$ as claimed. Normal number of cycles for the power generator =============================================== If $(u,n)=1$, then the sequence $u^i{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$ for $i=1,2,\dots$ is purely periodic. We denote the length of the period by ${\operatorname{ord}}(u,n)$, which of course is the multiplicative order of $u$ in $({{\mathbb Z}}/n{{\mathbb Z}})^\times$. Even when $(u,n)>1$, the sequence $u^i{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$ is eventually periodic, and we denote the length of the eventual cycle by ${\operatorname{ord^*}}(u,n)$. So, letting $n_{(u)}$ denote the largest divisor of $n$ coprime to $u$, we have ${\operatorname{ord^*}}(u,n)={\operatorname{ord}}(u,n_{(u)})$. For example, let $u=2,\,n=24$. The sequence $u^i{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$ is $2,4,8,16,8,16,\dots$ with cycle length 2, and so ${\operatorname{ord^*}}(2,24)={\operatorname{ord}}(2,3)=2$. When iterating the $\ell$th power map modulo $n$, the length of the eventual cycle starting with $x=u$ is given by ${\operatorname{ord^*}}(\ell,{\operatorname{ord^*}}(u,n))$.
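These definitions translate directly into code; the following Python sketch (illustrative only) computes ${\operatorname{ord^*}}(u,n)$ and reproduces the example above.

```python
from math import gcd

def mult_order(u, m):
    """Multiplicative order of u modulo m; requires gcd(u, m) == 1."""
    if m == 1:
        return 1
    k, x = 1, u % m
    while x != 1:
        x = x * u % m
        k += 1
    return k

def ord_star(u, n):
    """ord*(u, n): length of the eventual cycle of u^i (mod n), i = 1, 2, ...
    Equals ord(u, n_(u)), where n_(u) is the largest divisor of n coprime to u."""
    m = n
    g = gcd(m, u)
    while g > 1:          # strip from n every prime it shares with u
        m //= g
        g = gcd(m, u)
    return mult_order(u, m)

print(ord_star(2, 24))  # 2, as in the example: ord*(2, 24) = ord(2, 3) = 2
# The eventual cycle length of x -> x^ell (mod n) started at u is then
# ord_star(ell, ord_star(u, n)).
```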
We would like to have a criterion for when a residue is part of some cycle, that is, for when a residue is eventually sent back to itself when iterating $x\mapsto x^\ell{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$. A residue $u$ is part of some cycle under iteration of the map $x\mapsto x^\ell{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$ if and only if $(\ell,{\operatorname{ord^*}}(u,n))=1$ and, with $d=(u,n)$, we have $(d,n/d)=1$. \[cycle.criterion.lemma\] If $(u,n)=d$, then high powers of $u$ will be $\equiv0{{\ifmmode\text{\rm\ (mod~$n/n_{(d)}$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n/n_{(d)}$)\fi}}$. Thus, for $u$ to be in a cycle it is necessary that $n/n_{(d)}=d$, that is, $(d,n/d)=1$. Further, it is necessary that $(\ell,{\operatorname{ord^*}}(u,n))=1$. Indeed, if $\sigma={\operatorname{ord^*}}(u,n)$, we would need $\ell^i{{\ifmmode\text{\rm\ (mod~$\sigma$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$\sigma$)\fi}}$ to be purely periodic, which is equivalent to $(\ell,\sigma)=1$. This proves the necessity of the condition. For the sufficiency, we have just noted that $(\ell,\sigma)=1$ implies that $\ell^i{{\ifmmode\text{\rm\ (mod~$\sigma$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$\sigma$)\fi}}$ is purely periodic. This implies in turn that the sequence $u^{\ell^i}{{\ifmmode\text{\rm\ (mod~$n_{(u)}$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n_{(u)}$)\fi}}$ is purely periodic. But the condition $(d,n/d)=1$ implies that $n_{(u)}=n/d$, and as each $u^{\ell^i}\equiv0{{\ifmmode\text{\rm\ (mod~$d$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$d$)\fi}}$, we have that $u^{\ell^i}{{\ifmmode\text{\rm\ (mod~$n$)}\else\discretionary{}{}{\hbox{ }}\rm(mod~$n$)\fi}}$ is purely periodic. For $d|n$ with $(d,n/d)=1$, let $C_d(\ell,n)$ denote the number of cycles in the $\ell$th power map mod $n$ that involve residues $u$ with $(u,n)=d$. 
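Lemma \[cycle.criterion.lemma\] can be verified by brute force for small parameters; the sketch below (illustrative only) compares the criterion with direct iteration of the $\ell$th power map.

```python
from math import gcd

def mult_order(u, m):
    """Multiplicative order of u modulo m; requires gcd(u, m) == 1."""
    if m == 1:
        return 1
    k, x = 1, u % m
    while x != 1:
        x = x * u % m
        k += 1
    return k

def ord_star(u, n):
    """Eventual cycle length of u^i (mod n): ord(u, n_(u))."""
    m = n
    g = gcd(m, u)
    while g > 1:
        m //= g
        g = gcd(m, u)
    return mult_order(u, m)

def in_cycle_direct(u, ell, n):
    """u lies on a cycle of x -> x^ell (mod n) iff iteration returns to u."""
    x, seen = pow(u, ell, n), set()
    while x not in seen:
        if x == u % n:
            return True
        seen.add(x)
        x = pow(x, ell, n)
    return False

def in_cycle_criterion(u, ell, n):
    """The lemma's criterion: (ell, ord*(u, n)) = 1 and (d, n/d) = 1, d = (u, n)."""
    d = gcd(u, n)
    return gcd(ell, ord_star(u, n)) == 1 and gcd(d, n // d) == 1

# The two tests agree on every residue in these small ranges:
for n in range(2, 40):
    for ell in (2, 3, 4, 5):
        for u in range(1, n + 1):
            assert in_cycle_direct(u, ell, n) == in_cycle_criterion(u, ell, n)
```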
For the lower bound in Theorem \[number.of.cycles.theorem\] we shall deal only with $C_1(\ell,n)$, that is, cycles involving numbers coprime to $n$. We have $C_1(\ell,n) \ge \phi(n)_{(\ell)}/\lambda(\lambda(n))$. \[c1.lower.bound.lemma\] It is easy to see that the subgroup of $({{\mathbb Z}}/n{{\mathbb Z}})^\times$ of residues $u$ with $(\ell,{\operatorname{ord}}(u,n))=1$ has size $\phi(n)_{(\ell)}$. (In fact, this is true for any finite abelian group $G$: the size of the subgroup of elements with order coprime to $\ell$ is $|G|_{(\ell)}$.) As the length of [*any*]{} cycle in the $\ell$th power map is bounded above by $\lambda(\lambda(n))$, the lemma follows immediately. To investigate the normal size of $\phi(n)_{(\ell)}$, we introduce the function $$f_\ell(n)=\sum_{p\mid\ell} v_p(\phi(n))\log p.$$ We also make use of the notation $q^a\| n$, which means that $q^a$ is the exact power of $q$ dividing $n$, that is, $q^a$ divides $n$ but $q^{a+1}$ does not. For any fixed $\ell$, we have $f_\ell(n) \le (\log\log n)^2$ for almost all $n$, in fact for all but $O_\ell(x/\log\log x)$ integers $n\le x$. \[fn.small.aa.prop\] We have $$\begin{aligned} \sum_{n\le x}f_\ell(n)~&=~\sum_{p\mid\ell}\sum_{n\le x}\sum_{q^a\| n}v_p(\phi(q^a))\log p ~\le~x\sum_{p\mid\ell}\log p\sum_{q^a\le x}\frac{v_p(\phi(q^a))}{q^a}\\ &\le~x\sum_{p\mid\ell}\log p\sum_{p^a\le x}\frac{a-1}{p^a}+ x\sum_{p\mid\ell}\log p\sum_{q\le x}\frac{v_p(q-1)}{q}.\end{aligned}$$ Now $$x\sum_{p\mid\ell}\log p\sum_{p^a\le x}\frac{a-1}{p^a} ~\ll_\ell~x$$ and, by , $$\begin{aligned} x\sum_{p\mid\ell}\log p\sum_{q\le x}\frac{v_p(q-1)}{q} &=~x\sum_{p\mid\ell}\log p\sum_{a\ge1}\sum_{q\in{\mathcal P_{p^a}},\,q\le x}\frac1q\\ &\ll~x\sum_{p\mid\ell}\log p\sum_{a\ge1}\frac{\log\log x}{p^a} ~\ll_\ell~x\log\log x.\end{aligned}$$ Hence, $$\sum_{n\le x}f_\ell(n)~\ll_\ell~x\log\log x,$$ so that the number of $n\le x$ with $f_\ell(n)>(\log\log n)^2$ is $O_\ell(x/\log\log x)$. 
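Since $e^{f_\ell(n)}=\prod_{p\mid\ell}p^{v_p(\phi(n))}$ is exactly the $\ell$-part of $\phi(n)$, the function $f_\ell$ can be handled in integer arithmetic. The sketch below (helper names are ours) also checks the identity $\phi(n)_{(\ell)}=\phi(n)/e^{f_\ell(n)}$ that is used in the next paragraph:

```python
from math import gcd, log, exp

def phi(n):
    """Euler's function, via trial-division factorization."""
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            result *= pk - pk // p
        p += 1
    if m > 1:
        result *= m - 1
    return result

def v(p, n):
    """v_p(n): the exponent of the prime p in n."""
    a = 0
    while n % p == 0:
        n //= p
        a += 1
    return a

def prime_factors(m):
    ps, p = set(), 2
    while p * p <= m:
        if m % p == 0:
            ps.add(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        ps.add(m)
    return ps

def coprime_part(m, ell):
    """m_(ell): the largest divisor of m coprime to ell."""
    g = gcd(m, ell)
    while g > 1:
        m //= g
        g = gcd(m, ell)
    return m

def f(ell, n):
    """f_ell(n) = sum over p | ell of v_p(phi(n)) log p."""
    ph = phi(n)
    return sum(v(p, ph) * log(p) for p in prime_factors(ell))
```

For instance, $\phi(100)=40$ and $e^{f_2(100)}=2^{v_2(40)}=8$, the 2-part of 40.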
It is interesting that one can prove an Erdős–Kac theorem for $f_\ell(n)$ using as a tool the criterion of Kubilius–Shapiro (see [@K], [@S]). Noting that $\phi(n)_{(\ell)}=\phi(n)/e^{f_\ell(n)}$, we have $\phi(n)_{(\ell)} \ge \phi(n)/\exp((\log\log n)^2)$ for almost all $n$ by Proposition \[fn.small.aa.prop\]. Of course, $n\ge \phi(n) \gg n/\log\log n$ for all $n\ge3$. Therefore, using Lemma \[c1.lower.bound.lemma\] and Theorem \[lambda.lambda.normal.order.theorem\], we have $$\begin{aligned} C(\ell,n)~\ge~C_1(\ell,n)~\ge~ \frac{\phi(n)_{(\ell)}}{\lambda(\lambda(n))}&\ge~ \frac{\phi(n)}{\exp((\log\log n)^2)\lambda(\lambda(n))}\\ &=~\frac{\phi(n)/n}{\exp((\log\log n)^2)}\frac{n}{\lambda(\lambda(n))}\\ &=~\exp((1+o(1))(\log\log n)^2\log\log\log n)\end{aligned}$$ for almost all $n$. This completes the proof of the lower bound in Theorem \[number.of.cycles.theorem\]. We now consider the upper bounds in Theorem \[number.of.cycles.theorem\], first establishing a lemma. \[fpsanalog\] Suppose $m$ is a positive integer and $(d,m)=1$. For any integer $j\mid\lambda(m)$, the number of integers $u\in[1,m]$ with $(u,m)=1$ and ${\operatorname{ord}}(du,m)\mid\lambda(m)/j$ is at most $\phi(m)/j$. In fact, we prove a more general statement for any finite abelian group $G$: let $\lambda(G)$ denote the exponent of $G$, that is, the order of the largest cyclic subgroup of $G$, or equivalently the least common multiple of the orders of the elements of $G$. Then for any $d\in G$ and any $j\mid\lambda(G)$, the number of elements $u\in G$ for which the order of $du$ divides $\lambda(G)/j$ is at most $\#G/j$. It is clear that the lemma follows immediately from this statement upon taking $G$ to be $({{\mathbb Z}}/m{{\mathbb Z}})^\times$. It is also clear that in this statement, the element $d$ plays no role whatsoever except to shuffle the elements of $G$ around, and so we assume without loss of generality that $d$ is the identity of $G$.
Let $p$ be any prime dividing $\lambda(G)$, and choose $a\le b$ so that $p^a\|j$ and $p^b\|\lambda(G)$. When we write $G$ canonically as isomorphic to the direct product of cyclic groups of prime-power order, at least one of the factors must be isomorphic to ${{\mathbb Z}}/p^b{{\mathbb Z}}$. In every such factor, only one out of every $p^a$ elements has order dividing $\lambda(G)/j$, since all but $p^{b-a}$ elements of the factor have order divisible by $p^{b-a+1}$. Since there is at least one such factor for every $p^a\|j$, we conclude that at most one out of every $j$ elements of $G$ has order dividing $\lambda(G)/j$, as claimed. Note that this result in the case $d=1$ is Lemma 1 in [@FPS]. The above proof, while similar in spirit to the proof in [@FPS], is simpler. Let $\tau(m)$ denote the number of positive divisors of $m$. \[cycleupperbd\] For any integers $\ell,n\ge2$ we have $C(\ell,n)\le n\tau(\lambda(n))\tau(n)/{\operatorname{ord^*}}(\ell,\lambda(n))$. It is sufficient to show that for each $\ell,n\ge2$ and each $d\mid n$ with $(d,n/d)=1$, we have $$\label{cycled} C_d(\ell,n)~\le~\frac{n\tau(\lambda(n))}{{\operatorname{ord^*}}(\ell,\lambda(n))}.$$ Let $d\mid n$ with $(d,n/d)=1$. We have seen in Lemma \[cycle.criterion.lemma\] that for a residue $u \pmod n$ with $(u,n)=d$ to be involved in a cycle, it is necessary and sufficient that $(\ell,{\operatorname{ord}}(u,n/d))=1$. For each integer $j\mid\lambda(n/d)$, let $C_{d,j}(\ell,n)$ denote the number of cycles corresponding to residues $u$ with $(u,n)=d$ and ${\operatorname{ord}}(u,n/d)=\lambda(n/d)/j$. Writing such a residue $u$ as $du_1$, we have $u_1\in[1,n/d]$ and $(u_1,n/d)=1$. Thus, by Lemma \[fpsanalog\], we have that the number of such residues $u$ is at most $\phi(n/d)/j\le n/dj$.
Hence we have $$C_{d,j}(\ell,n)~\le~\frac{n/dj}{{\operatorname{ord}}(\ell,\lambda(n/d)/j)}.$$ Now $\lambda(n/d)=\lambda(n)/d_1$ for some integer $d_1\le d$. It is shown in (15) of [@KP] that for $k\mid m$ we have ${\operatorname{ord^*}}(a,m/k)\ge{\operatorname{ord^*}}(a,m)/k$ for any nonzero integer $a$. Hence $${\operatorname{ord}}(\ell,\lambda(n/d)/j)~=~{\operatorname{ord}}(\ell,\lambda(n)/ d_1j)~\ge~{\operatorname{ord^*}}(\ell,\lambda(n))/d_1j,$$ so that $$C_{d,j}(\ell,n)~\le~\frac{n/dj}{{\operatorname{ord^*}}(\ell,\lambda(n))/ d_1j}~\le~\frac{n}{{\operatorname{ord^*}}(\ell,\lambda(n))}.$$ Letting $j$ range over all divisors of $\lambda(n/d)$, we get that $$C_d(\ell,n)~\le~\frac{n\tau(\lambda(n/d))}{{\operatorname{ord^*}}(\ell,\lambda(n))},$$ which immediately gives . Note that from [@EP Theorem 4.1], we have $\tau(\lambda(n))<\exp((\log\log n)^2)$ for almost all $n$. Furthermore, letting $\Omega(n)$ denote the number of prime factors of $n$ counted with multiplicity, we know that the normal order of $\Omega(n)$ is $\log\log n$; in particular, we have $\Omega(n) < \log\log n/\log 2$ for almost all $n$. Since the inequality $\tau(n) \le 2^{\Omega(n)}$ is elementary, this implies that $\tau(n)<\log n$ for almost all $n$. We conclude from Proposition \[cycleupperbd\] that $$C(\ell,n)~<~n\exp(2(\log\log n)^2) / {\operatorname{ord^*}}(\ell,\lambda(n))$$ for almost all $n$. 
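Proposition \[cycleupperbd\] can be sanity-checked for small parameters by counting the cycles of $x\mapsto x^\ell \bmod n$ explicitly. In this brute-force sketch (ours), `carmichael` computes $\lambda(n)$ as the least common multiple of the orders of the units:

```python
from math import gcd

def ord_star(u, n):
    """Eventual cycle length of u^i (mod n)."""
    seen, x, i = {}, u % n, 0
    while x not in seen:
        seen[x] = i
        x = (x * u) % n
        i += 1
    return i - seen[x]

def carmichael(n):
    """lambda(n): the exponent of (Z/nZ)^x, as the lcm of the orders of the units."""
    lam = 1
    for u in range(1, n):
        if gcd(u, n) == 1:
            o = ord_star(u, n)
            lam = lam * o // gcd(lam, o)
    return lam

def tau(n):
    """Number of positive divisors of n."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def count_cycles(ell, n):
    """C(ell, n): number of cycles of x -> x^ell (mod n) on {0, ..., n-1}."""
    succ = [pow(x, ell, n) for x in range(n)]
    on_cycle = [False] * n
    for u in range(n):
        x = u
        for _ in range(n):          # enough steps to pass the pre-period
            x = succ[x]
            if x == u:
                on_cycle[u] = True
                break
    cycles, seen = 0, [False] * n
    for u in range(n):
        if on_cycle[u] and not seen[u]:
            cycles += 1
            x = u
            while not seen[x]:
                seen[x] = True
                x = succ[x]
    return cycles
```

For example, the squaring map mod 5 has the two cycles $\{0\}$ and $\{1\}$, well within the bound $n\tau(\lambda(n))\tau(n)/{\operatorname{ord^*}}(\ell,\lambda(n))$.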
The three upper bounds in Theorem \[number.of.cycles.theorem\] therefore follow respectively from three results in the new paper of Kurlberg and the second author [@KP]: Theorem 4 (1), which states that for any function ${\varepsilon}(n)\to0$, we have ${\operatorname{ord^*}}(\ell,\lambda(n))\ge n^{1/2+{\varepsilon}(n)}$ almost always; Theorem 22, which states that a positive proportion of integers $n$ have ${\operatorname{ord^*}}(\ell,\lambda(n))\ge n^{.592}$; and Theorem 28, which states that if the GRH is true, then $${\operatorname{ord^*}}(\ell,\lambda(n))~=~n/\exp((1+o(1))(\log\log n)^2\log\log\log n)$$ on a set of asymptotic density 1. (Note that the proof of this result uses Theorem \[lambda.lambda.normal.order.theorem\] of the current paper.)

Higher iterates
===============

Here we sketch what we believe to be a viable strategy for establishing an analogue of Theorem \[lambda.lambda.normal.order.theorem\] for the higher iterates $\lambda_k$ where $k\ge3$. As in the case of $k=2$, we have generally that $$\frac{n}{\lambda_k(n)}~=~\frac{n}{\phi_k(n)}\frac{\phi_k(n)}{\lambda_k(n)}.$$ We always have $n/\phi_k(n)\le(c\log\log n)^k$, which is already a good enough estimate for our purposes. Even better, however, it is known [@EGPS] that for each fixed $k$, we have $n/\phi_k(n)\ll(\log\log\log n)^k$ for almost all $n$. The problem therefore reduces to comparing $\lambda_k(n)$ to $\phi_k(n)$. Probably it is not hard to get analogs of Propositions \[same.large.prime.divisors.prop\] and \[large.primes.in.phi.phi.prop\], where we replace $y^2$ with $y^k$. The problem comes in with the proliferation of cases needed to deal with small prime factors. As with the second iterate, we expect the main contribution to come from the “supersquarefree” case.
In particular, let $$h_k(n) ~=~ \sum_{p_1\mid n} \sum_{p_2\mid p_1-1} \dots \sum_{p_k\mid p_{k-1}-1} \sum_{q\le y^k} v_q(p_k-1) \log q.$$ We expect $h_k(n)$ to be the dominant contribution to $\log(\phi_k(n)/\lambda_k(n))$ almost always. But it seems hard not only to prove this in general but also to establish the normal order of $h_k(n)$. It would seem useful in this endeavor to have a uniform estimate of the shape $$\label{strongrecipsum} \sum_{p\in{\mathcal P_{m}},\,p\le x}\frac1p~\sim~\frac{\log\log x-\log\log m}{\phi(m)} ~\text{ for }~x\ge m^{1+{\varepsilon}}.$$ Even under the assumption of the Riemann Hypothesis for Dirichlet $L$-functions, seems difficult, and maybe it is false. It implies with $x=m^2$ that the sum is $\ll1/\phi(m)$, when all we seem to be able to prove, via sieve methods, is that it is $\ll(\log\log m)/\phi(m)$. Assuming uniformity in , it seems that on average $$h_k(n) \sim \frac1{(k-1)!}(\log\log n)^k\log\log\log n,$$ supporting Conjecture \[lambdak.normal.order.conj\]. It would be a worthwhile enterprise to try to verify or disprove the Conjecture in the case $k=3$, which may be tractable. Going out even further on a limb, it may be instructive to think of what Conjecture \[lambdak.normal.order.conj\] has to say about the normal order of $L(n)$, the minimum value of $k$ with $\lambda_k(n)=1$. The expression $(1/(k-1)!)(\log\log n)^k\log\log\log n$ reaches its maximum value when $k\approx\log\log n$. Is this formula then trying to tell us that we have $L(n)\ll \log\log n$ almost always? Perhaps so. There is a second argument supporting the thought that $L(n)\ll\log\log n$ almost always. Let $P(n)$ denote the largest prime factor of an integer $n>1$, and let $\ell(n)=P(n)-1$ for $n>1$, $\ell(1)=1$. Clearly, $\ell(n)\mid\lambda(n)$ for all $n$, so that if $L_0(n)$ is the least $k$ with $\ell_k(n)=1$, then $L_0(n)\le L(n)$. It may be that the difference $L(n)-L_0(n)$ is usually not large. 
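The functions $\ell$, $L_0$ and $L$ are easy to compute for small arguments, and the inequality $L_0(n)\le L(n)$ noted above can be checked directly. A brute-force sketch (our code):

```python
from math import gcd
from functools import lru_cache

def largest_prime_factor(n):
    """P(n) for n > 1, by trial division."""
    best, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            best = p
            while n % p == 0:
                n //= p
        p += 1
    return max(best, n)

def ell(n):
    """ell(n) = P(n) - 1 for n > 1, and ell(1) = 1."""
    return 1 if n == 1 else largest_prime_factor(n) - 1

def L0(n):
    """Least k with ell_k(n) = 1."""
    k = 0
    while n > 1:
        n, k = ell(n), k + 1
    return k

@lru_cache(maxsize=None)
def carmichael(n):
    """lambda(n), as the lcm of the multiplicative orders of the units mod n."""
    lam = 1
    for u in range(1, n):
        if gcd(u, n) == 1:
            o, x = 1, u
            while x != 1:
                x = (x * u) % n
                o += 1
            lam = lam * o // gcd(lam, o)
    return lam

def L(n):
    """Least k with lambda_k(n) = 1."""
    k = 0
    while n > 1:
        n, k = carmichael(n), k + 1
    return k
```

For instance, $L_0(7)=L(7)=3$, since $7\to6\to2\to1$ under $\ell$ and $7\to6\to2\to1$ under $\lambda$ as well.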
In any event, it seems safe to conjecture that $L_0(n)$ is usually of order of magnitude $\log\log n$, due to the following argument. For an odd prime $p$, consider the quantity $\log\ell(p)/\log p\approx\log P(p-1)/\log (p-1)$. It may be that this quantity is distributed as $p$ varies through the primes in the same way that $\log P(n)/\log n$ is distributed as $n$ varies through the integers, namely the Dickman distribution. Such a conjecture has been made in various papers. If so, it may be that the sequence $$\frac{\log\ell(p)}{\log p},~ \frac{\log\ell_2(p)}{\log\ell(p)},~\dots$$ behaves like a sequence of independent random variables, each with the Dickman distribution. And if so, it may then be reasonable to assume that almost always we get down to small numbers and terminate in about $\log\log n$ steps. A similar probabilistic model is considered in [@B], but for the simpler experiment of finding the joint distribution of logarithmic sizes of the various prime factors of a given number $n$. At the very least, we can prove that $L(n)\ll\log\log n$ infinitely often. Notice that the definition of $\lambda(n)$ as a least common multiple, together with the fact that $\lambda(p^a)\mid \lambda(p^{a+1})$ always, implies that $$\lambda\big( {\mathop{\rm{lcm}}}\{ m_1,\dots,m_j \} \big)~=~{\mathop{\rm{lcm}}}\big\{ \lambda(m_1), \dots, \lambda(m_j) \big\}$$ for any positive integers $m_1,\dots,m_j$. A trivial induction then shows that $$\lambda_k\big( {\mathop{\rm{lcm}}}\{ m_1,\dots,m_j \} \big)~=~{\mathop{\rm{lcm}}}\big\{ \lambda_k(m_1), \dots, \lambda_k(m_j) \big\}$$ for any $k\ge0$. Since the least common multiple of a set of numbers equals 1 precisely when each number in the set equals 1, we deduce that $$L\big( {\mathop{\rm{lcm}}}\{ m_1,\dots,m_j \} \big)~=~\max\big\{ L(m_1), \dots, L(m_j) \big\}.$$ We apply this identity with $m_i=i$. Let $n_j = {\mathop{\rm{lcm}}}\{1,2,\dots,j\}$. 
We have $\log n_j = \sum_{i\le j} \Lambda(i)$, which is asymptotic to $j$ by the prime number theorem. On the other hand, it is trivial that for any number $n$ we have $L(n) \le1+ (1/\log 2)\log n$, as $\lambda_{i+1}(n)\le(1/2)\lambda_i(n)$ for $1\le i<L(n)$. Therefore $$\begin{aligned} L(n_j)~&=~\max\{ L(1),\dots,L(j) \}~\le~ 1+\max\left\{\frac{\log1}{\log2},\dots,\frac{\log j}{\log2}\right\}\\ &=~1+\frac{\log j}{\log2} ~=~\left(\frac1{\log2}+o(1)\right)\log\log n_j.\end{aligned}$$ We can improve on the estimate in Theorem \[log.log.iterates.theorem\], but not by much. Say we let $N_j$ be the product of all primes $p\le j^{3.29}$ with $p-1\mid n_j$, with $n_j$ as in the above proof. It follows from Friedlander [@F] that a positive proportion of the primes $p\le j^{3.29}$ have the required property. Thus, $N_j>\exp(cj^{3.29})$ for some positive constant $c$ and all sufficiently large values of $j$. But $\lambda(N_j)\mid n_j$, so that $L(N_j)\le 2+\log j/\log 2$. Hence $L(N_j)<.439\log\log N_j$ for $j$ sufficiently large. (This result can be improved by a very small margin using a more recent result of Baker and Harman [@BH], but the argument is a bit more difficult, since they do not get a positive proportion of the primes with the required property.) It is likely that $L(n)\ll\log\log\log n$ infinitely often, possibly even that $L(n) \ll_k \log_k n$ infinitely often for arbitrary $k$-fold-iterated logarithms. One may also study the maximal order of $L(n)$. The analogous problem for the iterated $\phi$-function is relatively trivial, but not so for $\lambda$. If there can exist very long “Sophie Germain chains”, that is, sequences of primes $p_1,p_2,\dots,p_k$ where each $p_i=2p_{i-1}+1$, for $i>1$, then we might have $L(p_k)\sim(1/\log 2)\log p_k$. We might even perturb such a chain by a small amount and keep the asymptotic relation, say by occasionally having $p_i=4p_{i-1}+1$.
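The lcm identities above and the elementary bound $L(n)\le 1+(1/\log 2)\log n$ can both be confirmed numerically with a brute-force $\lambda$ (our sketch, memoized since $L$ re-evaluates $\lambda$ on each iterate):

```python
from math import gcd, log
from functools import lru_cache

@lru_cache(maxsize=None)
def carmichael(n):
    """lambda(n): lcm of the multiplicative orders of the units mod n."""
    lam = 1
    for u in range(1, n):
        if gcd(u, n) == 1:
            o, x = 1, u
            while x != 1:
                x = (x * u) % n
                o += 1
            lam = lam * o // gcd(lam, o)
    return lam

@lru_cache(maxsize=None)
def L(n):
    """Least k with lambda_k(n) = 1."""
    return 0 if n == 1 else 1 + L(carmichael(n))

def lcm(a, b):
    return a * b // gcd(a, b)
```

In a scan of small arguments, `L(lcm(a, b)) == max(L(a), L(b))` and `carmichael(lcm(a, b)) == lcm(carmichael(a), carmichael(b))` hold throughout, as the text derives.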
It seems hard to prove that long enough chains to get the asymptotic for $L(p_k)$ do not exist, but probably they don’t on probabilistic grounds. We can at least say that $L(n)\ge1+(1/\log 3)\log n$ infinitely often, since this inequality is attained when $n$ is a power of 3.

[99]{}

E. Bach, Analytic methods in the analysis and design of number-theoretic algorithms, MIT Press, Cambridge, MA, 1985.

R. Baker and G. Harman, Shifted primes without large prime factors, Acta Arith. [**83**]{} (1998), 331–361.

E. Blanton, S. Hurd, and J. McCranie, On the digraph defined by squaring mod $m$, when $m$ has primitive roots, Cong. Numerantium [**82**]{} (1992), 167–177.

J. J. Brennan and B. Geist, Analysis of iterated modular exponentiation: the orbit of $x^\alpha$ mod $N$, Designs, Codes, and Cryptography [**13**]{} (1998), 229–245.

P. Erdős, A. Granville, C. Pomerance, and C. Spiro, On the normal behavior of the iterates of some arithmetic functions, in Analytic number theory (Allerton Park, IL, 1989), 165–204, Progr. Math., 85, Birkhäuser Boston, Boston, MA, 1990.

P. Erdős and C. Pomerance, On the normal number of prime factors of $\varphi(n)$, Rocky Mountain J. Math. [**15**]{} (1985), 343–352. Corrigendum in [@EGPS].

P. Erdős, C. Pomerance, and E. Schmutz, Carmichael’s lambda function, Acta Arith. [**58**]{} (1991), 363–385.

J. B. Friedlander, Shifted primes without large prime factors, in Number theory and applications (Banff, AB, 1988), 393–401, NATO Adv. Sci. Inst. Ser. C Math. Phys. Sci., 265, Kluwer Acad. Publ., Dordrecht, 1989.

J. B. Friedlander, C. Pomerance, and I. E. Shparlinski, Period of the power generator and small values of Carmichael’s function, Math. Comp. [**70**]{} (2001), 1591–1605. Corrigendum: Math. Comp. [**71**]{} (2002), 1803–1806.

H. Halberstam and H.-E. Richert, Sieve methods, London Mathematical Society Monographs, No. 4, Academic Press, London-New York, 1974.

J. P. Kubilius, Probabilistic methods in the theory of numbers, Translations of Mathematical Monographs, Vol. 11, American Math. Soc., Providence, 1964.

P. Kurlberg and C. Pomerance, On the period of the linear congruential and power generators, to appear.

K. K. Norton, On the number of restricted prime factors of an integer. I, Illinois J. Math. [**20**]{} (1976), 681–705.

C. Pomerance, On the distribution of amicable numbers, J. Reine Angew. Math. [**293/294**]{} (1977), 217–222.

T. D. Rogers, The graph of the square mapping on the prime fields, Discrete Math. [**148**]{} (1996), 317–324.

H. N. Shapiro, Distribution functions of additive arithmetic functions, Proc. Nat. Acad. Sci. USA [**42**]{} (1956), 426–430.

[^1]: G.M. is supported in part by the Natural Sciences and Engineering Research Council of Canada. C.P. is supported in part by the National Science Foundation.
--- abstract: 'Using first-principles based Density Functional Theory (DFT), we have investigated the structural instabilities in the austenite phases of Mn$_{2}$Ni[*X*]{} ([*X*]{}= Al, Ga, In, Sn) magnetic shape memory alloys (MSMA). A complete softening is observed in the acoustic TA$_{2}$ branches for all the materials along \[$\xi\xi$0\] directions, leading to the instability in the austenite structure, which effectively stabilizes into the martensitic structure. The reasons behind this softening are traced back to the repulsion from the optical T$_{2g}$ branches and to the nesting features in the Fermi surfaces. The vibrational density of states, the force constants and the elastic moduli are also computed and analyzed, which reconfirm the underlying mechanism behind the instabilities. The results indicate that the phonon anomalies are related to the occurrence of possible pre-martensitic phases which can be quite complex.' author: - Souvik Paul - Biplab Sanyal - Subhradip Ghosh title: 'First-principles study of the lattice instabilities in Mn$_{2}$Ni[*X*]{} ([*X*]{}= Al, Ga, In, Sn) magnetic shape memory alloys' ---

Introduction
============

Magnetic shape memory alloys (MSMA) are excellent candidates for technological applications due to the coupling between different degrees of freedom, such as caloric, magnetic and elastic, that introduces multifunctionality in these materials. The magneto-structural coupling results in a phase transition between a high temperature austenite structure and low temperature martensitic variants, driven by a magnetic field under ambient conditions. Microscopically, this martensitic transformation is merely a consequence of reshuffling of atomic planes, which is often mediated through different periodically modulated meta-stable structures called premartensitic structures.
Very often, the microscopic origin behind the martensitic phase transformation can be explained by softening of some phonon modes, related to the softness in elastic stiffness constants caused by the nesting topology between parallel Fermi surfaces due to intense electron-phonon coupling. This has been the case for almost all the ternary MSMAs crystallizing in the Heusler structure. The most extensively studied ternary MSMA is Ni$_{2}$MnGa, which in single crystal environments and close to the stoichiometric composition exhibits nearly 10$\%$ magnetic field-induced strains (MFIS) under a magnetic field of less than 1 Tesla[@nimnga1; @nimnga2], making it a strong contender for micro-mechanical sensors and actuators. The structural instability of Ni$_{2}$MnGa in the austenite phase has been linked with an anomalous phonon softening of the transverse acoustic TA$_{2}$ branch along the \[$\xi\xi0$\] direction. The softening occurs at a fractional wave vector $\xi$=(0.33,0.33,0) and becomes more prominent as one approaches the martensitic phase with decreasing temperature[@ps1; @ps2; @ps3; @ps4]. The phonon softening has been found to correlate with the premartensitic phase, which is anticipated by the precursor phonon softening at that wave vector. Inelastic neutron scattering experiments and elastic constant measurements on the high temperature phase corroborated the theoretical calculations of the softening of the acoustic branch. A complete softening of the acoustic TA$_{2}$ branch along the \[$\xi\xi0$\] direction with unstable phonon modes was reported theoretically in Ni$_{2}$MnAl[@nimnaltheo]. Later, Moya [*et al.*]{} verified the Kohn anomaly observed in the theoretical study of the TA$_{2}$ branch in this material by performing inelastic neutron scattering experiments on nearly stoichiometric Ni$_{2}$MnAl [@nimnalexp]. However, the experimental softening is not complete, since the phonon frequencies remain finite even at the lowest temperature.
This could be related to the fact that the composition needed for the martensitic phase transformation to occur in Ni$_{2}$MnAl is slightly off-stoichiometric[@nimnaloffsti]. First-principles calculations found similar phonon softening of the same acoustic branch in Ni$_{2}$MnIn, Ni$_{2}$MnSb, and Ni$_{2}$MnSn[@nimnin; @nimnsn-sb]. Although Ni$_{2}$MnGa near the stoichiometric composition is the first discovered ternary system in the Heusler structure exhibiting the magnetic shape memory effect, and has been studied extensively revealing a lot of interesting physics, its use in practical applications is hindered by a martensitic transformation temperature lower than room temperature and by poor ductility in the poly-crystalline phase [@web; @nimnga3]. Attempts were made to improve the functionality of the material by introducing disorder in various ways and by replacing Ga with Al, Ge, In, Sn and Sb. However, the results were not as fruitful as expected. Therefore, a quest began for new MSMAs with higher operating temperatures and better elastic properties compared to Ni$_{2}$Mn[*X*]{}. Recently, Mn$_{2}$NiGa has been reported to be a MSMA with promising functional properties [@mn2niga; @mn2niga1; @mn2niga2; @mn2niga3]. It has a martensitic transformation temperature close to room temperature (270 K) and a much broader hysteresis loop [@mn2niga]. An excellent two-way shape memory effect with strains of 1.7$\%$ and a field-controllable shape memory effect up to 4$\%$ have been observed experimentally in a single crystalline environment [@mn2niga]. Experiments with a poly-crystalline sample found the martensitic transformation temperature to be 230 K [@mn2niga2]. It was also observed that the structural transformation is dependent upon residual stress.
According to the analysis of their powder X-ray diffraction data, the system undergoes a martensitic transformation to either a non-modulated tetragonal structure or a monoclinic modulated structure at room temperature, depending upon the residual stress. Neutron powder diffraction experiments on this system confirmed the presence of an orthorhombic modulated structure, which is independent of temperature [@mn2niga3]. These results generated interest in this system in the context of understanding its structural stability and its connections to the shape memory effect. The material also draws attention due to its relatively high Curie temperature (T$_{C}\sim$ 588 K) [@mn2niga] compared to Ni$_{2}$MnGa. In this respect, it is notable that Mn$_{2}$NiSn also has a high T$_{C}$ (530 K)[@tcmn2nisn]. Driven by the possibilities of realizing new MSMAs with functionalities better than the Ni$_{2}$Mn[*X*]{} ones, first-principles electronic structure calculations have been done on Mn$_{2}$Ni[*X*]{} ([*X*]{}= Al, Ga, In, Sn) systems [@pauljap; @mn2nigatheo; @mn2nialtheo; @mn2niintheo; @paulmn2nisn]. The results are quite encouraging, as the total energy calculations predicted transformations from the cubic austenite to a non-modulated tetragonal phase at low temperatures conserving the volume, a signature of the shape-memory property. These results thus open up the possibility to further investigate the origin of such transformations and their consequences in these materials. In this paper, we, therefore, make an attempt to understand the physical origin behind the transformations by examining the vibrational properties of these materials in a systematic way. We compute the phonon dispersion, the vibrational density of states, the elastic constants and the Fermi surfaces in order to see whether connections to the martensitic transformations can be made for these materials.
The paper is organized as follows: in section II, we provide details of the computational methods used; in section III, we discuss the phonon dispersion relations, the vibrational densities of states, the inter-atomic force constants, the elastic constants and the Fermi surfaces in order to ascertain the mechanisms driving the martensitic transformations; and finally we summarize our results, indicating their relevance for future research.

Computational Details
=====================

The electronic structures of the systems considered were calculated using the Plane-Wave Pseudopotential (PW-PP) formalism of Density Functional Theory (DFT), as implemented in <span style="font-variant:small-caps;">Quantum Espresso</span>[@qe]. UltraSoft Pseudo Potentials (USPP)[@uspp] were used to accurately calculate the electronic ground states. The spin-polarized Generalized Gradient Approximation (GGA) scheme was used for the exchange-correlation part of the potential, with the Perdew-Wang 91 parameterization (PW91)[@pw91]. Plane waves with energies up to 544 eV were used to describe the electronic wave functions. Fourier components of the augmented charge density with cut-off energies up to 6530 eV were included after convergence tests. The Brillouin zone integrations were carried out with the finite temperature Methfessel-Paxton smearing[@mp] method using a 12$\times$12$\times$12 uniform $\it {k}$-mesh, which effectively leads to 364 $\it {k}$-points in the irreducible wedge of the Brillouin zone. The value of the smearing parameter was taken as 0.27 eV. Such choices of the parameters ensure the convergence of phonon frequencies within 5$\%$. The phonon dispersion relations were computed using Density Functional Perturbation Theory (DFPT)[@dfpt]. The DFPT scheme is employed to accurately calculate the dynamical properties of condensed matter systems with precision on par with the electronic structure calculations. The energy threshold value for convergence was 10$^{-16}$ Ry in the phonon calculations.
Dynamical matrices were conveniently calculated in reciprocal space from the ground state charge density and from its linear response to the distortion in the ionic configurations. Fourier transform was employed thereafter to obtain the real space force constants. The dynamical matrices were calculated on a 4$\times$4$\times$4 $\it {q}$-point grid for all the structures. Convergence of phonon frequencies within $1-2 \%$ was ensured by comparing frequencies calculated directly and frequencies obtained by the Fourier transform of the dynamical matrices. Such convergence tests ensured accuracy in the elastic constants, as they are calculated from the slopes of the phonon dispersion curves. The Fermi surfaces were calculated on a $24\times24\times24$ highly dense uniform $\it {k}$-point grid. It may be noted that the strength of the phonon anomaly is extremely sensitive to temperature. An increase in temperature can reduce the nesting features of Fermi surfaces and thus weaken the anomaly. In DFT based calculations, the smearing parameter $\sigma$ plays the role of a fictitious electronic temperature. Therefore, to reduce the effect of finite temperature in the calculations of the Fermi surfaces, we kept $\sigma$=0.01 eV all along.

Results and Discussions
=======================

Phonon dispersion
-----------------

![(Color online) (a) The fcc L2$_{1}$ usual Heusler structure of Mn$_{2}$Ni[*X*]{} (Ni$_{2}$Mn[*X*]{}) systems. The red, gray and green spheres represent Mn (Ni), Ni (Mn) and [*X*]{} ([*X*]{}) atoms, respectively. (b) The fcc Hg$_{2}$CuTi inverse Heusler structure of Mn$_{2}$Ni[*X*]{} systems.
The red, blue, gray and green spheres represent MnI, MnII, Ni and [*X*]{} atoms, respectively.](fig1.eps){width="8.5cm"} ![image](fig2.eps){width="5in"} Experimental measurements [@mn2niga; @mn2niga1; @mn2niga2; @mn2nisnexp] and theoretical calculations [@pauljap; @mn2nigatheo; @mn2niintheo; @mn2nialtheo] have confirmed that the alloys considered here favor the Hg$_{2}$CuTi structure (Space group $F\bar{4}3m$), also known as the inverse Heusler structure, in the cubic austenite phase, as opposed to the usual Heusler structure of Ni$_{2}$Mn[*X*]{}. The latter structure is best visualized as four interpenetrating f.c.c. sub-lattices at (0,0,0), (0.25,0.25,0.25), (0.50,0.50,0.50) and (0.75,0.75,0.75), where the first and the third positions are occupied by Mn atoms, and the second and the fourth positions by Ni and [*X*]{} atoms, respectively (FIG. 1(a)). Interchanging the tetrahedral Mn atom at (0.50,0.50,0.50) with the octahedral Ni atom at (0.25,0.25,0.25), keeping the remaining atoms fixed at their positions, leads to the inverse Heusler structure (FIG. 1(b)). Hereafter, the Mn atom at the (0,0,0) sub-lattice will be denoted as MnI and the one at (0.25,0.25,0.25) as MnII. Due to the unavailability of experimental results on the lattice constants of Mn$_{2}$NiAl and Mn$_{2}$NiIn, we have calculated the equilibrium lattice constants of all four materials with the GGA exchange-correlation functional and used them here. The total energies as a function of lattice parameter were fitted to the Murnaghan equation of state to accurately calculate the equilibrium lattice constants. Our calculated lattice constant for Mn$_{2}$NiGa is 5.85 Å and for Mn$_{2}$NiSn is 6.15 Å, which agree well with the available experimental results, i.e., 5.90 Å for Mn$_{2}$NiGa [@mn2niga1] and 6.1 Å for Mn$_{2}$NiSn[@mn2nisnexp]. On the other hand, our calculated lattice constant for Mn$_{2}$NiIn (a= 6.16 Å) matches well with the available theoretical result[@mn2niintheo].
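The Murnaghan fit mentioned above amounts to locating the minimum of $E(V)=E_0+\frac{B_0V}{B_0'}\big[\frac{(V_0/V)^{B_0'}}{B_0'-1}+1\big]-\frac{B_0V_0}{B_0'-1}$ as the cell volume $V=a^3$ varies. A minimal stdlib-only sketch of the idea; the parameter values used in the test are illustrative, not the paper's fitted values:

```python
def murnaghan_energy(V, E0, V0, B0, B0p):
    """Murnaghan equation of state; the minimum of E(V) sits at V = V0."""
    return E0 + B0 * V / B0p * ((V0 / V) ** B0p / (B0p - 1) + 1) - B0 * V0 / (B0p - 1)

def equilibrium_lattice_constant(E0, V0, B0, B0p, a_min, a_max, steps=2000):
    """Scan cubic-cell lattice constants a (V = a^3) for the energy minimum."""
    best_a, best_E = None, float("inf")
    for i in range(steps + 1):
        a = a_min + (a_max - a_min) * i / steps
        E = murnaghan_energy(a ** 3, E0, V0, B0, B0p)
        if E < best_E:
            best_a, best_E = a, E
    return best_a
```

With an equilibrium volume chosen as $V_0=5.85^3$ (in Å$^3$, mimicking the Mn$_{2}$NiGa value), scanning $a\in[5.5,6.2]$ Å recovers $a\approx5.85$ Å.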
Since the experimental results for Mn$_{2}$NiGa and Mn$_{2}$NiSn agree well with our calculated results, we consider our lattice constants to be good representations of the experimental ones. The phonon dispersion spectra calculated at those lattice constants along the high-symmetry \[$\xi\xi0$\] direction in the irreducible segment of the Brillouin zone (IBZ) are shown in FIG. 2. The main interest lies in the transverse acoustic TA$_{2}$ branch, which arises from atomic displacements along \[$\xi\bar{\xi}$0\], perpendicular to the propagation direction \[$\xi\xi$0\]. For all Heusler systems exhibiting a martensitic transformation, this branch shows an anomalous behavior. Therefore, our aim is to investigate the behavior of the acoustic TA$_{2}$ branch along the \[$\xi\xi0$\] direction. The most important features in the dispersion curves are the anomalous dips of the acoustic TA$_{2}$ branches where the phonon frequencies become imaginary, suggesting instabilities in the cubic austenite structures which usher in a phase transition to stable martensitic phases in all four materials. In Mn$_{2}$NiGa and Mn$_{2}$NiAl, the acoustic TA$_{2}$ branches have negative slopes at the $\Gamma$ point, indicating a pure elastic instability in their parent structure. The range of this instability extends up to $\xi$=0.50 for Mn$_{2}$NiGa and up to $\xi$=0.35 for Mn$_{2}$NiAl. The maxima of the dips occur at wave vectors $\xi$=0.35 and $\xi$=0.25 for Mn$_{2}$NiGa and Mn$_{2}$NiAl, respectively. For Mn$_{2}$NiIn, the instability of the TA$_{2}$ branch starts from $\xi$=0.3, producing the maximum of the dip at wave vector $\xi$=0.50. For Mn$_{2}$NiSn, unlike the other materials, the softening extends up to the wedge of the Brillouin zone with the maximum of the dip at $\xi$=0.50.
In previous studies of the lattice dynamics of ternary MSMAs with Heusler structures, phonon anomalies of the TA$_{2}$ branch were correlated with the precursor phenomenon prior to the martensitic phase when the systems are cooled from high temperatures. The wave vectors corresponding to the imaginary phonon frequencies indicated shuffling of atomic planes which stabilizes the (c/a)$<$1 phases compared to the parent phase ((c/a)$=$1). The occurrence of 3M, 5M and 7M modulated structures, and even of incommensurate structures, was confirmed experimentally. Possibilities of such modulated structures can be inferred from the anomalies in our calculated dispersion relations for the Mn$_{2}$Ni[*X*]{} systems. A modulated structure with a periodicity of 8 atomic planes (2M structure) can be associated with an instability at $\xi$=0.25, one with a periodicity of 6 atomic planes (3M structure) can be associated with an instability at $\xi$=0.33, and one with a shuffling of 14 atomic planes (7M structure) can be associated with an instability at $\xi$=0.29. For Mn$_{2}$NiAl, the unstable mode occurs from $\xi$=0.0 to $\xi$=0.35 with the maximum of the dip at $\xi$=0.25. This suggests the possibility of the occurrence of several modulated phases. The commensurate wave vector closest to the maximum of the dip in the TA$_{2}$ branch of Mn$_{2}$NiGa occurs at $\xi$=0.33, which can be related to the occurrence of the 3M structure. Since, in Mn$_{2}$NiGa, the imaginary frequencies extend up to $\xi$=0.50, in addition to the aforementioned modulated structures, 5M modulation can also be observed at $\xi$=0.43, which stabilizes with the shuffling of 10 atomic planes. In the cases of Mn$_{2}$NiIn and Mn$_{2}$NiSn, the maximum in the dip of the TA$_{2}$ branch occurs at $\xi$=0.5; although this cannot be connected to the known modulated structures mentioned above, the extent of the instabilities in these systems can be connected to the 3M and 5M modulations.
These suggest the possibility of new kinds of modulations leading to precursor phenomena in these materials, or of more complicated structures with co-existence of multiple modulated phases. Signatures of 7M modulated phases have been observed experimentally [@mn2niga2; @mn2niga3] in Mn$_{2}$NiGa, but their occurrence was dependent either on the amount of stress in the system [@mn2niga3] or on the sublattice occupancies [@mn2niga2]. Thus, no definite conclusion on the kind of modulation in this system and the resulting pre-martensitic structures can be drawn from the available experimental results. Detailed systematic calculations on the non-cubic variants of these systems are to be carried out in order to settle the issue. However, this is beyond the aim and scope of the present study. The energetically lowest optical branch, T$_{2g}$, is Raman active with \[$\xi\bar{\xi}0$\] polarization, and the other optical branches are infrared active with T$_{1u}$ symmetry. It is known that phonon branches with the same symmetry repel each other. Since the acoustic TA$_{2}$ branch has the same state of polarization, it is repelled by the T$_{2g}$ branches. In a previous theoretical study, Zayak [*et al.*]{} [@entel] argued that due to this repulsion the TA$_{2}$ branch is pushed downward and becomes unstable. To prove this, they compared the positions of the T$_{2g}$ branches at the $\Gamma$ point of some Heusler alloys with stable cubic phases, like Co$_{2}$MnGa and Co$_{2}$MnGe, to those of unstable systems like Ni$_{2}$Mn[*X*]{} ([*X*]{} = Ga, Ge, In, Al), and illustrated that the energetically lowered T$_{2g}$ branches in the unstable alloys, compared to those in the alloys with stable cubic phases, produce the necessary repulsive thrust on the lowest vibrational branch. The results in FIG. 2 suggest the same explanation for the phonon instabilities in Mn$_{2}$Ni[*X*]{}.
The repulsion due to the already low-lying T$_{2g}$ modes at the $\Gamma$ point for all four materials pushes the TA$_{2}$ frequencies down, setting up the unstable modes. In reference 30, the authors attributed the occurrence of anomalous unstable modes in Ni$_{2}$MnGa to the inversion of the modes of Ni and Ga. They showed that the contributions to the T$_{2g}$ branches come from the dynamics of the Ni atoms and that, due to the inversion of optical modes, the Ni atoms vibrate at lower frequencies, lowering the frequencies of the T$_{2g}$ mode. The repulsion of the TA$_{2}$ modes by these T$_{2g}$ modes pulls the frequencies of the former down, making them imaginary. For the materials investigated here, an analysis of the vibrational amplitudes shows that the T$_{2g}$ modes are dominated by the vibrations of the Ni and MnI atoms, which occupy crystallographically equivalent sites, in fact the same ones as the two equivalent Ni atoms in Ni$_{2}$MnGa. Therefore, it would be interesting to examine whether such an inversion of optical modes also happens for these materials. In the next subsection, we explore this by looking at the vibrational density of states (VDOS).

Vibrational density of states (VDOS)
------------------------------------

![(Color online) Atom projected vibrational density of states (VDOS) showing contributions from different constituent atoms for (a) Mn$_{2}$NiAl, (b) Mn$_{2}$NiGa, (c) Mn$_{2}$NiIn and (d) Mn$_{2}$NiSn over the frequency range.](fig3.eps){width="8.5cm"}

The atom projected VDOS for Mn$_{2}$NiAl, Mn$_{2}$NiGa, Mn$_{2}$NiIn and Mn$_{2}$NiSn are presented in FIG. 3. It is observed that the vibrational contributions from the two Mn atoms occupy different frequency regions in the VDOS plots.
This occurs mainly for the following reasons: the two Mn atoms have different crystallographic symmetry; the atom occupying the (000) sub-lattice, labeled MnI, has tetrahedral symmetry, while the one at the (0.25,0.25,0.25) sub-lattice, labeled MnII, has octahedral symmetry. As a consequence, their nearest neighbor environments are different, leading to different bond stiffnesses (force constants) for the bonds connected to the Mn atoms. A comparison of all the VDOSs shows that the VDOSs of Mn$_{2}$NiIn and Mn$_{2}$NiSn are quite similar and are very different from those of the other two materials in the series. FIG. 3 suggests that for Mn$_{2}$NiIn and Mn$_{2}$NiSn, vibrations of MnI atoms are prominent between 6 THz and 7 THz, whereas contributions from MnII atoms predominantly lie between 4.5 THz and 6 THz. Due to its slightly larger atomic mass compared to Mn, Ni vibrations occur mostly between 2.5 THz and 4.5 THz. As expected, the lower frequency regions are dominated by In and Sn, because they have larger atomic masses than Ni and Mn. For Mn$_{2}$NiGa, vibrations in the range 7 THz to 8 THz are mainly dominated by the MnI atom, while vibrations from 5.5 THz to 7 THz have contributions from MnII atoms. A strong peak originating from MnI vibrations, coinciding with a peak originating from vibrations of MnII atoms, is also observed at 6 THz. In the frequency range 3 THz to 5 THz, vibrations of Ni atoms are predominant, and the lowermost part of the spectrum is dominated by the vibrations of the Ga atoms. The features in the VDOS of Mn$_{2}$NiAl are different from those of the other three. The modes due to the vibrations of Al atoms occur at around 10 THz due to the extremely light mass of Al. The Ni modes again occur at lower frequencies, similar to the cases of the other three. The vibrations of MnI and MnII atoms dominate the middle of the spectrum, with their respective peaks at 6.25 THz and 7.3 THz.
In the case of Ni$_{2}$MnGa, Zayak [*et al.*]{} [@entel] showed that the positions of the Ga and Ni contributions to the VDOS were “inverted", that is, the vibrations of the lighter Ni atoms were at frequencies lower than those of the heavier Ga atoms. They connected this anomalous mode inversion to the instability of the TA$_{2}$ modes of Ni$_{2}$MnGa. For the systems studied here, the overall features in the VDOSs of all four materials show no signature of inversion of the Ni (MnI) modes with the modes of the element [*X*]{}. Thus the occurrence of unstable TA$_{2}$ modes cannot be associated with this.

Inter-atomic force constants
----------------------------

![(Color online) Longitudinal component of nearest neighbor inter-atomic force constants between MnI, MnII, Ni and X atoms of Mn$_{2}$Ni[*X*]{} materials.](fig4.eps){width="8.5cm"}

In order to understand the features in the VDOS, we analyze the behavior of the real-space inter-atomic force constants. In FIG. 4, we plot the longitudinal component of the nearest neighbor force constants of the Mn$_{2}$Ni[*X*]{} systems. The transverse components of the force constants are not shown in the plot, since their contributions are negligible compared to the longitudinal ones. The force constants between any pair of nearest neighbor atoms are nearly equal for Mn$_{2}$NiAl and Mn$_{2}$NiGa. The same is true for Mn$_{2}$NiIn and Mn$_{2}$NiSn. However, substantial changes in the force constants between any pair are observed as one moves from Mn$_{2}$NiGa to Mn$_{2}$NiIn. Due to the increase in the inter-atomic distances, as a result of the expansion of their equilibrium lattice constants from 5.850 Å to 6.162 Å, the MnI-MnII and MnII-Ni longitudinal force constants become softer in Mn$_{2}$NiIn and Mn$_{2}$NiSn in comparison to Mn$_{2}$NiGa.
On the other hand, the force constants related to the [*X*]{} elements, i.e., MnI-[*X*]{} and Ni-[*X*]{}, become harder in Mn$_{2}$NiIn and Mn$_{2}$NiSn as compared to Mn$_{2}$NiGa and Mn$_{2}$NiAl. This opposite behavior is observed because the [*X*]{} elements in the former two alloys are larger than those in the latter two, and are thus able to overcome the expansion of the inter-atomic distances occurring in the former two as compared to the latter two. The nearest neighbor force constants associated with the MnII atom, MnII-Ni and MnII-MnI, become softer as one moves from Mn$_{2}$NiGa to Mn$_{2}$NiIn and Mn$_{2}$NiSn. Therefore, the vibration frequencies corresponding to MnII atoms are lower in the latter two materials, which agrees with the features in the VDOS: in Mn$_{2}$NiGa, vibrations of MnII extend from 5.5 THz to 7 THz, which in the cases of Mn$_{2}$NiIn and Mn$_{2}$NiSn shift to lower frequencies, around 5.5 THz. The dynamical behavior of the MnI and Ni atoms is more complicated. For both atoms, two sets of inter-atomic force constants behave opposite to one another. For Ni, the Ni-[*X*]{} nearest neighbor force constants harden as one goes from Ga to In and Sn. This should force the Ni atoms to vibrate at higher frequencies as one goes from Mn$_{2}$NiGa to Mn$_{2}$NiIn and Mn$_{2}$NiSn. However, the vibrations of the Ni atom remain more or less around the same frequency for all the materials, since the previous effect is compensated by the increasing softening of the MnII-Ni bonds as one goes from Mn$_{2}$NiGa to Mn$_{2}$NiIn and Mn$_{2}$NiSn. Similarly, the hardening of the MnI-[*X*]{} force constant does not affect the MnI vibrations, as it is compensated by the softening of the MnI-MnII inter-atomic force constants.

Fermi surfaces
--------------

![(Color online) (a) Topology of 3D Fermi surfaces for Mn$_{2}$NiGa. The blue and magenta surfaces represent the 18$^{th}$ and 19$^{th}$ spin minority bands, respectively.
(b) and (c) illustrate those spin minority 18$^{th}$ and 19$^{th}$ bands separately.](fig5.eps){width="8cm"} ![(Color online) Illustration of the (110) cross section (k$_{x}$+k$_{y}$=1) in the fcc irreducible Brillouin zone (IBZ).](fig6.eps){width="8.5cm"} ![image](fig7.eps){width="16cm"} Previous first-principles studies of Ni$_{2}$Mn[*X*]{} relate the martensitic instability of those materials to Fermi surface nesting [@entel; @fs; @fs1; @fs2; @fs3]. The anomalies in the phonon branch mainly depend on the shape of the Fermi surfaces and on the electron-phonon matrix elements via the phonon wave vector $\xi$ [@fs1; @fs2]. This phenomenon occurs due to a strong attraction between two flat, parallel Fermi surface sheets connected by a nesting vector ***[q]{}***, at the expense of atomic displacements, at the wave vector where the maximum dip of the acoustic phonon branch is observed. However, this cannot be generalized to all ternary alloys showing martensitic instabilities. For Co$_{2}$NiGa, a newly found shape memory alloy, Siewert [*et al.*]{} [@co2niga] observed that softening of the TA$_{2}$ phonon branch was absent, as a result of the absence of nesting features in the Fermi surfaces of Co$_{2}$NiGa. Here, we present Fermi surfaces corresponding to the spin-minority bands only, since the most prominent features are observed in this spin channel as the systems undergo martensitic transitions [@pauljap]. The three-dimensional Fermi surfaces of Mn$_{2}$NiGa for the 18$^{th}$ and 19$^{th}$ spin-minority bands are shown in FIG. 5. The figure clearly exhibits flat portions of both minority bands. However, to examine the Fermi surfaces in detail, to obtain clues about the nesting between different parallel Fermi surface sheets and hence to relate this feature to the observed phonon anomaly, two-dimensional (2D) projections are necessary. In FIG.
7 we show the two-dimensional cross-sections of the Fermi surfaces with the (110) plane for the four systems (the relevant portion of the irreducible Brillouin zone is shown in FIG. 6). The cross-sections for Mn$_{2}$NiGa, Mn$_{2}$NiAl and Mn$_{2}$NiIn bear close resemblance, while that of Mn$_{2}$NiSn is somewhat different. In spite of this difference, the nesting vectors (indicated by red arrows in the Fermi surface plots) are consistent with the wave vectors at which the phonon anomalies are observed in our phonon dispersion curves. Thus, we can conclusively associate the occurrence of unstable modes in the Mn$_{2}$Ni[*X*]{} alloys with Fermi surface nesting. We refrain from further discussion of the differences in the shapes of the Fermi surfaces between materials with the element $X$ belonging to different columns of the periodic table, because it is not necessary for the present discussion, whose focus is on establishing the nesting features in the Fermi surfaces and their relation to the martensitic instabilities found in these systems. In reference 36, Barman [*et al.*]{} also computed the Fermi surfaces of Mn$_{2}$NiGa. Surprisingly, they observed Fermi surface nesting in the austenite phase along the (100) and (010) directions only, and not along the (110) direction as we did. The $q$ value of one of the nesting vectors found by them (0.31) is, however, quite close to ours ($q$=0.35). The nesting along the (110) direction was observed by them in the martensitic phase, with the $q$ value 0.75. Though they attributed this to possible instabilities in the TA$_{2}$ phonon mode, it was not substantiated by computations of the phonon spectra. Our results are qualitatively different from theirs, as we find nesting along the (110) direction in the austenite phase of Mn$_{2}$NiGa.
Moreover, our results are consistent, as the Fermi surface nesting along (110) can be related to the computed instabilities in the TA$_{2}$ phonon mode along (110), with the nesting vector computed from the Fermi surfaces agreeing with the wave vector at which the maximum of the instability occurs.

Elastic constants
-----------------

  -------------- -------------- --------------- --------------- --------------- ------------------------
  Systems        c$^{\prime}$   c$_{11}$        c$_{12}$        c$_{44}$        A
                 (GPa)          (GPa)           (GPa)           (GPa)           (=c$_{44}/c^{\prime}$)
  Mn$_{2}$NiAl   -33.13         100.35          127.19          131.66          -3.97
  Mn$_{2}$NiGa   -13.42         58.91           125.17          111.00          -8.27
                                (90.55)[@ec]    (128.00)[@ec]   (124.42)[@ec]
  Mn$_{2}$NiIn   16.44          118.64          85.76           41.47           2.52
  Mn$_{2}$NiSn   15.43          146.05          115.19          64.27           4.17
  -------------- -------------- --------------- --------------- --------------- ------------------------

  : Calculated elastic constants and elastic anisotropy ratio for Mn$_{2}$Ni[*X*]{} materials. Experimental elastic constants are available only for Mn$_{2}$NiGa and are shown in brackets.

The dynamical stability of a crystalline phase requires that the strain energy change be positive against all possible small deformations. This condition imposes restrictions on the elastic constants. The stability criteria for cubic crystals require [@elconst] $$\begin{aligned} c_{44}> 0, \quad c_{11}> |c_{12}|, \quad c_{11}+2c_{12}> 0\end{aligned}$$ Therefore, to investigate the kinds of instabilities present in the materials considered here, and to validate our calculated phonon dispersion results, we compute the elastic constants of all four materials from the initial slopes ($\xi$$\rightarrow$ 0) of the phonon dispersion plots along the \[$\xi\xi0$\] direction. The elastic constants c$_{44}$, c$^\prime$ (=$\frac{1}{2}(c_{11}-c_{12}$)) and c$_{L}$ (=$\frac{1}{2}(c_{11}+c_{12}+2c_{44}$)) are related to the TA$_{1}$, TA$_{2}$ and LA acoustic modes, respectively [@elconst].
These elastic constants are connected to the ultrasound velocity via the relation c$_{ij}$=$\rho\upsilon^{2}$ [@elconst], where $\rho$ is the mass density. The three independent elastic constants of the cubic crystals are tabulated in TABLE I. Our computed c$_{12}$ and c$_{44}$ agree quite well with the experimental results, available only for Mn$_{2}$NiGa, whereas our calculated c$_{11}$ is underestimated [@ec]. Overall, the agreement with experiment is good for Mn$_{2}$NiGa. This, in effect, is an indirect indication of the accuracy of the calculated phonon spectra. The results show that Equation (1) is satisfied by Mn$_{2}$NiIn and Mn$_{2}$NiSn only. This indicates that Mn$_{2}$NiAl and Mn$_{2}$NiGa are unstable in the cubic structure. We gain further insight into the nature of the stabilities of these materials by looking at the other two parameters listed in TABLE I, the shear constant and the elastic anisotropy ratio. Since the acoustic TA$_{2}$ branch is related to the shear constant (c$^{\prime}$), the negative c$^{\prime}$ for Mn$_{2}$NiAl and Mn$_{2}$NiGa is an indication of a pure elastic instability, which is stabilized through shear deformation across the ($\xi\xi0$) planes in the \[$\xi\bar{\xi}$0\] direction. The same is not true for the other two materials: although they satisfy Equation (1) and have sizable c$^{\prime}$, their anisotropy ratios A are high enough to bring about a martensitic transformation [@niti]. The elastic anisotropy ratio A (=c$_{44}/c^{\prime}$) is an important quantity for measuring the stability of cubic structures under stress across the ($\xi\xi0$) planes [@zener]. The larger its value, the more unstable the structure becomes. For systems undergoing martensitic transformations, the value of A varies from 2 onward [@niti; @cunial; @shapiro1; @shapiro2; @nimnsn-sb; @acet]. In the cases of Mn$_{2}$NiIn and Mn$_{2}$NiSn, the values of A lie well within the limits observed in shape memory alloys.
The origin of this could be the rather small value of the shear modulus c$^{\prime}$. Additionally, we find that c$_{44}$ in the cases of Mn$_{2}$NiIn and Mn$_{2}$NiSn is much softer than for the other two materials. The comparative softening of c$_{44}$ for Mn$_{2}$NiIn and Mn$_{2}$NiSn, as compared to Mn$_{2}$NiGa and Mn$_{2}$NiAl, indicates that cubic Mn$_{2}$NiIn and Mn$_{2}$NiSn will transform to martensitic phases different from those of the other two, where the transformations are driven by the softening of c$^{\prime}$, as has been observed in other shape memory alloys [@niti]. The results on the elastic constants therefore corroborate the inferences drawn from the differences in the dispersion relations of the materials studied. The vibrational and elastic properties discussed in this work show a clear trend: Mn$_{2}$NiGa and Mn$_{2}$NiAl are quite similar in their behavior, and the same goes for Mn$_{2}$NiIn and Mn$_{2}$NiSn, while these properties differ significantly between the two groups. The origin of these differences can be traced back to differences in their electronic structures [@pauljap]. The signatures of mechanical instability were reflected in the electronic structures of Mn$_{2}$NiGa and Mn$_{2}$NiAl, where higher densities of states at the Fermi level, as compared to Mn$_{2}$NiIn and Mn$_{2}$NiSn, were found. Their origin lay in the larger hybridization between the Mn and Ni atoms at the octahedral positions in the former two systems. For the latter two systems, the rather small densities of states at the Fermi level, due to the smaller hybridization between the magnetic atoms at the octahedral positions, originating from the larger distances between those atoms (the lattices are larger than in the former two systems, since In and Sn are larger than Ga and Al), signified that external influences would be needed to induce instabilities in these systems.
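The derived quantities discussed above can be cross-checked directly from the three independent constants of TABLE I. The sketch below is our own illustration (not code from this work): it recomputes c$^{\prime}$ and A for the calculated Mn$_{2}$NiIn values and evaluates the cubic Born stability criteria of Equation (1).

```python
# Sketch: recompute the derived columns of TABLE I and test the cubic
# stability criteria c44 > 0, c11 > |c12|, c11 + 2*c12 > 0.

def shear_constant(c11, c12):
    """c' = (c11 - c12) / 2, tied to the TA2 acoustic branch."""
    return 0.5 * (c11 - c12)

def anisotropy_ratio(c11, c12, c44):
    """Zener anisotropy ratio A = c44 / c'."""
    return c44 / shear_constant(c11, c12)

def is_cubic_stable(c11, c12, c44):
    """Born stability criteria for a cubic crystal (Equation (1))."""
    return c44 > 0 and c11 > abs(c12) and c11 + 2 * c12 > 0

# Calculated Mn2NiIn elastic constants from TABLE I (GPa).
c11, c12, c44 = 118.64, 85.76, 41.47
cp = shear_constant(c11, c12)              # 16.44 GPa, as in TABLE I
A = anisotropy_ratio(c11, c12, c44)        # about 2.52, as in TABLE I
stable = is_cubic_stable(c11, c12, c44)    # True: criteria satisfied
```

With the Mn$_{2}$NiAl values instead, the check fails on c$_{11}$ > |c$_{12}$|, consistent with the instability of its cubic phase discussed in the text.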
Summary and Conclusions
=======================

We have investigated the lattice dynamics of Mn$_{2}$Ni[*X*]{} ([*X*]{} = Al, Ga, In, Sn) MSMAs in their austenite phase using first-principles density functional theory calculations. The calculated phonon spectra show anomalous behavior of the acoustic TA$_{2}$ branch along the \[$\xi\xi0$\] direction for all four materials, indicating structural instability. The instabilities in this acoustic mode can be related to repulsion by the optical T$_{2g}$ mode, which has the same symmetry as the TA$_{2}$ mode. Unlike in Ni$_{2}$MnGa, no inversion of optical modes could be observed, thus ruling this out as one of the possible mechanisms behind the anomalous features in the phonon spectra. The features in the vibrational densities of states can be explained by the qualitative variations of the interatomic force constants across the materials. The calculated elastic constants corroborate the structural instabilities inferred from the phonon dispersion relations. Negative shear constants for Mn$_{2}$NiAl and Mn$_{2}$NiGa indicate pure elastic instabilities in these materials. Finally, the nesting features in the Fermi surfaces confirm that the observed phonon anomalies are associated with them. The wave vectors at which the maximum anomaly occurs indicate the possibility of the formation of pre-martensitic modulated phases, which are yet to be confirmed by experiments. The results also indicate that these modulated pre-martensitic phases could be quite complex, and further investigations into this aspect are necessary.

Acknowledgments
===============

Financial assistance from the Swedish Research Links (VR-SIDA) is acknowledged. The Swedish National Computing facilities, computation facilities from C-DAC, Pune, India, and from the Department of Physics, IIT Guwahati, funded under the FIST programme of DST, India, are also acknowledged. SG and SP would like to acknowledge Dr. Munima B.
Sahariah, IASST, Guwahati, India for the help in plotting the Fermi surfaces. [99]{} K. Ullakko, J. K. Huang, C. Kanter, R. C. O’Handley and V. V. Kokorin, [*Appl. Phys. Lett.*]{} [**69**]{}, 1966 (1996). A. Sozinov, A. A. Likhachev, N. Lanska and K. Ullakko, [*Appl. Phys. Lett.*]{} [**80**]{}, 1746 (2002). A. Zheludev, S. M. Shapiro, P. Wochner, A. Schwartz, M. Wall and L. E. Tanner, [*Phys. Rev. B*]{} [**51**]{}, 11310 (1995). A. Zheludev, S. M. Shapiro, P. Wochner and L. E. Tanner, [*Phys. Rev. B*]{} [**54**]{}, 15045 (1996). L. Mañosa, A. Planes, J. Zarestky, T. Lograsso, D. L. Schlagel and C. Stassis, [*Phys. Rev. B*]{} [**64**]{}, 024305 (2001). A. T. Zayak, P. Entel, J. Enkovaara, A. Ayuela and R. M. Nieminen, [*Phys. Rev. B*]{} [**68**]{}, 132402 (2003). A. T. Zayak and P. Entel, [*J. Magn. Magn. Mater.*]{} [**290-291**]{}, 874 (2005). X. Moya, L. Mañosa, A. Planes, T. Krenke, M. Acet, V. O. Garlea, T. A. Lograsso, D. L. Schlagel and J. L. Zarestky, [*Phys. Rev. B*]{} [**73**]{}, 064303 (2006). T. Büsgen, J. Feydt, R. Hassdorf, S. Thienhaus, M. Moske, M. Boese, A. Zayak and P. Entel, [*Phys. Rev. B*]{} [**70**]{}, 014111 (2004). S. Ağduk and G. Gökoğlu, [*Eur. Phys. J. B*]{} [**79**]{}, 509 (2011). S. Ağduk and G. Gökoğlu, [*J. Alloys Compd.*]{} [**511**]{}, 9 (2012). P. J. Webster, K. R. A. Ziebeck, S. L. Town and M. S. Peak, [*Philos. Mag. B*]{} [**49**]{}, 295 (1984). A. Zheludev, S. M. Shapiro and P. Wochner, [*Phys. Rev. B*]{} [**54**]{}, 15045 (1996). G. D. Liu, J. L. Chen, Z. H. Liu, X. F. Dai, G. H. Wu, B. Zhang and X. X. Zhang, [*Appl. Phys. Lett.*]{} [**87**]{}, 262504 (2005). G. D. Liu, X. F. Dai, S. Y. Yu, Z. Y. Zhu, J. L. Chen, G. H. Wu, B. Zhang and X. X. Zhang, [*Phys. Rev. B*]{} [**74**]{}, 054435 (2006). S. Singh, M. Maniraj, S. W. D’Souza, R. Ranjan and S. R. Barman, [*Appl. Phys. Lett.*]{} [**96**]{}, 081904 (2010). P. J. Brown, T. Kanomata, K. Neumann, K. -U. Neumann, B. Ouladiaff, A. Sheikh and K. R. A. Ziebeck, [*J. Phys.: Condens.
Matter*]{} [**22**]{}, 506001 (2010). N. Lakshmi, K. Pandey and N. Venugopalan, [*Bull. Mater. Sci.*]{} [**25**]{}, 309 (2002). S. Paul and S. Ghosh, [*J. Appl. Phys.*]{} [**110**]{}, 063523 (2011). S. R. Barman and A. Chakrabarti, [*Phys. Rev. B*]{} [**77**]{}, 176401 (2008). A. Chakrabarti and S. R. Barman, [*Appl. Phys. Lett.*]{} [**94**]{}, 161908 (2009). H. Luo, G. Liu, F. Meng, S. Li, W. Zhu, G. Wu, X. Zhu and C. Jiang, [*Physica B*]{} [**405**]{}, 3092 (2010). S. Paul, B. Sanyal and S. Ghosh, [*J. Phys.: Condens. Matter*]{} [**25**]{}, 236005 (2013). P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari and R. M. Wentzcovitch, [*J. Phys.: Condens. Matter*]{} [**21**]{}, 395502 (2009). D. Vanderbilt, [*Phys. Rev. B*]{} [**41**]{}, 7892 (1990). J. P. Perdew, in [*Electronic Structure of Solids*]{}, edited by P. Ziesche and H. Eschrig (Akademie Verlag, Berlin, 1991), p. 11. M. Methfessel and A. T. Paxton, [*Phys. Rev. B*]{} [**40**]{}, 3616 (1989). S. Baroni, S. de Gironcoli, A. Dal Corso and P. Giannozzi, [*Rev. Mod. Phys.*]{} [**73**]{}, 515 (2001). R. B. Helmholdt and K. H. J. Buschow, [*J. Less-Common Met.*]{} [**128**]{}, 167 (1987). A. T. Zayak, P. Entel, K. M. Rabe, W. A. Adeagbo and M. Acet, [*Phys. Rev. B*]{} [**72**]{}, 054113 (2005). Y. Lee, J. Y. Rhee and B. N. Harmon, [*Phys. Rev. B*]{} [**66**]{}, 054424 (2002). C. Bungaro, K. M. Rabe and A. Dal Corso, [*Phys. Rev. B*]{} [**68**]{}, 134104 (2003). O. I. Velikokhatnyi and I. I. Naumov, [*Phys. Solid State*]{} [**41**]{}, 617 (1999). P. Entel, V. D. Buchelnikov, M. E. Gruner, A. Hucht, V. V. Khovailo, S. K. Nayak and A. T. Zayak, [*Mater.
Sci. Forum*]{} [**583**]{}, 21 (2008). M. Siewert, M. E. Gruner, A. Dannenberg, A. Hucht, S. M. Shapiro, G. Xu, D. L. Schlagel, T. A. Lograsso and P. Entel, [*Phys. Rev. B*]{} [**82**]{}, 064420 (2010). S. R. Barman, S. Banik, A. K. Shukla, C. Kamal and A. Chakrabarti, [*Europhys. Lett.*]{} [**80**]{}, 57002 (2007). Z. Jian-Tao, Z. Kun, W. Jia-Jia, Y. Xin-Quan, Y. Jin and W. San-Xie, [*Acta Phys. Sin.*]{} [**61**]{}, 213102 (2012). M. Born and K. Huang, in [*Dynamical Theory of Crystal Lattices*]{} (Clarendon, Oxford, 1956). K. Otsuka and X. Ren, [*Prog. Mater. Sci.*]{} [**50**]{}, 511 (2005). C. M. Zener, [*Phys. Rev.*]{} [**71**]{}, 846 (1947). T. $\check{C}$ernoch, M. Landa, P. Nov$\acute{a}$k, P. Sedl$\acute{a}$k and P. $\check{S}$ittner, [*J. Alloys Compd.*]{} [**378**]{}, 140 (2004). S. M. Shapiro, G. Xu, G. Gu, J. Gardner and R. W. Fonda, [*Phys. Rev. B*]{} [**73**]{}, 214114 (2006). S. M. Shapiro, G. Xu, B. L. Winn, D. L. Schlagel, T. Lograsso and R. Erwin, [*Phys. Rev. B*]{} [**76**]{}, 054305 (2007). A. Planes and L. Ma$\tilde{n}$osa, [*Solid State Phys.*]{} [**55**]{}, 159 (2001).
---
abstract: 'In [@or] the authors investigate the so-called origami rings. Taking this paper as a starting point we find some further properties of origami rings.'
author:
- 'Dmitri Nedrenco[^1]'
title: On origami rings
---

We work in the complex plane ${\mathbb C}$ and identify it occasionally with ${\mathbb R}^2$. Let $U$ be a set of “directions", which are determined by complex numbers $e^{i\alpha}$ for angles $\alpha\in[0,2\pi)$. Two directions $e^{i\alpha}$ and $e^{i\beta}$ are equal iff $\alpha = \beta \!\mod\pi$. Let $L_u(p)$ denote the line with direction $u$ through the point $p$, i.e. $L_u(p)=p+{\mathbb R}u$. Also, let $I_{u,v}(p,q)$ denote the intersection point of the two lines $L_u(p)$ and $L_v(q)$ for two different directions $u,v$. We set $M_0:=\{0,1\}$ and define $M_j$ as the set of all intersection points $I_{u,v}(p,q)$ for $u\neq v$ and $p\neq q$, where $u,v$ take on all values from $U$ and $p,q$ take on all values in $M_{j-1}$. Finally, we define $R(U):=\bigcup_{j\geq 0} M_j$. In [@or] the authors investigate the set $R(U)$ and prove that $R(U)$ is a ring for every multiplicative semigroup $U$. We try to answer a part of a question asked in the cited work: is $R(U)$ a ring even if $U$ is not a semigroup?

First we collect some properties of the points $I_{u,v}(p,q)$, which are all proved in [@or]. Let $u=e^{i\alpha},v=e^{i\beta}\in U$ be two different directions and $p,q$ two different elements of $R(U)$. Moreover, let $s_{x,y}:= x\overline{y}-\overline{x}y$, where $\bar{\cdot}$ denotes complex conjugation. The following statements then hold:

1.  \[eigI1\] $I_{u,v}(p,q) = \frac{s_{u,p}}{s_{u,v}}v + \frac{s_{v,q}}{s_{v,u}}u$.

2.  \[eigI2\] $I_{u,v}(p,q)=I_{v,u}(q,p)$.

3.  \[eigI3\] $I_{u,v}(p,q)= I_{u,v}(p,0)+I_{u,v}(0,q)$.

4.  \[eigI4\] $I_{u,v}(p+q,0) = I_{u,v}(p,0)+I_{u,v}(q,0)$ and $I_{u,v}(rp,0)=rI_{u,v}(p,0)$ for all $r\in{\mathbb R}$.

5.
\[eigI5\] $I_{u,v}(0,1) = \frac{s_{v,1}}{s_{v,u}}u = \frac{\operatorname{Im}(v)}{\operatorname{Im}(v\overline{u})}u = \frac{\operatorname{sin}\beta}{\operatorname{sin}(\beta-\alpha)}e^{i\alpha}= \frac{1-v^2}{1-(\frac{v}{u})^2}$.

Also, $R(U)$ is an additive group.$\square$

In [@or] the authors pointed out that for some sets $U$ the set $R(U)$ is a ring even if $U$ is not a semigroup, for instance $R(0^\circ, 45^\circ,90^\circ)={\mathbb Z}[i]$. There are also other obvious examples: $$R(0^\circ, 30^\circ,60^\circ)={\mathbb Z}[e^{\frac{2\pi i}{3}}]=R(0^\circ, 60^\circ,120^\circ) \quad \text{and} \quad R(0^\circ, 45^\circ,135^\circ)={\mathbb Z}[i].$$

### Special case $U=\{1,u,v\}$ {#special-case-u1uv .unnumbered}

One could conjecture that $R(U)$ is always a ring. However, we show that this is not the case by considering the ring structure of $R(U)$ for sets $U$ with three directions.

\[ruz\] Let $U=\{1,u,v\}$ with $u=e^{i\alpha}$ and $v=e^{i\beta}$ be given, where $0\neq \alpha\neq \beta \neq 0 \mod{\pi}$ holds. Moreover, let $$z:=I_{u,v}(0,1) = \frac{s_{v,1}}{s_{v,u}}u.$$ Then we have $R(U) = {\mathbb Z}+z{\mathbb Z}$.

Since $1$ and $z$ belong to the additive group $R(U)$, we have ${\mathbb Z}+z{\mathbb Z}{\subseteq}R(U)$. We prove the other inclusion by showing that $M_j{\subseteq}{\mathbb Z}+z{\mathbb Z}$ via induction on $j$. This holds for $j=0$. Let $s\in M_j$; then by the induction hypothesis there exist $a,b\in{\mathbb Z}$ satisfying $s=a+bz$.
Due to Theorem \[eigI\]\[eigI2\] and \[eigI\]\[eigI3\] it suffices to show that $I_{x,y}(s,0)\in{\mathbb Z}+z{\mathbb Z}$ for $\{x,y\}{\subseteq}U$. We have to show that the following six points $$I_{u,v}(s,0),\; I_{v,u}(s,0),\; I_{u,1}(s,0),\; I_{v,1}(s,0),\; I_{1,u}(s,0),\; I_{1,v}(s,0)$$ belong to ${\mathbb Z}+z{\mathbb Z}$. For this purpose we calculate $$\begin{aligned} \tag{$\star$} z&=\tfrac{s_{v,1}}{s_{v,u}}u, \text{ where } s_{v,u}\neq 0 \text{ as } \alpha\neq \beta \!\!\!\mod{\pi},\; \tfrac{s_{v,1}}{s_{v,u}}\in{\mathbb R},\text{ and}\\ \tag{$\star\star$} s_{u,z} &= u\overline{z}-\overline{u}z = u\overline{u}\big(\tfrac{s_{v,1}}{s_{v,u}}-\tfrac{s_{v,1}}{s_{v,u}}\big)=0.\end{aligned}$$ It is easy to *see* how the points we are looking for are constructed. We provide an analytic proof using Theorem \[eigI\], although it is very helpful to draw a picture first in order to get a better understanding of the calculations. $$\begin{aligned} \bullet\; I_{u,v}(s,0) = &~ I_{u,v}(a+bz,0) = aI_{u,v}(1,0)+bI_{u,v}(z,0) = a(1-z)\in{\mathbb Z}+z{\mathbb Z},\\ & \text{ since } I_{u,v}(1,0) = 1-z \text{ and } I_{u,v}(z,0) = \frac{s_{u,z}}{s_{u,v}}v = 0 \text{ because of } (\star\star).
\\[3mm] \bullet\; I_{v,u}(s,0) = &~ aI_{v,u}(1,0)+bI_{v,u}(z,0) = (a+b)z\in{\mathbb Z}+z{\mathbb Z},\\ &\text{ since } I_{v,u}(1,0)=z \text{ and } I_{v,u}(z,0) = I_{u,v}(0,z)=\frac{s_{v,z}}{s_{v,u}}u = z.\\[3mm] \bullet\; I_{u,1}(s,0) = &~ aI_{u,1}(1,0)+bI_{u,1}(z,0) = a,\\ &\text{ since } I_{u,1}(1,0) = \frac{s_{u,1}}{s_{u,1}}\cdot 1 = 1 \text{ and } I_{u,1}(z,0) = \frac{s_{u,z}}{s_{u,1}}\cdot 1 = 0 \text{ because of } (\star\star).\\[3mm] \bullet\; I_{v,1}(s,0) = &~ aI_{v,1}(1,0)+bI_{v,1}(z,0) = a+b,\\ &\text{ since } I_{v,1}(z,0)= \frac{s_{v,z}}{s_{v,1}}\cdot 1 = \frac{v\overline{z}-\overline{v}z}{s_{v,1}}\overset{(\star)}{=} \frac{(v\overline{u}-\overline{v}u)\cdot\ \frac{s_{v,1}}{s_{v,u}}}{s_{v,1}} = 1.\\[3mm] \bullet\; I_{1,u}(s,0) = &~ aI_{1,u}(1,0)+bI_{1,u}(z,0) = bz \text{ because of } I_{1,u}(1,0)= \frac{s_{1,1}}{s_{1,u}}u = 0\text{ and }\\& I_{1,u}(z,0)=\frac{s_{1,z}}{s_{1,u}}u = \frac{\overline{z}-z}{\overline{u}-u}u = \frac{\overline{u}-u}{\overline{u}-u}\cdot\frac{s_{v,1}}{s_{v,u}}u \overset{(\star)}{=} z. \\[3mm] \bullet\; I_{1,v}(s,0) = &~ aI_{1,v}(1,0)+bI_{1,v}(z,0) = b(z-1), \text{ since } I_{1,v}(1,0) = 0 \text{ and }\\ &I_{1,v}(z,0) = \frac{s_{1,z}}{s_{1,v}}v = \frac{\overline{z}-z}{\overline{v}-v}v = \frac{\overline{u}-u}{\overline{v}-v}\cdot \frac{s_{v,1}}{s_{v,u}}v = \frac{\overline{u}v-uv}{\overline{v}-v}\cdot \frac{s_{v,1}}{s_{v,u}} = \frac{uv-\overline{u}v}{v\overline{u}-\overline{v}u}\\ & \text{but on the other hand }z-1 = \frac{s_{v,1}}{s_{v,u}}u - 1 = \frac{(v-\overline{v})u - v\overline{u}+\overline{v}u}{s_{v,u}}= \frac{uv-\overline{u}v}{v\overline{u}-\overline{v}u},\\ & \text{ so } I_{1,v}(z,0)=z-1.\end{aligned}$$ Thus, we dealt with all the cases, so the proof is complete. ### General case for $|U|=3$ {#general-case-for-u3 .unnumbered} In Theorem \[ruz\] we assume that one of the directions in $U$ is determined by the angle of $0^\circ$. 
In fact this is no loss of generality: Suppose $U=\{x,u,v\}$ is a set with three different directions and $M_0=\{0,1\}$; then we define $1'\!\!:=I_{x,v}(0,1)$. Because $GL_2{\mathbb R}$ acts transitively on ${\mathbb R}^2\backslash\{0\}$, we can transform $M_0$ by a linear transformation to $\{0,1'\}$. Now we can in fact assume that one direction equals $1$. This means that, up to a linear transformation, we have proved that $R(U)$ is of the form ${\mathbb Z}+z{\mathbb Z}$. ### Ring structure of $R(1,u,v)$ {#ring-structure-of-r1uv .unnumbered} Next, we want to clarify for which directions $u=e^{i\alpha},v=e^{i\beta}$, different from $\pm 1$ and with $\beta \not\equiv \alpha \!\!\mod{\pi}$, the set $R(1,u,v)={\mathbb Z}+z{\mathbb Z}$ is a ring. Since $$(a+bz)(c+dz) = ac + (ad+bc)z+bdz^2$$ we see that ${\mathbb Z}+z{\mathbb Z}$ is closed under multiplication iff $z^2$ lies in it. This is equivalent to saying that the coefficients of the quadratic minimal polynomial of $z$ over ${\mathbb R}$ are integers, i.e. that $z$ is an algebraic integer of degree $2$ (by the construction of $z$ it is not real). So for instance if $\alpha = \operatorname{arctan}\sqrt{7}$ and $\beta = \pi-\operatorname{arctan}\sqrt{7}$, then the set $R(1,e^{i\alpha}, e^{i\beta})$ is a ring, since $z^2=z-2$. \[bem1i\] Let $\alpha = \pm \frac{\pi}{2}$, i.e. $u=\pm i$. Since $u=i$ and $u=-i$ represent the same direction, we may assume without loss of generality that $u=i$. The two directions $1$ and $i$ are perpendicular, and[^2] we let $z$ be $I_{i,v}(0,1)=ri$ for some $r\in{\mathbb R}$. More precisely we have $r= \operatorname{tan}(\pi - \beta) = -\operatorname{tan}\beta\neq 0$ since $1,i,v$ are different directions. Hence the question whether $R(1,u,v)$ is a ring is reduced to the question whether or not $z^2=-(\operatorname{tan}\beta)^2$ is an integer.
From this we infer that $R(U)$ is a ring for $U=\{1,i,v\}$ iff $\tan^2\beta$ is a positive integer $d$, i.e. iff $\beta=\operatorname{arctan}\sqrt{d}$ (modulo $\pi$) for some positive integer $d$; note that $d$ need not be squarefree for the ring property. In this case $v=\exp(i\operatorname{arctan}\sqrt{d})$ and $R(1,i,v)= {\mathbb Z}+ i\tan\beta\,{\mathbb Z}= {\mathbb Z}+\sqrt{-d}\,{\mathbb Z}$. \[ruring\] The complex number $z=I_{u,v}(0,1)\in{\mathbb C}, z\not\in{\mathbb R}$ has the real minimal polynomial $(x-z)(x-\overline{z}) = x^2 -(z+\overline{z})x + z\overline{z}\in{\mathbb R}[x]$. By the above considerations this means that $R(U)$ is a ring exactly when (cf. Theorem \[eigI\]\[eigI5\]): $$\begin{aligned} k:&=z+{\overline{z}}= 2\operatorname{Re}z = 2\cdot\frac{\operatorname{sin}\beta\operatorname{cos}\alpha}{\operatorname{sin}(\beta-\alpha)}\in{\mathbb Z},\\ m:&=z{\overline{z}}= |z|^2 = \frac{\operatorname{sin}^2\beta}{\operatorname{sin}^2(\beta-\alpha)}\in{\mathbb Z}.\end{aligned}$$ Thus, in particular $k = 2 \sqrt{m}\cos\alpha$ and $\cos^2\alpha = \frac{k^2}{4m}$ is a rational number. Hence, if $R(U)$ is a ring, then necessarily $\cos^2\alpha$ is a rational number. Since we could start with $u$ and $v$ interchanged it follows by symmetry that $\cos^2\beta\in {\mathbb Q}$. Obviously this property is not enough to guarantee that $R(U)$ is a ring; this is clear by looking at the example $R(0^\circ, 60^\circ,150^\circ)$, for $\cos 60^\circ = \frac{1}{2}$, but $k = 2\operatorname{Re}(z)= \frac{1}{2} \not\in{\mathbb Z}$ (here $z = \frac{1}{4}(1+\sqrt{3}\,i)$). At least in the following sense the condition $\cos\alpha\in{\mathbb Q}$ is sufficient (see figure 2): If $\cos\alpha = \frac{s}{t}$ and without loss of generality $s,t\in{\mathbb N}$ and relatively prime, then we set $\operatorname{Re}z = s$ and $|z| = t$. Therefore $k$ as well as $m$ are integers. With this choice we have $z=s+i\sqrt{t^2-s^2}$ and $\beta = \operatorname{arctan}(\frac{\sqrt{t^2-s^2}}{s-1})$ (for $s\neq 1$; if $s=1$ then $\beta=\frac{\pi}{2}$).
Hence, for every $\alpha$ with $\cos\alpha\in{\mathbb Q}$ we have found a $\beta$ such that $R(1,u=e^{i\alpha},v=e^{i\beta})$ is a ring. For infinitely many pairs $u,v$ of directions the set $R(1,u,v)$ is a ring.[$\square$]{} If $s$ and $t$ from the fraction $\frac{s}{t}$ in Remark \[ruring\] are not relatively prime, we get other values of $\beta$ and $z$ (see figure 2), but since $\cos\alpha = \frac{s}{t} =\frac{s\cdot \gamma}{t\cdot \gamma}$ where $\gamma\in{\mathbb Z}$ it holds that $z'=\gamma\cdot z$ so ${\mathbb Z}+z'{\mathbb Z}{\subseteq}{\mathbb Z}+z{\mathbb Z}$ and we only obtain a subring. *(Figure 2: the construction of $z$ with $\operatorname{Re}z=s$ and $|z|=t$ from $\cos\alpha=\frac{s}{t}$.)* ### Quadratic number fields {#quadratic-number-fields .unnumbered} If we think of the minimal polynomial $x^2-kx+m$ of $z$ we see that $z=\frac{k}{2}+\frac{\sqrt{k^2-4m}}{2}$ (in what follows it does not matter whether we use $z$ or $\overline{z}$). Thus, the quadratic number field ${\mathbb Q}(z) = \faktor{{\mathbb Q}[x]}{(x^2-kx+m){\mathbb Q}[x]}$ can be written as ${\mathbb Q}(z) = {\mathbb Q}(\sqrt{k^2-4m})$. Quadratic number fields are characterized as follows (cf. [@stewart p.62, Proposition 3.1]). The quadratic number fields are exactly the fields ${\mathbb Q}(\sqrt{d})$ for some squarefree integer $d$. [$\square$]{} Now we wish to know for which squarefree integers $d$ the field ${\mathbb Q}(\sqrt{d})$ equals ${\mathbb Q}(z)$ for a suitable choice of $k$ and $m$ (so actually of $\alpha$ and $\beta$). Since $z\in{\mathbb C}\backslash{\mathbb R}$, such a $d$, if it exists, has to be negative, so we really look only at imaginary quadratic number fields.
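The quantities $k$ and $m$, and with them the ring criterion of Remark \[ruring\], are easy to test numerically. The following sketch (plain Python; the function names are ours) reproduces the two examples discussed above:

```python
import cmath
import math

def z_from_angles(alpha, beta):
    """z = I_{u,v}(0,1) = sin(beta)/sin(beta-alpha) * e^{i alpha}, cf. Theorem [eigI][eigI5]."""
    return math.sin(beta) / math.sin(beta - alpha) * cmath.exp(1j * alpha)

def ring_data(alpha, beta):
    """Return (k, m, is_ring): R(1,u,v) = Z + zZ is a ring iff
    k = z + conj(z) and m = z*conj(z) are both integers (Remark [ruring])."""
    z = z_from_angles(alpha, beta)
    k, m = 2 * z.real, abs(z) ** 2
    near_int = lambda x: abs(x - round(x)) < 1e-9
    return k, m, near_int(k) and near_int(m)

# alpha = arctan(sqrt 7), beta = pi - arctan(sqrt 7): z = (1 + i*sqrt 7)/2,
# so k = 1, m = 2 and z^2 = z - 2: a ring.
a7 = math.atan(math.sqrt(7))
print(ring_data(a7, math.pi - a7))

# alpha = 60 deg, beta = 150 deg: z = (1 + i*sqrt 3)/4, k = 1/2: not a ring.
print(ring_data(math.radians(60), math.radians(150)))
```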
In Remark \[bem1i\] we saw that if we choose $\alpha=\frac{\pi}{2}$ and $\beta=\operatorname{arctan}(\sqrt{d})$ where $d$ is a squarefree positive integer, then the equation $z^2=-(\operatorname{tan}\beta)^2 = -d$ holds. Hence for every squarefree positive integer $d$ there exist angles $\alpha$ and $\beta$ such that ${\mathbb Q}(z)={\mathbb Q}(\sqrt{-d})$ is true. ### Ring of algebraic integers in ${\mathbb Q}(z)$ {#ring-of-algebraic-numbers-in-mathbb-qz .unnumbered} Assume that $R(1,u,v)={\mathbb Z}+z{\mathbb Z}$ is a ring. Then $z$ is an algebraic integer of degree $2$. Since the rational integers are for trivial reasons algebraic integers, the ring $\mathcal{O}_{{\mathbb Q}(z)}$ of algebraic integers of the field ${\mathbb Q}(z)$ contains the ring ${\mathbb Z}+z{\mathbb Z}$. We ask ourselves when the equality holds. \[ganzalg\] Let $d$ be a squarefree integer. Then the ring of algebraic integers of ${\mathbb Q}(\sqrt{d})$ equals[^3] $${\mathcal{O}_{{\mathbb Q}(\sqrt{d})}} =\begin{cases} {\mathbb Z}+\sqrt{d}\,{\mathbb Z}& d\not\equiv 1 \!\!\mod 4\\ {\mathbb Z}+ \frac{1+\sqrt{d}}{2}\,{\mathbb Z}& d\equiv 1\!\!\mod 4. \end{cases}$$ \[ganzalgring\] Let $1,u=e^{i\alpha},v=e^{i\beta}$ be pairwise different directions, let $z=I_{u,v}(0,1)$ and $k=z+\overline{z}$. Moreover let $R(U)={\mathbb Z}+z{\mathbb Z}$ be a ring. In this case the ring ${\mathcal{O}_{{\mathbb Q}(z)}}$ of algebraic integers of ${\mathbb Q}(z)$ equals $R(1,u,v)$ exactly in the following cases: - $k$ is odd and $(k\tan\alpha)^2$ is a squarefree positive integer; - $k$ is even and $(\frac{k}{2}\tan\alpha)^2$ is a squarefree positive integer congruent to $1$ or $2$ modulo $4$; - $k=0$ and $\tan^2\!\beta$ is a squarefree positive integer congruent to $1$ or $2$ modulo $4$. We discuss the following cases: $k$ is odd, $0\neq k $ is even and $k=0$. 
In Remark \[bem1i\] we already dealt with $k=0$: here ${\mathcal{O}_{{\mathbb Q}(z)}}=R(U)={\mathbb Z}+z{\mathbb Z}={\mathbb Z}+\sqrt{-d}\,{\mathbb Z}$ iff $d$ is a squarefree positive integer congruent to $1$ or $2$ modulo $4$ (so that $-d\not\equiv 1\!\!\mod 4$), therefore iff $\tan^2\!\beta$ is such a $d$. For the case $k\neq 0$ we calculate first the following: $$\tag{$\dagger$} \label{ktan} k^2-4m=k^2(1-\frac{4m}{k^2})=k^2 (1-\frac{4m}{4m\cos^2\!\alpha})=k^2\cdot\frac{\cos^2\!\alpha-1}{\cos^2\!\alpha}=-(k\tan\alpha)^2.$$ Let $k^2-4m = y^2 d $ where $d$ is a squarefree integer and $y\in{\mathbb Z}$. Keeping Lemma \[ganzalg\] in mind we ask when $\frac{1+\sqrt{d}}{2}\in{\mathbb Z}+z{\mathbb Z}$ is satisfied. Since $z = \frac{k}{2} + \frac{\sqrt{k^2-4m}}{2} = \frac{k}{2} + \frac{y\sqrt{d}}{2}$ this is the case exactly when $y=\pm 1$ and $k$ is odd. If $k$ is in fact odd, then it follows that $k^2-4m \equiv 1\mod 4$ and so $y^2d \equiv k^2-4m \equiv 1\!\!\mod 4$ and $y^2\equiv 1 \equiv d\!\mod 4$. Therefore, for $k$ odd, the equation ${\mathbb Z}+ z{\mathbb Z}= {\mathcal{O}_{{\mathbb Q}(z)}}$ is true iff $(k\operatorname{tan}\alpha)^2$ is an odd squarefree integer, cf. (\[ktan\]). If $k$ is even and ${\mathbb Z}+ z{\mathbb Z}= {\mathcal{O}_{{\mathbb Q}(z)}}$ we infer that $y^2d \equiv k^2-4m \equiv 0\!\mod 4$ and as we have seen above $d\not\equiv 1\!\!\mod 4$. We check when $\sqrt{d}\in{\mathbb Z}+ z{\mathbb Z}$. The fact that $d$ is squarefree enforces in $\sqrt{d} = a +b(\frac{k}{2}+\frac{y\sqrt{d}}{2})$ the conditions $y=\pm 2$ and $b = \pm 1$ (so $a = \pm\frac{k}{2})$. Therefore, $\sqrt{d}\in{\mathbb Z}+ z{\mathbb Z}$ is true iff $k^2-4m = 4d$. That is why, for an even $k\neq 0$, the equation ${\mathbb Z}+ z{\mathbb Z}= {\mathcal{O}_{{\mathbb Q}(z)}}$ is satisfied exactly when $(\frac{k}{2}\tan\alpha)^2$ is a squarefree positive integer congruent to $1$ or $2$ modulo $4$.
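The case distinction above amounts to testing whether the discriminant $k^2-4m$ of the minimal polynomial of $z$ is, in the sense of Lemma \[ganzalg\], the discriminant of the full ring of integers: either $k^2-4m$ is squarefree and $\equiv 1\!\!\mod 4$, or $k^2-4m=4d$ with $d$ squarefree and $d\not\equiv 1\!\!\mod 4$. A short sketch (Python; the helper names are ours), checked against the examples from the text:

```python
def squarefree(n):
    # n > 0; trial division, adequate for small examples
    i = 2
    while i * i <= n:
        if n % (i * i) == 0:
            return False
        i += 1
    return True

def is_maximal(k, m):
    """For z with minimal polynomial x^2 - k*x + m (k^2 < 4m), decide whether
    Z + zZ is the full ring of integers of Q(z), i.e. whether k^2 - 4m is a
    fundamental discriminant (cf. Lemma [ganzalg])."""
    D = k * k - 4 * m
    if D % 4 == 1:                        # k odd; need D itself squarefree
        return squarefree(-D)
    d = D // 4                            # k even; need D = 4d with d squarefree
    return squarefree(-d) and d % 4 != 1  # and d not congruent to 1 mod 4

print(is_maximal(1, 2))    # z = (1 + sqrt(-7))/2
print(is_maximal(0, 1))    # z = i, so Z + zZ = Z[i]
print(is_maximal(10, 81))  # z = 5 + i*sqrt(56), cf. the example below
```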
It does occur that ${\mathbb Z}+z{\mathbb Z}$ is a ring although ${\mathbb Z}+z{\mathbb Z}\neq {\mathcal{O}_{{\mathbb Q}(z)}}$ holds: For $z=5+i\sqrt{56}$ we have $\cos\alpha = \frac{5}{9}$, thus ${\mathbb Z}+z{\mathbb Z}$ is a ring according to Remark \[ruring\]. However, $\tan\alpha = \frac{\sqrt{56}}{5}$ and $k=2\cdot 5$. Hence, we see that $\frac{k}{2}\tan\alpha = \sqrt{56}$ and $56$ is not squarefree. Thus, Theorem \[ganzalgring\] says that ${\mathbb Z}+z{\mathbb Z}\varsubsetneq {\mathcal{O}_{{\mathbb Q}(z)}}$. ### The structure of $R(1,u,v,w)$ {#the-structure-of-r1uvw .unnumbered} What can we say about $R(U)$ for a set $U$ consisting of $4$ different directions? As discussed above we can assume, up to a linear transformation, that the set $U$ equals $\{1,u,v,w\}$ for some different directions $1,u,v,w$. Moreover, we can choose $u,v,w$ such that $u=e^{i\alpha}, v= e^{i\beta}, w=e^{i\gamma}$ where $0<\alpha<\beta<\gamma<\pi$. Let $$p:=I_{u,w}(0,1) \quad \text{ and } \quad r:=I_{1,v}(p,0).$$ Then we have $r<p$ on the line $L_1(p)$ with its natural ordering. *(Figure 3: iteration of the construction: starting from $r=I_{1,v}(p,0)$, the points $p_i=I_{u,w}(0,r_{i-1})$, $s_i=I_{1,w}(0,r_{i-1})$ and $r_i=I_{1,v}(p_i,0)$ move towards $0$.)* Since $L_w(r) \,||\, L_w(1)=L_w(p)$, it holds that $I_{1,w}(0,r)=L_1(0)\cap L_w(r) < 1$.
But due to the choice $\gamma > \beta$, it follows that $I_{1,w}(p,0) < r$ on $L_1(p)$. Since $L_w(0)\, ||\, L_w(r)$ we have $I_{1,w}(0,r)=L_1(0)\cap L_w(r) > 0$. We infer that the triangle $0p1$ is similar to the triangle $0p_1s_1$, where $$p_1:=I_{u,w}(0,r)\quad \text{ and } \quad s_1:=I_{1,w}(0,r) \quad\text{ with } p_1\in(0,p), s_1\in (0,1) .$$ Using the point $p_1$ we construct another point on $L_v(0)$, viz. $r_1:=I_{1,v}(p_1,0)$. Since $p_1\in(0,p)$ it follows for this point that $r_1\in(0,r)$. By a similar argument as above we construct (see figure 3) $$p_2:=I_{u,w}(0,r_1) \quad \text{ and } \quad s_2:=I_{1,w}(0,r_1)\quad \text{ with } p_2\in(0,p_1), s_2\in (0,s_1).$$ Iteratively we construct the sequences $(p_i)_i$ and $(s_i)_i$ (as well as the auxiliary sequence $(r_i)_i$) as follows: $$p_i :=I_{u,w}(0,r_{i-1}),\quad s_i:=I_{1,w}(0,r_{i-1}),\quad r_{i-1}:=I_{1,v}(p_{i-1},0).$$ Since $0$ is the only point lying on both $L_v(0)$ and $L_w(0)$, the points $p_i$ and $s_i$ are well defined. Furthermore the construction yields $p_i\in(0,p_{i-1})$ and $s_i\in(0,s_{i-1})$; therefore the triangles $0p1$, $0p_is_i$ for $i=1,2,\ldots$ are all similar. In particular, due to compactness of the closed segments $[0,1]$ and $[0,p]$ each of the sequences $(p_i)_i$ and $(s_i)_i$ has a convergent subsequence. \[abschluss\] Let $X$ be a topologically closed subgroup of ${\mathbb R}^2$, which is not contained in a line. Then there exists a basis $b,b'$ for ${\mathbb R}^2$ such that one of the following holds: - $X = {\mathbb R}b + {\mathbb R}b' = {\mathbb R}^2$ - $X= {\mathbb R}b + {\mathbb Z}b' $ - $X= {\mathbb Z}b + {\mathbb Z}b'$. [@salzman 8.6, p. 83].[$\square$]{} If $U$ is a set containing at least $4$ different directions, then $R(U)$ is dense in ${\mathbb C}$. Since $R(U)$ is an additive subgroup of ${\mathbb R}^2$ (cf. Theorem \[eigI\]), its closure is a topologically closed subgroup which is not contained in a line, for the vectors $0p_1$ and $0s_1$ are linearly independent. Moreover, the sequences $(p_i)_i$ and $(s_i)_i$ accumulate along two linearly independent directions, which rules out the cases $X={\mathbb Z}b+{\mathbb Z}b'$ and $X={\mathbb R}b+{\mathbb Z}b'$ of Lemma \[abschluss\]; hence the closure has to be ${\mathbb R}^2$.
Therefore $R(U)$ is dense in ${\mathbb R}^2$, i.e. in ${\mathbb C}$. Buhler, J.; Butler, S.; De Launey, W.; Graham, R.; *Origami rings*, Journal of the Australian Mathematical Society (2012), vol. 92, no. 3, pp. 299–311. Salzmann, H.; Grundhöfer, T.; Hähl, H.; Löwen, R.; *The classical fields: structural features of the real and rational numbers*, Cambridge University Press (2007). Stewart, I.; Tall, D.; *Algebraic number theory and Fermat’s last theorem*, AK Peters (2002). [^1]: University of Wuerzburg, Department of Mathematics. dmitri.nedrenco@mathematik.uni-wuerzburg.de [^2]: The following is not a restriction: we can consider $I_{i,v}(0,1)$ instead of $I_{v,i}(0,1)=1-z$. [^3]: For a proof see for instance [@stewart p. 62, Theorem 3.1].
--- abstract: 'The free-space transfer of high-fidelity optical signals between remote locations has many applications, including both classical and quantum communication, precision navigation, clock synchronization, etc. The physical processes that contribute to signal fading and loss need to be carefully analyzed in the theory of light propagation through the turbulent atmosphere. Here we derive the probability distribution for the atmospheric transmittance including beam wandering, beam shape deformation, and beam-broadening effects. Our model, referred to as the elliptic-beam approximation, applies to weak, weak-to-moderate, and strong turbulence and hence to the most important regimes in atmospheric communication scenarios.' author: - 'D. Vasylyev' - 'A. A. Semenov' - 'W. Vogel' title: Atmospheric Quantum Channels with Weak and Strong Turbulence --- #### Introduction.– {#introduction. .unnumbered} The transmission of quantum light to remote receivers has recently attracted great interest in connection with the implementation of quantum communication protocols over large distances. Experimental advances in this field allowed one to demonstrate successful quantum-light transmission over horizontal communication links [@Ursin; @Scheidl; @Fedrizzi2009; @Capraro; @Yin; @Ma; @Peuntinger] and paved the way for the realization of ground-to-satellite quantum communication [@Nauerth; @Wang; @Bourgoin]. The main obstacle for the transmission of quantum light in free space is the atmospheric turbulence, which leads to spatial and temporal variations of the refractive index of the channel. The transmitted signal is usually measured by detectors with a finite aperture. Typically, the recorded data are contaminated by fluctuating losses due to beam wandering, beam broadening, scintillation, and degradation of coherence. The theory of classical light propagation through the atmosphere is well developed [@Tatarskii; @Ishimaru; @Andrews; @Andrews2; @Fante1; @Fante2].
Some progress was also achieved in the theory of free-space propagation of quantum light [@Diament; @Perina; @Perina1973; @Milonni; @Paterson; @Semenov; @Semenov10; @Vasylyev2012]. The atmosphere is considered as a quantum channel characterized by fluctuating transmission properties. In terms of the Glauber-Sudarshan $P$ function [@Glauber; @GlauberPRA; @Sudarshan], which is a quasiprobability as it may attain negativities, the relation between input $P_\mathrm{in}\!\left(\alpha\right)$ and output $P_\mathrm{out}\!\left(\alpha\right)$ states can be written as [@Semenov; @Vasylyev2012] $$\begin{aligned} P_{\rm out}(\alpha)=\int\limits_0^1 {{\rm{d}}}\eta\,\mathcal{P}(\eta)\frac{1}{\eta}P_{\rm in}\Bigl(\frac{\alpha}{\sqrt{\eta}}\Bigr).\label{inout}\end{aligned}$$ Here $\mathcal{P}(\eta)$ is the probability distribution of the transmittance (PDT), $\eta$ being the intensity transmittance. Hence, the description of quantum-light propagation through the turbulent atmosphere merely reduces to identifying this probability distribution. In Ref. [@Vasylyev2012] we have derived the PDT for the case when the leading effect of fluctuating losses in the atmosphere is beam wandering, as is the case for weak turbulence. In this Letter, we present a substantially extended model of the PDT, based on the [*elliptic-beam approximation*]{}, which incorporates the effects of beam wandering, broadening, and deformation. Our theory properly describes atmospheric quantum channels in the limits of relatively weak turbulence, as in experiments in Erlangen with an atmospheric link of 1.6 km length [@Usenko; @Peuntinger]. For the case of strong turbulence, our theory also yields a reasonable agreement with the log-normal model [@Andrews; @Andrews2; @Fante1; @Fante2; @Diament; @Perina; @Perina1973; @Milonni], which has been verified in experiments on the Canary Islands [@Capraro].
Most importantly, our elliptic-beam model overcomes the deficiency of physical inconsistencies inherent in the log-normal distribution. #### The aperture transmittance.– {#the-aperture-transmittance. .unnumbered} Temporal and spatial variations of temperature and pressure in atmospheric turbulent flows cause random fluctuations of the refractive index of the air. Consequently, the atmosphere acts as a source of losses for transmitted photons which are measured at the receiver by a detection module with a finite aperture. The transmitted signal is degraded by effects like beam wandering, broadening, deformation, and others. Let us consider a Gaussian beam that propagates along the $z$ axis onto the aperture plane at distance $z=L$. In general, the fluctuating intensity transmittance of such a signal is given by [@Vasylyev2012] $$\begin{aligned} \eta&{=}\int_{\mathcal A}{{\rm{d}}}^2 \mathbf{r}\, I(\mathbf{r};L),\label{Transm}\end{aligned}$$ where $\mathcal{A}$ is the aperture area and $I(\mathbf{r};L)$ is the normalized intensity with respect to the full $\textbf{r}{=}\{x,y\}$ plane. The Gaussian beam undergoes turbulent disturbances along the propagation path. Within our model we assume that these disturbances lead to beam wandering and deformation of the beam profile into an elliptical form. This is justified for weak turbulence, when speckles play no essential role. For strong turbulence the beam shape is the result of many small spatially averaged distortions. The intensity of the elliptic beam at the aperture plane is given by $$\begin{aligned} I(\mathbf{r};L)=\frac{2}{\pi\sqrt{\det\mathbf{S}}}\exp\Bigl[-2({\mathbf{r}}{-}{ \mathbf{r}}_0)^{\rm T}{\mathbf{S}}^{-1}({\bf r}{-}{\bf r}_0)\Bigr],\label{Intens}\end{aligned}$$ with $\mathbf{r}{=}(x\,\,y)^{\rm T}$. It is characterized by the beam-centroid position $\mathbf{r}_0=(x_0\,\,y_0)^{\rm T}$ and the real, symmetric, positive-definite spot-shape matrix $\mathbf{S}$.
The eigenvalues of this matrix, $W_i^2$, $i{=}1,2$, are the squared semiaxes of the elliptic spot. The semiaxis $W_1$ has an angle $\phi{\in}\left[0,\pi/2\right)$ relative to the $x$ axis, and the set $\left\{W_1^2,W_2^2,\phi\right\}$ uniquely describes the orientation and the size of the ellipse. For an elliptic-beam profile, the transmittance $\eta$ is obtained by substituting Eq. (\[Intens\]) into Eq. (\[Transm\]). The resulting integral cannot be evaluated analytically. Here we adapt the technique proposed in Ref. [@Vasylyev2012] to derive an analytical approximation. For this purpose we consider the displacement of the beam centroid to the point $\mathbf{r}_0{=}\left(r_0\cos\varphi_0\,\,\,r_0\sin\varphi_0\right)^\textrm{T}$. Regarding the transmittance $\eta$ as a function of $r_0$, for given $\chi{=}\phi{-}\varphi_0$, we observe that it behaves similarly to the transmittance of the circular Gaussian beam with the effective squared spot radius $$\begin{aligned} W_\textrm{eff}^2\left(\chi\right)&{=}4a^2\Bigl[\mathcal{W}\Bigl(\frac{4a^2}{ W_1W_2} e^ {\frac{a^2}{W_1^2}\bigl\{1+2\cos^2\!\chi\bigr\}}\nonumber\\ &\qquad\times e^ {\frac{a^2}{W_2^2}\bigl\{1+2\sin^2\!\chi\bigr\}}\Bigl)\Bigr]^{-1},\label{Weff}\end{aligned}$$ where $\mathcal{W}(\xi)$ is the Lambert $W$ function [@Corless] and $a$ is the aperture radius. In this case the transmittance is approximated by $$\begin{aligned} \eta=\eta_{0} \exp\left\{-\left[\frac{r_0/a} {R\left(\frac{2}{W_{\rm eff}\left(\phi{-}\varphi_0\right)}\right)}\right]^{\lambda\bigl(\frac{2}{W_{\rm eff}\left(\phi{-}\varphi_0\right)}\bigr)}\right\}.\label{Tapprox}\end{aligned}$$ Here $\eta_0$ is the transmittance for the centered beam, i.e.
for $r_0{=}0$, $$\begin{aligned} &\eta_{0}{=}1{-}{{\rm{I}}}_0\Bigl(a^2\Bigl[\frac{1}{W_1^2}{-}\frac{1}{W_2^2}\Bigr]\Bigr)e^{-a^2\bigl[\frac{1}{W_1^2}{+}\frac{1}{W_2^2}\bigr]}\nonumber\\ &\qquad{-}2\left[1{-}e^{-\frac{a^2}{2}\!\bigl(\frac{1}{W_1}{-}\frac{1}{W_2}\bigr)^{2}}\!\right]\nonumber\\ &\qquad\times\exp\!\left\{\!{-}\Biggl[\!\frac{\frac{(W_1+W_2)^2}{|W_1^2-W_2^2|}}{R\left(\frac{1}{W_1}{-}\frac{1}{W_2}\right)}\!\Biggr] ^{\!\lambda\left(\!\frac{1}{W_1}{-}\frac{1}{W_2}\right)}\right\},\end{aligned}$$ $R(\xi)$ and $\lambda(\xi)$ are scale and shape functions, respectively, $$\begin{aligned} R\left(\xi\right)=\Bigl[\ln\Bigl(2\frac{1-\exp[-\frac{1}{2} a^2 \xi^2]}{1-\exp[-a^2\xi^2]{{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\Bigr)\Bigr]^{-\frac{1}{ \lambda(\xi)}},\end{aligned}$$ $$\begin{aligned} \lambda\left(\xi\right)&=2a^2\xi^2\frac{e^{-a^2\xi^2}{{\rm{I}}}_1(a^2\xi^2)}{1-\exp[ -a^2\xi^2 ] {{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\nonumber\\ &{\times}\Bigl[\ln\Bigl(2\frac{1-\exp[-\frac{1}{2} a^2 \xi^2]}{1-\exp[-a^2\xi^2]{{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\Bigr)\Bigr]^{-1},\end{aligned}$$ and ${{\rm{I}}}_{i}(\xi)$ is the modified Bessel function of $i$th order. Since $\phi$ is defined modulo $\pi/2$, the transmittance $\eta$ is a $\pi/2$-periodic function of $\phi$. For the limit $W_1^2{=}W_2^2$, Eq. (\[Tapprox\]) reduces to the transmission coefficient of a Gaussian beam with a circular profile [@Vasylyev2012]. For details of the approximation see Supplemental Material [@suppl] and Ref. [@Agrest]. #### The probability distribution of the transmittance.– {#the-probability-distribution-of-the-transmittance. .unnumbered} The aperture transmittance $\eta$, cf. Eq. (\[Tapprox\]), is a function of five real parameters, $\left\{x_0,y_0,\Theta_1,\Theta_2,\phi\right\}$, randomly changed by the atmosphere, where $W_{i}^2{=}W_0^2\exp\Theta_{i}$, and $W_0$ is the initial beam-spot radius.
For these parameters we assume a Gaussian approximation, with $\phi$ being a wrapped Gaussian variable with period $\pi/2$ [@Mardia]. We restrict our attention to isotropic turbulence. In this case the wrapped Gaussian distribution for $\phi$ reduces to a uniform one and its correlations with other parameters vanish. In the reference frame with $\big\langle\textbf{r}_0\big\rangle{=}0$, there are also no correlations between $x_0$, $y_0$, and $\Theta_i$. The variances $\langle\Delta x_0^2\rangle=\langle\Delta y_0^2\rangle=\langle x_0^2\rangle$, which describe beam wandering, are expressed in terms of the classical field correlation function of the fourth order, $\Gamma_{4}(\mathbf{r}_1,\mathbf{r}_2)=\langle I(\mathbf{r}_1;L) I (\mathbf{r}_2;L)\rangle$, in the aperture plane (see, e.g., [@Andrews; @Andrews2; @Fante1; @Fante2; @Mironov; @Banakh; @Kon; @Chumak; @Mironov2]): $$\begin{aligned} \langle x_0^2\rangle=\int_{\mathbb{R}^4} {{\rm{d}}}^4 \mathbf{r}\, x_1 x_2 \, \Gamma_4(\mathbf{r}_1,\mathbf{r}_2),\label{bwvariance}\end{aligned}$$ where ${{\rm{d}}}^4\mathbf{r}={{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2$. The means and the (co)variances of $\Theta_i$ are functions of the means and the (co)variances (first and second moments) of $W_i^2$: $$\begin{aligned} \langle \Theta_{i}\rangle=\ln\left[\frac{\langle W_{i}^2\rangle}{W_0^2}\left(1+ \frac{\langle (\Delta W_{i}^2)^2\rangle}{\langle W_{i}^2\rangle^2}\right)^{-1/2}\right],\label{Eq:ThetaMean}\end{aligned}$$ $$\begin{aligned} \langle \Delta\Theta_i\Delta\Theta_j\rangle= \ln\left[1+ \frac{\langle \Delta W_i^2 \Delta W_j^2\rangle}{\langle W_i^2\rangle\langle W_j^2\rangle}\right].\label{Eq:ThetaCovariances}\end{aligned}$$ In general, the evaluation of $\langle W_{i}^2\rangle$ and $\langle\Delta W_{i}^2\Delta W_{j}^2\rangle$ in Eqs. (\[Eq:ThetaMean\]) and (\[Eq:ThetaCovariances\]) is almost intractable.
However, the assumptions of Gaussianity and isotropy enable us to express these quantities in a tractable form as (for details cf. the Supplemental Material [@suppl]) $$\begin{aligned} \langle& W_{i}^2\rangle{=}4\left[\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, x^2 \Gamma_2\! \left(\mathbf{r}\right){-} \langle x_0^2\rangle\right],\label{Eq:W12SqMean}\end{aligned}$$ $$\begin{aligned} &\langle W_{i}^2W_{j}^2\rangle=8\Big[{-}8\,\delta_{ij}\langle x_0^2\rangle^2 {-}\langle x_0^2\rangle \langle W_{i}^2\rangle \label{Eq:W12ViaGamma}\\ &+\int_{\mathbb{R}^4}{{\rm{d}}}^4\mathbf{r} \, \left[x_1^2x_2^2\left(4\delta_{ij}{-}1\right)-x_1^2y_2^2\left(4\delta_{ij}{-}3\right)\right] \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2\right) \Big],\nonumber\end{aligned}$$ where $\Gamma_{2}(\mathbf{r}){=}\langle I(\mathbf{r};L)\rangle$ is the classical field correlation function of the second order. Therefore, the means and the covariance matrix of the random vector $\textbf{v}{=}\big(x_0\,y_0\,\Theta_{1}\,\Theta_{2}\big)^\mathrm{T}$, i.e. $\mu_i{=}\left\langle v_i\right\rangle$ and $\Sigma_{ij}{=}\left\langle\Delta v_i\Delta v_j\right\rangle$, respectively, are expressed in terms of classical field correlation functions $\Gamma_2$ and $\Gamma_4$. These functions are important characteristics of atmospheric channels, which are widely discussed in the literature; see, e.g., [@Andrews; @Andrews2; @Fante1; @Fante2; @Mironov; @Banakh; @Mironov2]. In the Supplemental Material [@suppl], we derive $\mu_i$ and $\Sigma_{ij}$ for horizontal links by using the phase approximation of the Huygens-Kirchhoff method and the Kolmogorov turbulence spectrum [@Mironov; @Banakh; @Mironov2]. With the given assumptions, the PDT in Eq.
(\[inout\]) reads as $$\begin{aligned} \mathcal{P}\left(\eta\right){=}\frac{2}{\pi}\int_{\mathbb{R}^4}{{\rm{d}}}^4\mathbf{v} \int\limits_{0}^{ \pi /2}{{\rm{d}}}\phi \, \rho_G(\mathbf{v};\boldsymbol{\mu},\Sigma)\delta\left[\eta{-}\eta\left(\mathbf{v},\phi\right)\right],\label{PDTC}\end{aligned}$$ where $\eta\left(\mathbf{v},\phi\right)$ is the transmittance defined by Eq. (\[Tapprox\]) as a function of random parameters and $\rho_G(\mathbf{v};\boldsymbol{\mu},\Sigma)$ is the Gaussian probability density of the vector $\mathbf{v}$ with the mean $\boldsymbol{\mu}$ and the covariance matrix $\Sigma$. In general, the PDT can be evaluated with the Monte Carlo method. For this purpose, one has to simulate the Gaussian random vector $\mathbf{v}$ and the uniformly distributed angle $\phi$. For practical purposes, we apply the Rayleigh distribution for $r_0$, a uniform distribution for $\chi$, and a Gaussian one for $\Theta_i$. The obtained values should be substituted into the transmittance; cf. Eq. (\[Tapprox\]). Within the standard procedure of estimation, one obtains the mean value of any function of the transmittance, $\langle f(\eta)\rangle$. The PDT can be obtained within the smooth-kernel method [@Wand]. The cumulative probability distribution, $\mathcal{F}(\eta){=}\int_0^{\eta}{{\rm{d}}}\eta^\prime\mathcal{P}(\eta^\prime)$, and the exceedance $\overline{\mathcal{F}}(\eta)=1-\mathcal{F}(\eta)$ are estimated by the technique of empirical distribution functions [@Vaart]. #### From weak to strong turbulence.– {#from-weak-to-strong-turbulence. .unnumbered} Let us distinguish the regimes of weak, moderate, and strong turbulence, through the values of the Rytov parameter $\sigma_{\rm R}^2<1$, $\sigma_{\rm R}^2\approx 1 \dots 10$, and $\sigma_{\rm R}^2\gg 1$, respectively.
The Rytov parameter is defined as $\sigma_R^2{=}1.23 C_n^2k^{\frac{7}{6}}L^{\frac{11}{6}}$, where $C_n^2$ is the atmospheric index-of-refraction structure constant and $k$ is the optical wave number; for more details and the corresponding motivation, see Ref. [@Andrews]. For weak turbulence the atmosphere mainly causes beam wandering. In this case Eq. (\[PDTC\]) reduces to the log-negative Weibull distribution [@Vasylyev2012]. For the weak-to-moderate transition and for strong turbulence, broadening and deformation of the beam occur, resulting in a smooth PDT. More problematic is the evaluation of $\boldsymbol{\mu}$ and $\Sigma$ for the moderate-to-strong turbulence transition. Hence we will restrict our considerations to the ranges of weak-to-moderate and strong turbulence. ![\[fig:PDTC\] The PDTs (a) and the corresponding exceedances (b): elliptic-beam approximation, log-normal, and beam wandering [@Vasylyev2012]. The shaded area in (a) shows the experimental PDT from Ref. [@Usenko]. The inset in (b) shows the tail of the exceedance. For the log-normal exceedance, it extends to the unphysical region, $\eta>1$ (shaded gray). Further parameters: wavelength $809\,\textrm{nm}$, initial spot radius $W_0{=}20\,\textrm{mm}$, propagation distance $1.6\,\textrm{km}$, Rytov parameter $\sigma_R^2{=}1.5$, aperture radius $a=40\,\textrm{mm}$, deterministic attenuation of $1.25\,\textrm{dB}$.](1.pdf) Figure \[fig:PDTC\] shows the probability $\mathcal{P}(\eta)$ derived by the elliptic-beam approximation for the conditions of weak-to-moderate turbulence. This distribution is compared with the corresponding ones obtained from the beam-wandering model [@Vasylyev2012] and from the log-normal model, see Supplemental Material [@suppl]. The inset shows the experimental data given in Ref. [@Usenko]. It is obvious that the elliptic model yields the best agreement with the measured data.
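The Monte Carlo procedure described above can be sketched in a few lines. In the following illustration (Python with NumPy/SciPy) all channel parameters, i.e. the beam-wandering variance and the moments of $\Theta_i$, are placeholder values chosen for demonstration, not the values derived in the Supplemental Material; the samples of $\eta$ follow from Eqs. (\[Weff\]) and (\[Tapprox\]):

```python
import numpy as np
from scipy.special import i0e, i1e, lambertw   # ine(x) = exp(-x) I_n(x)

rng = np.random.default_rng(1)

# Illustrative (placeholder) channel parameters
a, W0 = 0.04, 0.02            # aperture and initial beam-spot radii [m]
sigma_bw = 0.01               # beam-wandering standard deviation [m]
mu_th, sig_th = 0.4, 0.2      # mean / std of Theta_i, W_i^2 = W0^2 exp(Theta_i)

def shape_funcs(x):
    """Shape and scale functions lambda(xi), R(xi), with x = a^2 xi^2 > 0."""
    x = np.maximum(x, 1e-6)                    # guard the circular limit W1 -> W2
    denom = 1.0 - i0e(x)
    ln_arg = np.log(2.0 * (1.0 - np.exp(-0.5 * x)) / denom)
    lam = 2.0 * x * i1e(x) / denom / ln_arg
    return lam, ln_arg ** (-1.0 / lam)

def transmittance(r0, chi, W1, W2):
    """Elliptic-beam transmittance, Eqs. (Weff) and (Tapprox)."""
    arg = (4.0 * a**2 / (W1 * W2)
           * np.exp((a / W1)**2 * (1.0 + 2.0 * np.cos(chi)**2))
           * np.exp((a / W2)**2 * (1.0 + 2.0 * np.sin(chi)**2)))
    Weff2 = 4.0 * a**2 / np.real(lambertw(arg))
    # centered-beam transmittance eta_0 (scaled Bessel functions avoid overflow)
    b = a**2 * np.abs(1.0 / W1**2 - 1.0 / W2**2)
    c = a**2 * (1.0 / W1**2 + 1.0 / W2**2)
    lam0, R0 = shape_funcs(a**2 * (1.0 / W1 - 1.0 / W2)**2)
    t0 = (W1 + W2)**2 / (np.abs(W1**2 - W2**2) + 1e-30) / R0
    eta0 = (1.0 - i0e(b) * np.exp(b - c)
            - 2.0 * (1.0 - np.exp(-0.5 * a**2 * (1.0 / W1 - 1.0 / W2)**2))
            * np.exp(-t0**lam0))
    lam, R = shape_funcs(4.0 * a**2 / Weff2)   # xi = 2 / W_eff
    return eta0 * np.exp(-((r0 / a) / R)**lam)

N = 20_000
r0 = rng.rayleigh(sigma_bw, N)           # isotropic wandering: |r_0| is Rayleigh
chi = rng.uniform(0.0, np.pi / 2.0, N)   # chi = phi - varphi_0, uniform
Th = rng.normal(mu_th, sig_th, (2, N))   # uncorrelated Theta_i (placeholder statistics)
eta = transmittance(r0, chi, W0 * np.exp(Th[0] / 2), W0 * np.exp(Th[1] / 2))

print(f"<eta> = {eta.mean():.3f}; a histogram of `eta` estimates the PDT")
```

A smooth PDT estimate then follows from a kernel-density fit of the `eta` samples, and the exceedance from their empirical distribution function.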
The log-normal distribution is quite popular for modeling atmospheric turbulence effects [@Andrews; @Andrews2; @Fante1; @Fante2; @Diament; @Perina; @Perina1973; @Milonni]. Usually this model is applied to describe intensity fluctuations at a single spatial point. In Fig. \[fig:PDTC\], the log-normal model is applied to the signal detection with a finite aperture; see Supplemental Material [@suppl]. The dashed line in Fig. \[fig:PDTC\] (a) shows that the log-normal distribution differs significantly from the measured PDT. Moreover, the log-normal PDT is not limited to the physically allowed interval $\eta\in[0,1]$. This feature is clearly seen in Fig. \[fig:PDTC\] (b) where the exceedance functions $\overline{\mathcal{F}}(\eta)$, i.e., the probability that the transmittance exceeds the value of $\eta$, are shown for the elliptic model, the beam-wandering model, and the log-normal model. As was shown in Refs. [@Semenov10; @Vasylyev2012], the tails of $\overline{\mathcal F}$ at large values of $\eta$ are important for preserving nonclassical properties of transmitted light; these tails are overestimated by the log-normal model. It has been shown in experiments with coherent light propagating through a $144\,\textrm{km}$ atmospheric channel on the Canary Islands [@Capraro] that the log-normal distribution in its physical domain shows good agreement with the experimental data under conditions of strong turbulence. In Fig. \[fig:PDTC\_StrongTurb\], we compare the PDTs derived from the elliptic-beam approximation with the ones obtained in the beam-wandering and the log-normal models. Although in this case we consider a short propagation distance, the turbulence is quite strong. Similar conditions may occur, e.g., for the case of near-to-ground propagation on a hot summer day. From Fig.
\[fig:PDTC\_StrongTurb\] one can clearly conclude that the beam-wandering model strongly differs from the log-normal distribution and, consequently, cannot properly describe the strong-turbulence scenario. However, the elliptic-beam model gives a reasonable agreement with the log-normal distribution in the physical domain of the latter. From this fact, one may conclude that our model consistently describes also the case of strong turbulence. A clear advantage of the elliptic-beam model is that the corresponding PDT does not attain nonzero values in the unphysical domain, $\eta{>}1$, which is the case for the log-normal distribution. Hence, the use of the elliptic-beam model gives physically consistent results, whereas the log-normal distribution may yield unphysical artifacts, e.g., the creation of photons by the atmosphere [@Perina1973]. Such artifacts may cause an overestimation of the security of quantum communication protocols. Finally, we note that in some cases beam wandering is suppressed by tracking procedures [@Ursin; @Nauerth]. Under such conditions, the beam-wandering model is no longer useful but the elliptic-beam model does apply. ![\[fig:PDTC\_StrongTurb\] The PDTs and the corresponding exceedances, similar to those shown in Fig. \[fig:PDTC\], but for the case of strong turbulence. Further parameters: wavelength $780\,\textrm{nm}$, initial spot radius $W_0{=}50\,\textrm{mm}$, propagation distance $2\,\textrm{km}$, Rytov parameter $\sigma_R^2{=}31.5$, aperture radius $a=150\,\textrm{mm}$, and no deterministic attenuation.](2.pdf) #### Application: quadrature squeezing.– {#application-quadrature-squeezing. .unnumbered} The PDT (\[PDTC\]) in the elliptic-beam approximation allows one to analyze the quantum properties of light transmitted through the turbulent atmosphere by means of the input-output relation (\[inout\]). As an example, we analyze the squeezing properties after a weak-to-moderate turbulent atmospheric channel.
We consider the $1.6\,\textrm{km}$ link in the city of Erlangen [@Peuntinger]. The transmitter generates squeezed light ($-2.4\,\textrm{dB}$) at $\lambda=780\,\textrm{nm}$ and sends it through the link with the Rytov parameter $\sigma_R^2=2.6$. The receiver detects $-0.95\,\textrm{dB}$ of squeezing. ![\[fig:sq\] Transmitted value of squeezing as a function of the postselection threshold $\eta_\textrm{min}$. Initially squeezed light (to $-2.4\,\textrm{dB}$, $\lambda=780\,\textrm{nm}$, spot radius $W_0=25\,\textrm{mm}$) is sent through a $1.6\,\textrm{km}$ atmospheric link ($\sigma_R^2=2.6$) and detected with an aperture radius of $a=75\,\textrm{mm}$. The deterministic attenuation is $1.9\,\textrm{dB}$. The output signal is squeezed by $-0.95\,\textrm{dB}$. With the postselection protocol, the squeezing value can be improved depending on the postselection threshold $\eta_\textrm{min}$. The theory is shown for the elliptic-beam, log-normal, and beam-wandering models, compared with the experimental results (the shaded area indicates the error bars) from Ref. [@Peuntinger]. ](3.pdf) Postselection of transmission events with $\eta\ge \eta_\textrm{min}$ yields larger detected values of the transmitted squeezing. In Fig. \[fig:sq\], we compare the values of detected squeezing as functions of the postselection threshold $\eta_\textrm{min}$, for the experimental values given in Ref. [@Peuntinger]. The beam-wandering model yields smaller values of postselected squeezing than detected in the experiment, as it does not properly describe the distribution tails for high values of $\eta$; cf. Fig. \[fig:PDTC\]. The postselected values of squeezing calculated within the elliptic-beam approximation agree very well within the error bars with the experimentally measured values. The shown log-normal model gives the correct values for the first two moments of $\eta$, but it differs in higher moments from the experimentally measured distribution.
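The role of the distribution tails in postselection can be illustrated numerically. The sketch below uses a toy transmittance model with hypothetical parameters (not the Erlangen-link data) and computes the postselected moments $\langle\eta\rangle$ and $\langle\sqrt{\eta}\rangle$ conditioned on $\eta\ge\eta_\textrm{min}$; such conditional moments are the quantities through which the fluctuating losses enter input-output relations of the type of Eq. (\[inout\]), and their growth with the threshold is the mechanism by which postselection can improve the detected squeezing.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy transmittance samples (hypothetical parameters, for illustration only):
# eta = eta0*exp(-r0^2), with a Rayleigh-distributed deflection r0.
r0 = rng.rayleigh(scale=0.4, size=100_000)
eta = 0.7 * np.exp(-r0**2)

def postselected_moments(eta, eta_min):
    """Moments <eta> and <sqrt(eta)>, conditioned on eta >= eta_min."""
    sel = eta[eta >= eta_min]
    if sel.size == 0:
        raise ValueError("no transmission events pass the threshold")
    return sel.mean(), np.sqrt(sel).mean(), sel.size / eta.size

for eta_min in (0.0, 0.3, 0.5):
    m1, mh, kept = postselected_moments(eta, eta_min)
    print(f"eta_min={eta_min:.1f}: <eta>={m1:.3f}, "
          f"<sqrt(eta)>={mh:.3f}, kept fraction={kept:.2f}")
```

A model that underestimates the high-$\eta$ tail of the distribution produces, by the same computation, too small postselected moments, consistent with the trend seen in Fig. \[fig:sq\] for the beam-wandering model.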
This feature allows one to obtain the correct value of squeezing for the transmitted signal from the log-normal model. However, this model completely fails to describe the postselection procedure, where the higher moments play a dominant role. #### Summary and Conclusions.– {#summary-and-conclusions. .unnumbered} We have introduced a model for the atmospheric turbulence effects on quantum light, which is based on an elliptic-beam approximation. Surprisingly, it yields a reasonable agreement with experiments for the conditions of weak-to-moderate turbulence. In this case, we get an excellent description of the transfer of squeezed light through a 1.6 km channel, analyzed with data postselection. For the case of strong turbulence, we have shown that our theory gives a reasonable agreement with the log-normal distribution. In experiments using a 144 km channel under strong turbulence conditions, the log-normal model also yields a proper description of the transmission of coherent light. Hence, our theory describes in a unified manner the quantum-light transfer through atmospheric channels under dissimilar turbulence conditions. The case of the transition regime of moderate-to-strong turbulence requires further research. The authors are grateful to P. Villoresi, G. Vallone, Ch. Marquardt, B. Heim, and C. Peuntinger for useful and enlightening discussions. The work was supported by the Deutsche Forschungsgemeinschaft through Project No. VO 501/21-1 and SFB 652, Project No. B12. R. Ursin *et al.*, [Entanglement-based quantum communication over 144 km]{}, Nature Phys. **3**, 481 (2007). T. Scheidl *et al.*, [Feasibility of 300 km quantum key distribution with entangled states]{}, New J. Phys. **11**, 085002 (2009). A. Fedrizzi, R. Ursin, T. Herbst, M. Nespoli, R. Prevedel, T. Scheidl, F. Tiefenbacher, T. Jennewein, and A. Zeilinger, [High-fidelity transmission of entanglement over a high-loss free-space channel]{}, Nature Phys. [**5**]{}, 389 (2009). I. Capraro, A.
Tomaello, A. Dall’Arche, F. Gerlin, R. Ursin, G. Vallone, and P. Villoresi, [Impact of Turbulence in Long Range Quantum and Classical Communications]{}, Phys. Rev. Lett. **109**, 200502 (2012). J. Yin *et al.*, [Quantum teleportation and entanglement distribution over 100-kilometre free-space channels]{}, Nature **488**, 185 (2012). X. Ma *et al.*, [Quantum teleportation over 143 kilometres using active feed-forward]{}, Nature **489**, 269 (2012). C. Peuntinger, B. Heim, Ch. Müller, Ch. Gabriel, Ch. Marquardt, and G. Leuchs, [Distribution of Squeezed States through an Atmospheric Channel]{}, Phys. Rev. Lett. **113**, 060502 (2014). S. Nauerth, F. Moll, M. Rau, Ch. Fuchs, J. Horwath, S. Frick, and H. Weinfurter, [Air-to-ground quantum communication]{}, Nature Phot. **7**, 382 (2013). J. Wang *et al.*, [Direct and full-scale experimental verifications towards ground-satellite quantum key distribution]{}, Nature Phot. **7**, 387 (2013). J.-P. Bourgoin *et al.*, [A comprehensive design and performance analysis of low Earth orbit satellite quantum communication]{}, New J. Phys. [**15**]{}, 023006 (2013). V. Tatarskii, *Effects of the Turbulent Atmosphere on Wave Propagation* (IPST, Jerusalem, 1972). A. Ishimaru, *Wave Propagation and Scattering in Random Media* (Academic Press, San Diego, 1978). L. Andrews, R. Phillips, and C. Hopen, *Laser Beam Scintillation with Applications* (SPIE Press, Washington, 2001). L. Andrews and R. Phillips, *Laser Beam Propagation through Random Media* (SPIE Press, Washington, 2005). R. L. Fante, [Electromagnetic beam propagation in turbulent media]{}, Proc. IEEE **63**, 1669 (1975). R. L. Fante, [Electromagnetic beam propagation in turbulent media: An update]{}, Proc. IEEE **68**, 1424 (1980). P. Diament and M. C. Teich, [Photodetection of Low-Level Radiation through the Turbulent Atmosphere]{}, J. Opt. Soc. Am. [**60**]{}, 1489 (1970). J.
Peřina, [On the Photon Counting Statistics of Light Passing through an Inhomogeneous Random Medium]{}, Czech. J. Phys. **22**, 1075 (1972). J. Peřina, V. Peřinová, M. C. Teich, and P. Diament, [Two Descriptions for the Photocounting Detection of Radiation Passed through a Random Medium: A Comparison for the Turbulent Atmosphere]{}, Phys. Rev. A **7**, 1732 (1973). P. Milonni, J. Carter, Ch. Peterson, and R. Hughes, [Effects of Propagation through Atmospheric Turbulence on Photon Statistics]{}, J. Opt. B **6**, S742 (2004). C. Paterson, [Atmospheric Turbulence and Orbital Angular Momentum of Single Photons for Optical Communication]{}, Phys. Rev. Lett. **94**, 153901 (2005). A. A. Semenov and W. Vogel, [Quantum Light in the Turbulent Atmosphere]{}, Phys. Rev. A **80**, 021802(R) (2009). A. A. Semenov and W. Vogel, [Entanglement Transfer through the Turbulent Atmosphere]{}, Phys. Rev. A **81**, 023835 (2010). D. Yu. Vasylyev, A. A. Semenov, and W. Vogel, [Toward Global Quantum Communication: Beam Wandering Preserves Nonclassicality]{}, Phys. Rev. Lett. **108**, 220501 (2012). R. J. Glauber, [Photon Correlations]{}, Phys. Rev. Lett. **10**, 84 (1963). R. J. Glauber, [Coherent and Incoherent States of the Radiation Field]{}, Phys. Rev. **131**, 2766 (1963). E. C. G. Sudarshan, [Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams]{}, Phys. Rev. Lett. **10**, 277 (1963). V. C. Usenko, B. Heim, C. Peuntinger, C. Wittmann, C. Marquardt, G. Leuchs, and R. Filip, [Entanglement of Gaussian states and the applicability to quantum key distribution over fading channels]{}, New J. Phys. **14**, 093048 (2012). R. Corless, G. Gonnet, D. Hare, D. Jeffrey, and D. Knuth, [On the Lambert W Function]{}, Adv. Comput. Math. **5**, 329 (1996). See Supplemental Material for detailed derivations of our analytic results. M. M. Agrest and M. S. Maximov, *Theory of Incomplete Cylindrical Functions and their Applications* (Springer, Berlin, 1971). K. V.
Mardia and P. E. Jupp, *Directional Statistics* (John Wiley & Sons, Chichester, 1999). V. P. Aksenov and V. L. Mironov, [Phase Approximation of the Huygens-Kirchhoff Method in Problems of Reflections of Optical Waves in the Turbulent Atmosphere]{}, J. Opt. Soc. Am. [**69**]{}, 1609 (1979). V. A. Banakh and V. L. Mironov, [Phase Approximation of the Huygens-Kirchhoff Method in Problems of Space-Limited Optical-Beam Propagation in Turbulent Atmosphere]{}, Opt. Lett. **4**, 259 (1979). V. L. Mironov and V. V. Nosov, [On the Theory of Spatially Limited Light Beam Displacement in a Randomly Inhomogeneous Medium]{}, J. Opt. Soc. Am. **67**, 1073 (1977). A. I. Kon, [Focusing of Light in a Turbulent Medium]{}, Radiophys. Quantum Electron. **13**, 43 (1970). G. P. Berman, A. A. Chumak, and V. N. Gorshkov, [Beam Wandering in the Atmosphere: The Effect of Partial Coherence]{}, Phys. Rev. E **76**, 056606 (2007). M. P. Wand and M. C. Jones, *Kernel Smoothing* (Chapman & Hall, New York, 1995). A. W. van der Vaart, *Asymptotic Statistics* (Cambridge University Press, Cambridge, 1998). Supplemental Material\ Atmospheric Quantum Channels with Weak and Strong Turbulence {#supplemental-material-atmospheric-quantum-channels-with-weak-and-strong-turbulence .unnumbered} ============================================================= **D. Vasylyev$^{1,2}$, A. A. Semenov$^{1,3}$, and W. Vogel$^1$**\ $^{1}$*Institut für Physik, Universität Rostock, Albert-Einstein-Straße 23, D-18059 Rostock, Germany*\ $^{2}$*Bogolyubov Institute for Theoretical Physics, NAS of Ukraine, Vulytsya Metrologichna 14-b, 03680 Kiev, Ukraine*\ $^{3}$*Institute of Physics, NAS of Ukraine, Prospect Nauky 46, 03028 Kiev, Ukraine* The supplement is structured as follows:\ In Sec. \[Sec:EllipticBeam\] we discuss the properties of Gaussian elliptical beams. In Sec. \[Sec:Transmittance\] we derive the analytic expression for the transmittance of the elliptical beam through the circular aperture. In Sec.
\[Sec:GasussianApproximation\] the statistical properties of the elliptical beam transmitted through turbulence are discussed in the Gaussian approximation. In Sec. \[Sec:Isotropy\] we discuss the simplifications which arise from the assumption that the atmospheric turbulence is isotropic. Here we derive the formulas that connect the statistical characteristics of the elliptical beam in the isotropic atmosphere with the field correlation functions. In Sec. \[Sec:PhaseAppr\] the phase approximation of the Huygens-Kirchhoff method is presented and the general expressions for field correlation functions are derived. In Sec. \[Sec:BW\] and in Sec. \[Sec:BeamShape\] we derive the means and (co)variances connected with beam wandering and beam shape deformation, respectively. These results are evaluated in the limits of weak and strong turbulence and are summarized in the table in Sec. \[Sec:CovMatrixEl\]. Finally, in Sec. \[Sec:LogNormal\] the log-normal distribution for the beam transmittance is considered. Elliptic beams {#Sec:EllipticBeam} ============== In this Section we discuss the properties of elliptical beams, which are crucial for the description of light transfer through the turbulent atmosphere. In the paraxial approximation the beam amplitude $u(\mathbf{r},z)$ satisfies the equation, cf. Ref. [@Fante1], $$\begin{aligned} 2ik\frac{\partial u(\mathbf{r},z)}{\partial z}+\Delta_\mathbf{r} u(\mathbf{r},z)+2k^2\delta n(\mathbf{r},z) u(\mathbf{r},z)=0,\label{waveEq}\end{aligned}$$ where $k$ is the wave number, $\delta n(\mathbf{r},z)$ is a small fluctuating part of the refractive index of air, and $\mathbf{r}{=}\left(x\,\,y\right)^T$ is the vector of transverse coordinates.
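A minimal numerical check of Eq. (\[waveEq\]) in the absence of turbulence ($\delta n=0$) can be done with a standard Fourier split-step propagator. The sketch below uses hypothetical dimensionless grid and beam parameters; it verifies that the norm of $u$ is preserved and that a collimated Gaussian beam spreads according to the known vacuum law $W(z)=W_0\sqrt{1+(2z/kW_0^2)^2}$.

```python
import numpy as np

# Hypothetical dimensionless parameters: waist W0 = 1, wave number k = 100.
W0, k = 1.0, 100.0
N, Lbox = 256, 16.0                       # transverse grid points and box size
dx = Lbox / N
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

# Collimated Gaussian input, normalized so that sum |u|^2 dx dy = 1.
u = np.sqrt(2.0 / (np.pi * W0**2)) * np.exp(-(X**2 + Y**2) / W0**2)

# Free-space paraxial step: 2ik du/dz + Laplacian_r u = 0 becomes, in
# Fourier space, u_hat -> u_hat * exp(-i |k_perp|^2 z / (2k)).
kx = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
z = k * W0**2 / 2.0                       # one Rayleigh range
u = np.fft.ifft2(np.fft.fft2(u) * np.exp(-1j * (KX**2 + KY**2) * z / (2.0 * k)))

I = np.abs(u) ** 2
norm = I.sum() * dx * dx                  # preserved by the unitary evolution
W = np.sqrt(4.0 * (X**2 * I).sum() * dx * dx)   # spot radius from S_xx = 4<x^2>
print(norm, W)                            # expect 1 and W0*sqrt(2)
```

After one Rayleigh range the measured spot radius equals $W_0\sqrt{2}$, and the norm stays at unity because the free-space step is a pure phase in Fourier space.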
The boundary condition in the transmitter plane $z=0$ for the initially Gaussian beam is given by $$\begin{aligned} u(\mathbf{r},z{=}0){=}u_0(\mathbf{r}){=}\sqrt{\frac{2}{\pi W_0^2}}\exp\Bigl[-\frac{1}{W_0^2}|\mathbf{r}|^2{-}\frac{ik }{2F}|\mathbf{r}|^2\Bigr].\label{Eq:BoundaryConditions}\end{aligned}$$ Here $W_0$ is the beam spot radius and $F$ is the wavefront curvature radius at the center of the transmitting aperture at $z{=}0$. The intensity of light is defined as $$I(\mathbf{r},z)=\left|u(\mathbf{r},z)\right|^2.\label{Eq:IntensityDef}$$ This function can be chosen in the normalized form $$\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, I(\mathbf{r},z)=1,$$ and Eq. (\[waveEq\]) implies that this norm is preserved for any $z$. For our purposes it is also important that $I(\mathbf{r},z){\geq}0$. Consider the transverse Fourier transform of the intensity, $$C(\mathbf{k},z)=\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, I(\mathbf{r},z)e^{i\mathbf{k}\cdot\mathbf{r}},\label{Eq:IntSpektrum}$$ where $\mathbf{k}{\cdot}\mathbf{r}$ denotes the scalar product of two vectors. Similarly to the cumulant expansion in probability theory, one writes $$\ln C(\mathbf{k},z)=i\mathbf{k}{\cdot}\mathbf{r}_0{-}\frac{1}{8}\mathbf{k}^\mathrm { T } \mathbf{S}\mathbf{k} + \ldots, \label{Eq:CumExpInt}$$ where $$\mathbf{r}_0=\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\,\mathbf{r}\, I(\mathbf{r},z)\label{Eq:BeamCentroid}$$ is the beam-centroid position, $$\begin{aligned} \mathbf{S}&=&\left( \begin{array}{cc} S_{xx} & S_{xy} \\ S_{xy} & S_{yy} \end{array} \right)\label{Eq:MatrixS_Definition}\\ &=&4\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r} \,\left[(\mathbf{r}-\mathbf{r}_0)(\mathbf{r}-\mathbf{r}_0)^\mathrm{T} \right ] \, I(\mathbf{r},z)\nonumber\end{aligned}$$ is the spot-shape matrix. Within the elliptic-beam approximation we suppose that the expansion (\[Eq:CumExpInt\]) in the aperture plane can be truncated after the second (Gaussian) term. Substituting it into the inversion of Eq.
(\[Eq:IntSpektrum\]), $$I(\mathbf{r},z)=\frac{1}{\left(2\pi\right)^2}\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{k}\, C(\mathbf{k},z) e^{-i\mathbf{k}{\cdot}\mathbf{r}},\label{Eq:IntSpektrumInversion}$$ one gets for the intensity of the elliptic beam $$\begin{aligned} I(\mathbf{r},z)=\frac{2}{\pi\sqrt{\det\mathbf{S}}}\exp\Bigl[-2({\mathbf{r}}{-}{ \mathbf{r}}_0)^{\rm T}{\mathbf{S}}^{-1}({\bf r}{-}{\bf r}_0)\Bigr].\label{Eq:Intens}\end{aligned}$$ In the particular case, when the spot-shape matrix is proportional to the identity matrix, this expression reduces to the intensity of a circular Gaussian beam. ![\[fig:Aperture\] (Color online) The aperture of radius $a$ and the elliptical beam profile with the half-axis $W_{1}$ rotated by the angle $\phi$ relative to the $x$-axis and by the angle $\chi$ relative to the $\mathbf{r}_0$-associated axis are shown. The beam centroid is situated at the point $\mathbf{r}_0$ with the polar coordinates $(r_0,\varphi_0)$. The $x^\prime$-$y^\prime$ coordinate system is associated with the elliptical beam centroid.](Aperture.pdf) Two eigenvalues, $W_1^2$ and $W_2^2$, of the spot-shape matrix $\mathbf{S}$ correspond to the two semi-axes of the beam ellipse. They are related to the elements of the matrix $\mathbf{S}$ as $$\begin{aligned} &S_{xx}=W_1^2\cos^2\!\phi+W_2^2\sin^2\!\phi,\label{Eq:Sxx}\\ &S_{yy}=W_1^2\sin^2\!\phi+W_2^2\cos^2\!\phi,\label{Eq:Syy}\\ &S_{xy}=\frac{1}{2}\Bigl(W_1^2-W_2^2\Bigr)\sin2\phi,\label{WxWyrelation}\end{aligned}$$ where $\phi{\in}\left[0,\pi/2\right)$ is the angle between the $x$-axis and the ellipse semi-axis related to $W_1^2$. The set of three parameters $\left(W_1^2,W_2^2,\phi\right)$ uniquely defines all possible orientations of the ellipse. The introduced representation of the ellipse assumes that we do not distinguish between the major and minor semi-axes of the ellipse.
The semi-axis related to $W_1^2$ is defined as being situated in the first and third quadrants of the $x^\prime$-$y^\prime$ coordinate system, cf. Fig. \[fig:Aperture\], while that related to $W_2^2$ is in the second and fourth ones. Within this definition the values of $W_1^2$ and $W_2^2$ are not ordered. Aperture transmittance for elliptic beams {#Sec:Transmittance} ========================================= In this Section we derive in detail an analytical approximation for the transmittance of elliptic beams through a circular aperture. For the aperture situated at the plane $z{=}L$ the transmittance is determined via the expression, cf. Ref. [@Vasylyev2012], $$\eta=\int_\mathcal{A}{{\rm{d}}}^2 \mathbf{r}\, I(\mathbf{r},L),\label{Eq:TransmittanceGeneral}$$ where $I(\mathbf{r},L)$ is the normalized intensity defined by Eq. (\[Eq:IntensityDef\]) and the integration is performed over the aperture opening area $\mathcal{A}$. Substituting Eq. (\[Eq:Intens\]) into Eq. (\[Eq:TransmittanceGeneral\]) and considering the structure of the spot-shape matrix $\mathbf{S}$, cf. Eqs.
(\[Eq:MatrixS\_Definition\]) and (\[Eq:Sxx\])-(\[WxWyrelation\]), one gets for the transmittance, $$\begin{aligned} \eta&=\frac{2}{\pi W_1 W_2}\int\limits_0^a{{\rm{d}}}r\,r\int\limits_0^{2\pi}{{\rm{d}}}\varphi\, e^{-2A_1\bigl(r\cos\varphi-r_0\bigr)^2}\nonumber\\ &\times e^{-2A_2r^2\sin^2\varphi} e^{-2A_3\bigl(r\cos\varphi-r_0\bigr)r\sin\varphi}.\label{Tegeneral1}\end{aligned}$$ Here $a$ is the aperture radius, $r$, $\varphi$ are polar coordinates for the vector $\mathbf{r}$, $$\begin{aligned} x{=}r\cos\varphi,\label{Eq:x}\\ y{=}r\sin\varphi,\label{Eq:y}\end{aligned}$$ $r_0$, $\varphi_0$ are polar coordinates for the vector $\mathbf{r}_0$, $$\begin{aligned} x_0{=}r_0\cos\varphi_0,\label{Eq:x0}\\ y_0{=}r_0\sin\varphi_0,\label{Eq:y0}\end{aligned}$$ $$\begin{aligned} A_1=\Bigl(\frac{\cos^2(\phi-\varphi_0)}{W_1^2}+\frac{\sin^2(\phi-\varphi_0)}{W_2^2}\Bigr),\end{aligned}$$ $$\begin{aligned} A_2=\Bigl(\frac{\sin^2(\phi-\varphi_0)}{W_1^2}+\frac{\cos^2(\phi-\varphi_0)}{W_2^2}\Bigr),\end{aligned}$$ $$\begin{aligned} A_3=\Bigl(\frac{1}{W_1^2}-\frac{1}{W_2^2}\Bigr)\sin2(\phi-\varphi_0),\end{aligned}$$ and $\phi$ is defined modulo $\pi/2$ such that $\eta$ in Eq. (\[Tegeneral1\]) is a $\pi/2$-periodic function of $\phi$. For the given angle $\chi=\phi-\varphi_0$ the transmittance $\eta$ as a function of $r_0$ behaves similarly to the transmittance of a circular Gaussian beam with a certain effective spot radius $W_\textrm{eff}\!\left(\chi\right)$. Applying the method developed in Ref.
[@Vasylyev2012] one can write the corresponding approximation, $$\begin{aligned} \eta=\eta_{0} \exp\left\{-\left[\frac{r_0/a} {R\left(\frac{2}{W_{\rm eff}\left(\phi-\varphi_0\right)}\right)}\right]^{\lambda\bigl(\frac{2}{W_{\rm eff}\left(\phi-\varphi_0\right)}\bigr)}\right\}.\label{App:Tapprox}\end{aligned}$$ Here $\eta_0$ is the beam transmittance at $r_0{=}0$, and $$\begin{aligned} R\left(\xi\right)=\Bigl[\ln\Bigl(2\frac{1-\exp[-\frac{1}{2} a^2 \xi^2]}{1-\exp[-a^2\xi^2]{{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\Bigr)\Bigr]^{-\frac{1}{ \lambda(\xi)}},\label{Eq:R}\end{aligned}$$ $$\begin{aligned} \lambda\left(\xi\right)&=2a^2\xi^2\frac{e^{-a^2\xi^2}{{\rm{I}}}_1(a^2\xi^2)}{1-\exp[ -a^2\xi^2 ] {{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\nonumber\\ &{\times}\Bigl[\ln\Bigl(2\frac{1-\exp[-\frac{1}{2} a^2 \xi^2]}{1-\exp[-a^2\xi^2]{{\rm{I}}}_0\bigl(a^2\xi^2\bigr)}\Bigr)\Bigr]^{-1}\label{Eq:lambda}\end{aligned}$$ are scale and shape functions, respectively. The transmittance $\eta_0$ is obtained from Eq. (\[Tegeneral1\]) by setting the beam-centroid position $r_0=0$, $$\begin{aligned} \eta_0&=\frac{2}{\pi W_1 W_2}\int\limits_0^a {{\rm{d}}}r\, r\int\limits_0^{2\pi}{{\rm{d}}}\varphi \,e^{-\bigl\{\frac{1}{W_1^2}+\frac{1}{W_2^2}\bigr\}r^2}\nonumber\\ &\qquad\times e^ {-\left|\frac{1}{W_1^2}-\frac{1}{W_2^2}\right|r^2\cos2(\varphi-\widetilde\varphi)}\nonumber\\ &=\frac{2}{|W_1W_2|}\int\limits_0^{a^2}{{\rm{d}}}t\, e^{-\bigl\{\frac{1}{W_1^2}+\frac{1}{W_2^2}\bigr\}t}\, {{\rm{I}}}_0\Bigl(\left|\frac{1}{W_1^2}{-}\frac{1}{W_2^2}\right|t\Bigr),\end{aligned}$$ where $\widetilde\varphi{=}\frac{1}{2}\arctan[A_3/(A_1{-}A_2)]$. It is expressed in terms of the incomplete Lipschitz-Hankel integral, cf. Ref.
[@Agrest], as $$\begin{aligned} {{\rm{I}}}_{e_0}(a,z)=\int_0^z {{\rm{d}}}t\, e^{-a t}{{\rm{I}}}_0(t),\end{aligned}$$ which results in $$\begin{aligned} \eta_0=\frac{2 W_1 W_2}{|W_1^2-W_2^2|}{{\rm{I}}}_{e_0}\Bigl(\frac{W_1^2+W_2^2}{|W_1^2-W_2^2|},a^2 \frac{|W_1^2-W_2^2|}{W_1^2W_2^2}\Bigr).\label{T0eIe0}\end{aligned}$$ The incomplete Lipschitz-Hankel integral can be evaluated numerically. However, using the relation between the incomplete Lipschitz-Hankel (${{\rm{I}}}_{e_0}$) and Weber ($\widetilde {{\rm{Q}}}_0$) integrals [@Agrest], we can rewrite Eq. (\[T0eIe0\]) as $$\begin{aligned} &\eta_0=1-e^{-a^2\frac{W_1^2+W_2^2}{W_1^2W_2^2}} \Bigl[{{\rm{I}}}_0\Bigl(a^2\frac{|W_1^2-W_2^2|}{W_1^2W_2^2}\Bigr)\nonumber\\ &+2\widetilde {{\rm{Q}}}_0\Bigl(a^2\frac{(W_1+W_2)^2}{2W_1^2W_2^2},a^2 \frac{|W_1^2-W_2^2|}{W_1^2W_2^2}\Bigr)\Bigr].\label{T0eQ0}\end{aligned}$$ In Ref. [@Vasylyev2012], an analytical approximation for $\widetilde{{\rm{Q}}}_0$ is derived. Applying here the same procedure for the approximation of the incomplete Weber integral in Eq. (\[T0eQ0\]) one obtains $$\begin{aligned} \eta_0&{=}1{-}{{\rm{I}}}_0\Bigl(a^2\frac{W_1^2{-}W_2^2}{W_1^2W_2^2}\Bigr)e^{-a^2\frac{W_1^2{+}W_2^2}{W_1^2W_2^2}}\nonumber\\ &{-}2\left[1{-}e^{-\frac{a^2}{2}\left(\!\frac{1}{W_1}{-}\frac{1}{W_2}\right)^2}\!\right]\nonumber\\ &\qquad\times\exp\Biggl[\!{-}\Biggl\{\!\frac{\frac{(W_1+W_2)^2}{|W_1^2-W_2^2|}}{R\left(\frac{1}{W_1}{-}\frac{1}{W_2}\right)}\!\Biggr\} ^{\lambda\left(\!\frac{1}{W_1}{-}\frac{1}{W_2}\right)}\Biggr],\label{T0approx}\end{aligned}$$ where $R\!\left(\xi\right)$ and $\lambda\!\left(\xi\right)$ are defined by Eqs. (\[Eq:R\]) and (\[Eq:lambda\]), respectively. For the case when $W_1^2{=}W_2^2{=}W^2$, Eqs. (\[T0eQ0\]) and (\[T0approx\]) reduce to $\eta_0{=}1{-}e^{-2a^2/W^2}$, which is the maximal transmittance of the circular beam, cf. Ref. [@Vasylyev2012].
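The expression for $\eta_0$ can be validated numerically. The sketch below (with hypothetical beam and aperture parameters) evaluates the one-dimensional Lipschitz-Hankel-type integral representation of $\eta_0$, compares it with a direct two-dimensional integration of the centred elliptic-Gaussian intensity over the aperture, and checks the circular limit $\eta_0=1-e^{-2a^2/W^2}$.

```python
import numpy as np
from scipy.integrate import quad, dblquad
from scipy.special import i0

def eta0_1d(W1, W2, a):
    """eta_0 via the incomplete Lipschitz-Hankel-type 1D integral."""
    p, q = 1.0 / W1**2, 1.0 / W2**2
    val, _ = quad(lambda t: np.exp(-(p + q) * t) * i0(abs(p - q) * t),
                  0.0, a**2)
    return 2.0 / (W1 * W2) * val

def eta0_2d(W1, W2, a):
    """Direct integration of the centred elliptic-Gaussian intensity
    over the circular aperture of radius a."""
    I = lambda y, x: (2.0 / (np.pi * W1 * W2)
                      * np.exp(-2.0 * x**2 / W1**2 - 2.0 * y**2 / W2**2))
    val, _ = dblquad(I, -a, a,
                     lambda x: -np.sqrt(a**2 - x**2),
                     lambda x: np.sqrt(a**2 - x**2))
    return val

W1, W2, a = 1.2, 0.8, 1.0                  # hypothetical parameters
print(eta0_1d(W1, W2, a), eta0_2d(W1, W2, a))
# Circular limit W1 = W2 = W reduces to 1 - exp(-2 a^2 / W^2).
W = 1.0
print(eta0_1d(W, W, a), 1.0 - np.exp(-2.0 * a**2 / W**2))
```

Both routes agree to quadrature accuracy, which confirms the reduction of the aperture integral to the single integral over $t=r^2$.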
In order to get an approximate value for the effective spot radius $W_{\rm eff}\!\left(\chi\right)$ we assume that the intensity of the corresponding circular beam is equal to the intensity of the elliptic beam at the aperture plane, i.e. $$\begin{aligned} &\frac{1}{W_\textrm{eff}^2\!\left(\chi\right)} e^{-\frac{2}{W_{\rm eff}^2\!\left(\chi\right)}(r^2+r_0^2+2\,r\, r_0 \cos\varphi)}=\frac{1}{W_1W_2}\nonumber\\ &\times e^{-2 A_1(\chi) r_0^2} e^{2r_0 r \bigl\{2A_1 (\chi) \cos\varphi+A_3(\chi)\sin\varphi\bigr\}}\nonumber\\ &\quad\times e^{-2r^2\bigl\{A_2(\chi)+\left[A_1(\chi)-A_2(\chi)\right]\cos^2\varphi+ \frac{A_3(\chi)}{2}\sin 2\varphi\bigr\}}.\label{Weffdeterm}\end{aligned}$$ In the most general case this equality cannot be satisfied exactly. However, we can find such a value of $W_\textrm{eff}\!\left(\chi\right)$ that Eq. (\[Weffdeterm\]) is fulfilled approximately. For this purpose we expand both sides of this equation in a series with respect to $e^{i\varphi}$. Then we equate the zeroth-order terms of these expansions at the point $r{=}r_0{=}a$. This results in the expression $$\begin{aligned} &4\frac{a^2}{W_{\rm eff}^2(\chi)}-\ln\Bigl[\frac{W_{\rm eff}^2(\chi)}{a^2}\Bigr]-2a^2\left[\frac{1}{W_1^2}+\frac{1}{W_2^2}\right]\nonumber\\ &{-}a^2\left[\frac{1}{W_1^2}{-}\frac{1}{W_2^2}\right]\cos2\chi{+}\ln \left(\frac{W_1W_2}{a^2}\right)=0.\end{aligned}$$ Solving this equation with respect to $W_\textrm{eff}\left(\chi\right)$, one gets $$\begin{aligned} W_\textrm{eff}^2\left(\chi\right)&{=}4a^2\Bigl[\mathcal{W}\Bigl(\frac{4a^2}{ W_1W_2 } e^{\frac{a^2}{W_1^2}\bigl\{1+2\cos^2\!\chi\bigr\}}\nonumber\\ &\qquad\qquad\times e^ {\frac{a^2}{W_2^2}\bigl\{1+2\sin^2\!\chi\bigr\}}\Bigl)\Bigr]^{-1},\label{App:Weff}\end{aligned}$$ where $\mathcal{W}(x)$ is the Lambert $W$ function [@Corless]. In Fig. \[fig:T\] we compare the transmittance $\eta$ obtained by numerical integration of Eq. (\[Tegeneral1\]) with its analytical approximation. The approximation, cf. Eq.
(\[App:Tapprox\]), gives reasonable accuracy, especially in the case of small beam ellipticity. It is also important to note that $W_\textrm{eff}^2\left(\phi-\varphi_0\right)$ and $\eta$, cf. Eqs. (\[App:Weff\]) and (\[T0approx\]), respectively, are $\pi/2$-periodic functions of $\phi$, since this angle is defined modulo $\pi/2$. ![\[fig:T\] (Color online) The transmittance of the elliptical beam (half-axes $|W_1|$, $|W_2|$) through the circular aperture (radius $a$) as a function of the beam-centroid displacement $r_0$: (a) $|W_1|=0.2a, |W_2|{=}0.1a$, $\chi={\pi}/{3}$; (b) $|W_1|{=}a, |W_2|{=}0.9a$, $\chi={\pi}/{4}$; (c) $|W_1|{=}1.8a, |W_2|{=}1.7a$, $\chi={\pi}/{5}$. The solid line for $\eta$ is obtained by numerical calculation, the dashed line represents the analytical approximation, cf. Eq. (\[App:Tapprox\]).](Fig1New.pdf) Gaussian approximation {#Sec:GasussianApproximation} ====================== In this Section we discuss in detail the statistical properties of elliptic beams and the applicability of the Gaussian approximation. Any spot in the elliptic-beam approximation at the aperture plane is uniquely described by the set of five parameters $(x_0,y_0,W_1^2,W_2^2,\phi)$. While the beam passes through the turbulent atmosphere, these parameters change randomly. Each part of the path contributes slightly to these values. It is also important to note that these parameters can be correlated. Random fluctuations of the beam-centroid position $\mathbf{r}_0$, i.e. the parameters $x_0$ and $y_0$, lead to the effect of beam wandering. These parameters can be considered as affected by an additive noise during the propagation. A large number of small additive contributions is a good argument for using the Gaussian approximation for the beam-centroid position, cf. Ref. [@Vasylyev2012]. Wrapped Gaussian model for $\phi$ --------------------------------- Similar arguments work for the angle $\phi$.
This parameter can also be considered as affected by a large number of small additive contributions. An important difference is that the angle $\phi$ is a $\pi/2$-periodic variable. For this reason, one should use in this case the wrapped Gaussian distribution, cf. Ref. [@Mardia], $$\rho\!\left(\phi\right)=\frac{1}{\sqrt{2\pi}\,\sigma_\phi}\sum\limits_{k=-\infty}^{+\infty} \exp\left[-\frac{\left(\phi-\mu_\phi+\frac{\pi}{2}k\right)^2}{2\sigma_\phi^2}\right], \label{Eq:WrappedGaussian}$$ where $\mu_\phi$ is the mean direction and $\sigma_\phi$ is the unwrapped standard deviation. For $\sigma_\phi{\rightarrow}+\infty$ Eq. (\[Eq:WrappedGaussian\]) becomes the probability density of the uniform distribution. Multiplicative-noise model for $W_{i}^2$ ---------------------------------------- In this model one assumes that each small $k\textrm{th}$ part of the atmospheric channel multiplicatively changes the values of $W_{i}^2$, $i{=}1,2$, with the factor $\varepsilon_{i}^{\, k}\in\mathbb{R}^{+}$. As a result, at the aperture plane the value of $W_{i}^2$ is $$\begin{aligned} W_{i}^2=W_0^2\prod\limits_{k=1}^N\varepsilon_{i}^{\, k},\qquad i{=}1,2.\label{Eq:MultiplicativeNoise}\end{aligned}$$ The large number $N$ of small random contributions gives a good argument for assuming $W_{i}^2$ to be log-normally distributed. Let us introduce the random parameters $$\begin{aligned} \Theta_{i}=\ln\frac{W_{i}^2}{W_0^2}.\label{Eq:Theta}\end{aligned}$$ Within the framework of the considered model these parameters obey a two-fold normal distribution. For the complete characterization of this distribution we need the means and the (co)variances of $\Theta_{i}$.
They can be expressed in terms of the means and the (co)variances of $W_{i}^2$ as $$\begin{aligned} \langle \Theta_{i}\rangle=\ln\left[\frac{\langle W_{i}^2\rangle}{W_0^2\left(1+ \frac{\langle (\Delta W_{i}^2)^2\rangle}{\langle W_{i}^2\rangle^2}\right)^{1/2}}\right],\label{App:Eq:ThetaMean}\end{aligned}$$ $$\begin{aligned} \langle \Delta\Theta_i\Delta\Theta_j\rangle= \ln\left(1+ \frac{\langle \Delta W_i^2 \Delta W_j^2\rangle}{\langle W_i^2\rangle\langle W_j^2\rangle}\right),\quad i,j=1,2\label{App:Eq:ThetaCovariances}\end{aligned}$$ which can be used for the corresponding calculations. Isotropy of turbulence {#Sec:Isotropy} ====================== In this Section we discuss simplifications which follow from the assumption that the atmospheric turbulence is isotropic. We also assume that $$\begin{aligned} \langle\mathbf{r}_0\rangle{=}0,\label{Eq:ZeroBeamCentroid}\end{aligned}$$ i.e., beam-wandering fluctuations are centered at the reference-frame origin. We consider the field intensity at the aperture plane, $I(\mathbf{r},L)$, as a stochastic field characterized by the probability density functional $\rho\left[I(\mathbf{r},L)\right]$. The above assumptions mean that $$\begin{aligned} \rho\left[I(O\,\mathbf{r},L)\right]=\rho\left[I(\mathbf{r},L)\right], \label{Eq:Isotropy}\end{aligned}$$ where $O$ is a representation of the $O(2)$ group. In the following we consider important consequences of Eqs. (\[Eq:ZeroBeamCentroid\]) and (\[Eq:Isotropy\]). Uniform distribution for the angle $\phi$ ----------------------------------------- A clear consequence of the isotropy assumption is that the angle parameter $\phi$ is uniformly distributed. This follows from Eq. (\[Eq:Isotropy\]). Indeed, according to this requirement the probability density $\rho(\phi)$ does not depend on the choice of the reference frame, i.e.
for any angle $\zeta$ $$\begin{aligned} \rho(\phi+\zeta)=\rho(\phi).\end{aligned}$$ This equation holds true only for the uniform distribution. For details of circular distributions see Ref. [@Mardia]. Correlations between linear and angle parameters ------------------------------------------------ Let $\mathbf{v}$ be a random vector consisting of variables $v_i$ with support $\mathbb{R}$, $$\begin{aligned} \mathbf{v}=\left(\begin{array}{cccc} x_0&y_0&\Theta_1&\Theta_2 \end{array}\right)^\mathrm{T}.\label{Eq:vMultNoise}\end{aligned}$$ The parameters $v_i,\, i{=}1,\ldots,4$ of Eq. (\[Eq:vMultNoise\]) and the angle parameter $\phi$ are distributed according to the two-fold normal distribution, which is wrapped for $\phi$, cf. Section \[Sec:GasussianApproximation\], $$\begin{aligned} \rho\left(v_i,\phi\right)=\frac{1}{2\pi\sqrt{\det\Sigma_{v_i,\phi}}} \sum\limits_{k=-\infty}^{+\infty} \exp\left(-\frac{1}{2}\boldsymbol{\nu}_k^\mathrm{T}\,\Sigma_{v_i,\phi}^{-1}\, \boldsymbol{\nu}_k\right), \label{Eq:TwoFoldWrapped}\end{aligned}$$ where $$\begin{aligned} \boldsymbol{\nu}_k=\left(\begin{array}{cc} v_i-\langle v_i\rangle&\phi-\mu_\phi+\frac{\pi}{2}k \end{array}\right)^\mathrm{T},\end{aligned}$$ $\mu_\phi$ is the mean direction of $\phi$, $$\begin{aligned} \Sigma_{v_i,\phi}=\left(\begin{array}{cc} \sigma_{v_i}^2&s\sigma_{v_i}\sigma_\phi\\ s\sigma_{v_i}\sigma_\phi&\sigma_\phi^2 \end{array}\right)\end{aligned}$$ is the covariance matrix, $\sigma_{v_i}^2$ is the variance of $v_i$, $\sigma_\phi^2$ is the unwrapped variance of $\phi$, and $s$ is the correlation coefficient. The considered probability distribution can also be rewritten in the form, cf. Ref. [@Mardia], $$\begin{aligned} \rho&\left(v_i,\phi\right)= \frac{1}{\sqrt{2\pi}\sigma_{v_i}}e^{-\frac{1}{2}\frac{(v_i-\langle v_i \rangle)^2}{\sigma_{v_i}^2}}\frac{2}{\pi} \Bigl\{1\label{wrapped}\\ &{+}2\sum\limits_{n=1}^\infty e^{-8(1{-}s^2)\sigma_\phi^2n^2}\!
\cos\Bigl[4n\bigl(\phi{-}\mu_\phi{-}s\frac{\sigma_\phi}{\sigma_{v_i}}[v_i{-}\langle v_i\rangle]\bigr)\Bigr]\Bigr\}.\nonumber\end{aligned}$$ As already shown, in the case of isotropic turbulence the marginal distribution for $\phi$ is uniform. This corresponds to the case of $\sigma_\phi^2{\rightarrow}+\infty$. If the correlation is imperfect, i.e. $s^2{\neq}1$, Eq. (\[wrapped\]) then factorizes into the normal distribution for $v_i$ and the uniform distribution for $\phi$, $$\begin{aligned} \rho\left(v_i,\phi\right)= \frac{1}{\sqrt{2\pi}\sigma_{v_i}}e^{-\frac{1}{2}\frac{(v_i-\langle v_i \rangle)^2}{\sigma_{v_i}^2}}\frac{2}{\pi},\quad i{=}1,...,4.\label{Eq:GaussUniform}\end{aligned}$$ Hence, for isotropic turbulence, correlations between the angle $\phi$ and the linear parameters vanish. Correlations between beam-centroid position and spot-shape parameters {#Sec:CorrR0W} --------------------------------------------------------------------- Consider the random variables $\Theta_{i}$, $i{=}1,2$, which describe the spot shape, cf. Eq. (\[Eq:Theta\]). We will be interested in the correlations $\langle \Delta \Theta_{i}\,\Delta\mathbf{r}_0\rangle$. Under the considered assumption, $$\begin{aligned} \langle \Delta \Theta_{i}\,\Delta \mathbf{r}_0\rangle=\langle \Theta_{i}\,\mathbf{r}_0\rangle,\qquad i{=}1,2,\end{aligned}$$ because the beam centroid fluctuates around the reference-frame origin, cf. Eq. (\[Eq:ZeroBeamCentroid\]). By using the definition of $\mathbf{r}_0$, cf. Eq. (\[Eq:BeamCentroid\]), this correlation can be written as $$\begin{aligned} \langle \Delta \Theta_{i}\,\Delta\mathbf{r}_0\rangle =\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\,\mathbf{r}\, \langle\Theta_{i}\, I(\mathbf{r},L)\rangle.\label{Eq:CorrXiX0}\end{aligned}$$ The assumption of isotropy, cf. Eq. (\[Eq:Isotropy\]), implies that $\langle\Theta_{i}\, I(\mathbf{r},L)\rangle$ is invariant with respect to rotations in the $(x,y)$ plane.
Hence, this function is centrally symmetric. This leads to the conclusion that $$\begin{aligned} \label{Thetacor} \langle \Delta\Theta_{i}\,\Delta\mathbf{r}_0\rangle =0,\end{aligned}$$ because the integral in Eq. (\[Eq:CorrXiX0\]) vanishes. We assume that $\Theta_{i}$ and $\mathbf{r}_0$ are Gaussian variables, cf. Section \[Sec:GasussianApproximation\]. Together with Eq. (\[Thetacor\]) this yields $$\begin{aligned} \langle F(\Theta_{i})G(\mathbf{r}_0)\rangle=\langle F(\Theta_{i})\rangle\,\langle G(\mathbf{r}_0)\rangle.\end{aligned}$$ Here $F$ and $G$ are arbitrary functions. Moments and (co)variances of $W_{i}^2$ -------------------------------------- In Section \[Sec:GasussianApproximation\] it has been shown that the characterization of probability distributions for elliptic beams requires, among others, the first and second moments of $W_{i}^2$, $i{=}1,2$, cf. Eqs. (\[App:Eq:ThetaMean\]) and (\[App:Eq:ThetaCovariances\]). In general, the calculation of these moments is a complicated task, which requires non-Gaussian functional integration. Here we will show that the assumption of turbulence isotropy substantially simplifies this problem, such that the moments are expressed in terms of field correlation functions of the second and fourth orders. ### First moments of $W_{i}^2$ We start by averaging the elements of the matrix $\mathbf{S}$, cf. Eq. (\[Eq:MatrixS\_Definition\]), over the atmosphere states, $$\begin{aligned} \langle &S_{xx}\rangle{=}\nonumber\\ &4\left[\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, x^2 \Gamma_2\! \left(\mathbf{r};L\right){-} \int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1x_2 \Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right],\label{Eq:SxxMean}\\ \langle &S_{yy}\rangle{=}\nonumber\\ &4\left[\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, y^2 \Gamma_2\!
\left(\mathbf{r};L\right){-} \int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,y_1y_2 \Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right],\label{Eq:SyyMean}\\ \langle &S_{xy}\rangle{=}\nonumber\\ &4\left[\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, xy \Gamma_2\! \left(\mathbf{r};L\right){-} \int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1y_2 \Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right].\label{Eq:SxyMean}\end{aligned}$$ Here $$\begin{aligned} \Gamma_2\!\left(\mathbf{r};z\right)=\left\langle I(\mathbf{r},z)\right\rangle= \left\langle u^\ast(\mathbf{r},z)u(\mathbf{r},z)\right\rangle,\label{Eq:Gamma2}\end{aligned}$$ $$\begin{aligned} \Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;z\right)&=\left\langle I(\mathbf{r}_1,z)I(\mathbf{r}_2,z)\right\rangle\label{Eq:Gamma4}\\&= \left\langle u^\ast(\mathbf{r}_1,z)u(\mathbf{r}_1,z) u^\ast(\mathbf{r}_2,z)u(\mathbf{r}_2,z)\right\rangle\nonumber \end{aligned}$$ are the field correlation functions of the second and fourth orders, respectively. The isotropy assumption, cf. Eq. (\[Eq:Isotropy\]), results in the equalities $$\begin{aligned} &\langle S_{xx}\rangle{=}\langle S_{yy}\rangle,\label{Eq:SxxEqSyy}\\ &\langle S_{xy}\rangle{=}0,\label{Eq:SxyEq0}\end{aligned}$$ which mean that the averaged beam has a circular shape. Equation (\[Eq:SxyEq0\]) is a consequence of the fact that, due to the turbulence isotropy, $\Gamma_2\!\left(\mathbf{r};L\right)$ and $\int_{\mathbb{R}^2}{{\rm{d}}}x_2{{\rm{d}}}y_1\,x_1y_2 \Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)$ are symmetric in the $(x,y)$ and $(x_1,y_2)$ planes, respectively. This symmetry implies that the integrals in Eq. (\[Eq:SxyMean\]) vanish. Combining Eqs. (\[WxWyrelation\]) and (\[Eq:SxyEq0\]) one gets $$\begin{aligned} \left\langle W_1^2\right\rangle=\left\langle W_2^2\right\rangle,\end{aligned}$$ where we have used the fact that the angle $\phi$ does not correlate with $W_{i}^2$, cf. Eq.
(\[Eq:GaussUniform\]). Similarly, averaging Eqs. (\[Eq:Sxx\]) and (\[Eq:Syy\]) one gets $$\begin{aligned} \left\langle W_{1/2}^2 \right\rangle=\left\langle S_{xx/yy} \right\rangle. \label{Eq:W12EqSxxyy} \end{aligned}$$ This equation together with Eqs. (\[Eq:SxxMean\]) and (\[Eq:SyyMean\]) expresses the first moments of $W_{i}^2$ in terms of the field correlation functions $\Gamma_2$ and $\Gamma_4$. ### Second moments of $W_{i}^2$ Similar arguments allow us to express the second moments of $W_{i}^2$, $i{=}1,2$, in terms of field correlation functions. For this purpose we multiply Eq. (\[WxWyrelation\]) by $(W_1^2+W_2^2)$ and average it, $$\begin{aligned} \left\langle S_{xy}W_1^2\right\rangle+\left\langle S_{xy}W_2^2\right\rangle= \frac{1}{2}\Bigl(\left\langle W_1^4\right\rangle- \left\langle W_2^4\right\rangle\Bigr) \left\langle\sin2\phi\right\rangle.\label{Eq:W4Derivation1} \end{aligned}$$ Here $$\begin{aligned} \left\langle S_{xy}W_{i}^2\right\rangle&= 4\left[\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\, xy \left\langle W_{i}^2 I(\mathbf{r},L)\right\rangle\right.\label{Eq:W4Derivation2}\\ &\left.{-}\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1y_2 \left\langle W_{i}^2 I(\mathbf{r}_1,L)I(\mathbf{r}_2,L)\right\rangle\right].\nonumber \end{aligned}$$ The isotropy condition (\[Eq:Isotropy\]) implies that the functions $\left\langle W_{i}^2 I(\mathbf{r},L)\right\rangle$ and $\int_{\mathbb{R}^2}{{\rm{d}}}x_2{{\rm{d}}}y_1\, \left\langle W_{i}^2 I(\mathbf{r}_1,L)I(\mathbf{r}_2,L)\right\rangle$ possess such symmetries in the $(x,y)$ and $(x_1,y_2)$ planes, respectively, that the integrals in Eq. (\[Eq:W4Derivation2\]) vanish. This means that the left-hand side of Eq.
(\[Eq:W4Derivation1\]) is also zero, which results in $$\begin{aligned} \left\langle W_1^4\right\rangle=\left\langle W_2^4\right\rangle.\label{Eq:W14EqW24}\end{aligned}$$ The assumption of isotropy also implies that $$\begin{aligned} \langle S_{xx}^2\rangle{=}\langle S_{yy}^2\rangle,\label{Eq:Sxx2EqSyy2}\end{aligned}$$ i.e. the second moments of $S_{xx/yy}$ are equal. Equations (\[Eq:Sxx\]) and (\[Eq:Syy\]) enable us to express the moments $\langle S_{xx/yy}^2\rangle$ and $\langle S_{xx}S_{yy}\rangle$ in terms of the moments $\langle W_{1/2}^4\rangle$ and $\langle W_{1}^2W_{2}^2\rangle$, $$\begin{aligned} &\langle S_{xx/yy}^2\rangle=\frac{3}{4}\langle W_{1/2}^4\rangle +\frac{1}{4}\langle W_{1}^2W_{2}^2\rangle,\label{Eq:Sxxyy2Mean}\\ &\langle S_{xx}S_{yy}\rangle=\frac{1}{4}\langle W_{1/2}^4\rangle+\frac{3}{4}\langle W_{1}^2W_{2}^2\rangle,\label{Eq:SxxSyyMean}\end{aligned}$$ where we have utilized the absence of correlations between $W_{1/2}^2$ and the angle $\phi$, cf. Eq. (\[Eq:GaussUniform\]). Inverting Eqs. (\[Eq:Sxxyy2Mean\]) and (\[Eq:SxxSyyMean\]) one gets $$\begin{aligned} &\langle W_{1/2}^4\rangle=\frac{3}{2}\langle S_{xx/yy}^2\rangle -\frac{1}{2}\langle S_{xx}S_{yy}\rangle,\label{Eq:W12_2Mean}\\ &\langle W_{1}^2W_{2}^2\rangle=-\frac{1}{2}\langle S_{xx/yy}^2\rangle +\frac{3}{2}\langle S_{xx}S_{yy}\rangle.\label{Eq:W1W2Mean}\end{aligned}$$ Since the moments $\langle S_{xx/yy}^2\rangle$ and $\langle S_{xx}S_{yy}\rangle$ can be expressed in terms of field correlation functions, we get a tool for obtaining the moments $\langle W_{1/2}^4\rangle$ and $\langle W_{1}^2W_{2}^2\rangle$. The straightforward expressions for $\langle S_{xx/yy}^2\rangle$ and $\langle S_{xx}S_{yy}\rangle$ contain the even-order field correlation functions up to $\Gamma_8$. Analytical evaluation of the sixth- and eighth-order functions is, however, quite involved.
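The inversion leading from Eqs. (\[Eq:Sxxyy2Mean\]) and (\[Eq:SxxSyyMean\]) to Eqs. (\[Eq:W12\_2Mean\]) and (\[Eq:W1W2Mean\]) is a plain $2\times2$ linear solve; the coefficients can be checked in exact arithmetic (nothing beyond the linear map itself is assumed):

```python
from fractions import Fraction as F

# Forward map of Eqs. (Sxxyy2Mean)-(SxxSyyMean):
#   <S_xx/yy^2>  = 3/4 <W_{1/2}^4> + 1/4 <W_1^2 W_2^2>
#   <S_xx S_yy>  = 1/4 <W_{1/2}^4> + 3/4 <W_1^2 W_2^2>
A = [[F(3, 4), F(1, 4)],
     [F(1, 4), F(3, 4)]]

# Inverse of a 2x2 matrix [[a, b], [c, d]] is (1/det) [[d, -b], [-c, a]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
inv = [[A[1][1] / det, -A[0][1] / det],
       [-A[1][0] / det, A[0][0] / det]]

# Coefficients of Eqs. (W12_2Mean)-(W1W2Mean): 3/2 and -1/2
print(inv == [[F(3, 2), F(-1, 2)], [F(-1, 2), F(3, 2)]])  # True
```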
By using the assumption of Gaussianity for the beam parameters and the isotropy of the turbulence we can rewrite these expressions in terms of the field correlation functions $\Gamma_2$ and $\Gamma_4$ only. The moment $\langle S_{xx}^2\rangle$ is obtained from Eq. (\[Eq:MatrixS\_Definition\]) by squaring and averaging $S_{xx}^2$, $$\begin{aligned} \langle S_{xx}^2\rangle=&16\left(\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2x_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)+\langle x_0^4\rangle\right.\label{Eq:SxxViaGamma1}\\ &\left.-2\left\langle x_0^2\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\,x^2 \,I\!\left(\mathbf{r};L\right)\right\rangle\right),\nonumber\end{aligned}$$ and similarly for the moment $\langle S_{yy}^2\rangle$. The second term on the right-hand side of this expression contains the field correlation function $\Gamma_8$. However, assuming that the beam-centroid coordinate, $x_0$, is a Gaussian variable and utilizing Eq. (\[Eq:ZeroBeamCentroid\]), this term can be written as $$\begin{aligned} \langle x_0^4\rangle=3\langle x_0^2\rangle^2.\label{Eq:FourthMomentX0}\end{aligned}$$ Here $$\begin{aligned} \langle x_0^2\rangle=\int_{\mathbb{R}^4} {{\rm{d}}}^2 \mathbf{r}_1{{\rm{d}}}^2 \mathbf{r}_2 x_1 x_2\, \Gamma_4(\mathbf{r}_1,\mathbf{r}_2;L),\label{App:bwvariance}\end{aligned}$$ which is expressed in terms of the field correlation function $\Gamma_4$. Consider the third term on the right-hand side of Eq. (\[Eq:SxxViaGamma1\]). By using Eqs. (\[Eq:MatrixS\_Definition\]), (\[Eq:Sxx\]), (\[Eq:GaussUniform\]), and (\[Eq:FourthMomentX0\]) one gets $$\begin{aligned} \left\langle x_0^2\int_{\mathbb{R}^2}{{\rm{d}}}^2\mathbf{r}\,x^2 \,I\!\left(\mathbf{r};L\right)\right\rangle=\frac{1}{4}\langle x_0^2 W_{1/2}^2\rangle+3\langle x_0^2\rangle^2.\end{aligned}$$ Because, due to isotropy, the beam-centroid coordinate $x_0$ does not correlate with the spot-shape parameters, cf.
Section \[Sec:CorrR0W\], we can write $$\begin{aligned} \langle x_0^2 W_{1/2}^2\rangle=\langle x_0^2\rangle \langle S_{xx/yy}\rangle,\end{aligned}$$ where we have also used Eq. (\[Eq:W12EqSxxyy\]). Next, the expression for the moment $\langle S_{xx}^2\rangle$, cf. Eq. (\[Eq:SxxViaGamma1\]), in terms of the second- and fourth-order field correlation functions reads as $$\begin{aligned} \langle S_{xx}^2\rangle=&16\left(\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2x_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right.\label{Eq:SxxViaGamma2}\\ &\left.-3\langle x_0^2\rangle^2 -\frac{1}{2}\langle x_0^2\rangle \langle S_{xx}\rangle\right).\nonumber\end{aligned}$$ Similar considerations should be applied for the calculation of the moment $\langle S_{xx}S_{yy}\rangle$, taking into account that $\langle x_0^2y_0^2\rangle=\langle x_0^2\rangle^2$. Finally, we substitute the obtained expressions for the moments $\langle S_{xx}^2\rangle$ and $\langle S_{xx}S_{yy}\rangle$ in Eqs. (\[Eq:W12\_2Mean\]) and (\[Eq:W1W2Mean\]). 
This results in relations for the moments $\langle W_{1/2}^4\rangle$ and $\langle W_{1}^2W_{2}^2\rangle$ in terms of field correlation functions $\Gamma_2$ and $\Gamma_4$, $$\begin{aligned} \langle &W_{1/2}^4\rangle{=}8\left( 3\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2x_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right.\label{App:Eq:W12ViaGamma}\\ &\left.{-}\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2y_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right){-}8\langle x_0^2\rangle^2 {-}\langle x_0^2\rangle \langle S_{xx}\rangle\right),\nonumber\end{aligned}$$ $$\begin{aligned} &\langle W_{1}^2W_{2}^2\rangle{=}8\left( 3\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2y_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right)\right.\label{Eq:W1W2ViaGamma}\\ &\qquad\left.{-}\int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\,x_1^2x_2^2 \,\Gamma_4\!\left(\mathbf{r}_1,\mathbf{r}_2;L\right) {-}\langle x_0^2\rangle \langle S_{xx}\rangle\right).\nonumber\end{aligned}$$ Here $\langle S_{xx}\rangle$ and $\langle x_0^2\rangle$ are given by Eqs. (\[Eq:SxxMean\]) and (\[App:bwvariance\]), respectively. Phase approximation of the Huygens-Kirchhoff method {#Sec:PhaseAppr} =================================================== The parameters, which characterize statistical properties of elliptic beams, are expressed in terms of the field correlation functions $\Gamma_2$ and $\Gamma_4$, see Section \[Sec:Isotropy\]. Here we briefly discuss the method of obtaining these functions as proposed in Ref. [@Banakh]. We start from the paraxial equation, cf. Eq. (\[waveEq\]), which describes the beam amplitude, $u(\mathbf{r},z)$ and the corresponding boundary condition, $u_0(\mathbf{r}^\prime)$, cf. Eq. (\[Eq:BoundaryConditions\]). 
For our purposes this equation is recast in the integral form $$\begin{aligned} &u(\mathbf{r},z)=\int_{\mathbb{R}^2} {{\rm{d}}}^2 \mathbf{r}^\prime u_0(\mathbf{r}^\prime) G_0(\mathbf{r},\mathbf{r}^\prime;z,0)\, G_1(\mathbf{r},\mathbf{r}^\prime;z,0)\nonumber \\ &{+}\frac{i}{2k} \int\limits_{0}^z{{\rm{d}}}z^\prime \int_{\mathbb{R}^2}\! {{\rm{d}}}^2 \mathbf{r}^\prime u(\mathbf{r}^\prime,z^\prime) G_0(\mathbf{r},\mathbf{r}^\prime;z,z^\prime) \Delta^\prime G_1(\mathbf{r},\mathbf{r}^\prime;z,z^\prime). \label{Eq:IntegraEqU}\end{aligned}$$ Here $$\begin{aligned} G_0(\mathbf{r},\mathbf{r}^\prime;z,z^\prime)= \frac{k}{2\pi i (z-z^\prime)}\exp\Bigl[\frac{ik |\mathbf{r}-\mathbf{r}^\prime|^2}{2(z-z^\prime)}\Bigr],\label{Eq:G0}\end{aligned}$$ $$\begin{aligned} G_1(\mathbf{r},\mathbf{r}^\prime;z,z^\prime)=\exp\Bigl[i S(\mathbf{r},\mathbf{r}^\prime;z,z^\prime)\Bigr],\end{aligned}$$ $$\begin{aligned} S(\mathbf{r},\mathbf{r}^\prime;z,z^\prime) =k\int\limits_{z^\prime}^z{{\rm{d}}}\xi\, \delta n\Bigl( \mathbf{r}\frac{\xi-z^\prime}{z-z^\prime}+\mathbf{r}^\prime\frac{z-\xi}{ z-z^\prime},\xi\Bigr),\label{SfunctDef}\end{aligned}$$ and $\Delta^\prime$ is the transverse Laplace operator acting on functions of $\mathbf{r}^\prime$. In the phase approximation one keeps only the zero-order approximation to the solution of Eq. (\[Eq:IntegraEqU\]) at the aperture plane $z{=}L$, i.e. $$\begin{aligned} u(\mathbf{r},L)=\int_{\mathbb{R}^2} {{\rm{d}}}^2 \mathbf{r}^\prime u_0(\mathbf{r}^\prime) G_0(\mathbf{r},\mathbf{r}^\prime;L,0)\, G_1(\mathbf{r},\mathbf{r}^\prime;L,0). \label{Eq:IntegraEqUZeroSolution}\end{aligned}$$ Substituting this expression into the definition of the field correlation functions, cf. Eqs.
(\[Eq:Gamma2\]) and (\[Eq:Gamma4\]), one gets $$\begin{aligned} &\Gamma_{2n}\!\left(\mathbf{r}_1,\ldots,\mathbf{r}_n;L\right)= \label{Eq:Gamma2nSol1}\\ &\int_{\mathbb{R}^{4n}}{{\rm{d}}}^2\mathbf{r}_1^\prime\ldots {{\rm{d}}}^2\mathbf{r}_{2n}^\prime\, u_0(\mathbf{r}_1^\prime)u_0^\ast(\mathbf{r}_2^\prime)\ldots u_0(\mathbf{r}_{2n-1}^\prime)u_0^\ast(\mathbf{r}_{2n}^\prime) \nonumber\\ &\hspace{6em}{}\times\mathcal{G}_{2n,0}(\mathbf{r}_1,\ldots,\mathbf{r}_n, \mathbf {r}_1^\prime, \ldots,\mathbf{r}_{2n}^\prime;L,0)\nonumber\\ &\hspace{6em}{}\times\left\langle\mathcal{G}_{2n,1}(\mathbf{r}_1,\ldots, \mathbf{r}_n,\mathbf{r}_1^\prime,\ldots,\mathbf{r}_{2n}^\prime;L,0)\right\rangle ,\nonumber\end{aligned}$$ where $n{=}1,2,\ldots$, $$\begin{aligned} &\mathcal{G}_{2n,i}(\mathbf{r}_1,\ldots,\mathbf{r}_n, \mathbf {r}_1^\prime, \ldots,\mathbf{r}_{2n}^\prime;L,0)=\\ &\hspace{5em}{}\prod\limits_{k=1}^n G_i(\mathbf{r}_k,\mathbf{r}_{2k-1}^\prime;L,0)\, G_i^\ast(\mathbf{r}_k,\mathbf{r}_{2k}^\prime;L,0),\nonumber\end{aligned}$$ and $i{=}0,1$. The assumption that $\delta n(\mathbf{r};z)$ is a Gaussian stochastic field enables us to average $\mathcal{G}_{2n,1}$ in Eq. (\[Eq:Gamma2nSol1\]), such that $$\begin{aligned} &\left\langle\mathcal{G}_{2n,1}(\mathbf{r}_1,\ldots, \mathbf{r}_n,\mathbf{r}_1^\prime,\ldots,\mathbf{r}_{2n}^\prime;L, 0)\right\rangle=\label{Eq:G2n1}\\ &\qquad{}\exp\Bigl[\frac{1}{2}\sum\limits_{k=2}^{2n}\sum\limits_{l=1}^{k-1} (-1)^{k+l}\mathcal{D}_S(\mathbf{r}_l,\mathbf{r}_k;\mathbf{r}_l^\prime, \mathbf{r}_k^\prime;L,0)\Bigr]. \nonumber\end{aligned}$$ Here $$\begin{aligned} &\mathcal{D}_S(\mathbf{r}_l,\mathbf{r}_k;\mathbf{r}_l^\prime, \mathbf{r}_k^\prime;L,0)\label{Eq:StrConstD_S}\\ &\qquad{}=\left\langle\Bigl[S(\mathbf{r}_l,\mathbf{r}_l^\prime;L,0)- S(\mathbf{r}_k,\mathbf{r}_k^\prime;L,0)\Bigr]^2\right\rangle\nonumber\end{aligned}$$ is the structure function of phase fluctuations of a spherical wave propagating in turbulence.
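The Gaussian averaging behind Eq. (\[Eq:G2n1\]) rests on the identity $\langle e^{iX}\rangle=e^{-\langle X^2\rangle/2}$ for a zero-mean Gaussian $X$; with $X=S(\mathbf{r}_l,\mathbf{r}_l^\prime;L,0)-S(\mathbf{r}_k,\mathbf{r}_k^\prime;L,0)$ this produces the structure-function combination above. A minimal Monte Carlo sketch of the identity (the variances, covariance, and sample size are arbitrary illustrative choices, not taken from the model):

```python
import math
import random

# For a zero-mean Gaussian X, <exp(iX)> = exp(-<X^2>/2). With
# X = S_l - S_k this gives <exp(i(S_l - S_k))> = exp(-D_S/2), where
# D_S = <(S_l - S_k)^2> is the phase structure function.
random.seed(1)
var_l, var_k, cov = 1.0, 0.8, 0.5   # illustrative second moments

n = 200_000
acc_re = 0.0
for _ in range(n):
    # Correlated Gaussian pair (S_l, S_k) via Cholesky factors
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    s_l = math.sqrt(var_l) * z1
    s_k = cov / math.sqrt(var_l) * z1 + math.sqrt(var_k - cov**2 / var_l) * z2
    acc_re += math.cos(s_l - s_k)   # real part of exp(i(S_l - S_k))

mc = acc_re / n
d_s = var_l + var_k - 2.0 * cov     # D_S = <(S_l - S_k)^2>
print(abs(mc - math.exp(-d_s / 2.0)) < 0.01)  # True
```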
The correlation function for the index of refraction in the Markovian approximation, cf. e.g. Ref. [@Fante1], reads $$\begin{aligned} \label{DeltaNcorr} & \langle\delta n(\mathbf{r};z)\delta n(\mathbf{r} ^\prime;z^\prime)\rangle\\ &\qquad{}=2\pi\delta(z-z^\prime)\int_{\mathbb{R}^2}{{\rm{d}}}^2\boldsymbol{\kappa}\, \Phi_n(\boldsymbol{\kappa},z)e^{i\boldsymbol{\kappa}\cdot(\mathbf{r} -\mathbf{r} ^\prime)}.\nonumber\end{aligned}$$ Here $\Phi_n(\boldsymbol{\kappa},z)$ is the spectrum of turbulence, which we use in the Kolmogorov form, see Ref. [@Tatarskii], $$\begin{aligned} \Phi_n(\boldsymbol{\mathbf{\kappa}},z)=0.033 C_n^2(z)\kappa^{-\frac{11}{3}},\label{KolmogorovSpectr}\end{aligned}$$ and $C_n^2(z)$ is the refractive index structure constant. Inserting Eqs. (\[SfunctDef\]), (\[DeltaNcorr\]) and (\[KolmogorovSpectr\]) in Eq. (\[Eq:StrConstD\_S\]), we arrive at the following expression for the phase structure function $$\begin{aligned} \mathcal{D}_S(\mathbf{r},\mathbf{r}^\prime) =2\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\, \Bigl|\mathbf{r}\,\xi{+} \mathbf{r}^\prime(1-\xi) \Bigr|^ { \frac {5}{3}},\label{DsSph}\end{aligned}$$ where we assume that $C_n^2$ is constant along the horizontal link. Here $$\begin{aligned} \mathcal{D}_S(\mathbf{r}_k-\mathbf{r}_l,\mathbf{r}_k^\prime- \mathbf{r}_l^\prime)=\mathcal{D}_S(\mathbf{r}_l,\mathbf{r}_k;\mathbf{r} _l^\prime, \mathbf{r}_k^\prime;L,0),\label{DsSph1}\end{aligned}$$ is a simplified notation for the structure function of phase fluctuations, and $$\begin{aligned} \label{rhoDefinition} \rho_0=(1.5\, C_n^2\,k^2 L)^{-3/5}\end{aligned}$$ is the radius of spatial coherence of a plane wave in the atmosphere. Finally, we substitute Eqs. (\[DsSph\]), (\[DsSph1\]) into Eq. (\[Eq:G2n1\]). Then substituting Eqs. (\[Eq:G0\]) and (\[Eq:G2n1\]) into Eq.
(\[Eq:Gamma2nSol1\]) and performing some trivial integrations, we evaluate the field correlation functions for $n=1,2$, $$\begin{aligned} &\Gamma_2(\mathbf{r})=\frac{k^2}{4\pi^2 L^2}\int_{\mathbb{R}^2}{{\rm{d}}}^2 \mathbf{r}^\prime e^{-\frac{g^2|\mathbf{r}^\prime|^2}{2W_0^2} -2i\frac{\Omega}{W_0^2}\mathbf r\cdot\mathbf{r}^\prime-\frac{1}{2}\mathcal{D}_S(0,\mathbf{r}^\prime)}\label{Gamma2}\end{aligned}$$ and $$\begin{aligned} &\Gamma_4(\mathbf{r}_1,\mathbf{r}_2)=\frac{2 k^4 }{\pi^2(2\pi)^3L^4 W_0^2}\int_{\mathbb{R}^6}{{\rm{d}}}^2 \mathbf{r}^\prime_{1}{{\rm{d}}}^2 \mathbf{r}^\prime_{2}{{\rm{d}}}^2 \mathbf{r}^\prime_{3}\nonumber\\ &\qquad{\times}e^{-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2+g^2|\mathbf{r}_3^\prime|^2)+2i\frac{\Omega}{W_0^2}[1{-}\frac{L}{F}]\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2} \nonumber\\ &\qquad\quad\times e^{-2i\frac{\Omega}{W_0^2}(\mathbf{r}_1-\mathbf{r}_2)\cdot\mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}(\mathbf{r}_1+\mathbf{r}_2)\cdot \mathbf{r}^\prime_3}\nonumber\\ &\quad\times\exp\Biggl[\frac{1}{2}\sum\limits_{j=1,2}\Bigl\{\mathcal{D}_S(\mathbf{r}_1{-}\mathbf{r}_2,\mathbf{r}^\prime_1{+}(-1)^j \mathbf{r}^\prime_2)\nonumber\\ &{-} \mathcal{D}_S(\mathbf{r}_1{-}\mathbf{r}_2,\mathbf{r}^\prime_1{+}(-1)^j \mathbf{r}^\prime_3){-} \mathcal{D}_S(0,\mathbf{r}^\prime_2{+}(-1)^j \mathbf{r}^\prime_3)\Bigr\}\Biggr].\label{Gamma4}\end{aligned}$$ Here $$\begin{aligned} \Omega{=}\frac{kW_0^2}{2L}\label{FresnelOmega} \end{aligned}$$ is the Fresnel number of the transmitter aperture and $g^2{=}1{+}\Omega^2[1{-}\frac{L}{F}]^2$ is the generalized diffraction beam parameter. Beam wandering {#Sec:BW} ============== In this Section we derive the beam-wandering variance for weak and strong turbulence regimes. The beam-wandering variance $\langle x_0^2\rangle$ is evaluated by substituting Eq. (\[Gamma4\]) into Eq. 
(\[App:bwvariance\]) $$\begin{aligned} \label{InitialBW} &\langle x_0^2\rangle=\frac{2k^4}{\pi^2(2\pi)^3L^4W_0^2}\int_{\mathbb{R}^{10}}{{\rm{d}}}^2\mathbf{R}\,{{\rm{d}}}^2\mathbf{r}\,{{\rm{d}}}^2\mathbf{r}_1^\prime\,{{\rm{d}}}^2\mathbf{r}_2^\prime\,{{\rm{d}}}^2\mathbf{r}_3^\prime\nonumber\\ &\qquad{\times}\left({R}_x^2{-}\frac{{r}_x^2}{4}\right)e^{-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2+g^2|\mathbf{r}_3^\prime|^2)}e^{-4i\frac{\Omega}{W_0^2}\mathbf{R}\cdot \mathbf{r}^\prime_3}\nonumber\\ &\qquad{\times}e^{2i\frac{\Omega}{W_0^2}[1{-}\frac{L}{F}]\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}\mathbf{r}\cdot\mathbf{r}^\prime_2}\mathcal{J}(\mathbf{r},\mathbf{r}_1^\prime,\mathbf{r}_2^\prime,\mathbf{r}_3^\prime),\end{aligned}$$ with $$\begin{aligned} \label{Jdefinit} &\mathcal{J}(\mathbf{r},\mathbf{r}_1^\prime,\mathbf{r}_2^\prime,\mathbf{r}_3^\prime)\\ &{=}\exp\Bigl[\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\Bigl(\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\nonumber\\ &{-\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1{-}\xi)\right|^{\frac{5}{3}}}{-}(1{-}\xi)^{\frac{5}{3}}\left|\mathbf{r}_2^\prime{+}({-}1)^j\mathbf{r}_3^\prime\right|^{\frac{5}{3}}\Bigr)\Bigr],\nonumber\end{aligned}$$ where we have used the variables $\mathbf{r}{=}\mathbf{r}_1{-}\mathbf{r}_2$ and $\mathbf{R}{=}(\mathbf{r}_1{+}\mathbf{r}_2)/2$. We integrate over the variables $\mathbf{R}$ and $\mathbf{r}_3^\prime$ using the properties of Dirac delta function, which occurs in the integral representation of Eq. (\[InitialBW\]). 
For example, one can show that $$\begin{aligned} \int_{\mathbb{R}^4}{{\rm{d}}}^2\mathbf{R}\,{{\rm{d}}}^2\mathbf{r}_3^\prime\,\mathbf{R}^2\,&e^{-4i\frac{\Omega}{W_0^2}\mathbf{R}\cdot\mathbf{r}_3^\prime}f(\mathbf{r}_3^\prime)\nonumber\\ &=-\frac{(2\pi)^2W_0^8}{(4\Omega)^4}\Delta_{\mathbf{r}_3^\prime}^2 f(\mathbf{r}_3^\prime)\Bigl|_{\mathbf{r}_3^\prime=0}, \label{IntTrick}\end{aligned}$$ where $\Delta_{\mathbf{r}_3^\prime}^2$ is the transverse Laplace operator and $f$ is an arbitrary function. We arrive at $$\begin{aligned} &\langle x_0^2\rangle=\frac{2\Omega^2}{(2\pi)^3W_0^6}\int_{\mathbb{R}^{6}}{{\rm{d}}}^2\mathbf{r}\,{{\rm{d}}}^2\mathbf{r}_1^\prime\,{{\rm{d}}}^2\mathbf{r}_2^\prime\left(\frac{g^2 W_0^2}{2\Omega^2}{-}r_x^2\right)\nonumber\\ &\qquad{\times}e^{-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2)}e^{2i\frac{\Omega}{W_0^2}[1{-}\frac{L}{F}]\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}\mathbf{r}\cdot\mathbf{r}^\prime_2}\nonumber\\ &\qquad{\times}\exp\Bigl[\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\Bigl(\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\nonumber\\ &\qquad{-2\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}}{-}2(1{-}\xi)^{\frac{5}{3}}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}\Bigr)\Bigr].\label{BwandGen}\end{aligned}$$ Let us consider the cases of weak and strong turbulence separately. Weak turbulence --------------- Weak turbulence is characterized by large values of the parameter $\rho_0$, cf. Eq. (\[rhoDefinition\]) together with the dependence on the Rytov parameter in (\[sigmaRytov\]). Hence, we can expand the last exponential in (\[BwandGen\]) in a series with respect to $\rho_0^{-\frac{5}{3}}$ up to first order.
The first term of the expansion of (\[BwandGen\]), which is independent of $\rho_0$, vanishes, and we obtain $$\begin{aligned} &\langle x_0^2\rangle=\frac{2\Omega^2\rho_0^{-\frac{5}{3}}}{(2\pi)^3W_0^6}\int_{\mathbb{R}^{6}}{{\rm{d}}}^2\mathbf{r}\,{{\rm{d}}}^2\mathbf{r}_1^\prime\,{{\rm{d}}}^2\mathbf{r}_2^\prime\left(\frac{g^2 W_0^2}{2\Omega^2}{-}{r}_x^2\right)\nonumber\\ &\qquad{\times}e^{-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2)}e^{2i\frac{\Omega}{W_0^2}[1{-}\frac{L}{F}]\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}\mathbf{r}\cdot\mathbf{r}^\prime_2}\nonumber\\ &\qquad{\times}\int\limits_0^1{{\rm{d}}}\xi\Bigl(\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\nonumber\\ &\qquad\quad{-2\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}}{-}2(1{-}\xi)^{\frac{5}{3}}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}\Bigr).\end{aligned}$$ Performing the multiple integrations in this equation, one derives the beam-wandering variance for a focused beam, $L{=}F$ (defined in Eq. (\[Eq:BoundaryConditions\])), in weak turbulence: $$\begin{aligned} \langle x_0^2\rangle&=0.94 C_n^2 L^3 W_0^{-\frac{1}{3}}=0.33 W_0^2\sigma_R^2 \Omega^{-\frac{7}{6}},\label{bwweak}\end{aligned}$$ where $$\begin{aligned} \label{sigmaRytov} \sigma_R^2=1.23 C_n^2k^{\frac{7}{6}}L^{\frac{11}{6}}=0.82\rho_0^{-\frac{5}{3}}k^{-\frac{5}{6}}L^{\frac{5}{6}}\end{aligned}$$ is the Rytov parameter [@Tatarskii]. Strong turbulence ----------------- For the case of strong turbulence the parameter $\rho_0$ is small. The exponential in Eq.
(\[Jdefinit\]), $\mathcal{J}(\mathbf{r},\mathbf{r}_1^\prime,\mathbf{r}_2^\prime,\mathbf{r}_3^\prime)$, significantly differs from zero in the following regions: $$\begin{aligned} |\mathbf{r}_2^\prime|(1{-}\xi)\gg\rho_0,\quad |\mathbf{r}_3^\prime|(1{-}\xi),\,\,|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|\lesssim\rho_0; \label{region1}\end{aligned}$$ $$\begin{aligned} |\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|\gg\rho_0,\quad|\mathbf{r}_2^\prime|(1{-}\xi),\,|\mathbf{r}_3^\prime|(1{-}\xi)\lesssim\rho_0; \label{region2} \end{aligned}$$ $$\begin{aligned} |\mathbf{r}_2^\prime|(1{-}\xi),\,\, |\mathbf{r}_3^\prime|(1{-}\xi),\,\,|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|\lesssim\rho_0.\label{regions}\end{aligned}$$ This function is negligibly small provided that any of the conditions $$\begin{aligned} &|\mathbf{r}_3^\prime|(1{-}\xi)\gg\rho_0,\,\,|\mathbf{r}_2^\prime|(1{-}\xi),\,\,|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|\lesssim\rho_0;\nonumber\\ &|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|,\,\,|\mathbf{r}_2^\prime|(1{-}\xi)\gg\rho_0,\,\,\,|\mathbf{r}_3^\prime|(1{-}\xi)\lesssim\rho_0;\nonumber\\ &|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|,\,\,|\mathbf{r}_3^\prime|(1{-}\xi)\gg\rho_0,\,\,|\mathbf{r}_2^\prime|(1{-}\xi)\lesssim\rho_0;\\ &|\mathbf{r}_2^\prime|(1{-}\xi),\,\,|\mathbf{r}_3^\prime|(1{-}\xi)\gg\rho_0,\,\,|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|\lesssim\rho_0;\nonumber\\ &|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)|,\,\,|\mathbf{r}_2^\prime|(1{-}\xi),\,\,|\mathbf{r}_3^\prime|(1{-}\xi)\gg\rho_0\nonumber\end{aligned}$$ holds true. 
The function (\[Jdefinit\]) can be approximated then as $$\begin{aligned} \label{expans} &\mathcal{J}(\mathbf{r},\mathbf{r}_1^\prime,\mathbf{r}_2^\prime,\mathbf{r}_3^\prime)=\exp\Bigl[-\rho_0^{-\frac{5}{3}}\int\limits_0^1\!\!{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr] \sum\limits_{n=0}^\infty\frac{\rho_0^{-\frac{5}{3}n}}{n!}\Bigl\{\sum\limits_{j=1,2}\Bigl( \int\limits_0^1\!{{\rm{d}}}\xi\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1-\xi)\right|^{\frac{5}{3}} \nonumber\\ &-\frac{3}{8}\left|\mathbf{r}_2^\prime{+}(-1)^j\mathbf{r}_3^\prime\right|^{\frac{5}{3}}\Bigr)\Bigr\}^n+\exp\Bigl[-\frac{3}{8}\rho_0^{-\frac{5}{3}}\left|\mathbf{r}_2^\prime{+}(-1)^j\mathbf{r}_3^\prime\right|^{\frac{5}{3}}\Bigr]\sum\limits_{n=0}^\infty\frac{\rho_0^{-\frac{5}{3}n}}{n!}\Bigl\{\sum\limits_{j=1,2}\int\limits_0^1{{\rm{d}}}\xi\Bigl(\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1-\xi)\right|^{\frac{5}{3}}\\ &-\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr)\Bigr\}^n-\exp\Bigl[-\rho_0^{-\frac{5}{3}}\sum\limits_{j=1,2}\Bigl\{\frac{3}{8}\left|\mathbf{r}_2^\prime{+}(-1)^j\mathbf{r}_3^\prime\right|^{\frac{5}{3}}+\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr\}\Bigr]\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\times\sum\limits_{n=0}^\infty\frac{\left(\frac{3}{8}\right)^n}{n!}\rho_0^{-\frac{5}{3}n}\Bigl\{\sum\limits_{j=1,2}\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr\}^n\nonumber\end{aligned}$$ Here the first term on the right hand side accounts for the contributions from the regions (\[region1\]) and (\[region2\]). If one substitutes the latter into Eqs. 
(\[InitialBW\]) and (\[Jdefinit\]) and performs integrations, then the region (\[regions\]) would be counted twice. Therefore, the last term on the right-hand side of (\[expans\]) is introduced to eliminate the aforementioned double-counting. It is worth mentioning that already the first ($n=0,1$) terms of the expansion (\[expans\]) give a good approximation of the function $\mathcal{J}$, cf. Ref. [@Banakh]. Substituting the right-hand side of Eq. (\[expans\]) into Eq. (\[InitialBW\]) and integrating over the variables $\mathbf{R}$, $\mathbf{r}_3^\prime$ as described above, we obtain $$\begin{aligned} \label{longExpr} & \left\langle x_0^2\right\rangle=\frac{2\Omega^2}{(2\pi)^3W_0^6}\int_{\mathbb{R}^{6}}{{\rm{d}}}^2\mathbf{r}\,{{\rm{d}}}^2\mathbf{r}_1^\prime\,{{\rm{d}}}^2\mathbf{r}_2^\prime\left(\frac{g^2 W_0^2}{2\Omega^2}{-}{r}_x^2\right)e^{-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2)}e^{2i\frac{\Omega}{W_0^2}[1{-}\frac{L}{F}]\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}\mathbf{r}\cdot\mathbf{r}^\prime_2}\\ &{\times}\Biggl\{\exp\Bigl[-\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr]\Bigl(1-\frac{3}{4}\rho_0^{-\frac{5}{3}}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}+\rho_0^{-\frac{5}{3}}\sum\limits_{j=1,2}\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi+[\mathbf{r}_1^\prime{+}({-}1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr)\nonumber\\ &+\exp\Bigl[-\frac{3}{4}\rho_0^{-\frac{5}{3}}|\mathbf{r}_2^\prime|^{\frac{5}{3}}\Bigr]\Bigl(1-2\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1-\xi)\right|^{\frac{5}{3}}+\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr)\nonumber\\
&-\exp\Bigl[-\rho_0^{-\frac{5}{3}}\Bigl(\frac{3}{4}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}+2\int\limits_0^1{{\rm{d}}}\xi\,\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}\Bigr)\Bigr]\Bigl(1+\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1-\xi)\right|^{\frac{5}{3}}\Bigr) \Biggr\} \nonumber\end{aligned}$$ The evaluation of Eq. (\[longExpr\]) is simplified further with the use of the approximation [@Andrews2] $$\begin{aligned} \exp\left[-\left(\frac{|\mathbf r|}{\rho_0\Omega}\right)^{\frac{5}{3}}\right]\approx \exp\left[-\left(\frac{|\mathbf r|}{\rho_0\Omega}\right)^{2}\right],\label{Eq:approx}\end{aligned}$$ which gives good accuracy for small values of $\rho_0$, cf. Ref. [@Mironov2]. Consecutive integration of (\[longExpr\]) yields, for the collimated beam (\[Eq:BoundaryConditions\]) with $F\gg L$, the following result $$\begin{aligned} \langle x_0^2\rangle{=}1.78 C_n^{\frac{8}{5}}L^{\frac{37}{15}}k^{-\frac{1}{15}} =0.75 W_0^2\sigma_R^{\frac{8}{5}}\Omega^{-1}.\label{sigmarST}\end{aligned}$$ A similar expression has been obtained by using the Markovian-random-process approximation, cf. Ref. [@Mironov2] and the references therein. Beam-shape distortion {#Sec:BeamShape} ===================== In this Section we derive the expressions for the moments $\langle W_{1/2}^2\rangle$, $\langle W_{1/2}^4\rangle$ and $\langle W_1^2W_2^2\rangle$ for the weak and strong turbulence regimes. From Eqs. (\[Eq:W12EqSxxyy\]), (\[Eq:SxxViaGamma2\])-(\[Eq:W1W2ViaGamma\]) one can see that these moments are expressed through the integrals containing the field correlation functions $\Gamma_2$ and $\Gamma_4$. The first moments of $W_{1/2}^2$ defined by Eqs.
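The accuracy of the quadratic approximation (\[Eq:approx\]) is easy to probe numerically. The following sketch (an illustration in the dimensionless variable $x=|\mathbf r|/(\rho_0\Omega)$; it is not part of the paper's derivation) compares the $5/3$-power and quadratic exponentials:

```python
import numpy as np

# Numerical check of the quadratic approximation used above:
# exp[-x^(5/3)] vs exp[-x^2], with x = |r| / (rho_0 * Omega).
x = np.linspace(0.0, 3.0, 301)
exact = np.exp(-x ** (5.0 / 3.0))
approx = np.exp(-x ** 2)

# The two curves coincide at x = 0 and x = 1; the absolute error
# peaks in between and stays below ~0.1 over the whole range.
err = np.max(np.abs(exact - approx))
print(f"max |exact - approx| = {err:.3f}")
```

Both exponentials are essentially zero for $x\gtrsim 2$, which is why the replacement is harmless inside the integrals over $\mathbf r$.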
(\[Eq:SxxMean\]), (\[Eq:SyyMean\]), and (\[Eq:W12EqSxxyy\]) contain the following integral $$\begin{aligned} & \int_{\mathbb R^2}{{\rm{d}}}^2 \mathbf{r} x^2 \Gamma_2(\mathbf{r})=\frac{ W_0^2}{ \pi^2 \Omega^4}\int_{\mathbb R^4}{{\rm{d}}}^2\mathbf{r} {{\rm{d}}}^2\mathbf{r}^\prime\, x^2e^{-\frac{g^2}{2\Omega^2}|\mathbf{r}^\prime|^2}\nonumber\\ &{\times} \exp\Bigl[{-}\frac{2i}{\Omega}\, \mathbf{r}{\cdot} \mathbf{r}^\prime{-}\rho_0^{-\frac{5}{3}}W_0^{\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi(1{-}\xi)^{\frac{5}{3}}\Bigl(\frac{|\mathbf{r}^\prime|}{\Omega}\Bigr)^{\frac{5}{3}}\Bigr] .\label{BeamWidth}\end{aligned}$$ Here we have used Eqs. (\[Gamma2\]) and (\[DsSph1\]). The second moments of $W_{1/2}^2$ defined in Eqs. (\[App:Eq:W12ViaGamma\]), (\[Eq:W1W2ViaGamma\]) contain the integrals $$\begin{aligned} &\int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2 x_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)=\frac{\Omega^2}{2(2\pi)^3 W_0^6}\int_{\mathbb R^6} {{\rm{d}}}^2 \mathbf{r}\, {{\rm{d}}}^2 \mathbf{r}_1^\prime {{\rm{d}}}^2 \mathbf{r}_2^\prime\nonumber\\ &\quad\times \left(\frac{3g^4W_0^4}{4\Omega^4}-\frac{g^2W_0^2}{\Omega^2}r_x^2+r_x^4\right) e^{-\frac{1}{W_0^2}\left(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2\right)}\nonumber\\ &\quad\times \exp\left[2i\frac{\Omega}{W_0^2}\Bigl(1-\frac{L}{F}\Bigr)\mathbf{r}_1^\prime{\cdot}\mathbf{r}_2^\prime-2i\frac{\Omega}{W_0^2}\mathbf{r}{\cdot}\mathbf{r}_2^\prime\right] \label{beamWIntG4}\\ &\qquad{\times}\exp\Bigl[\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\Bigl(\sum\limits_{j=1,2} \left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\nonumber\\ &\qquad\qquad{-2\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}}{-}2(1{-}\xi)^{\frac{5}{3}} \left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}\Bigr)\Bigr]\nonumber\end{aligned}$$ and $$\begin{aligned} &\int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2 y_2^2 
\Gamma_4(\mathbf{r}_1,\mathbf{r}_2)=\frac{\Omega^2}{2(2\pi)^3 W_0^6}\int_{\mathbb R^6} {{\rm{d}}}^2 \mathbf{r}\, {{\rm{d}}}^2 \mathbf{r}_1^\prime {{\rm{d}}}^2 \mathbf{r}_2^\prime\nonumber\\ &\quad\times \left(\frac{g^4W_0^4}{4\Omega^4}+\frac{g^2W_0^2}{\Omega^2}r_x^2+r_x^2r_y^2\right) e^{-\frac{1}{W_0^2}\left(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2\right)}\nonumber\\ &\quad\times \exp\left[2i\frac{\Omega}{W_0^2}\Bigl(1-\frac{L}{F}\Bigr)\mathbf{r}_1^\prime{\cdot}\mathbf{r}_2^\prime-2i\frac{\Omega}{W_0^2}\mathbf{r}{\cdot}\mathbf{r}_2^\prime\right] \label{beamCorrIntG4}\\ &\qquad{\times}\exp\Bigl[\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\Bigl(\sum\limits_{j=1,2} \left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\nonumber\\ &\qquad\qquad{-2\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}}{-}2(1{-}\xi)^{\frac{5}{3}} \left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}\Bigr)\Bigr].\nonumber\end{aligned}$$ Here we have used the definition of $\Gamma_4$ given in Eq. (\[Gamma4\]) and performed the four-fold integration in a similar way as in Eq. (\[InitialBW\]), with the aid of formulas similar to (\[IntTrick\]). Weak turbulence --------------- In the limit of weak turbulence we derive, by substituting Eqs. (\[BeamWidth\]), (\[bwweak\]) in Eqs. (\[Eq:SxxMean\]) and (\[Eq:W12EqSxxyy\]), the following result for the first moment of $W_{1/2}^2$: $$\begin{aligned} &\langle W_{1/2}^2\rangle{=} \frac{W_0^2}{\Omega^2}{+}2.96 W_0^2 \sigma_R^2\Omega^{-\frac{7}{6}}.\label{IntWT} \end{aligned}$$ In Eq. (\[BeamWidth\]) we have used the approximations $\bigl(|\mathbf{r}^\prime|/\Omega\bigr)^\frac{5}{3}\approx \bigl(|\mathbf{r}^\prime|/\Omega\bigr)^2$, cf. [@Andrews2] and $\int_0^1{{\rm{d}}}\xi f(\xi)\approx f(0)$, cf. [@Kon]. The first term in Eq. (\[IntWT\]) describes the diffraction broadening in free space and the second term gives the amount of diffraction broadening in turbulence. 
The second-order moments of $W_{1/2}^2$ are evaluated by substituting Eqs. (\[bwweak\]), (\[beamWIntG4\]), (\[IntWT\]) in Eq. (\[App:Eq:W12ViaGamma\]) and, correspondingly, Eqs. (\[bwweak\]), (\[beamCorrIntG4\]), (\[IntWT\]) in Eq. (\[Eq:W1W2ViaGamma\]). We evaluate the integrals in Eqs. (\[beamWIntG4\]) and (\[beamCorrIntG4\]) by expanding the last exponents into a series in $\rho_0^{-\frac{5}{3}}$ up to second order and integrating term by term. For a focused beam ($L{=}F$) we obtain $$\begin{aligned} &\int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2 x_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)\nonumber\\ &\quad=\frac{W_0^4}{16 \Omega^4}+0.58 W_0^4\sigma_R^2\Omega^{-\frac{19}{6}}+1.37 W_0^4\sigma_R^4\Omega^{-\frac{7}{3}},\end{aligned}$$ $$\begin{aligned} &\int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2 y_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)\nonumber\\ &\quad=\frac{W_0^4}{16 \Omega^4}+0.51 W_0^4\sigma_R^2\Omega^{-\frac{19}{6}}+1.145 W_0^4\sigma_R^4\Omega^{-\frac{7}{3}}.\end{aligned}$$ The corresponding (co)variances are evaluated as $$\begin{aligned} \left\langle(\Delta W_{1/2}^2)^2\right\rangle=1.2 W_0^4\sigma_R^2\Omega^{-\frac{19}{6}}{+}0.17 W_0^4\sigma_R^4\Omega^{-\frac{7}{3}},\end{aligned}$$ and $$\begin{aligned} \left\langle\Delta W_{1}^2\Delta W_2^2\right\rangle{=}{-}0.8 W_0^4\sigma_R^2\Omega^{-\frac{19}{6}}{-}0.05 W_0^4\sigma_R^4\Omega^{-\frac{7}{3}},\end{aligned}$$ respectively. The correlation function for weak turbulence is negative, i.e. the shape of the ellipse is deformed in such a way that an increase of the beam width along one half-axis of the ellipse causes a decrease of the width along the complementary direction. Strong turbulence ----------------- For strong turbulence the first moment of $W_{1/2}^2$ is evaluated by substituting Eqs. (\[BeamWidth\]) and (\[sigmarST\]) in Eqs. (\[Eq:SxxMean\]), (\[Eq:W12EqSxxyy\]).
We also use the approximation (\[Eq:approx\]) for evaluating (\[BeamWidth\]) to obtain $$\begin{aligned} \label{BeamSpreadST} \langle W_{1/2}^2\rangle{=}\gamma W_0^2&+1.71 W_0^2\sigma_R^{\frac{12}{5}}\Omega^{{-}1}{-}2.99 W_0^2\sigma_R^{\frac{8}{5}}\Omega^{{-}1}, \end{aligned}$$ where $\gamma{=}(1{+}\Omega^2)/\Omega^2$. It is also assumed that $\Omega>1$, cf.  Eq. (\[FresnelOmega\]). For calculating the (co)variances of $W_{1/2}^2$ we firstly evaluate the integrals in (\[beamWIntG4\]) and (\[beamCorrIntG4\]) by using the approximation (\[expans\]) in the way described in Section \[Sec:BW\]. Within this approximation one gets, for example $$\begin{aligned} \label{longBBroad} &\int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2 x_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)=\frac{\Omega^2}{2(2\pi)^3W_0^6}\int_{\mathbb{R}^{6}}{{\rm{d}}}^2\mathbf{r}\,{{\rm{d}}}^2\mathbf{r}_1^\prime\,{{\rm{d}}}^2\mathbf{r}_2^\prime\left(\frac{3}{4}\gamma^2W_0^4{-}\gamma W_0^2{r}_x^2+r_x^4\right)\nonumber\\ &\qquad\quad\times \exp\Bigl[-\frac{1}{W_0^2}(|\mathbf{r}_1^\prime|^2+|\mathbf{r}_2^\prime|^2)+2i\frac{\Omega}{W_0^2}\mathbf{r}^\prime_1\cdot \mathbf{r}^\prime_2-2i\frac{\Omega}{W_0^2}\mathbf{r}\cdot\mathbf{r}^\prime_2\Bigr]\\ &{\times}\Biggl\{\exp\Bigl[-\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_3^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr]\Bigl(1{-}\frac{3}{4}\rho_0^{-\frac{5}{3}}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}{+}\rho_0^{-\frac{5}{3}}\sum\limits_{j=1,2}\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi+[\mathbf{r}_1^\prime{+}({-}1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr)\nonumber\\ 
&\qquad+\exp\Bigl[-\frac{3}{4}\rho_0^{-\frac{5}{3}}|\mathbf{r}_2^\prime|^{\frac{5}{3}}\Bigr]\Bigl(1-2\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}+\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr)\nonumber\\ &\qquad-\exp\Bigl[-\rho_0^{-\frac{5}{3}}\Bigl(\frac{3}{4}\left|\mathbf{r}_2^\prime\right|^{\frac{5}{3}}{+}2\int\limits_0^1{{\rm{d}}}\xi\,\left|\mathbf{r}\xi{+}\mathbf{r}_1^\prime(1{-}\xi)\right|^{\frac{5}{3}}\Bigr)\Bigr]\Bigl(1+\rho_0^{-\frac{5}{3}}\int\limits_0^1{{\rm{d}}}\xi\sum\limits_{j=1,2}\left|\mathbf{r}\xi{+}[\mathbf{r}_1^\prime{+}(-1)^j\mathbf{r}_2^\prime](1{-}\xi)\right|^{\frac{5}{3}}\Bigr) \Biggr\}. \nonumber\end{aligned}$$ Performing the multiple integration in (\[longBBroad\]), one derives $$\begin{aligned} \int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 x_1^2& x_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)\nonumber\\ &=\gamma^2\frac{W_0^4}{16}+4.34 \gamma W_0^4 \sigma_R^{\frac{12}{5}}\Omega^{-1} \label{GammaStr2} \end{aligned}$$ and similarly $$\begin{aligned} \int_{\mathbb R^4} {{\rm{d}}}^2 \mathbf{r}_1\,{{\rm{d}}}^2 \mathbf{r}_2 &x_1^2 y_2^2 \Gamma_4(\mathbf{r}_1,\mathbf{r}_2)\nonumber\\ &=\gamma^2\frac{W_0^4}{16}+3.16 \gamma W_0^4 \sigma_R^{\frac{12}{5}}\Omega^{-1}. \label{GammaStr3} \end{aligned}$$ Finally, substituting Eqs. (\[sigmarST\]), (\[BeamSpreadST\]), (\[GammaStr2\]) and (\[GammaStr3\]) into Eqs. (\[App:Eq:W12ViaGamma\]) and (\[Eq:W1W2ViaGamma\]) we obtain $$\begin{aligned} \left\langle (\Delta W_{1/2}^2)^2\right\rangle=13.14 \gamma W_0^4\sigma_R^{\frac{12}{5}}\Omega^{-1} \end{aligned}$$ and $$\begin{aligned} \left\langle \Delta W_{1}^2\Delta W_{2}^2\right\rangle=0.65\gamma W_0^4\sigma_R^{\frac{12}{5}}\Omega^{-1}. 
\label{CovarianceST} \end{aligned}$$ It is worth noting that, in contrast to the weak-turbulence case, the covariance (\[CovarianceST\]) is positive, i.e. the beam profile is deformed in such a way that an increase of the beam width along one half-axis of the ellipse causes an increase along the complementary direction.

Mean values and covariance matrix elements {#Sec:CovMatrixEl}
==========================================

*Weak turbulence*

  ------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------
  $\left\langle \Theta_{1/2}\right\rangle$                                        $\ln\Biggl[\frac{\left(1+2.96 \sigma_R^2\Omega^{\frac{5}{6}}\right)^2}{\Omega^2\sqrt{\left(1+2.96 \sigma_R^2\Omega^{\frac{5}{6}}\right)^2+1.2\sigma_R^2\Omega^{\frac{5}{6}}}}\Biggr]$
  $\left\langle\Delta x_0^2\right\rangle,\left\langle\Delta y_0^2\right\rangle$   $0.33\,W_0^2 \sigma_R^2 \Omega^{-\frac{7}{6}}$
  $\left\langle \Delta \Theta_{1/2}^2\right\rangle$                               $\ln\Biggl[1+\frac{1.2\sigma_R^2\Omega^{\frac{5}{6}}}{\left(1+2.96\sigma_R^2\Omega^{\frac{5}{6}}\right)^2}\Biggr]$
  $\left\langle \Delta \Theta_1\Delta \Theta_2\right\rangle$                      $\ln\Biggl[1-\frac{0.8\sigma_R^2\Omega^{\frac{5}{6}}}{\left(1+2.96\sigma_R^2\Omega^{\frac{5}{6}}\right)^2}\Biggr]$
  ------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------

*Strong turbulence*

  ------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------
  $\left\langle \Theta_{1/2}\right\rangle$                                        $\ln\Biggl[\frac{\bigl(\gamma+1.71 \sigma_R^{\frac{12}{5}}\Omega^{{-}1}{-}2.99 \sigma_R^{\frac{8}{5}}\Omega^{{-}1}\bigr)^2}{\sqrt{\bigl(\gamma+1.71 \sigma_R^{\frac{12}{5}}\Omega^{{-}1}{-}2.99 \sigma_R^{\frac{8}{5}}\Omega^{{-}1}\bigr)^2+3.24 \gamma\sigma_R^{\frac{12}{5}}\Omega^{-1}}}\Biggr]$
  $\left\langle\Delta x_0^2\right\rangle,\left\langle\Delta y_0^2\right\rangle$   $0.75\,W_0^2 \sigma_R^{\frac{8}{5}} \Omega^{-1}$
  $\left\langle \Delta \Theta_{1/2}^2\right\rangle$                               $\ln\Biggl[1+\frac{13.14 \gamma\sigma_R^{\frac{12}{5}}\Omega^{-1}}{\bigl(\gamma+1.71 \sigma_R^{\frac{12}{5}}\Omega^{{-}1}{-}2.99 \sigma_R^{\frac{8}{5}}\Omega^{{-}1}\bigr)^2}\Biggr]$
  $\left\langle \Delta \Theta_1\Delta \Theta_2\right\rangle$                      $\ln\Biggl[1+\frac{0.65\gamma \sigma_R^{\frac{12}{5}}\Omega^{-1}}{\bigl(\gamma+1.71 \sigma_R^{\frac{12}{5}}\Omega^{{-}1}{-}2.99 \sigma_R^{\frac{8}{5}}\Omega^{{-}1}\bigr)^2}\Biggr]$
  ------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------

  : Mean values and elements of the covariance matrix of the vector $\mathbf{v}$ for horizontal links, given in terms of the transmitter beam-spot radius $W_0$, the Fresnel parameter of the beam $\Omega{=}\frac{kW_0^2}{2L}$, and the Rytov parameter $\sigma_R^2=1.23 C_n^2\,k^{\frac{7}{6}}L^{\frac{11}{6}}$. Here $k$ is the beam wave number, $L$ is the propagation distance, $C_n^2$ $[m^{-\frac{2}{3}}]$ is the structure constant of the refractive index of the air, and $\gamma{=}(1{+}\Omega^2)/\Omega^2$. \[tab:covariance\]

Table \[tab:covariance\] lists the non-zero means and covariance matrix elements of the four-dimensional Gaussian distribution for the random vector $\mathbf{v}$ defined in Eq. (\[Eq:vMultNoise\]). We list the results for weak and strong turbulence. The weak-turbulence results apply, e.g., to short propagation distances with $\sigma_R^2 \lesssim 1$; in near-to-ground propagation this condition is fulfilled at optical frequencies for night-time communication. The strong-turbulence results apply to short-distance communication with $\sigma_R^2\gg 1$; for a near-to-ground communication scenario this corresponds to day-time operation on clear sunny days.
Log-normal model {#Sec:LogNormal} ================ The log-normal probability distribution for the transmittance is $$\begin{aligned} \mathcal{P}(\eta)=\frac{1}{\eta\sigma\sqrt{2\pi}}\exp\left[-\frac{\Bigl(-\ln\eta-\mu\Bigr)^2}{2\sigma^2}\right],\end{aligned}$$ where $$\begin{aligned} \mu=-\ln\left(\frac{\langle\eta\rangle^2}{\sqrt{\langle\eta^2\rangle}}\right)\end{aligned}$$ and $$\begin{aligned} \sigma^2=\ln\left(\frac{\langle\eta^2\rangle}{\langle\eta\rangle^2}\right)\label{SigmaLogNormal1}\end{aligned}$$ are the parameters of the log-normal distribution; they are functions of the first and second moments of the transmittance, $$\begin{aligned} \left\langle\eta\right\rangle=\int_\mathcal{A}{{\rm{d}}}^2 \mathbf{r} \Gamma_2(\mathbf{r}),\label{meanEta}\end{aligned}$$ $$\begin{aligned} \left\langle\eta^2\right\rangle{=}\int_\mathcal{A}{{\rm{d}}}^2 \mathbf{r}_1{{\rm{d}}}^2\mathbf{r}_2\Gamma_4(\mathbf{r}_1,\mathbf{r}_2),\label{squareEta}\end{aligned}$$ where the field coherence functions $\Gamma_2$ and $\Gamma_4$ are given by Eqs. (\[Gamma2\]) and (\[Gamma4\]), respectively. Here the integration is performed over the circular aperture opening area $\mathcal{A}$. The first moment of the transmittance (\[meanEta\]) is evaluated explicitly as $$\begin{aligned} \left\langle\eta\right\rangle=1-\exp\left[-\frac{2a^2}{\langle W^2\rangle}\right],\label{MeanEta}\end{aligned}$$ where $a$ is the aperture radius and $$\begin{aligned} \langle W^2\rangle=\langle S_{xx}\rangle+4\langle x_0^2\rangle \end{aligned}$$ is the so-called “long-term” beam mean-square radius [@Fante1]. Here $\langle S_{xx}\rangle$ and $\langle x_0^2\rangle$ are defined by Eqs. (\[Eq:SxxMean\]) and (\[App:bwvariance\]), respectively. For weak turbulence, from Eqs. (\[IntWT\]) and (\[bwweak\]) we evaluate $\langle W^2\rangle=W_0^2\Omega^{-2}+4.33W_0^2\sigma_R^2\Omega^{-\frac{7}{6}}$. However, the integration of Eq. (\[squareEta\]) is more involved. In this Letter we evaluated Eq.
(\[squareEta\]) numerically.
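The moment-matching relations above translate directly into code. A minimal sketch (Python; the moment values are hypothetical placeholders, since $\langle\eta\rangle$ and $\langle\eta^2\rangle$ come from the integrals (\[meanEta\]) and (\[squareEta\])):

```python
import numpy as np

# Sketch (hypothetical moment values, not from the paper): given the
# first two moments of the transmittance, <eta> and <eta^2>, the
# log-normal parameters follow from the formulas above:
#   mu      = -ln( <eta>^2 / sqrt(<eta^2>) )
#   sigma^2 =  ln( <eta^2> / <eta>^2 )
def lognormal_params(mean_eta, mean_eta2):
    mu = -np.log(mean_eta ** 2 / np.sqrt(mean_eta2))
    sigma2 = np.log(mean_eta2 / mean_eta ** 2)
    return mu, sigma2

mu, sigma2 = lognormal_params(0.5, 0.3)   # illustrative moment values

# Consistency check: for eta = exp(-X) with X ~ N(mu, sigma^2),
# <eta> = exp(-mu + sigma^2/2) and <eta^2> = exp(-2*mu + 2*sigma^2).
assert np.isclose(np.exp(-mu + sigma2 / 2), 0.5)
assert np.isclose(np.exp(-2 * mu + 2 * sigma2), 0.3)
```

The built-in check inverts the standard log-normal moment formulas, confirming that the expressions for $\mu$ and $\sigma^2$ reproduce the prescribed moments.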
--- abstract: 'The secret key rate attained by a free-space QKD system in the [*near-field*]{} propagation regime (relevant for $1$-$10$ km range using $\approx 7$ cm radii transmit and receive apertures and $1.55~\mu$m transmission center wavelength) can benefit from the use of multiple spatial modes. A suite of theoretical research in recent years has suggested the use of orbital-angular-momentum (OAM) bearing spatial modes of light to obtain this improvement in rate. We show that most of the aforesaid rate improvement in the near field afforded by spatial-mode multiplexing can be realized by a simple-to-build overlapping Gaussian beam array (OGBA) and a pixelated detector array. With the current state-of-the-art in OAM-mode-sorting efficiencies, the key-rate performance of our OGBA architecture could come very close to, if not exceed, that of a system employing OAM modes, but at a fraction of the cost.' author: - 'Boulat A. Bash$^{1}$ and Nivedita Chandrasekaran$^{2}$ and Jeffrey H. Shapiro$^{2}$ and Saikat Guha$^{1}$' title: Quantum Key Distribution Using Multiple Gaussian Focused Beams --- [^1] Introduction ============ The extremely low key rates afforded by quantum key distribution (QKD) compared to computational cryptographic schemes pose a significant challenge to the widespread adoption of QKD. The main reason for the poor rate performance is that the QKD [*capacity*]{} of a single-mode lossy bosonic channel, i.e., the maximum key rate attainable using any direct-transmission QKD protocol, is proportional to the end-to-end transmissivity of the channel $\eta$ in the high-loss regime. Therefore, to increase the key rate one must increase the number of modes used by the system. This can be done by increasing the optical bandwidth $\nu$ in modes/s that can be used by the QKD protocol, as well as by employing multiple spatial modes. Here we investigate the latter.
Formally, the QKD capacity of a single-mode bosonic channel where we employ both polarizations of light is $2\nu \log_2\left(\frac{1}{1-\eta}\right)\approx 2.88\,\nu\eta$ bits/s when $\eta \ll 1$ [@pirandola15QKDcap]. Since $\eta \propto \frac{e^{-\alpha L}}{L^2}$, this corresponds to an exponential decay of key rate with distance $L$ in fiber and free-space propagation in non-turbulent atmosphere. While the extinction coefficient $\alpha$ may be modest for atmospheric propagation in clear weather at a well-chosen wavelength, the inverse-square decay of rate with distance is unavoidable in the [*far-field*]{} regime even in vacuum (where $\alpha=0$). This is because the free-space optical channel is characterized by the Fresnel number product $D_{\rm f} \equiv A_{\rm t}A_{\rm r}/(\lambda L)^2$, where $A_{\rm t}$ and $A_{\rm r}$ are the respective areas of the transmitter and receiver apertures, and $\lambda$ is the transmission center wavelength. In the far-field regime $D_{\rm f} \ll 1$ and only one transmitter-pupil spatial mode couples significant power into the receiver pupil over an $L$-meter line-of-sight channel with input-output power transmissivity $\eta_0 \approx D_{\rm f} \propto 1/L^2$ [@Sha05]. Thus, employing multiple orthogonal spatial modes in the far-field regime cannot yield an appreciable improvement in the achievable QKD rate. Therefore, our interest in this paper is in the [*near-field*]{} propagation regime ($D_{\rm f} \gg 1$), which is relevant to metropolitan-area QKD, as well as line-of-sight over-the-surface maritime applications of QKD. In this near-field regime, approximately $D_{\rm f}$ mutually-orthogonal spatial modes have near-perfect power transmissivity ($\eta \approx 1$) [@Sha05].
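To make the near-field/far-field boundary concrete, the following sketch (Python) evaluates the Fresnel number product $D_{\rm f}=A_{\rm t}A_{\rm r}/(\lambda L)^2$ for circular apertures with the $7$ cm radii and $1.55~\mu$m wavelength quoted above (the range values are illustrative):

```python
import math

# Fresnel number product D_f = A_t * A_r / (lambda * L)^2 for circular
# apertures. D_f >> 1 marks the near field, D_f << 1 the far field.
def fresnel_number(radius_t, radius_r, wavelength, L):
    A_t = math.pi * radius_t ** 2   # transmitter aperture area [m^2]
    A_r = math.pi * radius_r ** 2   # receiver aperture area [m^2]
    return A_t * A_r / (wavelength * L) ** 2

lam = 1.55e-6   # transmission center wavelength [m]
r = 0.07        # aperture radius [m]
for L in (1e3, 1e4, 1e5):           # illustrative ranges [m]
    print(f"L = {L/1e3:6.0f} km : D_f = {fresnel_number(r, r, lam, L):.3f}")
```

With this geometry $D_{\rm f}\approx 99$ at $L=1$ km and drops to $\approx 1$ at $10$ km, consistent with the $1$-$10$ km near-field range quoted in the abstract.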
Thus, multiplexing over multiple orthogonal spatial modes could substantially improve the total QKD rate, with the gain in rate over using a single spatial mode (such as a focused Gaussian beam) being approximately proportional to $D_{\rm f}$, and hence more pronounced at shorter range $L$ (where $D_{\rm f}$ is high). Laguerre-Gauss (LG) functions in the two-dimensional transverse coordinates form an infinite set of mutually-orthogonal spatial modes, which happen to carry orbital angular momentum (OAM). There have been several suggestions in recent years to employ LG modes for QKD, both based on laser-light and single-photon encodings [@berkhout10oam; @mirhosseini15twistedlightqkd; @horiuchi15twistedbeam; @vallone14twistedlightqkd; @krenn14vienna; @malik12oamturb; @djordjevic13oamqkd; @junlin10sixstate], and the purported rate improvement has been attributed to the OAM degree of freedom of the photon. While multiplexing over orthogonal spatial modes could undoubtedly improve the QKD rate in the near-field propagation regime as explained above, two questions arise: 1. Can other orthogonal spatial mode sets that do [*not*]{} carry OAM be as effective as LG modes in achieving the spatial-multiplexing rate improvement in the near field? 2. Does one truly need orthogonal modes to obtain this spatial-multiplexing gain or are there simpler-to-generate mode profiles that might suffice? Question (1) was answered affirmatively for classical [@Sha05; @chandrasekaran14pieturb1] and quantum-secure private communication (without two-way classical communication as is done in QKD) [@chandrasekaran14pieturb2] over the near-field vacuum propagation and turbulent atmospheric optical channels: Hermite-Gauss (HG) modes are unitarily equivalent to the LG modes and have identical power-transfer eigenvalues $\left\{\eta_m\right\}$, $1 \le m < \infty$.
Since the respective communication capacity of mode $m$ is a function of $\eta_m$ and the transmit power on mode $m$, HG modes, which do [*not*]{} carry OAM, can in principle achieve the same rate as LG modes, notwithstanding that the hardware complexity and efficiency of generation and separation of orthogonal LG and HG modes could be quite different. Our goal is to address questions (1) and (2) above for QKD. The answer to (1) is trivially affirmative, at least for the case of vacuum propagation (no atmospheric turbulence or extinction), based on an argument similar to the one used in Refs. [@Sha05; @chandrasekaran14pieturb1; @chandrasekaran14pieturb2]. We show a potential gain of one to two orders of magnitude in the key rate by using multiple spatial modes over a $1$ km link, assuming $\approx 7$ cm radii transmitter and receiver apertures, and $\lambda = 1.55~\mu$m laser-light transmission. The bulk of our analysis addresses question (2) for the optical vacuum propagation channel, which we answer negatively. We show that most of the spatial-multiplexing gain afforded by mutually-orthogonal modes (either HG or LG) in the near field can be obtained using a focused overlapping Gaussian beam array (OGBA) with optimized beam geometry, in which beams are individually amplitude and/or phase modulated to realize the QKD protocol. These Gaussian focused beams (FBs) are [*not*]{} mutually-orthogonal spatial modes, and therefore the power that leaks into FB $m$ from the neighboring FBs has the same effect on the key rate $R_m(L)$ of that FB as do excess noise sources like detector dark current or electrical Johnson noise. Non-zero excess noise causes the rate-distance function $R_m(L)$ to fall to zero at a minimum transmissivity threshold $\eta_{\rm min}$, or, equivalently, at a maximum range threshold $L_{\rm max}$ such that $R_m(L) = 0$ for $L>L_{\rm max}$.
Thus, while packing the FBs closer increases the spatial-multiplexing gain, it also increases the excess noise on each FB channel, resulting in decreased $R_m(L)$. For any given range $L$ there should exist an optimal (key-rate-maximizing) solution for spatial geometry (tiling) of the FBs, power allocation across the FBs, and beam widths. For shorter range $L$ the optimal solution should involve a greater number of FBs, and the number of beams employed should be approximately proportional to $D_{\rm f}$. Here, instead of evaluating the optimal rate-maximizing solution as explained above (which is extremely difficult), we find a numerical solution to a constrained optimization problem assuming a square-grid tiling of the FBs in the receiver aperture and restricting our attention to the discrete-variable (DV) laser-light decoy-state BB84 protocol [@lo05decoyqkd]. The rationale behind this is to obtain an [*achievable*]{} rate-distance envelope for the OGBA transmitter to compare with the ultimate key capacity attainable by employing infinitely many LG (or HG) modes. Since we restrict our attention to DV QKD, we assume that the OGBA transmitter is paired with a single-photon detector (SPD) array at the receiver with square-shaped pixels and unity fill factor with each FB being focused at the center of a detector pixel and there are as many detector pixels as the number of FBs (the optimal number of which is a function of $L$ as discussed above). Azimuthal LG modes retain their orthogonality when passed through hard-pupil circular apertures. Thus, generating and separating these modes without any power leaking between them is possible in theory, and has been the subject of much experimental work [@mirhosseini13oammodesseparation; @lavery12sortingoam]. The current state of the art is the separation of 25 OAM modes with average efficiency of $>92\%$, as was demonstrated in [@mirhosseini13oammodesseparation]. 
We compare the QKD rate achievable with our OGBA proposal to what is achievable using ideal separation of azimuthal LG modes as well as the best currently possible. In the latter case, we obtained the data for the cross-talk (overlap) between the separated modes (see [@mirhosseini13oammodesseparation Table 4a]) from the authors of [@mirhosseini13oammodesseparation]. We evaluate performance assuming ideal photodetectors and no atmospheric extinction. We find that the achievable rate using our OGBA architecture is at worst $4.4$ dB less than the state-of-the-art azimuthal LG mode separation in [@mirhosseini13oammodesseparation] and at worst $8.3$ dB less than the theoretical maximum for the entire azimuthal LG mode set, while using hard-pupil transmitter and receiver apertures of the same areas and the same center wavelength. The maximum rate gap occurs because the square-grid OGBA architecture does not allow the use of two and three beams; with two square pixels placed side-by-side at the receiver, the gap between the systems employing the state-of-the-art and ideal azimuthal LG mode separation reduces to $2.6$ dB and $6.3$ dB, respectively. Current technology for optical communication using orthogonal modes uses bulky and expensive components [@willner15oam]. While advances in enabling technology could reduce the device size, weight and cost of orthogonal mode generation and separation, our results show that using OAM modes for QKD may not be worth the trouble: the gain in QKD key rate in the near field is modest compared to what can already be obtained by our fairly simple-to-implement OGBA architecture. This paper is organized as follows: in the next section we introduce the basic mathematics of laser light propagation in vacuum using soft-pupil (Gaussian attenuation) apertures.
In Section \[sec:lgmodes\] we consider the propagation of LG modes using hard-pupil circular apertures, while in Section \[sec:gaussian\] we discuss the mathematical model of the OGBA architecture that we propose in this paper. Using the expressions derived in Sections \[sec:lgmodes\] and \[sec:gaussian\], we numerically evaluate the QKD rate using various beam and aperture geometries, and report the results in Section \[sec:results\]. We conclude with a discussion of the implications of our results as well as future work, in Section \[sec:discussion\]. Bosonic Mode Sets and the Degrees of Freedom of the Photon {#sec:bosonic} ========================================================== Consider propagation of linearly-polarized, quasimonochromatic light with center wavelength $\lambda$ (that is, a narrow transmission band $\Delta \lambda \ll \lambda$ around the center wavelength) from Alice’s transmitter pupil in the $z=0$ transverse plane with a complex-field-unit pupil function $A_{\rm T}({\bm \rho})$, ${\bm \rho} \equiv (x,y)$, through a $L$-meter line-of-sight free-space channel, and received by Bob’s receiver pupil in the $z=L$ plane with aperture function $A_{\rm R}({\bm \rho^\prime})$, ${\bm \rho^\prime} \equiv (x^\prime,y^\prime)$. Alice’s transmitted field’s complex envelope $E_0({\bm \rho},t)$ is multiplied (truncated) by the complex-valued transmit-aperture function $A_{\rm T}({\bm \rho})$, undergoes free-space diffraction over the $L$-meter path, and is truncated by Bob’s receiver-aperture function $A_{\rm R}({\bm \rho^\prime})$, to yield the received field $E_L({\bm \rho^\prime},t)$. The overall input-output relationship is described by the following linear-system equation: $$\begin{aligned} E_L({\bm \rho^\prime},t)&= \int E_0({\bm \rho},t - L/c) \, h({\bm \rho^\prime}, {\bm \rho}, t) \, \mathrm{d}^2 {\bm \rho}, \label{eq:fresnel} \end{aligned}$$ where the channel’s Green’s function $h({\bm \rho^\prime}, {\bm \rho}, t)$ is a spatial impulse response. 
We assume vacuum propagation and drop the time argument $t$ from the Green’s function: $$\begin{aligned} \label{eq:vacpropkernel} h({\bm \rho^\prime}, {\bm \rho})& = A_{\rm R}({\bm \rho^\prime}) \, \frac{\exp\left[ik \left(L + |{\bm \rho^\prime}-{\bm \rho}|^2/2L\right)\right]}{i \lambda L} \, A_{\rm T}({\bm \rho}),\end{aligned}$$ where $k = 2\pi/\lambda$. Normal-mode decomposition of the vacuum-propagation Green’s function yields an infinite set of orthogonal input-output spatial-mode pairs (a mode being a normalized spatio-temporal field function of a given polarization), that is, an infinite set of non-interfering parallel spatial channels. In other words, $$\begin{aligned} \label{eq:eigenmodes}\int h({\bm \rho^\prime},{\bm \rho}) \Phi_m({\bm \rho})\mathrm{d}^2{\bm \rho} &= \sqrt{\eta_m} \,\phi_m({\bm \rho^\prime}), \, {\rm for}\, m=1,2,\ldots,\end{aligned}$$ where $\left\{\Phi_m({\bm \rho})\right\}$ forms a complete orthonormal (CON) spatial basis in the transmit-aperture plane before the aperture mask $A_{\rm T}({\bm \rho})$, and $\left\{\phi_m({\bm \rho^\prime})\right\}$ forms a CON spatial basis in the receiver-aperture plane after the aperture mask $A_{\rm R}({\bm \rho^\prime})$. That is, $$\begin{aligned} \int\Phi_m({\bm \rho})\Phi_n^\ast({\bm \rho})\mathrm{d}^2{\bm \rho}=\delta_{m,n},~\int|\Phi_m({\bm \rho})|^2\mathrm{d}^2{\bm \rho}=1\\ \int\phi_m({\bm \rho})\phi_n^\ast({\bm \rho})\mathrm{d}^2{\bm \rho}=\delta_{m,n},~\int|\phi_m({\bm \rho})|^2\mathrm{d}^2{\bm \rho}=1,\end{aligned}$$ where $\delta_{m,n}=\left\{\begin{array}{lr}1&\text{if~}m=n\\0&\text{if~}m\neq n\end{array}\right.$ is the Kronecker delta. 
Therefore, the singular-value decomposition (SVD) of $h({\bm \rho^\prime},{\bm \rho})$ yields: $$\begin{aligned} h({\bm \rho^\prime},{\bm \rho})&=\sum_{m=1}^\infty \sqrt{\eta_m} \,\phi_m({\bm \rho^\prime})\Phi_m^\ast({\bm \rho}).\end{aligned}$$ Physically this implies that if Alice excites the spatial mode $\Phi_m({\bm \rho})$, it in turn excites the corresponding spatial mode $\phi_m({\bm \rho^\prime})$ (and no other) within Bob’s receiver. This specific set of transmitter-plane and receiver-plane spatial-mode pairs, which form a set of non-interfering parallel channels, comprises the eigenmodes for the channel geometry. The fraction of power Alice puts in the mode $\Phi_m({\bm \rho})$ that appears in Bob’s spatial mode $\phi_m({\bm \rho^\prime})$ is the modal transmissivity, $\eta_m$. We assume that the modes are ordered such that $$\begin{aligned} 1 \ge \eta_1 \ge \eta_2 \ge \ldots \eta_m \ge \ldots \ge 0.\end{aligned}$$ If Alice excites the mode $\Phi_m({\bm \rho})$ in a coherent state $|\beta\rangle$ (the quantum description of an ideal laser-light pulse of intensity $|\beta|^2$ photons and phase ${\rm Arg}(\beta)$), then the resulting state of Bob’s mode $\phi_m({\bm \rho^\prime})$ is an attenuated coherent state $|\sqrt{\eta_m}\beta\rangle$. The power transmissivities $\eta_m(\omega)$ are strictly increasing functions of the transmission frequency $\omega = 2\pi c/\lambda$, each increasing from $\eta_m = 0$ at $\omega = 0$, to $\eta_m = 1$ at $\omega = \infty$. 
Let us consider Gaussian-attenuation (soft-pupil) apertures with $$\begin{aligned} \label{eq:tx_gauss_ap} A_{\rm T}({\bm \rho}) &= \exp\left[-|{\bm \rho}|^2/r_{\rm t}^2\right] \text{~and}\\ \label{eq:rx_gauss_ap} A_{\rm R}({\bm \rho^\prime}) &= \exp\left[-|{\bm \rho^\prime}|^2/r_{\rm r}^2\right].\end{aligned}$$ For this choice of pupil functions, there are two unitarily-equivalent sets of eigenmodes: the aforementioned Laguerre-Gauss (LG) modes, which have circular symmetry in the transverse plane and are known to carry orbital angular momentum (OAM), and the Hermite-Gauss (HG) modes, which have rectangular symmetry in the transverse plane and do not carry OAM. The input LG modes, labeled by the radial index $p=0,1,2,\ldots $ and the azimuthal index $l=0,\pm 1,\pm 2, \ldots$, are expressed using the polar coordinates ${\bm \rho}\equiv(r,\theta)$ as follows: $$\begin{aligned} \label{eq:inputLG} \Phi_{p,l}(r,\theta)&=\sqrt{\frac{p!}{\pi(|l|+p)!}}\frac{1}{a}\left[\frac{r}{a}\right]^{|l|}\mathcal{L}_p^{|l|}\left(\frac{r^2}{a^2}\right)\exp\left(-\left[\frac{1}{2a^2}+\frac{ik}{2L}\right]r^2+il\theta\right),\end{aligned}$$ where $\mathcal{L}_p^{|l|}(\cdot)$ denotes the generalized Laguerre polynomial indexed by $p$ and $|l|$. For completeness of exposition, the input HG modes, labeled by the horizontal and vertical indices $n,m=0,1,2,\ldots$, are expressed using the Cartesian coordinates ${\bm \rho}\equiv(x,y)$ as follows: $$\begin{aligned} \Phi_{n,m}(x,y)&=\frac{1}{a\sqrt{\pi n! m! 2^{n+m}}}H_n\left(\frac{x}{a}\right)H_m\left(\frac{y}{a}\right)\exp\left(-\left[\frac{1}{2a^2}+\frac{ik}{2L}\right][x^2+y^2]\right)\end{aligned}$$ where $H_p(\cdot)$ is the $p^{\text{th}}$ Hermite polynomial. 
In the expressions for both LG and HG modes, $a$ is a beam width parameter given by $$\begin{aligned} \label{eq:a_gauss}a&=\frac{r_{\rm t}}{\sqrt{2}(1+4D_{\rm f})^{1/4}},\end{aligned}$$ where $$\begin{aligned} D_{\rm f}&=\frac{kr_{\rm t}^2}{4L}\frac{kr_{\rm r}^2}{4L}\end{aligned}$$ is the product of the transmitter-pupil and receiver-pupil Fresnel numbers for this soft-pupil vacuum propagation configuration. Alternatively, $D_{\rm f}={A_\mathrm{t} A_\mathrm{r}}/{(\lambda L)^2}$ when expressed using the transmitter and receiver pupils’ areas $A_{\rm t} \equiv \int |A_{\rm T}({\bm \rho})|^2 \mathrm{d}^2{\bm \rho} = \frac{\pi r_{\rm t}^2}{2}$ and $A_{\rm r} \equiv \int |A_{\rm R}({\bm \rho^\prime})|^2 \mathrm{d}^2{\bm \rho^\prime} = \frac{\pi r_{\rm r}^2}{2}$. The expressions for the output LG and HG modes are given by equations (28) and (24) in [@Sha05], respectively. The expression for the power-transfer eigenvalues $\eta_q$ for either mode set admits the following simple form: $$\begin{aligned} \eta_{q}& =\left(\frac{1+2D_\mathrm{f}-\sqrt{1+4D_\mathrm{f}}}{2D_\mathrm{f}}\right)^{q}, \, {\rm for}\, q=1,2,\ldots, \label{eq:gen_modes}\end{aligned}$$ where $q=2p+|l|+1$ for LG modes, and $q=n+m+1$ for HG modes. Thus, there are $q$ spatial modes of transmissivity $\eta_q$. The LG and HG modes span the same eigenspace, and hence are related by a unitary transformation (a linear mode transformation). The first mode in both the LG and HG mode sets, defined by $p=l=n=m=0$, is known as the *Gaussian beam*. 
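As a quick sanity check, $D_{\rm f}$ and $\eta_q$ are simple enough to evaluate directly. The following is a sketch of ours (function names are illustrative; lengths in meters):

```python
import math

def fresnel_product(r_t, r_r, wavelength, L):
    """D_f = (k r_t^2 / 4L) * (k r_r^2 / 4L) for soft Gaussian pupils."""
    k = 2 * math.pi / wavelength
    return (k * r_t**2 / (4 * L)) * (k * r_r**2 / (4 * L))

def eta_q(q, D_f):
    """Power-transfer eigenvalue of the q-th eigenvalue group (LG or HG)."""
    base = (1 + 2 * D_f - math.sqrt(1 + 4 * D_f)) / (2 * D_f)
    return base ** q
```

For $D_{\rm f}\gg 1$ (near field) the base of the geometric sequence approaches one and many mode groups couple appreciable power; for $D_{\rm f}\ll 1$ (far field) $\eta_1\approx D_{\rm f}$, leaving essentially a single useful mode.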
The input Gaussian beam is expressed as follows: $$\begin{aligned} \label{eq:inputGauss}\Phi_{0,0}(x,y)&=\frac{1}{\sqrt{\pi}a}\exp\left(-\left[\frac{1}{2a^2}+i\frac{k}{2L}\right](x^2+y^2)\right).\end{aligned}$$ LG Modes and Hard-Pupil Circular Apertures {#sec:lgmodes} ========================================== The soft-pupil Gaussian apertures used in the preceding section are purely theoretical constructs: while they greatly simplify the mathematics, they are impossible to realize physically. Let us thus consider hard-pupil circular apertures of areas $A_{\rm t}$ and $A_{\rm r}$, that is, $$\begin{aligned} \label{eq:ATc} A_{\rm T}({\bm \rho})& = \left\{\begin{array}{ll}1&\text{if~} |{\bm \rho}| \le r_{\rm t}\\0&\text{otherwise}\end{array}\right.,\text{~and}\\ \label{eq:ARc} A_{\rm R}({\bm \rho^\prime})& = \left\{\begin{array}{ll}1&\text{if~} |{\bm \rho^\prime}| \le r_{\rm r}\\0&\text{otherwise}\end{array}\right.\end{aligned}$$ with the corresponding areas defined as $A_{\rm t} \equiv \int |A_{\rm T}({\bm \rho})|^2 \mathrm{d}^2{\bm \rho} = \pi r_{\rm t}^2$ and $A_{\rm r} \equiv \int |A_{\rm R}({\bm \rho^\prime})|^2 \mathrm{d}^2{\bm \rho^\prime} = \pi r_{\rm r}^2$. Neither LG nor HG modes form an eigenmode set for these hard-pupil apertures. Instead, their eigenmodes are prolate spheroidal functions, and the power-transfer eigenvalues $\eta_m(\omega)$, indexed by two integers $m \equiv (m_1, m_2)$, have known, yet quite complicated expressions [@slepian64prolate; @slepian65apodization]. If the LG (or HG) modes are used as input into the hard-pupil system, the output modes are non-orthogonal in general, as the expressions that we derive next show. 
Employing the vacuum propagation kernel in with the expression for the input LG mode in , substituting the expressions for the hard circular pupils in and , and re-arranging terms yields: $$\begin{aligned} \label{eq:phi_out}\phi_{p,l}(r',\theta')&=\frac{\exp[ikL+il\theta']\sqrt{p!}}{i a\lambda L\sqrt{\pi(|l|+p)!}}\int_0^{r_{\rm t}}\int_{0}^{2\pi} \left[\frac{r}{a}\right]^{|l|}\mathcal{L}_p^{|l|}\left[\frac{r^2}{a^2}\right]\exp\left[-\frac{r^2}{2a^2}+\frac{ik}{2L}(r'^2-2rr'\cos\theta)+il\theta\right]r\mathrm{d}\theta\mathrm{d}r,\end{aligned}$$ for $r'\in[0,r_{\rm r}]$ and $\theta'\in[0,2\pi]$, where we first substitute $|{\bm \rho^\prime}-{\bm \rho}|^2=r^2+r'^2-2rr'\cos(\theta-\theta')$, and then substitute $\theta\rightarrow\theta-\theta'$. Now, the integral representation of the Bessel function of the first kind given in Appendix \[app:besselint\] allows the following evaluation of the integral with respect to $\theta$ in : $$\begin{aligned} \label{eq:inner_int}\int_{0}^{2\pi}\exp\left[i\left(l\theta-\frac{krr'\cos\theta}{L}\right)\right]\mathrm{d}\theta=2\pi \exp\left[-\frac{il\pi}{2}\right]J_l\left[\frac{krr'}{L}\right].\end{aligned}$$ Substitution of into yields: $$\begin{aligned} \label{eq:phi_out1}\phi_{p,l}(r',\theta')&=\frac{2\exp\left[ikL+il\theta'+\frac{ikr'^2}{2L}-\frac{il\pi}{2}\right]\sqrt{\pi p!}}{i a\lambda L\sqrt{(|l|+p)!}}\int_0^{r_{\rm t}} \left[\frac{r}{a}\right]^{|l|}\mathcal{L}_p^{|l|}\left[\frac{r^2}{a^2}\right]\exp\left[-\frac{r^2}{2a^2}\right]J_l\left[\frac{krr'}{L}\right]r\mathrm{d}r.\end{aligned}$$ While the Bessel function is not an elementary function, it can be efficiently evaluated by a computer (using, e.g., MATLAB). Now let’s evaluate the cross-talk (overlap) between the output modes. 
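Before doing so, we note that the single radial integral above lends itself to direct numerical evaluation. A sketch in Python with SciPy (rather than MATLAB; parameter values below are illustrative choices of ours):

```python
import numpy as np
from scipy.special import jv, genlaguerre
from scipy.integrate import quad
from math import factorial, pi, sqrt

def phi_out(p, l, rp, thetap, a, k, L, r_t):
    """Evaluate the output LG mode phi_{p,l}(r', theta') by computing
    the single radial integral over [0, r_t] numerically."""
    lam = 2 * pi / k  # wavelength
    lag = genlaguerre(p, abs(l))
    # Phase-and-normalization prefactor in front of the radial integral.
    pref = (2 * np.exp(1j * (k * L + l * thetap + k * rp**2 / (2 * L) - l * pi / 2))
            * sqrt(pi * factorial(p) / factorial(abs(l) + p))
            / (1j * a * lam * L))
    # The integrand is real: power law, generalized Laguerre polynomial,
    # Gaussian envelope, and Bessel function J_l.
    def integrand(r):
        return ((r / a) ** abs(l) * lag(r**2 / a**2)
                * np.exp(-r**2 / (2 * a**2)) * jv(l, k * r * rp / L) * r)
    radial, _ = quad(integrand, 0.0, r_t)
    return pref * radial
```

Modes with $l\neq 0$ vanish on axis (since $J_l(0)=0$ for $l\neq 0$), which this sketch reproduces, and the magnitude $|\phi_{p,l}(r',\theta')|$ is independent of $\theta'$, as it must be.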
We are interested in the fraction of power transmitted on the mode indexed by $(p,l)$ that is leaked to the mode indexed by $(q,m)$: $$\begin{aligned} \label{eq:radialcrosstalk}\eta_{\rm L}(p,l,q,m)&=\left|\int_0^{r_{\rm r}}\int_{0}^{2\pi} \phi_{p,l}(r',\theta')\phi_{q,m}^\ast(r',\theta')r'\mathrm{d}\theta'\mathrm{d}r'\right|^2.\end{aligned}$$ Substituting , we note that evaluation of the integral with respect to $\theta'$ yields: $\int_{0}^{2\pi}\exp\left[i(l-m)\theta'\right]\mathrm{d}\theta'=2\pi\delta_{l,m}$. Thus, while the radial LG modes are clearly non-orthogonal, the azimuthal LG modes retain their orthogonality when passed through hard-pupil circular apertures. However, azimuthal LG modes are unlikely to be perfectly separated in the near future. The current state-of-the-art experiments have been able to achieve cross-talk of $\eta_{\rm L}=7.9\pm0.7\%$ averaged across the 25 modes spanning $l=-12,-11,\ldots,12$ [@mirhosseini13oammodesseparation]; we evaluate the QKD rate for such a system using the cross-talk data from these experiments. Gaussian Beam Array and Hard-Pupil Square Apertures {#sec:gaussian} =================================================== Our OGBA architecture employs a square transmitter aperture. The receiver aperture is composed of square pixels of equal size. Gaussian beams are directed from the transmitter to the square pixels using linear phase tilts as in . 
The hard square pupils of areas $A_{\rm t}$ and $A_{\rm r}$ are given by: $$\begin{aligned} \label{eq:ATs} A_{\rm T}({\bm \rho})& = \left\{\begin{array}{ll}1&\text{if~} |x|,|y| \le l_{\rm t}/2\\0&\text{otherwise}\end{array}\right.,\text{~and}\\ \label{eq:ARs} A_{\rm R}({\bm \rho^\prime})& = \left\{\begin{array}{ll}1&\text{if~} |x'|,|y'| \le l_{\rm r}/2\\0&\text{otherwise}\end{array}\right..\end{aligned}$$ The corresponding areas are defined as $A_{\rm t} \equiv \int |A_{\rm T}({\bm \rho})|^2 \mathrm{d}^2{\bm \rho} = l_{\rm t}^2$ and $A_{\rm r} \equiv \int |A_{\rm R}({\bm \rho^\prime})|^2 \mathrm{d}^2{\bm \rho^\prime} = l_{\rm r}^2$. For simplicity of exposition, we ignore the linear phase tilt of the input Gaussian beam (and the corresponding offset of the output Gaussian beam), and derive the expression for the beam centered on the central pixel of the output aperture (in fact, while the implementation of the Gaussian beam array would use the linear phase tilts, we do not need to explicitly consider them in the analysis that follows). The beams directed at each pixel have intensity $|\alpha|^2$, which we optimize in the next section. 
Employing the vacuum propagation kernel in with the expression for the input Gaussian beam $\Phi_{0,0}(x,y)$ in , substituting expressions for the hard square pupils in and , and re-arranging terms yields the following: $$\begin{aligned} \phi_{0,0}(x',y')&=\frac{\sqrt{\pi}a\exp\left[ikL-(x'^2+y'^2)\left(\frac{a^2k^2}{2L^2}-\frac{ik}{2L}\right)\right]}{2i\lambda L}\left(\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}-\frac{iakx'}{\sqrt{2}L}\right]+\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}+\frac{iakx'}{\sqrt{2}L}\right]\right)\nonumber\\ &\phantom{=}\times\left(\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}-\frac{iaky'}{\sqrt{2}L}\right]+\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}+\frac{iaky'}{\sqrt{2}L}\right]\right)\\ \label{eq:phiGauss}&=\frac{2\sqrt{\pi}a\exp\left[ikL-(x'^2+y'^2)\left(\frac{a^2k^2}{2L^2}-\frac{ik}{2L}\right)\right]}{i\lambda L}\mathfrak{Re}\left[\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}+\frac{iakx'}{\sqrt{2}L}\right]\right]\mathfrak{Re}\left[\operatorname{Erf}\left[\frac{l_{\rm t}}{\sqrt{2}a}+\frac{iaky'}{\sqrt{2}L}\right]\right],\end{aligned}$$ where $\operatorname{Erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^xe^{-t^2}\mathrm{d}t$ is the error function and the simplification in is because of the symmetry of $\operatorname{Erf}(\cdot)$ as explained in Appendix \[app:erf\]. While the error function is not an elementary function, it can be efficiently evaluated by a computer (using, e.g., the Faddeeva Package [@faddeeva] which includes a wrapper for MATLAB). Suppose that the receiver aperture is constructed using square $l_{\rm d}\times l_{\rm d}$ m pixels. We consider two configurations for the layout of these pixels on the square aperture as illustrated in Figure \[fig:config\]: 1. \[item:center\] a pixel in the center of the aperture as shown in Figure \[fig:config\_1\]; 2. \[item:cluster\] a $2\times2$ pixel cluster in the center of the aperture as shown in Figure \[fig:config\_2\]; and 3. 
a $1\times 2$ pixel array with two square pixels placed side-by-side as shown in Figure \[fig:config\_3\]. Consider configurations \[item:center\] and \[item:cluster\]. We optimize the length of the pixel $l_{\rm d}$ when computing the QKD rate. Unless $l_{\rm r}/l_{\rm d}$ is an integer, the pixels at the edges of the aperture are cut off to fit into the aperture. While these pixels are either $l_{\rm d}\times l_{\rm s}$ m rectangles on the edges of the aperture or $l_{\rm s}\times l_{\rm s}$ m squares on the corner, for simplicity we still direct the beams at the centers of the hypothetical full $l_{\rm d}\times l_{\rm d}$ m pixels that are cut off by the edge of the aperture. The circular symmetry of the Gaussian beam allows us to limit our calculations to the set of pixels forming the octants illustrated in Figure \[fig:config\], as the interference profiles for the corresponding pixels in other octants are identical. The total QKD rate is computed by summing the products of the contribution from each of these pixels with the total number of identical pixels. Using the paraxial approximation, the fraction of power captured by a full (interior) $l_{\rm d}\times l_{\rm d}$ m pixel from a Gaussian beam focused on its center is: $$\begin{aligned} \label{eq:eta}\eta&=\int_{-l_{\rm d}/2}^{l_{\rm d}/2}\int_{-l_{\rm d}/2}^{l_{\rm d}/2}|\phi_{0,0}(x',y')|^2\mathrm{d}x'\mathrm{d}y'.\end{aligned}$$ The fraction of power captured by a partial (edge) pixel is obtained by appropriately adjusting the limits of integration in . 
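The captured-power fraction above can be computed numerically, since only the real parts of the complex-argument error functions and the Gaussian envelope survive in $|\phi_{0,0}|^2$. A sketch of ours, assuming SciPy's complex-argument `erf` and a simple midpoint rule (grid size and parameter names are illustrative):

```python
import numpy as np
from scipy.special import erf  # accepts complex arguments

def phi00_sq(xp, yp, a, k, L, l_t):
    """|phi_{0,0}(x', y')|^2 for the hard square transmitter pupil."""
    lam = 2 * np.pi / k
    amp = 2 * np.sqrt(np.pi) * a / (lam * L)
    env = np.exp(-(xp**2 + yp**2) * a**2 * k**2 / L**2)
    ex = np.real(erf(l_t / (np.sqrt(2) * a) + 1j * a * k * xp / (np.sqrt(2) * L)))
    ey = np.real(erf(l_t / (np.sqrt(2) * a) + 1j * a * k * yp / (np.sqrt(2) * L)))
    return amp**2 * env * ex**2 * ey**2

def pixel_eta(l_d, a, k, L, l_t, n=201):
    """Fraction of power captured by a centered l_d x l_d pixel,
    via a midpoint rule on an n x n grid."""
    xs = (np.arange(n) + 0.5) * l_d / n - l_d / 2
    X, Y = np.meshgrid(xs, xs)
    return float(phi00_sq(X, Y, a, k, L, l_t).sum() * (l_d / n) ** 2)
```

Shifting the grid `xs` reproduces the adjusted integration limits used for partial (edge) pixels and for the cross-talk terms.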
Since the Gaussian beam is circularly symmetric, the cross-talk from another beam that is focused on a pixel whose center is located $n$ pixels either to the left or to the right and $m$ pixels either above or below is expressed similarly: $$\begin{aligned} \label{eq:crosstalkGauss}\eta_{\rm L}(n,m)&=\int_{l_{\rm d}\left(m-\frac{1}{2}\right)}^{l_{\rm d}\left(m+\frac{1}{2}\right)}\int_{l_{\rm d}\left(n-\frac{1}{2}\right)}^{l_{\rm d}\left(n+\frac{1}{2}\right)}|\phi_{0,0}(x',y')|^2\mathrm{d}x'\mathrm{d}y'.\end{aligned}$$ Again, the cross-talk from another beam captured by a partial (edge) pixel is obtained by appropriately adjusting the limits of integration in . The total contribution of interference from cross-talk to noise afflicting the detector at the pixel that is $u$ pixels to the right and $v$ pixels above the bottom-left pixel is calculated by summing each interfering beam’s cross-talk given in and multiplying by the beam intensity $|\alpha|^2$: $$\begin{aligned} P_{\rm L}(u,v)&=\sum_{{\genfrac{}{}{0pt}{}{n,m\in\{0,\ldots,\lceil l_{\rm r}/l_{\rm d}\rceil-1\}}{n\neq u\lor m\neq v}}} |\alpha|^2\eta_{\rm L}(|u-n|,|v-m|).\end{aligned}$$ In configuration 3, two $l_{\rm s}\times l_{\rm s}$ square pixels are placed side-by-side, where $l_{\rm s}=l_{\rm r}/\sqrt{2}$. The beams are vertically centered on the corresponding square pixel but can be offset horizontally. When computing the QKD rate, we optimize the offset $l_{\rm o}$ of both beams from the center of the aperture. The fraction of power captured by each pixel and the cross-talk can be calculated by appropriately setting the limits of integration in and . Results {#sec:results} ======= We plot our results in Figure \[fig:mult\_modes\]. We assume vacuum propagation without any extinction losses or turbulence; the losses and cross-talk induced by the channel are solely from diffraction. Our repetition rate is $\nu=10^{10}$ modes/s. 
The yellow line is the capacity of a QKD system (see the discussion in Appendix \[app:decoyBB84\]) that employs both polarizations, the full set of orthogonal spatial modes, and soft Gaussian apertures. That is, it plots $$\begin{aligned} \nu C_s=-2\nu\sum_{q=1}^\infty q\log_2(1-\eta_q),\end{aligned}$$ where $\eta_q$ is given by . Next we examine the performance of the decoy state BB84 protocol that is reviewed in Appendix \[app:decoyBB84\]. All of these results are for apertures with total area $A=0.005\pi~\text{m}^2$, i.e., the effective area of a soft-pupil Gaussian aperture (as defined in Section \[sec:bosonic\]) with $r=0.1$ m and of a hard-pupil circular aperture of radius $r\approx 0.07$ m. The areas of the transmitter and receiver apertures are equal. Our operating wavelength is $\lambda=1.55~\mu$m. We assume dark click probability $p_{\rm d}=10^{-6}$, unity detector quantum efficiency $\eta_{\rm d}=1$, visibility $V=0.99$ (i.e., the probability that the beam splitter directs the pulse according to the basis chosen by Bob), and availability of capacity-achieving channel codes (i.e., error correction code efficiency $f_{\mathrm{leak}}=1$). We optimize the QKD rate $R(L)$, calculated in Appendix \[app:decoyBB84\], over the intensity of Alice’s pulses $|\alpha|^2$. The blue curve plots the QKD rate of a system employing the entire orthogonal spatial mode set with soft-pupil Gaussian apertures. To obtain this rate, we optimize over the intensity of Alice’s pulses $|\alpha|^2$. The soft-pupil Gaussian apertures are mathematically convenient devices; however, they are not realizable in practice. We thus turn our attention to hard-pupil apertures with the same area. First we examine azimuthal LG modes. The red curve plots the maximum rate achievable in theory using this mode set when hard-pupil circular apertures are used. There is no cross-talk between the modes since they retain their orthogonality. 
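The capacity series above converges quickly because $\eta_q$ decays geometrically in $q$. A sketch of its evaluation by truncation (tolerance and function name are ours, for illustration):

```python
import math

def qkd_capacity(D_f, tol=1e-15):
    """C = -2 * sum_q q * log2(1 - eta_q) bits per channel use for soft
    Gaussian pupils, with eta_q = base**q; the factor 2 counts both
    polarizations, and q counts the degenerate modes in group q."""
    base = (1 + 2 * D_f - math.sqrt(1 + 4 * D_f)) / (2 * D_f)
    total, q = 0.0, 1
    while True:
        term = -2 * q * math.log2(1 - base**q)
        total += term
        if term < tol:
            return total
        q += 1
```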
We optimize over the beam width $a$ and intensity $|\alpha|^2$ using the entire infinite set of LG modes, noting, however, that modes with high index couple only an insignificant portion of power from the transmitter to the receiver, and thus are not used at long distances. We also evaluate the theoretical performance of the decoy state BB84 QKD protocol using the data from the experimental system for separating 25 azimuthal LG modes indexed from -12 to 12 [@mirhosseini13oammodesseparation], and plot the results with the light blue curve. This is the current state-of-the-art in azimuthal LG mode separation. Because of various imperfections inherent in physical systems, there is cross-talk between modes in these experiments, as depicted in [@mirhosseini13oammodesseparation Figure 4a]; we obtained these data from the authors. We treat the erroneous counts from cross-talk as we treat the detector dark counts. The only source of loss is diffraction; we assume that there are no losses incurred in mode separation (even though they may be substantial) as well as through extinction and turbulence. In order to make a fair comparison between various systems, we normalize the cross-talk probabilities over the modes which couple significant power to the receiver (i.e., modes that we use).[^2] This normalization, while not ideal, avoids treating photons sent on the zeroth mode as lost to cross-talk in separation when only the zeroth mode is used (the case when $L$ is large). We optimize the QKD rate $R(L)$ over the beam width $a$ and intensity $|\alpha|^2$. The dip on the left side of the light blue curve (around $L=1$ km) occurs because the experiment was limited to 25 modes; more modes would improve the rate in that regime. The purple curve in Figure \[fig:mult\_modes\] plots the QKD rate using an optimal number of focused beams with an optimal choice of their overlap at the receiver aperture plane (the optimal overlap is range-dependent). 
The transmitter is equipped with an $l_{\rm t}\times l_{\rm t}$ hard-pupil square aperture, where the side length of the transmitter aperture is equal to the total side length of the receiver aperture, $l_{\rm t}=l_{\rm r}$. The receiver geometry is as shown in Figures \[fig:config\_1\] and \[fig:config\_2\]. For each beam we employ the same beam width $a$ and intensity $|\alpha|^2$, optimizing over those variables as well as the side length $l_{\rm d}$ of the full interior pixel. The dashed purple curve plots the maximum QKD rate achievable using the $1\times 2$ receiver pixel setup described in Figure \[fig:config\_3\] (we keep the $l_{\rm t}\times l_{\rm t}$ hard-pupil square transmitter aperture). Again, we optimize over the beam width $a$ and intensity $|\alpha|^2$; however, instead of the pixel side length $l_{\rm d}$ (which is set to $l_{\rm r}/\sqrt{2}$), we optimize over the beam offset $l_{\rm o}$. While practical systems have space between detector pixels, for simplicity we assume a unity-fill-factor single-photon square detector array with each beam focused at the center of one detector pixel (except in the $1\times2$ configuration). The optimal values of $a$, $l_{\rm d}$, and $|\alpha|^2$ are plotted in Figure \[fig:params\]. The light blue curve plots the QKD rate of a system that employs a single focused Gaussian beam (FB) and square apertures (we plot the optimal beam width and intensity in Figures \[fig:a\] and \[fig:mu\], respectively). We provide the comparison of the QKD rates achieved for hard apertures of equal areas using the OGBA and LG mode sets in Figure \[fig:comp\_rates\]. Discussion and Future Work {#sec:discussion} ========================== The primary takeaways from these results are: 1. One can potentially gain one to two orders of magnitude in key rate in the near-field propagation regime (e.g., over a $1$ km link using $7$ cm radii apertures at $1.55~\mu$m center wavelength) by using multiple spatial modes. But, 2. 
QKD using orthogonal azimuthal LG modes may not be worth it given the hardware complexity associated with generating and separating these spatially overlapping orthogonal modes. In this paper, we proposed an overlapping Gaussian beam array (OGBA) architecture, which uses an array of focused Gaussian beams with an optimized beam geometry. The OGBA architecture can yield most of the spatial-multiplexing gain in the QKD rate in the near field afforded by the use of azimuthal LG modes. As shown in Figure \[fig:comp\_rates\], the rate gain from using azimuthal LG modes over our OGBA architecture is modest: at most $6.3$ dB in theory if the entire azimuthal LG mode set is used with perfect separation and at most $2.6$ dB with the current state-of-the-art azimuthal LG mode separation implemented in the laboratory (without accounting for any losses introduced by the mode separation process). The losses associated with generating and separating these modes will likely offset this rate improvement. Furthermore, the performance of the OGBA (the green curve) might improve further if we use hexagonally-packed beam spots as opposed to a square grid; however, we have not yet examined this. Finally, in the near-field regime, CV QKD can improve the rate substantially over the DV BB84 protocol, since the CV scheme can effectively leverage a high-order constellation in the low-loss regime. Therefore, it would be instructive to evaluate an OGBA architecture employing CV QKD with a heterodyne detection array. We assumed vacuum propagation in the results reported in this paper. We are extending them to account for atmospheric turbulence in ongoing work. Clearly, turbulence will adversely affect all systems. It is known to break the orthogonality of the azimuthal LG modes [@chandrasekaran14pieturb1]. 
While the classical and private capacities of systems using multiple HG, LG, and FB modes are similar in turbulence [@chandrasekaran14pieturb2], the effect of turbulence on QKD systems using (or not using) adaptive optics at the transmitter and/or the receiver is still unclear. The authors are grateful to Mohammad Mirhosseini, Mehul Malik, Zhimin Shi, and Robert Boyd for graciously providing the data plotted in [@mirhosseini13oammodesseparation Figure 4], as well as answering questions about their experiment. Useful Integral Representation of the Bessel Function of the First Kind $J_n(z)$ {#app:besselint} ================================================================================ Eq. (8.411.1) in [@gr07tables] gives the following integral representation of the Bessel function of the first kind: $$\begin{aligned} \label{eq:intJ}J_n(z)&=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{-in\theta+iz\sin\theta}\mathrm{d}\theta,\end{aligned}$$ where $n$ is an integer. We perform several substitutions to obtain the form of this integral that is useful to us. 
First, substitute $\theta\rightarrow-\theta$: $$\begin{aligned} J_n(z)&=\frac{1}{2\pi}\int_{-\pi}^{\pi}e^{in\theta-iz\sin\theta}\mathrm{d}\theta.\end{aligned}$$ Now substitute $\theta\rightarrow\theta+\frac{\pi}{2}$ and split the resulting integral: $$\begin{aligned} J_n(z)&=\frac{e^{in\pi/2}}{2\pi}\int_{-3\pi/2}^{\pi/2}e^{in\theta-iz\cos\theta}\mathrm{d}\theta\\ \label{eq:splitint}&=\frac{e^{in\pi/2}}{2\pi}\int_{-3\pi/2}^{0}e^{in\theta-iz\cos\theta}\mathrm{d}\theta+\frac{e^{in\pi/2}}{2\pi}\int_{0}^{\pi/2}e^{in\theta-iz\cos\theta}\mathrm{d}\theta.\end{aligned}$$ Now, since $e^{in(\theta-2\pi)}=e^{in\theta}$ for integer $n$, and $\cos(\theta-2\pi)=\cos(\theta)$, substitution $\theta\rightarrow\theta-2\pi$ into the first integral in only changes its limits, yielding the form we need: $$\begin{aligned} J_n(z)&=\frac{e^{in\pi/2}}{2\pi}\int_{\pi/2}^{2\pi}e^{in\theta-iz\cos\theta}\mathrm{d}\theta+\frac{e^{in\pi/2}}{2\pi}\int_{0}^{\pi/2}e^{in\theta-iz\cos\theta}\mathrm{d}\theta\\ &=\frac{e^{in\pi/2}}{2\pi}\int_{0}^{2\pi}e^{in\theta-iz\cos\theta}\mathrm{d}\theta.\end{aligned}$$ Useful Simplification Involving the Symmetry of Error Function {#app:erf} ============================================================== Let $g(u,v)=\operatorname{Erf}(u+iv)+\operatorname{Erf}(u-iv)$. Now, $$\begin{aligned} g(u,v)&=\mathfrak{Re}[\operatorname{Erf}(u+iv)]+i\mathfrak{Im}[\operatorname{Erf}(u+iv)]+\mathfrak{Re}[\operatorname{Erf}(u-iv)]+i\mathfrak{Im}[\operatorname{Erf}(u-iv)]\\ \label{eq:symerf}&=\mathfrak{Re}[\operatorname{Erf}(u+iv)]+i\mathfrak{Im}[\operatorname{Erf}(u+iv)]+\mathfrak{Re}[\operatorname{Erf}^\ast(u+iv)]+i\mathfrak{Im}[\operatorname{Erf}^\ast(u+iv)]\\ \label{eq:conj}&=2\mathfrak{Re}[\operatorname{Erf}(u+iv)],\end{aligned}$$ where in we use the fact that $\operatorname{Erf}(x^\ast)=\operatorname{Erf}^\ast(x)$ and follows from the definition of complex conjugation. 
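The identity $g(u,v)=2\mathfrak{Re}[\operatorname{Erf}(u+iv)]$ can be spot-checked numerically, assuming SciPy's complex-argument `erf` (a sketch of ours):

```python
import numpy as np
from scipy.special import erf  # scipy's erf accepts complex arguments

def erf_symmetry_gap(u, v):
    """|Erf(u+iv) + Erf(u-iv) - 2 Re[Erf(u+iv)]|, which the identity
    above says should vanish for real u, v."""
    z = u + 1j * v
    return abs(erf(z) + erf(np.conj(z)) - 2 * erf(z).real)
```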
Review of Decoy State Quantum Key Distribution {#app:decoyBB84} ============================================== Here we review the decoy state discrete variable BB84 QKD protocol [@lo05decoyqkd], borrowing the development of the key generation rate expression from [@scarani09rmpQKD Section IV.B.3]. Suppose that Alice transmits pulses to Bob at the rate of $\nu$ Hz. The lower bound for the rate of secure key generation from these pulses is: $$\begin{aligned} \label{eq:initQKDrate}R&=I_{\rm AB}-\min(I_{\rm AE},I_{\rm BE})\text{~bits/mode},\end{aligned}$$ where $I_{\rm AB}$ denotes the information shared between Alice and Bob, while $I_{\rm AE}$ and $I_{\rm BE}$ denote the information captured by eavesdropper Eve from Alice and Bob, respectively. Privacy amplification aims to destroy Eve’s information, sacrificing part of the information in the process (hence the subtraction in ). We take the minimum of $I_{\rm AE}$ and $I_{\rm BE}$ in since Alice and Bob choose the reference set of pulses on which Eve has the least information. The QKD rate in bits/second is then $\nu R$. For lossy bosonic channels, $R\leq C_s$ [@pirandola15QKDcap], with the QKD capacity given by: $$\begin{aligned} \label{eq:Cs}C_s&=-\log_2(1-\eta) \text{~bits/mode},\end{aligned}$$ where $\eta$ captures all losses, which include the diffraction described in the previous sections, as well as atmospheric losses and detector inefficiency. Alice transmits a sequence of polarized laser pulses with average intensity $|\alpha|^2$ photons per pulse. Following the standard BB84 protocol, polarization is chosen by first randomly selecting one of two non-orthogonal polarization bases (rectilinear or diagonal), and then encoding a random bit in the selected basis. Bob randomly chooses one of two polarization bases in which to measure the received pulse. 
When Alice and Bob select the same basis, Alice’s pulse is directed to one of two detectors via a polarizing beam splitter and ideally only the detector corresponding to the transmitted bit can click, registering the detection event (we discuss the non-ideal case later). When the bases are not the same, either detector can click with equal probability. We call Bob’s detector “correct” when it corresponds to Alice’s basis choice; otherwise we call the detector “incorrect.” The probability of a click from a signal pulse when the bases match is: $$\begin{aligned} p_{\rm p}&=1-e^{-\eta|\alpha|^2}.\end{aligned}$$ In the decoy state BB84 protocol, Alice changes the value of the intensity $|\alpha|^2$ randomly from one pulse to the other; she reveals the list of values she used at the end of the exchange of transmissions. This prevents Eve from adapting her attack to Alice’s state, and allows Alice and Bob to estimate their parameters in post-processing. The probability of a click in one of the detectors from either the received pulse or a dark click is: $$\begin{aligned} p_{\rm r}&=p_{\rm p}(1-p_{\rm d})+2(1-p_{\rm p})p_{\rm d}(1-p_{\rm d}),\end{aligned}$$ where $p_{\rm d}$ is the probability of a dark click. When the pulse is not detected, an error can occur only because of a dark click in the incorrect detector. The probability of this event is $p_{\rm d}(1-p_{\rm d})(1-p_{\rm p})$. When the pulse is received, non-idealities of the polarizing beam splitter can result in a click in the erroneous detector. These non-idealities are captured by the visibility parameter $V$, which is effectively the probability that the beam splitter directs the pulse according to the basis chosen by Bob. Since an incorrect basis choice results in a click happening with equal probability in one of the detectors, the probability of an erroneous click with the pulse received is $\frac{1}{2}(1-V)p_{\rm p}(1-p_{\rm d})$. 
Combining the above probabilities, the quantum bit error rate is: $$\begin{aligned} Q&=\frac{\frac{1}{2}(1-V)p_{\rm p}(1-p_{\rm d})+p_{\rm d}(1-p_{\rm d})(1-p_{\rm p})}{p_{\rm r}}.\end{aligned}$$ The rate at which Bob can extract information from the clicks at his detectors is thus: $$\begin{aligned} I_{\rm AB}&=1-f_{\rm leak}h_2(Q),\end{aligned}$$ where $h_2(Q)=-Q\log_2Q-(1-Q)\log_2(1-Q)$ is the binary entropy function, $1-h_2(Q)$ is the expression for the Shannon capacity of the binary symmetric channel, and $f_{\rm leak}$ is the efficiency of the error correction code (ECC) used by Alice and Bob. Let us now study the amount of information about the key collected by Eve, $I_{\rm E}=\min(I_{\rm AE},I_{\rm BE})$. She gains information only when photons are transmitted, and provided that Bob detects the photon that she forwarded (thus, when Alice does not send a photon but Bob detects a dark click, Eve does not obtain any information about the key). If Alice sends a single photon pulse, Eve has to introduce an error if she is to obtain any information. In this case Eve gains $h_2(\epsilon_1)$ bits of information, where $\epsilon_1$ is the probability of an error event when Alice transmits a single photon.
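To make the bookkeeping above concrete, the following minimal Python sketch evaluates the click probability, the QBER $Q$ and $I_{\rm AB}$ from the formulas above; the numerical values of $\eta$, $|\alpha|^2$, $p_{\rm d}$, $V$ and $f_{\rm leak}$ are illustrative assumptions, not parameters taken from the text.

```python
import math

def h2(q):
    """Binary entropy function."""
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q) if 0 < q < 1 else 0.0

def qber_and_info(eta, alpha_sq, p_d, V, f_leak):
    """Click probabilities, QBER Q and I_AB from the formulas above."""
    p_p = 1 - math.exp(-eta * alpha_sq)                      # signal click, matching bases
    p_r = p_p * (1 - p_d) + 2 * (1 - p_p) * p_d * (1 - p_d)  # click from pulse or dark count
    # errors: imperfect visibility V when the pulse arrives, or a dark click without it
    Q = (0.5 * (1 - V) * p_p * (1 - p_d) + p_d * (1 - p_d) * (1 - p_p)) / p_r
    I_AB = 1 - f_leak * h2(Q)
    return p_r, Q, I_AB

# illustrative (assumed) values: 10 dB total loss, mean photon number 0.5
p_r, Q, I_AB = qber_and_info(eta=0.1, alpha_sq=0.5, p_d=1e-6, V=0.99, f_leak=1.16)
```

For these assumed values the QBER is at the half-percent level, dominated by the $(1-V)$ visibility term rather than by dark counts.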
Alice transmits a single photon with probability $|\alpha|^2e^{-|\alpha|^2}$, and a detection event occurs at one of the detectors with probability $$\begin{aligned} p_{\rm r_1}=|\alpha|^2e^{-|\alpha|^2}(\eta+2(1-\eta)p_{\rm d})(1-p_{\rm d}).\end{aligned}$$ Conditioned on the event that a click occurs in one of Bob’s detectors, the probability becomes: $$\begin{aligned} \label{eq:y_1}y_1=\frac{p_{\rm r_1}}{p_{\rm r}}=\frac{|\alpha|^2e^{-|\alpha|^2}(\eta+2(1-\eta)p_{\rm d})}{p_{\rm p}+2(1-p_{\rm p})p_{\rm d}}.\end{aligned}$$ The probability of Alice transmitting one photon and a click occurring in the incorrect detector is: $$\begin{aligned} p_{\rm r_1^w}&=|\alpha|^2e^{-|\alpha|^2}(1-\eta)p_{\rm d}(1-p_{\rm d}).\end{aligned}$$ Conditioning on the event that Alice transmits a single photon and a detection event occurs at one of the detectors yields: $$\begin{aligned} \epsilon_1&=\frac{p_{\rm r_1^w}}{p_{\rm r_1}}=\frac{(1-\eta)p_{\rm d}}{\eta+2(1-\eta)p_{\rm d}}.\end{aligned}$$ For multi-photon pulses, photon number splitting is an optimal attack, in which Eve forwards one photon to Bob and keeps the others. She gains one bit from the photons she keeps when there is a click in one of Bob’s detectors. The probability of a click in one of the detectors when Alice transmits more than one photon is $1-y_0-y_1$, where $y_1$ is given by Eq. (\[eq:y\_1\]) and $y_0$ is the probability of a click in one of the detectors when Alice does not transmit a photon, given that a click occurred.
Since Alice sends no photons with probability $e^{-|\alpha|^2}$, the probability of a click in one of the detectors when Alice does not transmit a photon is: $$\begin{aligned} p_{\rm r_0}&=2p_{\rm d}(1-p_{\rm d})e^{-|\alpha|^2}.\end{aligned}$$ Conditioning on the event that a click occurs in one of Bob’s detectors, we obtain: $$\begin{aligned} y_0&=\frac{p_{\rm r_0}}{p_{\rm r}}=\frac{2p_{\rm d}e^{-|\alpha|^2}}{p_{\rm p}+2(1-p_{\rm p})p_{\rm d}}\,.\end{aligned}$$ Therefore, $$\begin{aligned} I_{\rm E}&=y_1h_2(\epsilon_1)+(1-y_0-y_1)\\ &=1-(y_0+y_1(1-h_2(\epsilon_1))).\end{aligned}$$ The expression for the QKD rate is thus: $$\begin{aligned} R&=\max[0,p_{\rm r}((1-f_{\rm leak}h_2(Q))-(1-(y_0+y_1(1-h_2(\epsilon_1)))))]\\ \label{eq:R}&=\max[0,p_{\rm r}(y_0+y_1(1-h_2(\epsilon_1))-f_{\rm leak}h_2(Q))] \text{~bits/mode}.\end{aligned}$$ We note that in the numerical optimization performed in Section \[sec:results\] we use a version of Eq. (\[eq:R\]) without taking the maximum. Allowing a negative rate enables MATLAB’s `fmincon` function to construct the gradient over the entire space of optimization variables. [^1]: This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations. BAB acknowledges the support of the SECANT program funded by the Sandia National Laboratories under PO\# 1628276. NC and JHS acknowledge support from the Air Force Office of Scientific Research (AFOSR) grant number FA9550-14-1-0052. BAB and SG acknowledge the support from the SeaKey program, through the US Office of Naval Research (ONR) contract number N00014-14-C-0002. [^2]: For example, suppose that the mode separator couples 80% of the received input power from the mode indexed 0 to mode 0, 7% to each of the modes indexed -1 and +1, and 3% to each of the modes indexed -2 and +2.
If we only use modes indexed -1, 0, and 1, then we normalize the cross-talk probabilities so that in our calculations the mode separator couples 85.1% of the received input power from the mode indexed 0 to mode 0 and 7.45% to each of the modes indexed -1 and 1.
--- abstract: 'We study the scaling properties of the differential cross section of elastic proton-proton ($pp$) and proton-antiproton ($p\bar p$) collisions at high energies. We introduce a new scaling function that scales – within the experimental errors – all the ISR data on elastic $pp$ scattering from $\sqrt{s} = 23.5$ to $62.5$ GeV to the same universal curve. We explore the scaling properties of the differential cross-sections of the elastic $pp$ and $p\bar p$ collisions in a limited TeV energy range. Rescaling the TOTEM $pp$ data from $\sqrt{s} = 7$ TeV to $2.76$ and $1.96$ TeV, and comparing them to D0 $p\bar p$ data at $1.96$ TeV, we find evidence for a $t$-channel Odderon exchange at TeV energies, with a significance of at least 6.55$\sigma$.' author: - 'T. Csörgő' - 'T. Novák' - 'R. Pasechnik' - 'A. Ster' - 'I. Szanyi' bibliography: - 'references.bib' title: | Evidence of Odderon-exchange from scaling properties\ of elastic scattering at TeV energies --- Introduction {#s:intro} ============ One of the most important and critical tests of quantum chromodynamics (QCD) in the infrared regime is provided by the ongoing studies of the elastic differential hadron-hadron scattering cross section at various energies and momentum transfers. The characteristics of the elastic amplitude, both its real and imaginary parts, carry a wealth of information about the inner proton structure, the proton profile in the impact parameter space and its energy dependence, as well as about the properties of the QCD exchange interaction at low momentum transfers. The first and most precise measurements of the total, elastic and differential cross sections of elastic $pp$ collisions, together with the $\rho$-parameter, have recently been performed by the TOTEM Collaboration at the Large Hadron Collider (LHC) at CERN at the highest energy frontier of $\sqrt{s} = 13$ TeV (for the corresponding recent TOTEM publications, see Refs.
[@Antchev:2017dia; @Antchev:2017yns; @Antchev:2018edk; @Antchev:2018rec]). A correct theoretical interpretation of the LHC data, together with the lower-energy Tevatron and ISR data, is a subject of intense debates and ongoing research development in the literature, see e.g. Refs. [@Samokhin:2017kde; @Khoze:2018kna]. Among the important recent advances, the recent data by the TOTEM Collaboration have for the first time indicated the presence of an odd-under-crossing (or C-odd) contribution to the elastic scattering amplitude known as the Odderon [@Lukaszuk:1973nt]. In particular, a comparison of the differential cross-section of elastic proton-proton $pp$ scattering obtained by the TOTEM Collaboration at $\sqrt{s} = 2.76$ TeV with D0 results on elastic proton-antiproton $p\bar p$ scattering at 1.96 TeV [@Abazov:2012qb] indicates important qualitative differences that can be attributed to the Odderon effect [@Csorgo:2018uyp; @Antchev:2018rec]. In the more rigorous language of QCD, an Odderon exchange is usually associated with a quarkless odd-gluon (e.g. three-gluon, to the lowest order) bound state such as a vector glueball, and a vast literature is devoted to theoretical understanding of its implications (for recent developments and claims, see e.g. Refs. [@Martynov:2018nyb; @Shabelski:2018jfq; @Khoze:2018kna]). In earlier studies of Refs. [@Csorgo:2018ruk; @Csorgo:2019rsr], the Odderon signatures have been identified and qualitatively described in a model-independent way using the power of the so-called Lévy imaging technique [@Csorgo:2018uyp]. One such signature concerns the presence of a dip-and-bump structure in the differential cross section of elastic $pp$ collisions and the lack of such a structure in elastic $p\bar p$ collisions. The latter effectively emerges in the $t$-dependence of the elastic slope $B(t)$, which crosses zero for elastic $pp$ collisions and remains non-negative for all values of $t$ in elastic $p\bar p$ collisions. Besides, Ref.
[@Csorgo:2018uyp] noted that the position of the node of the nuclear phase $\phi(t)$, as reconstructed with the help of the Lévy expansion method, is characteristically and qualitatively different for elastic $pp$ as compared to $p\bar p$ collisions, thus indicating the Odderon exchange. In addition, the presence of a smaller substructure of the proton has been revealed in the data, imprinted in the behaviour of the $t$-dependent elastic slope $B(t)$ and apparent at large values of $t$. In particular, in Refs. [@Csorgo:2018uyp; @Csorgo:2018ruk; @Csorgo:2019rsr; @Csorgo:2019egs] a substructure of two distinct sizes has been identified in the low (a few tens of GeV) and high (a few TeV) energy domains, respectively. Besides, a new statistically significant feature in the $b$-dependent shadow (or inelasticity) profile has been found at the maximal available energy $\sqrt{s} = 13$ TeV; it represents the long-debated hollowness, or “black-ring”, effect that emerges instead of the conventionally anticipated “black-disk” regime [@Csorgo:2018ruk; @Csorgo:2019egs]. In this paper, in order to further unveil the important characteristics of elastic hadron-hadron scattering, we study the scaling properties of the existing data sets available from the ISR and Tevatron colliders as well as those provided by the TOTEM Collaboration in a TeV energy range [@Antchev:2013gaa; @Antchev:2017dia; @Antchev:2017yns; @Antchev:2018edk; @Antchev:2018rec]. We investigate a generic scaling behavior of the elastic differential proton-(anti)proton scattering cross section, with the goal of transforming out the trivial colliding-energy-dependent variation of the key observables such as the total and elastic cross-sections $\sigma_{\rm tot}(s)$ and $\sigma_{\rm el}(s)$, the elastic slope $B(s)$ and the real-to-imaginary ratio $\rho(s)$.
We search for a universal scaling function and the associated data-collapsing behaviour that is valid not only in the low-$|t|$ domain, but also in the dip-and-bump region. We then discuss the physics implications of such a scaling behaviour and explore its consequences for the understanding of the Odderon effect as well as the high-energy behaviour of the proton structure. The paper is organised as follows. In section \[s:formalism\], we recapitulate the formalism that is utilized for evaluation of the observables of elastic proton-(anti)proton scattering in the TeV energy range. In section \[s:Odderon-search\], we connect this formalism to a more general strategy of the experimental Odderon search, namely, to the search for a crossing-odd component in the differential cross-section of elastic proton-(anti)proton scattering. In section \[s:Scalings\], we study some of the scaling functions of elastic scattering already existing in the literature as well as propose a new scaling function denoted as $H(x)$ that is readily measurable in $pp$ and $p \bar p$ collisions, and present a first test of the $H(x)$ scaling in the ISR energy range of 23.5 – 62.5 GeV. Subsequently, in section \[s:results-TeV\] we extend these studies to the TeV (Tevatron and LHC) energy range, where the possible residual effects of Reggeon exchange are expected to be below the scale of the experimental errors; see Ref. [@Broniowski:2018xbg]. In section \[s:quantification\], we present a method of how to quantify the significance of our findings, giving the formulas that are used to evaluate $\chi^2$, confidence level (CL), and significance in terms of the standard deviation, $\sigma$. In section \[s:extrapolations\], we discuss how to employ the newly found scaling behavior of the differential cross-section in the search for an Odderon effect.
In section \[s:results\], we present further, more detailed results of our studies with the help of $H(x)$ and compare such a scaling function for $pp$ differential cross-sections at the LHC energies with the $p\bar p$ scaling function at the Tevatron energy. In section \[s:Odderon-significance\] we evaluate the significance of the Odderon effect, and find that it is at least a 6.55$\sigma$-significant effect. Subsequently, we present several cross-checks in section \[s:cross-checks\] and discuss the main results in section \[s:discussion\]. Finally, we summarize and conclude our work in section \[s:summary\]. Formalism {#s:formalism} ========= For the sake of completeness and clarity, let us start first with recapitulating the connection between the scattering amplitude and the key observables of elastic scattering, following the conventions of Refs. [@Bialas:2006qf; @Nemes:2012cp; @CsorgO:2013kua; @Nemes:2015iia]. The Mandelstam variables $s$ and $t$ are defined as usual, $s = (p_1 + p_2)^2$, $t = (p_1 - p_3)^2$, for an elastic scattering of particles $a$ and $b$ with incoming four-momenta $p_1$ and $p_2$, and outgoing four-momenta $p_3$ and $p_4$, respectively. The elastic cross-section is given as the integral of the differential cross-section of elastic scattering: $$\sigma_{\rm el}(s) = \int_{0}^\infty d|t| \frac{d\sigma(s,t)}{dt} \,. \label{e:sigmael}$$ The elastic differential cross section is $$\frac{d\sigma(s,t)}{dt} = \frac{1}{4\pi}|T_{\rm el}(s,\Delta)|^2 \,, \qquad \Delta=\sqrt{|t|}\, . \label{e:dsigmadt-Tel}$$ The $t$-dependent slope parameter $B(s,t)$ is defined as $$B(s,t) = \frac{d}{dt} \ln \frac{d\sigma(s,t)}{dt} \label{e:Bst}$$ and in the experimentally accessible low-$t$ region this function is frequently assumed, or found within errors, to be a constant.
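As a quick numerical illustration of Eq. (\[e:Bst\]) (not part of the original analysis), one can estimate $B(s,t)$ by a central finite difference of $\ln(d\sigma/dt)$; for a purely exponential cone the estimate is $t$-independent and recovers the slope. The values of $A$ and $B$ below are assumed for illustration only.

```python
import math

def slope_B(dsdt, t, dt=1e-6):
    """Central finite-difference estimate of B(s,t) = d/dt ln(dsigma/dt)."""
    return (math.log(dsdt(t + dt)) - math.log(dsdt(t - dt))) / (2 * dt)

# purely exponential "diffractive cone": dsigma/dt = A * exp(B*t), with t < 0
A, B = 500.0, 20.0                      # illustrative values, not fitted to data
dsdt = lambda t: A * math.exp(B * t)
B_est = slope_B(dsdt, -0.05)            # recovers B, independently of t
```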
In this case, a $t$-independent slope parameter $B(s)$ is introduced as $$B(s) \equiv B_0(s) \, = \, \lim_{t\rightarrow 0} B(s,t) , \label{e:Bs}$$ where the $t\rightarrow 0$ limit is taken within the experimentally probed region. Actually, experimentally the optical $t=0$ point can only be approached by extrapolations from the measurements in the $-t > 0$ kinematically accessible regions. According to the optical theorem, the total cross section is also found by a similar extrapolation. Its value is given by $$\sigma_{\rm tot}(s) \equiv 2\,{\rm Im}\, T_{\rm el}(\Delta=0,s) \,, \label{e:sigmatot}$$ while the ratio of the real to imaginary parts of the elastic amplitude is found as $$\rho(s,t)\equiv \frac{{\rm Re}\, T_{\rm el}(s,\Delta)}{{\rm Im}\, T_{\rm el}(s,\Delta)} \label{e:rhost}$$ and its measured value at $t=0$ reads $$\rho(s) \equiv \rho_0(s) \, = \, \lim_{t\rightarrow 0} \rho(s,t) \label{e:rhos} \,.$$ Here, the $t\rightarrow 0$ limit is taken typically as an extrapolation in dedicated differential cross section measurements at very low $-t$, where the parameter $\rho_0$ can be measured using various Coulomb-Nuclear Interference methods. The differential cross section at the optical $(t = 0)$ point is thus represented as $$\frac{d\sigma(s)}{dt}\Big|_{t\to 0}=\frac{1+\rho_0^2(s)}{16\pi}\, \sigma_{\rm tot}^2(s) \, . \label{e:optical-point}$$ In the impact-parameter $b$-space, we have the following relations: $$\begin{aligned} \nonumber t_{\rm el}(s,b) & = & \int \frac{d^2\Delta}{(2\pi)^2}\, e^{-i{\bm \Delta}{\bm b}}\,T_{\rm el}(s,\Delta) \, = \\ \null & = & \frac{1}{2\pi} \int J_0(\Delta\,b)\,T_{\rm el}(s, \Delta)\,\Delta\, d\Delta \,, \label{e:tel-b} \\ \Delta & \equiv & |{\bm \Delta}|\,, \quad b\equiv|{\bm b}|\,. 
\label{e:Delta}\end{aligned}$$ This Fourier-transformed elastic amplitude $t_{el}(s,b)$ can be represented in the eikonal form $$\begin{aligned} t_{\rm el}(s,b) & = & i\left[ 1 - e^{-\Omega(s,b)} \right] \,, \label{e:tel-eikonal}\end{aligned}$$ where $\Omega(s,b)$ is the so-called opacity function (known also as the eikonal function), which is complex in general. The shadow profile function is then defined as $$\begin{aligned} P(s,b) & = & 1-\left|e^{-\Omega(s,b)}\right|^2 \,. \label{e:shadow}\end{aligned}$$ For clarity, let us note that other conventions are also used in the literature and for example the shadow profile $P(b,s)$ is also referred to as the inelasticity profile function as it corresponds to the probability distribution of inelastic proton-proton collisions in the impact parameter $b$ with $0\le P(b,s) \le 1$. When the real part of the scattering amplitude is neglected, $P(b,s)$ is frequently denoted as $G_{\rm inel}(s,b)$, see for example Refs. [@Petrov:2018wlv; @Dremin:2013qua; @Dremin:2014spa; @Dremin:2018urc; @Dremin:2019tgm]. Looking for Odderon effects in the differential cross-section of elastic scattering {#s:Odderon-search} =================================================================================== As noted in Refs. 
[@Jenkovszky:2011hu; @Ster:2015esa], the only direct way to see the Odderon is by comparing the particle and antiparticle scattering at sufficiently high energies provided that the high-energy $pp$ or $p\bar p$ elastic scattering amplitude is a sum or a difference of even and odd C-parity contributions, respectively, $$\begin{aligned} T_{\rm el}^{pp}(s,t) & = & T_{\rm el}^{+}(s,t) + T_{\rm el}^{-}(s,t), \\ T_{\rm el}^{p\overline{p}}(s,t) & = & T_{\rm el}^{+}(s,t) - T_{\rm el}^{-}(s,t) , \\ T_{\rm el}^{+}(s,t) & = & T_{\rm el}^{P}(s,t) + T_{\rm el}^{f}(s,t),\\ T_{\rm el}^{-}(s,t) & = & T_{\rm el}^{O}(s,t) + T_{\rm el}^{\omega}(s,t) \,.\end{aligned}$$ where the even-under-crossing part consists of the Pomeron and the $f$ Reggeon trajectory, while the odd-under-crossing part contains the Odderon and a contribution from the $\omega$ Reggeon. It is clear from the above formulae that the odd component of the amplitude can be extracted from the difference of the $pp$ and $p\bar p$ scattering amplitudes. At sufficiently high energies, the relative contributions from secondary Regge trajectories are suppressed, as they decay as negative powers of the colliding energy $\sqrt{s}$. In Ref. [@Ster:2015esa], the authors argued that the LHC energy scale is already sufficiently large to suppress the Reggeon contributions, and they presented the $(s,t)$-dependent contributions of an Odderon exchange to the differential and total cross-sections at typical LHC energies. More recently, this observation was confirmed in Ref. [@Broniowski:2018xbg], suggesting that indeed the relative contribution of the Reggeon trajectories is well below the experimental precision in elastic $pp$ scattering in the TeV energy range. The analysis of Ref. 
[@Ster:2015esa] relies on a model-dependent, phenomenological picture formulated in the framework of the Phillips-Barger model [@Phillips:1974vt] and is focused primarily on fitting the dip region of elastic $pp$ scattering, but without a detailed analysis of the tail and cone regions. In Ref. [@Broniowski:2018xbg], a phenomenological Reggeon + Pomeron + Odderon exchange model is employed to study, in particular, the possible hollowness effect in the high-energy elastic $pp$ collisions. More recently, a similar study of the Phillips-Barger model was performed in Ref. [@Goncalves:2018nsp] using the most recent TOTEM data on elastic $pp$ scattering. Similarly, Ref. [@Lebiedowicz:2018eui] has also argued that the currently highest LHC energy of $\sqrt{s} = $ 13 TeV is sufficiently high to see the Odderon contribution. In this paper, we follow Refs. [@Ster:2015esa; @Broniowski:2018xbg; @Lebiedowicz:2018eui] and assume that the Reggeon contributions to the elastic scattering amplitudes at $\sqrt{s} \geq$ 1.96 TeV and higher energies are negligibly small. We search for an odd-under-crossing contribution to the scattering amplitude in a model-independent way, and find that such a non-vanishing contribution is present at the TeV scale, which we recognise as an Odderon effect.
The vanishing nature of the Reggeon contributions offers a direct way of extracting the Odderon as well as the Pomeron contributions, $T_{\rm el}^{O}(s,t)$ and $T_{\rm el}^{P}(s,t)$, respectively, from the elastic $pp$ and $p\bar p$ scattering data at sufficiently high colliding energies as follows $$\begin{aligned} T_{\rm el}^{P}(s,t) & = & \frac{1}{2} \left(T_{\rm el}^{pp}(s,t) + T_{\rm el}^{p\overline{p}}(s,t)\right) \,\, \mbox{ \rm for}\,\, \sqrt{s}\ge 1 \,\, \mbox{\rm TeV} , \label{e:Tel-P} \\ T_{\rm el}^{O}(s,t) & = & \frac{1}{2} \left(T_{\rm el}^{pp}(s,t) - T_{\rm el}^{p\overline{p}}(s,t)\right) \,\, \mbox{ \rm for}\,\, \sqrt{s}\ge 1 \,\, \mbox{\rm TeV} \,.\label{e:Tel-O}\end{aligned}$$ These kinds of studies rely on the extrapolation of the fitted model parameters of $pp$ and $p\bar p$ reactions to exactly the same energy, given that the elastic $pp$ and $p\bar p$ scattering data have not been measured at the same (or close enough) energies in the TeV region so far. Another problem is the lack of precision data at low and high $|t|$, primarily in $p\bar p$ collisions. Recently, the TOTEM Collaboration noted in Ref. [@Antchev:2018rec] that “Under the condition that the effects due to the energy difference between TOTEM and D0 can be neglected, the result” (namely the differential cross-section measured by TOTEM at $\sqrt{s} = 2.76 $ TeV) “provides evidence for a colourless 3-gluon bound state exchange in the t-channel of the proton-proton elastic scattering”. In other words, if the effects due to the energy difference between the TOTEM and D0 measurements can be neglected, the direct comparison of the differential cross section of elastic $pp$ scattering at $\sqrt{s} = 2.76$ TeV with that of $p\bar p$ scattering at $\sqrt{s}= 1.96$ TeV provides a [*conditional*]{} evidence for a colourless three-gluon state exchange in the $t$-channel. In this paper, we show that the conditional evidence stated by TOTEM can be turned into evidence, i.e.
a discovery of the Odderon, by closing the energy gap as much as possible at present, without a direct measurement, based on a re-analysis of already published TOTEM and D0 data. Here we take the data at face value, as given in published sources, and do not attempt to extrapolate any model or model parameters towards their unmeasured values (in unexplored energy domains). Instead, we discuss a new kind of scaling relations, which we test on the experimental data, showing their data-collapsing behaviour in a limited energy range. We demonstrate that such a data-collapsing behaviour can be used to close the small energy gap between the highest-energy elastic $p\bar p$ collisions, $\sqrt{s}= 1.96$ TeV, and the lowest-energy elastic $pp$ collisions at the LHC where the public data are available, $\sqrt{s} = 2.76$ TeV. We then look for even-under-crossing and odd-under-crossing contributions by comparing the scaling functions of $pp$ and $p\bar p$ collisions in the TeV energy range. In other words, we look for a robust Odderon signature in the difference of the scaling functions of the elastic differential cross-section between $pp$ and $p\bar p$ collisions. We thus discuss the Odderon features that can be extracted in a model-independent manner directly by comparing the corresponding data sets to one another. Let us start with three general remarks as direct consequences of Eqs.
(\[e:Tel-P\],\[e:Tel-O\]): - If the Odderon exchange effect is negligibly small (within errors, equal to zero) or if it does not interfere with that of the Pomeron at a given energy, then the differential cross sections of the elastic $pp$ and $p\bar p$ scattering have to be equal: $$T_{\rm el}^O(s,t) = 0 \implies \frac{d\sigma^{pp}}{dt} = \frac{d\sigma^{p\bar p}}{dt} \,\,\, \mbox{ \rm for}\,\, \sqrt{s}\ge 1 \,\, \mbox{\rm TeV}.$$ - If the differential cross sections of elastic $pp$ and $p\bar p$ collisions are equal within the experimental errors, this does not imply that the Odderon contribution has to be equal to zero. Indeed, the equality of cross sections does not require the equality of complex amplitudes: $$\frac{d\sigma^{pp}}{dt} = \frac{d\sigma^{p\bar p}}{dt} \,\,\, \mbox{ \rm for}\,\, \sqrt{s}\ge 1 \,\, \mbox{\rm TeV} { \mathrel{{\ooalign{\hidewidth$\not\phantom{=}$\hidewidth\cr$\implies$}}}}T_{\rm el}^O(s,t) = 0 \, .$$ - If the $pp$ differential cross sections differ from that of $p\bar p$ scattering at the same value of $s$ in a TeV energy domain, then the Odderon contribution to the scattering amplitude cannot be equal to zero, i.e. $$\frac{d\sigma^{pp}}{dt} \neq \frac{d\sigma^{p\bar p}}{dt} \,\,\, \mbox{ \rm for}\,\, \sqrt{s}\ge 1 \,\, \mbox{\rm TeV} \implies T_{\rm el}^O(s,t) \neq 0 \, .$$ Our research strategy in this paper is thus to scale out the $s$-dependence of the differential cross section by factoring out its dependencies on $\sigma_{\rm tot}(s)$, $\sigma_{\rm el}(s)$, $B(s)$ and $\rho(s)$ functions. The residual scaling functions will be compared for the $pp$ and $p\bar p$ elastic scattering to see if any difference remains. Such residual difference is considered to be a clear-cut signal for the Odderon-exchange, if the differential cross sections were measured at exactly the same energies. However, currently such data are lacking in the TeV energy range. 
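The three remarks above can be checked with a toy numerical example (the amplitude values are hypothetical, chosen only to illustrate the logic): recovering $T^{P}_{\rm el}$ and $T^{O}_{\rm el}$ from the half-sum and half-difference, and showing that equal cross sections do not force a vanishing odd component.

```python
# hypothetical complex amplitudes at a fixed (s, t) point
T_P = 10.0 + 5.0j                     # even-under-crossing (Pomeron) part
T_O = 1.0 - 2.0j                      # odd-under-crossing (Odderon) part

T_pp, T_ppbar = T_P + T_O, T_P - T_O  # pp and pbar-p amplitudes
# the decomposition is recovered from the half-sum and half-difference
assert abs((T_pp + T_ppbar) / 2 - T_P) < 1e-12
assert abs((T_pp - T_ppbar) / 2 - T_O) < 1e-12

# equal cross sections (|T|^2) do not imply T_O = 0: take T_O "orthogonal" to T_P
T_P, T_O = 10.0j, 2.0 + 0.0j
equal_xsec = abs(abs(T_P + T_O) ** 2 - abs(T_P - T_O) ** 2) < 1e-9
assert equal_xsec and abs(T_O) > 0    # cross sections equal, odd part non-zero
```

Conversely, since $T^{O}_{\rm el}=0$ forces the two amplitudes, and hence the two cross sections, to coincide, any observed difference between the $pp$ and $p\bar p$ differential cross sections implies a non-zero odd component.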
So we may expect that, after scaling out the trivial $s$-dependencies, only small $s$-dependent scaling-violating terms remain, which can be estimated from the scaling violations of the differential cross sections measured at various nearby energies. We look for significant differences between the scaling functions of $pp$ and $p\bar p$ collisions as compared to these possible $s$-dependent scaling-violating terms, as such observations provide a significant signal of the Odderon effect. In what follows, we introduce and discuss the newly found scaling function $H(x)$ in section \[s:Scalings\] and subsequently evaluate the significance of these observations as detailed in sections \[s:quantification\] and \[s:Odderon-significance\]. Possible scaling relations at low values of $|t|$ {#s:Scalings} ================================================= In this section, let us first investigate the scaling properties of the experimental data based on a simple Gaussian model, elaborating on the discussion presented in Ref. [@Csorgo:2019fbf]. The motivation for this investigation is that we would like to work out a scaling law that works at least in the simplest, exponential diffractive cone approximation, and scales out the trivial $s$-dependencies of $\sigma_{\rm tot}(s)$, $\sigma_{\rm el}(s)$, $\rho(s)$, and $B(s)$. Based on the results of such a frequently used exponential approximation, we gain some intuition and experience on how to generalize such scaling laws to realistic non-exponential differential cross sections. Experimentally, the low-$|t|$ part of the measured distribution is usually approximated with an exponential, $$\frac{d\sigma}{dt} = A(s) \, \exp\left[ B(s) t\right] \, , \label{e:dsdt-exp}$$ where it is explicitly indicated that both the normalization parameter $A \equiv A(s) $ and the slope parameter $B \equiv B(s)$ are functions of the center-of-mass energy squared $s$.
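In practice, $A(s)$ and $B(s)$ are obtained by fitting the low-$|t|$ data. A minimal sketch of such a fit, by linear least squares on $\ln(d\sigma/dt)$, is given below on synthetic cone data; the values $A = 500$ and $B = 20$ are assumptions chosen for illustration, not measured parameters.

```python
import math

def fit_cone(ts, dsdt_vals):
    """Least-squares fit of ln(dsigma/dt) = ln A + B*t over the cone region."""
    n = len(ts)
    ys = [math.log(v) for v in dsdt_vals]
    tbar, ybar = sum(ts) / n, sum(ys) / n
    B = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
        / sum((t - tbar) ** 2 for t in ts)
    return math.exp(ybar - B * tbar), B    # returns (A, B)

# synthetic, noise-free cone data (illustrative A, B; t < 0 in GeV^2)
ts = [-0.005 * k for k in range(1, 41)]
vals = [500.0 * math.exp(20.0 * t) for t in ts]
A_fit, B_fit = fit_cone(ts, vals)
```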
Deviations of the data from such an exponential shape can be described by allowing for a $t$-dependence of the slope parameter $B \equiv B(s,t)$ as defined in Eq. (\[e:Bst\]). For simplicity, we would like to scale out the energy dependence of the elastic slope $B(s) \equiv B(s,t=0)$ from the differential cross section of elastic scattering, together with the energy dependence of the elastic and total cross sections, $\sigma_{\rm el}(s)$ and $\sigma_{\rm tot}(s)$, as detailed below. For this purpose, let us follow the lines of a similar derivation in Refs. [@Broniowski:2018xbg; @Csorgo:2019fbf]. It is clear that Eq. (\[e:dsdt-exp\]) corresponds to an exponential “diffractive cone” approximation, which may be valid in the low-$t$ domain only. This equation corresponds to the so-called “Grey Gaussian” approximation, which suggests a relationship between the nuclear slope parameter $B(s)$, the real-to-imaginary ratio $\rho_0(s)$, the total cross section $\sigma_{\rm tot}(s)$, and the elastic cross section $\sigma_{\rm el}(s)$ as follows [@Block:2006hy; @Fagundes:2011hv; @Broniowski:2018xbg]: $$\begin{aligned} A(s) & = & B(s) \, \sigma_{\rm el}(s) \, = \, \frac{1+\rho_0^2(s)}{16 \, \pi}\, \sigma_{\rm tot}^2(s), \label{e:Asigma}\\ B(s) & = & \frac{1+\rho_0^2(s)}{16 \, \pi }\, \frac{\sigma_{\rm tot}^2(s)}{\sigma_{\rm el}(s)} \,.~\label{e:Bsigma}\end{aligned}$$ Such relations for the $A$ and $B$ parameters in terms of the elastic and total cross sections are particularly useful when studying the shadow profile function, as detailed below. The above relationships, in a slightly modified form, have been utilized by TOTEM to measure the total cross section at $\sqrt{s} = $ 2.76, 7, 8 and 13 TeV in Refs. [@Nemes:2017gut; @Antchev:2013iaa; @Antchev:2013paa; @Antchev:2017dia], using the luminosity-independent method. In what follows, we do not suppress the $s$-dependence of the observables, i.e.
$\sigma_{\rm tot} \equiv \sigma_{\rm tot}(s)$, $\sigma_{\rm el} \equiv \sigma_{\rm el}(s)$. \[ss:shadow-profile\] Scaling properties of the shadow profiles --------------------------------------------------------------- In the exponential approximation given by Eqs. (\[e:dsdt-exp\],\[e:Asigma\],\[e:Bsigma\]), the shadow profile function introduced in Eq. (\[e:shadow\]) has a remarkable and very interesting scaling behaviour, as anticipated in Ref. [@Broniowski:2018xbg]: $$\begin{aligned} P(b,s) & = & 1 - \Big[ 1 - r(s) \, \exp\Big( - \frac{b^2}{2 B(s)}\Big)\Big]^2 \, - \nonumber \\ && \,\,\,\qquad - \, \rho_0^2(s) r^2(s) \, \exp\Big( - \frac{b^2}{ B(s)}\Big) , \\ \label{e:Pbs} r(s) & \equiv & 4\, \frac{ \sigma_{\rm el}(s)}{\sigma_{\rm tot}(s)} . \label{e:rs}\end{aligned}$$ Thus, the shadow profile at the center, $P_0(s) \equiv P(b=0,s)$ reads as $$P_0(s) \, = \, \frac{1}{1+\rho_0^2(s)} \, - \, \left[1+\rho_0^2(s)\right] \, \Big[ r(s) - \frac{1}{1+\rho_0^2(s)}\Big]^2 \,,$$ which cannot become maximally absorptive (or black), i.e. $P_0(s) = 1$ is not reached at those colliding energies, where $\rho_0$ is not negligibly small. The maximal absorption corresponds to $P_0(s) \, = \, \frac{1}{1+\rho_0^2(s)}$, which is rather independent of the detailed $b$-dependent shape of the inelastic collisions [@Broniowski:2018xbg]. It is achieved when the ratio of the elastic to total cross sections approaches the value $r(s) = 1/(1+\rho_0^2(s))$. Thus, at such a threshold, we have the following critical value of the ratio $$\left. \frac{\sigma_{\rm el}(s)}{\sigma_{\rm tot}(s)} \right\vert_{\mbox{\rm threshold}} = \frac{1}{4 \left[1 + \rho_0^2(s)\right] } \,. 
\label{crit-ratio}$$ As $\rho_0 \le 0.15$ for the existing measurements and $\rho_0(s)$ seems to decrease with increasing energies at least in the 8 $\le \sqrt{s} \le 13$ TeV region, the critical value of the elastic-to-total cross section ratio (\[crit-ratio\]) corresponds to, roughly, $\sigma_{\rm el}/\sigma_{\rm tot} \approx 24.5-25.0 $ %. Evaluating the second derivative of $P(b,s)$ at $b=0$, one may also observe that it changes sign from a negative to a positive one exactly at the same threshold given by Eq. (\[crit-ratio\]). Such a change of sign can be interpreted as an onset of the hollowness effect [@Broniowski:2018xbg]. The investigation of such a hollowness at $b=0$ is a hotly debated topic in the literature. For early papers on this fundamental feature of $pp$ scattering at the LHC and asymptotic energies, see Refs. [@Troshin:2007fq; @Fagundes:2011hv; @Dremin:2013qua; @Alkin:2014rfa; @Troshin:2014rva; @Dremin:2014spa; @Anisovich:2014wha], as well as Refs. [@RuizArriola:2016ihz; @Troshin:2016frs; @Albacete:2016pmp; @Broniowski:2017aaf; @Broniowski:2017rhz; @Troshin:2017ucy; @Dremin:2018orv; @Campos:2018tsb; @Dremin:2018urc; @Broniowski:2018xbg; @Petrov:2018wlv; @Dremin:2019tgm] for more recent theoretical discussions. As pointed out in Ref. [@Csorgo:2019fbf], the threshold (\[crit-ratio\]), within errors, is reached approximately already at $\sqrt{s} = 2.76 $ TeV. The threshold behavior saturates somewhere between 2.76 and 7 TeV and a transition may happen around the threshold energy of $\sqrt{s_{\rm th}} \approx 2.76 - 4 $ TeV. The elastic-to-total cross section ratio becomes significantly larger than the threshold value at $\sqrt{s} = 13 $ TeV colliding energies. As a result, the shadow profile function of the proton undergoes a qualitative change in the region of $2.76 < \sqrt{s} < 7 $ TeV energies. 
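The closed-form expressions above are easy to cross-check numerically. The sketch below verifies that $P(b=0,s)$ from Eq. (\[e:Pbs\]) coincides with the quoted formula for $P_0(s)$, and that $P_0$ is maximal, with value $1/(1+\rho_0^2)$, exactly at $r(s)=1/(1+\rho_0^2)$, i.e. at the critical ratio (\[crit-ratio\]); the numerical values of $\rho_0$, $r$ and $B$ are assumed for illustration.

```python
import math

def shadow_profile(b, B, r, rho0):
    """P(b,s) in the exponential approximation; r = 4*sigma_el/sigma_tot."""
    g = r * math.exp(-b ** 2 / (2 * B))
    return 1 - (1 - g) ** 2 - rho0 ** 2 * g ** 2

def P0(r, rho0):
    """Central shadow profile in the closed form quoted in the text."""
    c = 1.0 / (1 + rho0 ** 2)
    return c - (1 + rho0 ** 2) * (r - c) ** 2

rho0 = 0.10                               # illustrative value
r_crit = 1.0 / (1 + rho0 ** 2)            # i.e. sigma_el/sigma_tot = 1/(4(1+rho0^2))
# the two expressions for P(b=0) agree for any r, and P0 peaks at r = r_crit
```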
At high energies, with $\sigma_{\rm el} \ge \sigma_{\rm tot}/4$, the hollowness effect may become a generic property of the impact parameter distribution of inelastic scatterings. However, the expansion at low impact parameters corresponds to the large-$|t|$ region of elastic scattering, where the diffractive cone approximation of Eqs. (\[e:dsdt-exp\],\[e:Asigma\],\[e:Bsigma\]) technically breaks down, and more refined studies are necessary (see below). For the most recent, significant and model-independent analysis of the hollowness effect at the LHC and its extraction directly from the TOTEM data, see Ref. [@Csorgo:2019egs]. Scaling functions for testing the black-disc limit -------------------------------------------------- When discussing the scaling properties of the differential cross section of elastic scattering, let us mention that various scaling laws have been proposed to describe certain features and the data-collapsing behaviour of elastic proton-proton scattering already in the 1970s. One of the early proposals was the so-called geometric scaling property of the inelastic overlap function [@DiasDeDeus:1987njz; @Buras:1973km]. The concept of geometric scaling was based on a negligibly small ratio of the real to the imaginary part of the scattering amplitude at $t=0$, $\rho_0 \le 0.01$, and resulted in an $s$-independent ratio of the elastic-to-total cross-sections, $\sigma_{\rm el}/\sigma_{\rm tot} \approx {\rm const}(s)$, while at the LHC energies, $\rho_0$ is not negligibly small and the elastic-to-total cross section ratio is a strongly rising function of $s$. Here, we merely mention geometric scaling as one of the earliest proposals for a data-collapsing behaviour in elastic scattering, and focus instead on other kinds of scaling laws that are more consistent with the recent LHC measurements [@Csorgo:2019fbf]. Let us first detail the following two dimensionless scaling functions proposed in Ref.
[@CsorgO:2013kua] and denoted as $F(y)$ and $G(z)$ in what follows. These scaling functions were introduced in order to cross-check if elastic $pp$ collisions at the LHC energies approach the so-called black-disc limit, expected at ultra-high energies, or not. In a strong sense, the black-disc limit corresponds to the shadow profile $P(b) = \theta(R_b - b)$ that results in $\sigma_{\rm el} /\sigma_{\rm tot} = 1/2$, independently of the black-disc radius $R_b$. This limit is clearly not yet approached at LHC energies, but in a weak sense, a black-disc limit is considered to be reached also if the shadow profile function at $b=0$ reaches unity, i.e. $P(b=0) = 1$, corresponding to black-disc scattering at zero impact parameter. This kind of black-disc scattering might have been approached at $\sqrt{s} = 7$ TeV LHC energies [@Nemes:2015iia]. The first scaling function of the differential cross-section is defined as follows: $$\begin{aligned} F(y) & = & \frac{|t|}{\sigma_{\rm tot}} \frac{d \sigma}{d t} \, , \\ y & = & t \sigma_{\rm tot} \,.\end{aligned}$$ In the diffractive cone approximation, the $s$-dependence in $F(y)$ does not cancel out: substituting $d\sigma/dt = B(s)\,\sigma_{\rm el}(s)\, e^{B(s)t}$, it can be approximately written as $$\begin{aligned} F(y) & \simeq & |y| \, B(s) \frac{\sigma_{\rm el}(s)}{\sigma_{\rm tot}^2(s)} \, \exp\Big( y \, \frac{B(s)}{ \sigma_{\rm tot}(s)}\Big), \\ B(s)\, t & = & y \frac{B(s)}{ \sigma_{\rm tot}(s)} \, ,\end{aligned}$$ where the second line identifies the cone exponent in terms of the scaling variable $y = t\,\sigma_{\rm tot} \le 0$. This result clearly indicates that in the diffractive cone, the $F(y)$ scaling is strongly violated by the energy-dependent factors, while for a black-disc scattering, the $F(y)$ scaling has to be valid, see Ref. [@CsorgO:2013kua] for more details. Indeed, the aim of introducing the scaling function $F(y)$ was to clarify that even at the highest LHC energies we do not reach the black-disc limit (in the strong sense). As discussed in the previous section, the deviations from the black-disc limit might be due to the effects of the real part and the hollowness, i.e.
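The lack of data collapse in $F(y)$ within the cone approximation can be made explicit numerically. The sketch below uses two illustrative parameter sets, loosely ISR-like and LHC-like, with all quantities in GeV$^{-2}$; the numbers are assumptions for illustration, not measured values. Both the shape and the peak position $|y|_{\rm peak} = \sigma_{\rm tot}(s)/B(s)$ of $F(y)$ remain energy dependent:

```python
import numpy as np

def F_of_y(y, sig_tot, sig_el, B):
    # F(y) = (|t|/sig_tot) * dsigma/dt at t = y/sig_tot, with the cone form
    # dsigma/dt = B * sig_el * exp(B*t), t <= 0 (so y = t*sig_tot <= 0)
    t = y / sig_tot
    return (np.abs(t) / sig_tot) * B * sig_el * np.exp(B * t)

y = -np.linspace(0.01, 60.0, 6000)       # scaling variable y = t*sig_tot <= 0

# illustrative parameter sets (sig_tot, sig_el, B), all in GeV^-2:
low  = F_of_y(y, 100.0, 18.0, 12.0)      # "ISR-like" (assumed numbers)
high = F_of_y(y, 280.0, 80.0, 20.0)      # "LHC-like" (assumed numbers)

peak_low  = -y[np.argmax(low)]           # -> sig_tot/B = 100/12
peak_high = -y[np.argmax(high)]          # -> sig_tot/B = 280/20
```

The two curves neither coincide nor peak at the same $|y|$, illustrating the strong violation of the $F(y)$ scaling by the energy-dependent prefactors.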
reaching a black-ring limit instead of a black-disc one at the top LHC energies. Since in the $F(y)$ scaling function the position of the diffractive minimum (dip) remains $s$-dependent, yet another scaling function denoted as $G(z)$ was proposed to transform out such $s$-dependence of the dip. This function was introduced also in Ref. [@CsorgO:2013kua] as follows: $$\begin{aligned} G(z) & = & \frac{z |t_{\rm dip}(s)|}{\sigma_{\rm tot}(s)} \left. \frac{d \sigma}{d t}\right\vert_{t = z |t_{\rm dip}(s)|}, \\ z & = & \frac{t}{|t_{\rm dip}(s)|}.\end{aligned}$$ In principle, all black-disc scatterings, regardless of the value of the total cross section, should show a data-collapsing behaviour to the same $G(z)$ scaling function. As observed in Ref. [@CsorgO:2013kua], such an asymptotic form of the $G(z)$ scaling function is somewhat better approached at the LHC energies as compared to the lower ISR energies, but is still not reproduced exactly. This is one of the key indications that the black-disc limit is not achieved in elastic $pp$ scattering at the LHC, up to $\sqrt{s} = 13$ TeV. This may have several other important implications. For example, this result indicates that in simulations of relativistic heavy-ion collisions at the LHC energies, more realistic profile functions have to be used to describe the impact parameter dependence of the inelastic $pp$ collisions: a simple gray or black-disc approximation for the inelastic interactions neglects the key features of elastic $pp$ collisions at the TeV energy scales. One advantage of the scaling variables $y$ and $z$ mentioned above is that they are dimensionless. Numerically, $G(z)$ corresponds to the $F(y)$ function if the scaling variable $y$ is rescaled to $z$. As indicated in Fig. 23 of Ref. [@CsorgO:2013kua], indeed the main difference between $F(y)$ and $G(z)$ is that the diffractive minimum is rescaled in $G(z)$ to the $z=1$ position, so $G(z)$ has less evolution with $s$ as compared to $F(y)$.
However, as is clear from the above discussion, the function $$\begin{aligned} G(z) & \simeq & \frac{\sigma_{\rm el}(s)}{\sigma_{\rm tot}(s)} B(s) z |t_{\rm dip}(s)| \, \exp\big( B(s)\, t_{\rm dip}(s)\, z \big), \\ B(s) t & = & B(s) t_{\rm dip}(s) \, z,\end{aligned}$$ where the second line identifies the cone exponent in the diffractive cone approximation, is well-defined only for $pp$ elastic scattering, where a unique dip structure is observed experimentally. Even the dip region is not always measurable in $pp$ reactions if the experimental acceptance is limited to the cone region, which is sufficient for the total cross section measurements. If the acceptance is not large enough in $|t|$ to observe the diffractive minimum, or, in the case when the diffractive minimum does not clearly exist, then both the $F(y)$ and $G(z)$ scaling functions cannot be used. So, the major disadvantage of these scaling functions for extracting the Odderon signatures from the data is that in $p\bar p$ collisions no significant diffractive minimum is found by the D0 collaboration at 1.96 TeV [@Abazov:2012qb]. Besides, even if the $z$ variable were defined, the above expressions indicate, in agreement with Fig. 23 of Ref. [@CsorgO:2013kua], that the $G(z)$ scaling function has a non-trivial energy-dependent evolution in the cone ($z \ll 1$) region. Due to these reasons, the variables $z$ and $y$ are not appropriate scaling variables for a scale-invariant analysis of the crossing-symmetry violations at high energies. Having recapitulated the considerations in Ref. [@Csorgo:2019fbf], with an emphasis on the $s$-dependence of the parameters, let us now consider how these $s$-dependencies can be scaled out at low values of $|t|$, where the diffraction cone approximation is valid, by evaluating the scaling properties of the experimental data on the differential elastic $pp$ and $p\bar p$ cross sections.
For this purpose, let us look into the scaling properties of the differential cross sections and their implications related to the Odderon discovery in a new way. A new scaling function for the elastic cone ------------------------------------------- In the elastic cone region, all the $pp$ and $p\bar p$ differential cross sections can be rescaled to a straight line in a linear-logarithmic plot, when the horizontal axis is scaled by the slope parameter to $-t B(s)$ while the vertical axis is simultaneously rescaled by $B(s) \sigma_{\rm el}(s)$, namely, $$\frac{1}{B(s) \sigma_{\rm el}(s)} \frac{d\sigma}{d t} = \exp\left[ t B(s)\right] = \exp(-x) \qquad \mbox{\rm versus}\quad x = - t B(s) \, .$$ This representation, in the diffractive cone, scales out the $s$-dependencies of the total and elastic cross section, $\sigma_{\rm tot}(s)$ and $\sigma_{\rm el}(s)$, and also that of the slope parameter, $B(s)$. As a function of the scaling variable $x = - tB$, it will correspond to the plot of $\exp(-x)$, i.e. a straight line with slope $-1$ on a linear-logarithmic plot. It is well-known that the elastic scattering is only approximately exponential in the diffractive cone, but by scaling out this exponential feature one may more clearly see the scaling violations on this simple scaling plot. We will argue that such a scaling out of the trivial energy-dependent terms can be used as a powerful method in the search for the elusive Odderon effects in the comparison of elastic $pp$ and $p\bar p$ data in the TeV energy range. In what follows, we investigate the scaling properties of the new scaling function, $$\begin{aligned} H(x) & \equiv & \frac{1}{B(s) \sigma_{\rm el}(s)} \frac{d\sigma}{d t}, \\ x & = & - t B(s) \, .\end{aligned}$$ This simple function has four further advantages summarized as follows: 1. First of all, it satisfies a sum-rule or normalization condition rather trivially, $\int dx H(x) = 1$, as follows from the definition of the elastic cross section. 2.
Secondly, if almost all of the elastically scattered particles belong to the diffractive cone, the differential cross-section at the optical point is also given by $ \left. \frac{d\sigma}{dt}\right\vert_{t=0} \, = \, A(s)\, = \, B(s) \sigma_{\rm el}(s)$, and in these experimentally realized cases we have another (approximate) normalization condition, namely, $H(0) = 1.$ 3. Third, in the diffractive cone, all the energy dependence is scaled out from this function, i.e., $H(x) = \exp(-x)$, which shows up as a straight line on a linear-logarithmic plot with a trivial slope $-1$. 4. Last, but not least, the slope parameter $B(s)$ is readily measurable not only for $pp$ but also for $p\bar p$ collisions, hence the $pp$ and the $p\bar p$ data can be scaled to the same curve without any experimental difficulties. Let us first test these ideas by using the ISR data in the energy range of $\sqrt{s} = 23.5 - 62.5$ GeV. The results are shown in Fig. \[fig:scaling-ISR-x\], which indicates that the ISR data indeed show a data-collapsing behaviour. At low values of $x$, the scaling function is indeed, approximately, $H(x) \simeq \exp(-x)$, which remains a valid approximation over, at least, five orders of magnitude in the decrease of the differential cross section. However, at the ISR energies, the scaling seems to be valid, within the experimental uncertainties, not only at low values of $x = - B t$, but extends to the whole four-momentum transfer region, including the dip and bump region $(15 \le x \le 30)$ as well. Even at large-$|t|$ after the bump region, corresponding to $x \ge 30$, the data can approximately be scaled to the same, non-exponential scaling function ($H(x) \neq \exp(-x)$ in the tails of the distribution). Thus, Fig. \[fig:scaling-ISR-x\] indeed indicates a non-trivial data-collapsing behaviour to the same, non-trivial scaling function at the ISR energy range of $\sqrt{s} = 23.5 - 62.5$ GeV.
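For a pure exponential cone, the properties listed above can be verified directly. A minimal sketch (illustrative $B$ and $\sigma_{\rm el}$ in GeV$^{-2}$, not measured values) checks $H(0)=1$, $H(x)=\exp(-x)$, and the sum rule $\int H(x)\,dx = 1$ by trapezoidal quadrature:

```python
import numpy as np

B, sig_el = 20.0, 80.0                  # illustrative values, GeV^-2
A = B * sig_el                          # optical point, A(s) = B*sigma_el

t = -np.linspace(0.0, 2.0, 200001)      # grid of t <= 0 values
dsdt = A * np.exp(B * t)                # pure diffractive-cone cross section

x = -B * t                              # scaling variable x = -t*B >= 0
H = dsdt / (B * sig_el)                 # H(x) = exp(-x) exactly in the cone

# sum rule: integral of H(x) dx over the cone -> 1 (trapezoidal quadrature)
norm = float(np.sum(0.5 * (H[1:] + H[:-1]) * np.diff(x)))
```

The quadrature reproduces the sum rule to the accuracy of the grid, and the energy-dependent prefactors drop out entirely, which is exactly why $H(x)$ collapses cone data at different energies.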
This observation motivated us to generalize the derivation presented above in this section, to arbitrary positive-definite non-exponential scaling functions $H(x)$. Such a generalisation is performed in the next section in order to explain the data-collapsing behaviour in Fig. \[fig:scaling-ISR-x\]. ![image](figs/fig_1_ISR.pdf){width="90.00000%"} Generalized scaling functions for non-exponential differential cross-sections {#Sect:scaling-dip-bump} ----------------------------------------------------------------------------- In this section, we search for a novel type of scaling functions of $pp$ elastic data that may be valid not only in the diffractive cone, but also in the crucial dip and bump region as well. In Fig. \[fig:scaling-ISR-x\], we have noticed that the data-collapsing behaviour may extend well beyond the small $x = - tB$ region, significantly past the diffractive maximum, indicating a clear deviation of the scaling function $H(x)$ from the exponential shape. In addition, a recent detailed study of the low-$|t|$ behaviour of the differential elastic $pp$ cross section at $\sqrt{s} = 8$ TeV observed a more than 7$\sigma$-significant deviation from the exponential shape [@Antchev:2015zza; @Csorgo:2016qyr], which also corresponds to a non-exponentiality in the scaling function $H(x)$ even in the low-$|t|$, or small $x$, range. In this section, we thus further generalize the derivation of the $H(x) = \exp(-x)$ scaling function, in order to allow for arbitrary positive-definite functions with $H(x=0) = 1$ normalisation, and to develop a physical interpretation of the experimental observations. Let us start the derivation from the relation of the elastic scattering amplitude in the impact parameter space $t_{\rm el}(s,b)$ and the complex opacity function $\Omega(s,b)$ based on Eq. (\[e:tel-eikonal\]), using the same notation as in Ref.
[@Nemes:2015iia]: $$t_{\rm el}(s,b) = i \left[1 - \exp(-i \, \mathrm{Im}\, \Omega(s,b))\sqrt{1 - \tilde\sigma_{\rm in}(s,b)} \right] \,.$$ The shadow profile function $P(s,b)$ is equal to the inelastic scattering profile $\tilde\sigma_{\rm in}(s,b)$ as follows from Eq. (\[e:shadow\]), $P(s,b) = \tilde\sigma_{\rm in}(s,b)$. The imaginary part of the opacity function $\Omega$ is generally unknown or only weakly constrained by the data, but it is experimentally known that $\rho_0(s)$ is relatively small at high energies: at all the measured LHC energies and below, $\rho_0 \le 0.15$, hence, $\rho_0^2 \le 2.3 $ %. Here, we thus follow the choice of Ref. [@Nemes:2015iia], which demonstrated that the ansatz $$\mathrm{Im} \, \Omega(s,b) \propto \tilde\sigma_{\rm in}(s,b)$$ gives a satisfactory description of the experimental data in the $-t \le 2.5$ GeV$^2$ region, with a small coefficient of proportionality that was denoted in Ref. [@Nemes:2015iia] by the $\alpha$ parameter. This ansatz assumes that the inelastic collisions at low four-momentum transfers correspond to the cases when parts of the proton scatter elastically, but into different, non-parallel directions. Soon we shall see that the physical interpretation of this parameter being close to unity is actually due to $\rho_0 \ll 1$. So we will use this approximation in our analysis below. Based on the results of the previous section obtained in the diffractive cone in the $\rho_0 \ll 1$ limit, we have the following scaling property of the opacity function: $$\begin{aligned} \mathrm{Re} \, \exp\left[-\Omega(s,b)\right] & = & 1 - r(s) E( \tilde{x}), \\ \mathrm{Im} \, \exp\left[-\Omega(s,b)\right] & = & \rho_0(s) \, r(s) E( \tilde{x}), \\ \tilde{x} & = & b / R(s), \label{e:Hx} \\ R(s) & = & \sqrt{B(s)} \,, \label{e:RB}\end{aligned}$$ where $r(s)$ is four times the ratio of the elastic to the total cross section, as given in Eq.
(\[e:rs\]), and $E(\tilde{x})$ describes the distribution of the inelastic collisions as a function of the impact parameter $b$ normalised to $\sqrt{B(s)}$, the characteristic length-scale of the $pp$ collisions at a given value of the center-of-mass energy $\sqrt{s}$. This ansatz allows for a general shape of the impact parameter $b$-dependent scattering amplitude, under the assumption that the $b$-dependence may occur only through the two-dimensional scaling variable $\tilde{x}$, as described by the scaling function $E(\tilde{x})$, $$t_{\rm el}(s,{b}) = (i + \rho_0(s)) \, r(s) E(\tilde{x}) \,. \label{e:tel-scaling}$$ Here we assume that $E(\tilde{x})$ is a real function that depends on the modulus of the dimensionless impact parameter $\tilde{x} = b/R(s)$. For normalization, we choose that the Fourier-transform $\tilde E({0}) = 1$, which also corresponds to the condition $$\int \, d^2\tilde{x} E(\tilde{x}) = 1 \,, \label{e:E-norm-new}$$ keeping in mind that the two-dimensional Fourier transform evaluated at zero equals the integral over the two-dimensional impact-parameter plane. Let us investigate first the consequences of this scaling ansatz for the shadow profile function $P(s,b)$. The algebra closely parallels that of the exponential cone approximation implemented above. We obtain the following result: $$\begin{aligned} P(s,b) & = & \frac{1}{1+\rho_0^2(s)} - \nonumber \\ & - & (1 + \rho_0^2(s)) \left[ r(s)E\left(\frac{b}{R(s)}\right) - \frac{1}{1+\rho_0^2(s)}\right]^2 \, . \label{e:Psb-any-H}\end{aligned}$$ Evaluating the above relation at $b=0$ and using the normalization condition $E({0}) = 1$, we obtain again that the shadow profile at zero impact parameter value has a maximum that is slightly less than unity: $P(s,0) \le 1/(1+\rho_0^2)$.
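The quadratic structure of Eq. (\[e:Psb-any-H\]) can be cross-checked against the exponential-cone expression: for a Gaussian profile $E(\tilde x) = e^{-\tilde x^2/2}$ (with $E(0)=1$) the two forms coincide identically, and the central value never exceeds $1/(1+\rho_0^2)$. A minimal sketch with illustrative values of $r$ and $\rho_0$:

```python
import numpy as np

def P_general(b_over_R, r, rho0, E):
    # Eq. (e:Psb-any-H), written with E(0) = 1:
    # P = 1/(1+rho0^2) - (1+rho0^2) * [ r*E(b/R) - 1/(1+rho0^2) ]^2
    c = 1.0 + rho0**2
    return 1.0 / c - c * (r * E(b_over_R) - 1.0 / c)**2

E_gauss = lambda xt: np.exp(-xt**2 / 2.0)   # Gaussian: the cone case again

rho0, r = 0.10, 0.90                        # illustrative values only
xt = np.linspace(0.0, 5.0, 501)             # grid in b/R(s)

P = P_general(xt, r, rho0, E_gauss)

# the same profile from the exponential-approximation form, Eq. (e:Pbs):
g = E_gauss(xt)
P_exp = 1.0 - (1.0 - r * g)**2 - rho0**2 * r**2 * g**2
```

The two expressions agree point by point, and the maximum of $P$ stays below the bound $1/(1+\rho_0^2)$, as the completed-square form makes manifest.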
It is interesting to note that the maximum in the profile function is reached at the same threshold (\[crit-ratio\]) as in the case of the exponential cone approximation, corresponding to $$\begin{aligned} \left. r(s)\right\vert_{\rm threshold} & = & \frac{1}{1+\rho_0^2(s)} \, , \label{e:rthreshold}\\ \left. \frac{\sigma_{\rm el}}{\sigma_{\rm tot}}\right\vert_{\rm threshold} & = & \frac{1}{4(1 +\rho_0^2(s))} \,. \label{e:sigmaelpertot-threshold}\end{aligned}$$ Thus, a threshold-crossing behaviour seems to happen if the elastic-to-total cross-section ratio exceeds the critical value of Eq. (\[e:sigmaelpertot-threshold\]), which is approximately $0.25$. Remarkably, in the domain of validity of our derivation, this threshold crossing point is independent of the detailed shape of the $H(x)$ scaling function for a broad class of models. However, it is also clear from Eq. (\[e:Psb-any-H\]) that the shape of the $E(\tilde x)$ function plays an important role in determining the hollowness effect, so a detailed precision shape analysis is necessary to obtain the significance of this effect. Starting from the definition, Eq. (\[e:dsigmadt-Tel\]), the scattering amplitude in the $b$-space (\[e:tel-scaling\]) yields the following form of the differential cross section in the momentum space: $$\label{dsigdtscaling} \frac{d\sigma}{dt} = \frac{1 +\rho_0^2(s)}{4 \pi} r^2(s) R^4(s) |\tilde E(R(s) \Delta)|^2 \,.$$ Utilizing Eq. (\[e:RB\]), we find that this form of the differential cross section depends on the four-momentum transfer squared, $t$, indeed only through the variable $x \equiv - B(s) t = R^2(s) \Delta^2$, so it is a promising candidate to be a scaling variable. Now, if we consider the function (\[dsigdtscaling\]) at the optical point, $t = 0$, we find $$A(s) = \left. \frac{d\sigma}{dt}\right\vert_{t=0} \, = \, \frac{1 +\rho_0^2(s)}{4 \pi} r^2(s) R^4(s) |\tilde E(0)|^2 \,.
\label{e:As-optical}$$ If the impact parameter dependent elastic amplitude has an $s$-dependent internal scale and $s$-dependent strength, we thus obtain the following generalized scaling relation for arbitrary elastic scattering amplitudes that satisfy Eq. (\[e:tel-scaling\]): $$\frac{1}{A(s)} \frac{d\sigma}{dt} \equiv \, H(x) \, = \, \frac{|\tilde E(\sqrt{x})|^2}{|\tilde E(0)|^2} \,. \label{e:Hx-general}$$ This relation clarifies that, in general, the normalization of the $H(x)$ scaling function on the left hand side of Eq. (\[e:Hx-general\]) should be made by the value of the differential cross section at the optical ($t = 0$) point as given by Eq. (\[e:As-optical\]). This value for differential cross sections with a nearly exponential diffractive cone is indeed approximately equal to $A(s) = B(s) \sigma_{\rm el}(s)$. In this case, the normalization condition $H(0) = 1$ is maintained, while the integral of $H(x)$ becomes unity only for differential cross sections dominated by the exponential cone (i.e. when the integral contribution from the non-exponential tails is several orders of magnitude smaller as compared to the integral of the cone region). For the total cross section, we find from Eq. (\[e:sigmatot\]) $$\sigma_{\rm tot}(s) = 2 r(s) R^2(s) \tilde E(0) = \sqrt{ \frac{16 \, \pi \, A(s)}{1 + \rho^2_0(s)}} \,.$$ Note that here we have indicated the normalization just for clarity, but one should keep in mind that in our normalization, $\tilde E(0) = 1$, and correspondingly, $H(x=0)=1$ by definition. As clarified by Eq. (\[e:Hx-general\]), the scaling function $H(x)$ coincides with the modulus squared of the normalized Fourier-transform of the scaling function $E(\tilde{x})$, if the elastic amplitude depends on the impact parameter $b$ only through the scale-invariant combination $\tilde{x} = \frac{b}{R(s)}$.
This way, the $H(x)$ scaling is directly connected to the impact parameter dependence of the elastic amplitude and transforms out the trivial $s$-dependencies coming from the $\sigma_{\rm tot}(s)$, $\sigma_{\rm el}(s)$, $B(s)$, and $\rho_0(s)$ functions. The above derivation also indicates that it is a promising possibility to evaluate the $H(x)$ scaling function directly from the experimental data. It has a clear normalization condition, $H(0) = 1$. Furthermore, in the diffractive cone, for nearly exponential cone distributions, $H(x) \approx \exp(-x)$. We have shown above in this section that even if one neglects the possible $t$ dependence of $\rho(s,t)$, arbitrary $H(x)$ profile functions can be introduced if the elastic amplitude is a product of $s$-dependent functions, and its impact parameter dependence originates only through an $s$-dependent scaling variable which can be conveniently defined as $\tilde{x}^2 = \frac{b^2}{B(s)}$. Thus, violations of the $H(x)$ scaling may happen if not only the slope parameter $B(s)$, the real-to-imaginary ratio $\rho_0(s)$ and the integrated elastic and total cross sections $\sigma_{\rm el}(s)$ and $\sigma_{\rm tot}(s)$ depend on $s$, but also the $b$-dependence of the elastic scattering amplitude starts to change noticeably. Namely, the $H(x)$ scaling breaks if and only if the scaling relation $t_{\rm el}(b,s) = C(s) E(b/R(s))$ gets violated. Finally, let us note that the exponential shape of $H(x) \approx \exp(-x)$ can be derived as a consequence of the analyticity of $T_{\rm el}(s,\Delta)$ at $\Delta = 0$, corresponding to the $t =0$ optical point. However, our recent analysis of the differential elastic cross sections in the LHC energy range [@Csorgo:2018uyp; @Csorgo:2018ruk] suggests that this approximation breaks down, since the TOTEM experiment observed a significant non-exponential behaviour already in the diffractive cone.
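The statement that $H(x)$ is the modulus squared of the normalized two-dimensional Fourier transform of $E(\tilde x)$ can be verified numerically. The sketch below (illustrative $R$; the quadrature grids are implementation choices, not part of the formalism) evaluates $\tilde E(\Delta) = 2\pi\int_0^\infty b\,db\, J_0(\Delta b)\, E(b/R)$, with $J_0$ obtained by explicit angular integration, and confirms that a Gaussian $E$ reproduces the pure-cone result $H(x) = e^{-x}$, $x = R^2\Delta^2 = -B t$:

```python
import numpy as np

R = 2.0                                     # R(s) = sqrt(B(s)), illustrative
b = np.linspace(0.0, 12.0 * R, 3001)        # radial impact-parameter grid
E_b = np.exp(-(b / R)**2 / 2.0)             # Gaussian profile, E(0) = 1

phi = np.linspace(0.0, 2.0 * np.pi, 1001)   # angular grid
w = np.full(phi.size, phi[1] - phi[0])      # trapezoidal angular weights
w[0] *= 0.5
w[-1] *= 0.5

def E_tilde(Delta):
    # 2D Fourier transform: E~(Delta) = 2*pi * int b db J0(Delta*b) E(b/R),
    # with J0(z) = (1/2pi) int_0^2pi cos(z cos(phi)) dphi done numerically
    j0 = (np.cos(Delta * b[:, None] * np.cos(phi)[None, :]) @ w) / (2.0 * np.pi)
    f = b * E_b * j0
    return 2.0 * np.pi * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(b)))

def H(x):
    # H(x) = |E~(sqrt(x)/R)|^2 / |E~(0)|^2, since x = R^2 Delta^2 = -B*t
    return (E_tilde(np.sqrt(x) / R) / E_tilde(0.0)) ** 2
```

Replacing `E_b` by any other radially symmetric profile of $b/R(s)$ produces the corresponding non-exponential $H(x)$, with all the trivial $s$-dependent prefactors already divided out.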
In this case, at low values of $|t|$, nearly Lévy stable source distributions can be introduced, which lead to an approximate $H(x)\propto \exp(-x^{\alpha})$ behaviour, where $\alpha = \alpha_{\rm Levy}/2 \le 1.$ For example, as we have shown in Refs. [@Csorgo:2018uyp; @Csorgo:2018ruk], at low $|t|$, a stretched exponential form with $\alpha \simeq 0.9$ describes the elastic scattering data from ISR to LHC energies reasonably well in a very broad energy range from $23.5$ GeV to $13$ TeV. Results in the TeV energy range {#s:results-TeV} =============================== Keeping in mind that the $H(x)$ scaling holds within experimental errors at the ISR energies, where the center-of-mass energies vary from $23.5$ to $62.5$ GeV, that is, by less than a factor of three, let us also investigate the same scaling function at the LHC energies, where the TOTEM measurements span, on a logarithmic scale, a similar energy range, from $2.76$ TeV to $13$ TeV, i.e. slightly more than a factor of four. The TOTEM data at 13, 7 and $2.76$ TeV are collected from Refs. [@Antchev:2017dia], [@Antchev:2013gaa], and Ref. [@Antchev:2018rec], respectively, and plotted in Fig. \[fig:scaling-LHC\]. Note that the possible scaling violating terms are small in the $\sqrt{s} = 2.76 - 7$ TeV region: they are within the statistical errors when increasing $\sqrt{s}$ from 2.76 to 7 TeV, i.e. by about a factor of 2.5, but the scaling starts to be significantly violated at higher energies. Let us look into this effect in more detail. ![image](figs/fig_2_LHC_stat.pdf){width="98.00000%"} ![image](figs/fig_2_LHC.pdf){width="98.00000%"} This plot indicates that the $H(x)$ scaling is approximately valid in the diffraction cone also in the LHC energy range; however, the range of validity of this scaling is more limited. Instead of being approximately valid in the whole measurable $x$ region, at the LHC this scaling remains valid only over a drop of about 3-4 orders of magnitude in the differential cross-section.
The so called “swing” effect becomes clear at $\sqrt{s} = 13$ TeV: the scaling function starts to decrease faster than exponential before the diffractive minimum, and also the diffractive minimum moves to lower values in $x$ as compared to its position at lower LHC energies. This swing effect, apparent in Fig. \[fig:scaling-LHC\], can be interpreted in terms of changes in the shadow profile of protons at the LHC energies as the energy range increases from $2.76$ through $7$ to $13$ TeV. Indeed, such small $s$-dependent scaling violations in the $H(x)$ scaling function show the same qualitative picture as what has been observed by the direct reconstruction of the $P(s,b)$ shadow profiles in the TeV energy range in several earlier papers, see for example Refs. [@Kohara:2017ats; @Dremin:2018urc; @Dremin:2019tgm] or our Refs. [@Nemes:2015iia; @Csorgo:2018uyp; @Csorgo:2018ruk]. Inspecting directly Fig. \[fig:scaling-LHC\], we find that the $H(x)$ scaling functions agree within statistical errors if the colliding energy is increased from $\sqrt{s} = 2.76$ TeV to 7 TeV, and change significantly if the colliding energy increases further to $\sqrt{s } = 13$ TeV. This implies that the possible scaling violating terms are small in the $\sqrt{s} = 2.76 - 7$ TeV region, as they are within the statistical errors when increasing $\sqrt{s}$ from 2.76 to 7 TeV, by about a factor of 2.5. However, the $H(x)$ scaling is violated by $s$-dependent terms when increasing $\sqrt{s}$ from 7 to 13 TeV, and such a scaling violation is larger than the quadratically added statistical and $t$-dependent systematic errors. This behaviour may happen due to approaching a new domain, where the shadow profile function of $pp$ scattering changes from a nearly Gaussian form to a saturated shape, which in turn may develop hollowness at 13 TeV and higher energies. The experimental indications of such a threshold-crossing behaviour were summarized recently in Ref.
[@Csorgo:2019fbf], and are also described above: a new domain may be indicated by a sudden change of $B(s)$ in between 2.76 and 7 TeV and, similarly, the crossing of the critical $\sigma_{\rm el}(s)/\sigma_{\rm tot}(s) = 1/4$ line in the multi-TeV range of energies, somewhere between 2.76 and 7 TeV. From the theoretical side, we have previously noted such a drastic change in the size of the proton substructure between the ISR and LHC energy domains, from a dressed quark-like to a dressed di-quark type of substructure [@Csorgo:2018uyp; @Csorgo:2018ruk], which may be, in principle, connected to such a dramatic change in the scaling behaviour of the elastic cross section. However, in this work we are focused on the scaling properties of the experimental data, and we do not intend to draw any model-dependent conclusions here. Instead, in Fig. \[fig:scaling-LHC-vs-ISR\] we directly compare the scaling properties of the differential cross sections in the form of the $H(x)$ scaling function, using the same data sets at the ISR energies as in Figs. \[fig:scaling-ISR-x\] and \[fig:scaling-LHC\]. This range of data now spans nearly a factor of 500, i.e. almost three orders of magnitude in the available colliding energies, from 23.5 GeV to 13 TeV. As can be seen in the corresponding Fig. \[fig:scaling-LHC-vs-ISR\], the scaling works approximately in the diffractive cone; however, the $H(x)$ scaling function cannot be considered approximately constant over such a huge change in the colliding energies. ![image](figs/fig_3_ISR_LHC_stat.pdf){width="98.00000%"} ![image](figs/fig_3_ISR_LHC.pdf){width="98.00000%"} Comparing Figs.
\[fig:scaling-ISR-x\], \[fig:scaling-LHC\] and \[fig:scaling-LHC-vs-ISR\], we find that the $s$-dependence of the $H(x)$ scaling functions is rather weak if $s$ changes within a factor of two, however, there are very significant changes if the range of energies is changing by a factor of a few hundred, from the ISR energy range of $\sqrt{s} = 23.5 - 62.5$ GeV to the LHC energy range of 2.76 – 7.0 – 13.0 TeV. In the left panel of Fig. \[fig:scaling-LHC-2.76-vs-D0\], the $H(x)$ function of the $\sqrt{s} = 2.76 $ TeV TOTEM data set of Ref. [@Antchev:2018rec] is compared with that of the $p\bar p$ collisions measured by the D0 collaboration at $\sqrt{s} = 1.96 $ TeV Tevatron energy [@Abazov:2012qb]. The right panel of Fig. \[fig:scaling-LHC-2.76-vs-D0\] compares the $H(x)$ scaling functions of elastic $pp$ collisions at $\sqrt{s} = 7.0$ TeV LHC energy [@Antchev:2011zz; @Antchev:2013gaa] to that of the elastic $p\overline{p}$ collisions at the Tevatron energy, $\sqrt{s} = 1.96$ TeV. On both panels, the statistical errors and $t$-dependent systematic errors are added in quadrature. Lines are shown to guide the eye, corresponding to fits with the model-independent Lévy series studied in Refs. [@Csorgo:2018uyp; @Csorgo:2018ruk]. These plots suggest that the comparison of the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ collisions in the TeV energy range is a promising method for the Odderon search, and a precise quantification of the difference between the $H(x)$ scaling functions for the $pp$ and $p\bar p$ data sets is important. But how big is the difference between the $H(x)$ scaling functions of elastic $pp$ collisions at similar energies? The $H(x)$ scaling of the differential cross section $d\sigma/dt$ of elastic $pp$ collisions is compared at the nearby $\sqrt{s} = 2.76$ and $7$ TeV LHC energies in Fig. \[fig:scaling-LHC-7-vs-2.76-Hx\]. These plots are similar to the panels of Fig. \[fig:scaling-LHC-2.76-vs-D0\].
The $H(x)$ scaling functions are remarkably similar, in fact, they are the same within the statistical errors of these measurements. Due to their great similarity, it is important to quantify precisely how statistically significant their difference is. We stress in particular that the possible scaling violations are small, apparently within the statistical errors, when $pp$ results are compared at LHC energies and $\sqrt{s}$ is increased from 2.76 to 7 TeV, by about a factor of 2.5. This makes it very interesting to compare the differential cross-sections of $pp$ and $p\bar p$ elastic scattering at the nearest measured energies in the TeV range, where crossing-odd components are associated with Odderon effects, given that all Reggeon contributions are expected to be negligibly small in the TeV energy range. Actually, the largest $\sqrt{s}$ of $p\bar p$ elastic scattering data is 1.96 TeV, a measurement by the D0 collaboration [@Abazov:2012qb], while at the LHC the public data set on the elastic $pp$ scattering is available at $\sqrt{s} = 2.76$ TeV [@Antchev:2018rec], corresponding to a change in $\sqrt{s}$ by a factor of $2.76/1.96 \approx 1.4$. This is a rather small multiplicative factor on the logarithmic scale, relevant to describe changes both in high energy $pp$ and $p\bar p$ collisions. Given that the $H(x)$ scaling function is nearly constant between 2.76 TeV and 7 TeV within the statistical errors of these data sets, we will search for a significant difference between the $H(x)$ scaling functions of elastic $pp$ collisions at $\sqrt{s} = 2.76 $ and $7.0$ TeV and that of elastic $p\bar p$ scattering at $\sqrt{s} = 1.96 $ TeV. If such a difference is observed, then there must be a crossing-odd (Odderon) component in the scattering amplitude of elastic $pp$ and $p\bar p$ scatterings. Let us now consider Fig. \[fig:scaling-antiprotons\].
This plot compares the $H(x)$ scaling functions for $p\bar p$ collisions at various energies from $\sqrt{s} = 546$ GeV to 1.96 TeV. Within experimental errors, an exponential cone is seen that extends to $x = - t B \approx 10$ at each measured energy, while for larger values of $x$ the scaling law breaks down in an energy dependent manner. At lower energies, the exponential region extends to larger values of $x \approx 13$, and the tail regions apparently change with the colliding energy. For this reason, in this paper we do not scale the differential cross section of elastic $p\bar p$ collisions to different values of $\sqrt{s}$, as this cannot be done model-independently. This property of elastic $p\bar p$ collisions is in contrast to that of the elastic $p p$ collisions, where we have demonstrated in Figs. \[fig:scaling-ISR-x\] and \[fig:scaling-LHC\] that in a limited energy range between $\sqrt{s} = 23.5$ and $62.5$ GeV, as well as at the LHC in the energy range between $\sqrt{s} = 2.76$ and $7$ TeV, the $H(x)$ scaling works well. Due to these experimental facts and the apparent violations of the $H(x)$ scaling for $p\bar p$ collisions in the $x = -t B \ge 10$ region, in this paper we do not attempt to evaluate the energy dependence of the differential cross sections for $p\bar p$ collisions. However, based on the observed $H(x)$ scaling in $pp$ collisions, we do find a model-independent possibility to rescale the differential cross sections of elastic $pp$ collisions in limited energy ranges. After the above qualitative discussion of the $H(x)$ scaling for both $pp$ and $p\bar p$ elastic collisions, let us work out the details of the possibility of rescaling the measured differential cross sections to other energies in the domain where $H(x)$ indicates a scaling behaviour within experimental errors. The left panel of Fig.
\[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\] indicates the result of rescaling the differential cross sections of elastic $pp$ scattering from the lowest ISR energy, $\sqrt{s} = 23.5$ GeV, to the highest one, $62.5$ GeV, using Eq. (\[e:dsdt-rescaling\]). We have evaluated the level of agreement of the rescaled 23.5 GeV $pp$ data with the measured 62.5 GeV $pp$ data with the help of Eq. (\[e:chi2-data\]). The result indicates that the data measured at $\sqrt{s} = 23.5$ GeV and duly rescaled to 62.5 GeV are, within the errors of the measurements, consistent with the differential cross section of elastic $pp$ collisions as measured at $\sqrt{s} = 62.5$ GeV. This demonstrates that our method can also be used to extrapolate the differential cross sections to other energies by rescaling, provided that the $H(x)$ scaling is not violated in that energy range and that the nuclear slope and the elastic cross sections are known at the new energy as well as at the energy from which such a rescaling starts. A similar method is applied at the LHC energies in the middle panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\]. This plot also indicates a clear agreement between the 2.76 TeV data and the rescaled 7 TeV data, which corresponds to a $\chi^2/{\rm NDF} = 39.3/63$, a CL of 99.2 % and a deviation at the 0.01 $\sigma$ level only. This suggests that indeed the rescaling of the differential cross section of elastic scattering can be utilized not only in the few tens of GeV range but also in the few TeV energy range. Most importantly, this plot indicates that there is a scaling regime in elastic $pp$ collisions, which includes the LHC energies of $\sqrt{s} = $ 2.76 and 7 TeV, where the $H(x)$ scaling is, within errors, not violated.
This is in qualitative contrast to the elastic $p\bar{p}$ collisions at TeV energies, where the validity of the $H(x)$ scaling is limited to the diffractive cone region with $x \le 10$, while at larger values of $x$, the $H(x)$ scaling is violated. The right panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\] indicates a surprising agreement: after rescaling the differential cross section of elastic $pp$ collisions from 2.76 TeV to 1.96 TeV, we find no significant difference between the rescaled 2.76 TeV $pp$ data and the $p\bar p$ data at the same energy, $\sqrt{s} = 1.96 $ TeV. The agreement between the extrapolated $pp$ and the measured $p\bar p$ differential cross sections corresponds to a CL of 7.9 %, i.e. a surprising agreement at the $1.76\sigma$ level. It can be seen on the right panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\] that in the swing region, before the dip, the rescaled $pp$ differential cross section seems to differ qualitatively from the $p\bar p$ collision data. However, according to our $\chi^2$ analysis, which also takes into account the horizontal errors of the TOTEM data, we find that this apparent qualitative difference between these two data sets is quantitatively not significant: it is characterized as an agreement within less than 2$\sigma$. ![image](figs/fig_4_1960_2760_4_stat+sys+levy.pdf){width="48.00000%"} ![image](figs/fig_5_1960_7000_4_stat+sys+levy.pdf){width="48.00000%"} These plots suggest that the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ collisions differ at similar energies, while the same scaling functions for elastic $pp$ collisions are similar at similar energies; thus the comparison of the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ collisions is a promising candidate for an Odderon search.
For this reason, it is important to quantify how significant this difference is, given that the $H(x)$ scaling functions scale out the dominant $s$-dependent terms, which arise from the energy-dependent $\sigma_{\rm el}(s)$ and $B(s)$ functions. Such a quantification is the subject of the next section. Before going into more details, we can already comment on a new Odderon effect qualitatively. When comparing the $H(x)$ scaling functions of the differential cross section of elastic $pp$ collisions at the 2.76 and 7.0 TeV colliding energies, we see no qualitative difference. By extrapolation, we expect that the $H(x)$ scaling function may be approximately energy independent in a somewhat broader interval that extends down to 1.96 TeV. Such a lack of energy evolution of the $H(x)$ scaling function of $pp$ collisions is in qualitative contrast with the evolution of the $H(x)$ scaling functions of $p\bar p$ collisions at energies of $\sqrt{s} = 0.546 - 1.96$ TeV, where a qualitative and significant energy evolution is seen in the $x = -t B > 10 $ kinematic range. Thus, our aim is to quantify the Odderon effect in particular in this kinematic range of $x = -t B > 10 $, in order to evaluate the significance of this qualitative difference between elastic $pp$ and $p\bar p$ collisions.
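The scaling function compared throughout is $H(x) = \frac{1}{B\sigma_{el}}\frac{d\sigma}{dt}$ with $x = -tB$. As a minimal illustrative sketch of this construction (the function name and the unit choices are ours, not code from either experiment), a measured $d\sigma/dt$ table maps onto $H(x)$ as follows:

```python
import numpy as np

def h_scaling(t, dsigma_dt, B, sigma_el):
    """Map a measured differential cross section onto the scaling function
    H(x) = (1 / (B * sigma_el)) * dsigma/dt, with x = -t * B.

    t          : four-momentum transfer squared values, t < 0 [GeV^2]
    dsigma_dt  : measured dsigma/dt at those t values [mb / GeV^2]
    B          : nuclear slope parameter at this energy [GeV^-2]
    sigma_el   : integrated elastic cross section [mb]
    """
    x = -np.asarray(t) * B
    H = np.asarray(dsigma_dt) / (B * sigma_el)
    return x, H
```

In the pure diffractive cone, where $d\sigma/dt = B\sigma_{el}\exp(Bt)$, this construction gives $H(x)=\exp(-x)$ at every energy, which is why the cone regions of different energies collapse onto a single curve in the figures.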
![image](figs/fig_6_2760_7000_2_stat+levy.pdf){width="98.00000%"} ![image](figs/fig_6_2760_7000_4_stat+sys+levy.pdf){width="98.00000%"} ![image](figs/fig_7_UA4_CDF_D0.pdf){width="95.00000%"} ![image](figs/fig_11_0023_0062_extrapolated_chi2_cross-check.pdf){width="33.00000%"} ![image](figs/fig_8_7000_2760_extrapolated_chi2_full_errors.pdf){width="33.00000%"} ![image](figs/fig_9_2760_1960_extrapolated_chi2_full_errors.pdf){width="33.00000%"} ![image](figs/fig_10_7000_1960_extrapolated_chi2_full_errors.pdf){width="80.00000%"} Quantification {#s:quantification} ============== In this section, we investigate the question of how to compare two scaling functions $H(x) = \frac{1}{B\sigma_{el}}\frac{d\sigma}{dt}$, with $x = - t B$, introduced above and measured at two distinct energies. We would like to determine whether two different measurements correspond to significantly different scaling functions $H(x)$, or not. In what follows, we introduce and describe a model-independent, simple and robust method that enables us to quantify the difference between datasets, i.e. between $H(x)$ measurements. The proposed method takes into account the fact that the two distinct measurements may have partially overlapping acceptance in $x$ and that their binning might be different, so the datasets may correspond to two different sets of $x$ values. Let us first consider two different datasets denoted as $D_i$, with $i = 1, 2$. In the considered case, $D_i = \big\{x_i(j), H_i(j), e_i(j)\big\}$, $j = 1, ... n_i$ consists of a set of data points located on the horizontal axis at $n_i$ different values of $x_i$, ordered as $x_i(1) < x_i(2) < ... < x_i(n_i)$, $H_i(j) \equiv H_i(x_i(j))$ are the measured values of $H(x)$ at the points $x=x_i(j)$, and $e_i(j)\equiv e_i(x_i(j))$ is the corresponding error at the point $x_i(j)$. In general, two different measurements have data points at different values of $x$. Let us denote as $X_1 = \big\{x_1(1), ...
x_1(n_1)\big\}$ the domain of $D_1$, and similarly $X_2 = \big\{x_2(1), ... , x_2(n_2)\big\}$ stands for the domain of $D_2$. Let us choose as $D_1$ the dataset for which $x_1(1) < x_2(1)$. In other words, $D_1$ is the dataset that starts at a smaller value of the scaling variable $x$ as compared to the second dataset, $D_2$. If the first dataset ends before the second one starts, i.e. when $x_1(n_1) < x_2(1)$, their acceptances do not overlap. In this limiting case the two datasets cannot be compared using our method. Fortunately, however, in the relevant cases, e.g. for the D0 data on elastic $p\overline{p}$ collisions at $\sqrt{s} = 1.96 $ TeV, the acceptance in $x$ overlaps with that of the elastic $pp$ collisions measured by TOTEM at $\sqrt{s} = 2.76$, 7 and 13 TeV. So from now on we consider the case with $x_1(n_1) > x_2(1)$. If the last datapoint in $D_2$ satisfies $x_2(n_2) < x_1(n_1)$, then $D_2$ is within the acceptance of $D_1$. In this case, let us introduce $f_2 = n_2$ as the index of the final point, with the largest value of $x$, from $D_2$. If $D_2$ has $x_2(n_2) > x_1(n_1)$, then the overlapping acceptance ends at the largest (final) value of the index $f_2$ such that $x_2(f_2) < x_1(n_1) < x_2(f_2+1)$. This means that the point $f_2$ of $D_2$ is below the largest value of $x$ in $D_1$, but the next point in $D_2$ is already above the final, largest value $x_1(n_1)$ in $D_1$. The beginning of the overlapping acceptance can be found in a similar manner. Due to our choice of $D_1$ as the dataset that starts at a lower value, $x_1(1) < x_2(1)$, let us determine the initial point $i_1$ in $D_1$ that already belongs to the acceptance domain of $D_2$. This is imposed by the criterion that $x_1(i_1-1) < x_2(1) < x_1(i_1)$. We compare the $D_1$ and $D_2$ datasets in the region of their overlapping acceptance, defined above, either with a one-way or with a two-way projection method.
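The bookkeeping for the overlapping acceptance described above can be sketched as follows; this is a hypothetical helper of ours, using 0-based indices in place of the paper's 1-based $i_1$ and $f_2$:

```python
def overlap_indices(x1, x2):
    """Find the overlapping acceptance of two sorted point sets, following
    the convention that D1 starts at the smaller scaling variable,
    x1[0] < x2[0].  Returns 0-based (i1, f2): i1 is the first point of D1
    inside the acceptance of D2, and f2 is the last point of D2 that does
    not exceed the largest x of D1.  Returns None when the acceptances do
    not overlap."""
    if x1[-1] < x2[0]:
        return None  # D1 ends before D2 starts: no common acceptance
    # first index of D1 with x1[i1] >= x2[0]
    i1 = next(i for i, x in enumerate(x1) if x >= x2[0])
    # last index of D2 with x2[f2] <= x1[-1]
    f2 = max(i for i, x in enumerate(x2) if x <= x1[-1])
    return i1, f2
```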
The projection $1 \rightarrow 2$ has the number of degrees of freedom NDF$(1 \rightarrow 2)$ equal to the number of points of $D_2$ in the overlapping acceptance. For any such point $x_2(i)$, we use a linear interpolation between the nearest points of $D_1$, with $x_1(j) < x_2(i) \le x_1(j+1)$, to evaluate the data and the errors of $D_1$ at this particular value of $x = x_2(i)$. By default, the linear interpolation is performed on (linear, exponential) scales in the $(x, H(x))$ plane, which is expected to work well in the diffraction cone, where the exponential cone is a straight line. However, for safety and due to the unknown exact structure at the dip and bump region, we have also tested the linear interpolation on (linear, linear) scales in the $(x, H(x))$ plane. Similarly, the projection $2 \rightarrow 1$ has the number of degrees of freedom NDF$(2\rightarrow 1)$ equal to the number of points of dataset $D_1$ that fall into the overlapping common acceptance. A linear interpolation is used for each point $x_1(i)$ in this overlapping acceptance, with $x_2(j) < x_1(i) \le x_2(j+1)$, using both (linear, exponential) and (linear, linear) scales in the $(x, H(x)) $ plane. For the two-way projection $1 \longleftrightarrow 2$, the number of degrees of freedom is the sum of the numbers of points of $D_1$ and $D_2$ in the overlapping acceptance, defined as NDF$(1\longleftrightarrow 2)$ = NDF$(1\rightarrow 2)$ + NDF$(2\rightarrow 1)$. Let us describe the two-way projection in a bit more detail, as the one-way projections can be considered as special cases of this method. A common domain $X_{12} = \big\{ x_{12}(1), ... , x_{12}(n_{12})\big\}$ in the region of the overlap of the $X_1$ and $X_2$ domains can be introduced as follows. Take the data points in the interval $[i_1\dots n_1]$ from the $D_1$ set and the data points in the interval $[1\dots f_2]$ from the $D_2$ set.
This selection procedure provides a total of $n_{12} = n_1+f_2-i_1 + 1$ points. Let us order this new set of points and denote the resulting united domain as $X_{12}$. This domain corresponds to a common acceptance region which has $n_{12}$ data points on the horizontal axis, denoted as $\big\{ x_{12}(1), ... x_{12}(n_{12})\big\}$. In order to compare the datasets $D_1$ and $D_2$, one needs to build two analog datasets that are both extrapolated to the same common domain $X_{12}$, starting from $D_1$ and $D_2$, as if the data in both analog datasets were measured at the same values of $x$. At this point, every element of the domain $X_{12}$ carries a data value from either $D_1$ or $D_2$, but only from one of them. Let us take first those points from $X_{12}$ that belong to $D_1$, and label them with the index $j$. There are $n_1 - i_1 +1 $ such points. For these points, the data and error-bars of the extrapolated data set $D_{12}$ will be taken from $D_1$: $d_{12}(x_{12}(j)) = d_1(x_1(j))$, $e_{12}(x_{12}(j)) = e_1(x_1(j))$. However, for the same points, $D_2$ has no measured value. But we need to compare the data of $D_1$ and $D_2$ at common values of $x$. So the $D_2$ data and errors can be interpolated using linear or more sophisticated interpolation methods. If the binning is fine enough, a linear interpolation between the neighbouring datapoints can be used. At this point, let us recall that in the diffractive cone, where an exponential approximation to the differential cross section is valid, the shape of the scaling function is known to be $H(x) \approx \exp(-x)$. This function is linear on a (linear, logarithmic) plot of $(x, H(x))$. In what follows, we will test both a (linear, exponential) interpolation in the $(x, H(x))$ plots (which is expected to give the best results in the diffractive cone) and a (linear, linear) interpolation, which has the least assumptions and may work better than the (linear, exponential) interpolation technique around the diffractive minimum.
These two different interpolation methods also allow us to estimate the systematic error that comes from the interpolation procedure itself. If the data points are measured densely enough in the $(x, H(x))$ plot, both methods are expected to yield similar results. We present our final results using both techniques and note that we indeed find similar results with both of these methods. Suppose that for the $j$-th point of the data set $D_{12}$ and for some index $i$ of $D_2$, $x_2(i) < x_{12}(j) < x_2(i+1)$. Then a linear interpolation between the $i$-th and $i+1$-th point of $D_2$ yields the following formula: $$d_{21}(j) = d_2(i) + (d_2(i+1) - d_2(i)) \frac{x_{12}(j) - x_2(i)}{x_2(i+1)-x_2(i)}.$$ Similarly, the errors can also be determined by linear interpolation as $$e_{21}(j) = e_2(i) + (e_2(i+1) - e_2(i)) \frac{x_{12}(j) - x_2(i)}{x_2(i+1)-x_2(i)} \,.$$ This way, one extends $D_2$ to the domain $X_{12}$, corresponding to the overlapping acceptance of the two measurements. If there is a measured value in $D_2$, we use that value and its error bar. If there is no measurement in $D_2$ precisely at that given value of $x$ that is part of the overlapping acceptance (corresponding to a value of $x$ from $D_1$), then we use the two neighbouring points from $D_2$ and a (linear) interpolation to estimate the value at this intermediate point. This method works if the binning of both data sets is sufficiently fine, so that non-linear structures are well resolved. This way, for those $j= 1, ... , n_1 - i_1 +1 $ points from $X_{12}$ that belonged to $D_1$, we have defined the data values from $D_1$ by identity and the data points from $D_2$ by linear interpolation from the neighbouring bins, so for these points both data sets are defined. A similar procedure works for the remaining points in $D_{12}$ that originate from $D_2$. There are $f_2$ such points. Let us index them with $k = 1, ... f_2$.
For these points, the data and error-bars of the extrapolated data set $D_{21}$ will be taken from $D_2$: $d_{21}(x_{12}(k)) = d_2(x_2(k))$, while the errors are given as $e_{21}(x_{12}(k)) = e_2(x_2(k))$. However, for the same points, $D_1$ has no measured value. As we need to compare the data of $D_1$ and $D_2$ at common values of $x$, for these points the $D_1$ data and errors can be interpolated using linear or more sophisticated methods based on the nearest measured points. If the binning is fine enough, a linear interpolation between the neighbouring data-points can be used. For broader bins, more sophisticated interpolation techniques may also be used, which take into account non-linear interpolations based on more than two nearby bins, for example interpolations using the Lévy series expansion techniques of Ref. [@Csorgo:2018uyp]. However, in the present manuscript such refinements are not necessary, as the (linear, linear) and the (linear, exponential) interpolations in ($x, H(x)$) give similar results. Consider now that for the $k$-th point of the data set $D_{12}$ and for some $l$-th value of $D_1$, $x_1(l) < x_{12}(k) < x_1(l+1)$. Then a linear interpolation between the $l$-th and $l+1$-th point of $D_1$ yields the following formula: $$d_{12}(k) = d_1(l) + (d_1(l+1) - d_1(l)) \frac{x_{12}(k) - x_1(l)}{x_1(l+1)-x_1(l)} \,. \label{e:interpolation}$$ Similarly, the errors can also be determined by linear interpolation as $$e_{12}(k) = e_1(l) + (e_1(l+1) - e_1(l)) \frac{x_{12}(k) - x_1(l)}{x_1(l+1)-x_1(l)} \,. \label{e:error-E}$$ This way, using the linear interpolation techniques between the neighbouring data points, we can now compare the extended $D_1$ and $D_2$ on their common kinematic range: $D_1$ was embedded and extrapolated to data points and errors denoted as $d_{12}(x_{12})$ and $e_{12}(x_{12})$, while $D_2$ was embedded and extrapolated to data points and errors denoted as $d_{21}(x_{12})$ and $e_{21}(x_{12})$, respectively.
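The projections above reduce to one elementary operation: evaluating one dataset's values and error bars at the other dataset's $x$ points. A sketch of the two interpolation variants discussed in the text (helper names are ours):

```python
import numpy as np

def interp_with_errors(x_new, x, d, e):
    """(linear, linear) variant of the interpolation formulas: for
    x[l] < x_new <= x[l+1],
      d_new = d[l] + (d[l+1] - d[l]) * (x_new - x[l]) / (x[l+1] - x[l]),
    and the same linear rule is applied to the error bars e."""
    return np.interp(x_new, x, d), np.interp(x_new, x, e)

def interp_lin_exp(x_new, x, d):
    """(linear, exponential) variant: interpolate log(d) linearly, which is
    exact where H(x) ~ exp(-x), i.e. inside the diffractive cone."""
    return np.exp(np.interp(x_new, x, np.log(d)))
```

As noted in the text, the two variants agree where the binning is fine, and comparing them gives a handle on the systematic error of the interpolation itself.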
Note that the domain of both of these extended data sets is the same $X_{12}$ domain. The index “12” indicates that $D_1$ was extended to $X_{12}$, while the index “21” indicates that $D_2$ was extended to the domain $X_{12}$. Now, we are done with the preparations to compare the two data sets, using the following $\chi^2$ definition: $$\chi^2 \equiv \chi^2_A \, = \, \sum_{j=1}^{n_{12}} \frac{(d_{12}(j) - d_{21}(j))^2}{e_{12}^2(j) + e_{21}^2(j)}. \label{e:chi2-data}$$ In this comparison, there are no free parameters, so the number of degrees of freedom is NDF $= n_{12} = n_1+f_2-i_1+1$, the number of data points in the unified data sample. Based on the above Eq. (\[e:chi2-data\]), we get the value of $\chi^2$ and NDF, which can be used to evaluate the $p$-value, or confidence level (CL), of the hypothesis that the two data sets represent the same $H(x)$ scaling function. If CL satisfies the criterion CL $ > 0.1\%$, the two data sets do not differ significantly. In the opposite case, if CL $ < 0.1\%$, the hypothesis that the two different measurements correspond to the same a priori $H(x)$ scaling function can be rejected. The advantage of the above $\chi^2$ definition of Eq. (\[e:chi2-data\]) is that it is straightforward to implement; however, it has the drawback that it does not specify how to deal with the correlated, $t$- or $x = -t B$-dependent errors, nor with the horizontal errors on $x$. The $t$ measurements at $\sqrt{s}=7$ TeV are published with their horizontal errors according to Table 5 of Ref. [@Antchev:2013gaa]. These errors should be combined with the published errors on the nuclear slope parameter $B$ to get a horizontal error on $x$, indicated as $\delta x$. Such a horizontal error has to be taken into account in the final calculations of the significance of the Odderon observation.
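The parameter-free $\chi^2$ of Eq. (\[e:chi2-data\]) and the corresponding CL are straightforward to compute. A minimal sketch (our own stdlib-only implementation; the series for the regularized incomplete gamma function replaces what a library such as SciPy would provide as `chi2.sf`):

```python
import math

def _reg_lower_gamma(a, x, tol=1e-12):
    """Series expansion of the regularized lower incomplete gamma P(a, x)."""
    if x <= 0.0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while term > tol * total:
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def chi2_confidence_level(chi2, ndf):
    """CL (p-value): probability that a chi^2_ndf variable exceeds chi2."""
    return 1.0 - _reg_lower_gamma(0.5 * ndf, 0.5 * chi2)

def chi2_simple(d12, e12, d21, e21):
    """Parameter-free chi^2 of Eq. (chi2-data) on the common domain X12;
    the errors of the two projected datasets are added in quadrature, and
    NDF equals the number of points, since nothing is fitted."""
    chi2 = sum((a - b) ** 2 / (ea ** 2 + eb ** 2)
               for a, ea, b, eb in zip(d12, e12, d21, e21))
    ndf = len(d12)
    return chi2, ndf, chi2_confidence_level(chi2, ndf)
```

With the criterion of the text, a returned CL below $0.1\%$ rejects the hypothesis that the two data sets represent the same $H(x)$.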
Regarding the correlations among the measured values and the measured errors, the best method would be to use the full covariance matrix of the measured differential cross section data. However, this covariance matrix is typically unknown or unpublished, with the exception of the $\sqrt{s} = 13$ TeV elastic $pp$ measurement by TOTEM [@Antchev:2018edk]. Given that this TOTEM measurement of $d\sigma/dt$ at 13 TeV already indicates the presence of small scaling violating terms in $H(x)$ according to Fig. \[fig:scaling-LHC\], this 13 TeV dataset cannot be used directly in our Odderon analysis, which is based on the $s$-independence of the scaling function of the differential elastic $pp$ cross section, $H(x) \ne H(x,s)$, in a limited range that includes $\sqrt{s} = $ 2.76 and 7 TeV, but does not extend up to 13 TeV. However, we can utilize this TOTEM measurement of $d\sigma/dt$ at 13 TeV to test the method of diagonalization of the covariance matrix that we apply in our final analysis of the Odderon significance. Our analysis of the covariance matrix relies on a method developed by the PHENIX Collaboration and described in detail in Appendix A of Ref. [@Adare:2008cg]. This method is based on the following separation of the various types of experimental uncertainties: Type A) errors are point-to-point uncorrelated systematic uncertainties. Type B) errors are point-to-point varying but correlated systematic uncertainties, for which the point-to-point correlation is 100 %, as the uncorrelated part is separated and added to the type A) errors in quadrature. Type C) systematic errors are point-independent, overall systematic uncertainties, which scale all the data points up and down by exactly the same, point-to-point independent factor. Type D) errors are point-to-point varying statistical errors. These type D) errors are uncorrelated, hence they can be added to the type A) errors in quadrature.
In this paper, where we apply this method to compare two different $H(x)$ scaling functions, we also consider a fifth kind of error, type E), which corresponds to the theoretical uncertainty of interpolating one of the (projected) data sets to the measured $x$ values of the data set it is compared with. This type E) error is identified with the value calculated from the linear interpolation described above, applied similarly to each of the type A), B), C) and D) errors as in Eq. (\[e:error-E\]). Type D) errors are added in quadrature to type A) errors, and in what follows we index these errors with the index of the data point as well as with the subscripts $a$, $b$ and $c$, respectively. Using this notation, Eq. (A16) of Ref. [@Adare:2008cg] yields the following $\chi^2$ definition, suitable for the projection of dataset $D_2$ to $D_1$, or $2 \rightarrow 1$: $$\begin{aligned} \tilde{\chi}^2 (2 \rightarrow 1) & = &\sum_{j=i_1}^{f_1} \frac{(d_{1}(j)- d_{21}(j) +\epsilon_{b,1} e_{b}(j) +\epsilon_{c,1} d_{1}(j) e_{c} )^2} {\tilde e_{a,1}^2(j)} \nonumber \\ \null & \null & \qquad + \epsilon_{b,1}^2 +\epsilon_{c,1}^2 \, , \label{e:chi2-final-without-horizontal-errors}\end{aligned}$$ where $\tilde e_{a,1}(j)$ is the type A) uncertainty of the data point $j$ of the data set $D_{1}$, scaled by a multiplicative factor such that the fractional uncertainty is unchanged under multiplication by a point-to-point varying factor: $$\tilde{e}_{a,1}(j)= e_{a,1}(j) \left( \frac{d_{1}(j) +\epsilon_{b,1} e_{b}(j) + \epsilon_{c,1} d_{1}(j) e_{c}}{d_{1}(j)}\right) \,. \label{eq:tildesigma}$$ In these sums, there are $f_1 - i_1 + 1$ data points in the overlapping acceptance from dataset $D_1$; with the two fitted parameters $\epsilon_{b,1}$ and $\epsilon_{c,1}$, the number of degrees of freedom is NDF$_1 = f_1 - i_1 - 1$. A similar sum describes the one-way projection $1 \rightarrow 2$, but with NDF$_2 = f_2$ points in the common acceptance.
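The nuisance parameters $\epsilon_{b,1}$ (and $\epsilon_{c,1}$) in Eq. (\[e:chi2-final-without-horizontal-errors\]) are profiled out by minimizing the $\chi^2$ with respect to them. A sketch of this step (our own hypothetical helpers; for brevity $\epsilon_{c,1}$ is fixed to zero, and the one-dimensional minimization is done by a simple ternary search rather than a fitter such as MINUIT):

```python
def phenix_chi2(d1, d21, e_a, e_b, eps_b):
    """PHENIX-style chi^2 at a fixed value of the correlated-shift
    parameter eps_b, with the overall-normalization term switched off
    (eps_c = 0).  The type A) errors are rescaled so that their
    fractional size is preserved under the correlated shift."""
    chi2 = eps_b ** 2  # penalty term for the correlated shift
    for dj, pj, ea, eb in zip(d1, d21, e_a, e_b):
        shifted = dj + eps_b * eb
        ea_tilde = ea * shifted / dj  # fractional type A) error preserved
        chi2 += (shifted - pj) ** 2 / ea_tilde ** 2
    return chi2

def minimize_phenix_chi2(d1, d21, e_a, e_b, lo=-5.0, hi=5.0, iters=200):
    """Profile out eps_b by ternary search; the chi^2 is smooth and
    effectively parabolic in eps_b near its minimum."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if phenix_chi2(d1, d21, e_a, e_b, m1) < phenix_chi2(d1, d21, e_a, e_b, m2):
            hi = m2
        else:
            lo = m1
    eps = 0.5 * (lo + hi)
    return eps, phenix_chi2(d1, d21, e_a, e_b, eps)
```

A pure one-sigma correlated shift of the data ($d_{21} = d_1 + e_b$, with negligible type A) errors) is then absorbed by $\epsilon_b \approx 1$ at the cost of one unit of $\chi^2$, as the construction intends.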
For the two-way projections, not only the numbers of degrees of freedom add up, ${\rm NDF}_{12} = {\rm NDF}_1+{\rm NDF}_2$, but also the $\chi^2$ values are added, as $\chi^2 (1 \leftrightarrow 2) = \chi^2(1 \rightarrow 2) + \chi^2( 2 \rightarrow 1)$. Let us note at this point that $H(x)$ is a scaling function that is proportional to the differential cross section normalized by the integrated cross section. In this ratio, the overall, type C) point-independent normalization errors multiply both the numerator and the denominator, hence these type C) errors cancel out in $H(x)$. Given that these type C) errors are typically rather large, for example 14.4 % for the D0 measurement of Ref. [@Abazov:2012qb], it is an important advantage in the significance computation that we use the normalized scaling function $H(x)$. So in what follows, we set $\epsilon_{c,1} = 0$ and rewrite the equation for the $\chi^2$ definition accordingly. This effect increases the significance of an $H(x)$-scaling test. The price we have to pay for this advantage is that we have to take into account the horizontal errors on $x$, in order not to overestimate the significance of our $\chi^2$ test. In this step, we follow the propagation of the horizontal error to the $\chi^2$ as utilized by the so-called effective variance method of the CERN data analysis programme ROOT.
This yields the final $\chi^2$ definition that we have utilized in our significance analysis for the case of symmetric errors in $x$: $$\begin{aligned} \tilde{\chi}^2 (2 \rightarrow 1) & = &\sum_{j=i_1}^{f_1} \frac{(d_{1}(j)- d_{21}(j) +\epsilon_{b,1} e_{b}(j))^2 }{\tilde e_{a,1}^2(j) + (\delta x_{1}(j) d^{\prime}_{1}(j))^2} + \epsilon_{b,1}^2 \,, \label{e:chi2-final}\end{aligned}$$ where $\delta x_{1}(j)$ is the (symmetric) error of $x$ at the $j$-th data point of the data set $D_{1}$, and $d^{\prime}_{1}(j)$ is the numerically evaluated derivative of the extrapolated value of the projected data point, obtained with the help of a linear interpolation using Eq. (\[e:interpolation\]). Such a definition is valid when the type B) errors are known and are symmetric for the data set $D_1$ and the errors on $x$ are also symmetric. When the data set $D_1$ corresponds to the D0 measurement of elastic $p\bar p$ collisions, Ref. [@Abazov:2012qb], we have to take into account that D0 did not publish the separated statistical and $|t|$-dependent systematic errors, but decided to publish their values added in quadrature. So we use these errors as type A) errors; with this method, we underestimate the significance of the results, as we neglect the correlations among the errors of the data points in the D0 dataset. TOTEM published the $|t|$-dependent statistical type D) errors and the $|t|$-dependent systematic errors both for the 2.76 TeV and 7 TeV measurements of the differential cross sections [@Antchev:2011zz; @Antchev:2013gaa; @Antchev:2018rec], with the note that the $|t|$-dependent systematic errors are almost fully correlated. In these works, TOTEM did not separate the point-to-point varying uncorrelated part of the $|t|$-dependent systematic errors. We thus estimate the type A) errors by the statistical errors of these TOTEM measurements; we thereby slightly underestimate them, and hence overestimate the $\chi^2$ and the difference between the compared data sets.
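The effective-variance treatment of the horizontal errors amounts to enlarging each denominator by $(\delta x \, d^{\prime})^2$, with the derivative taken from the local slope of the linear interpolation. A sketch of this step (a hypothetical helper of ours, not ROOT code):

```python
import bisect

def effective_variances(x_new, x, d, e_vert, dx_new):
    """ROOT-style 'effective variance': for each projected point, augment
    the vertical variance with (dx * d')^2, where d' is the local slope
    of the linear interpolation through the points (x, d).

    x_new  : x positions where the comparison is made
    x, d   : interpolated dataset (sorted x, values d)
    e_vert : vertical errors at x_new
    dx_new : horizontal (symmetric) errors on x at x_new
    """
    variances = []
    for xn, ev, dxn in zip(x_new, e_vert, dx_new):
        # index of the interpolation segment containing xn (clamped at ends)
        l = max(0, min(len(x) - 2, bisect.bisect_right(x, xn) - 1))
        slope = (d[l + 1] - d[l]) / (x[l + 1] - x[l])
        variances.append(ev ** 2 + (dxn * slope) ** 2)
    return variances
```

Because the slope is steep in the exponential cone, neglecting this term would noticeably overestimate the significance of any $\chi^2$ test there.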
Given that they are almost fully correlated, we estimate the type B) errors by the point-to-point varying, almost fully correlated systematic errors published by TOTEM. We have tested this scheme by evaluating the $\chi^2$ from a full covariance matrix fit and from the PHENIX method of diagonalizing the covariance matrix at $\sqrt{s} = 13$ TeV, using the Lévy expansion method of Ref. [@Csorgo:2018uyp]. We find that the fit with the full covariance matrix results in the same minimum within one standard deviation of the fit parameters, hence the same significance, as the fit with the PHENIX method of Appendix A of Ref. [@Adare:2008cg]. We have thus validated the PHENIX method of Ref. [@Adare:2008cg] for the analysis of the differential cross section at $\sqrt{s} = 13 $ TeV, together with the effective variance method of the ROOT package. Now, we can employ our final $\chi^2$ definition of Eq. (\[e:chi2-final\]) to estimate the significance of the Odderon signal in the comparison of the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ collisions. This validation is important, as the full covariance matrix of the $\sqrt{s} = 2.76 $ TeV and $7$ TeV measurements by TOTEM is not published, but the PHENIX method complemented with the ROOT method of effective variances can be used to effectively diagonalize the covariance matrix and to get similar results within the errors of the analysis. Extrapolations {#s:extrapolations} ============== In this section, we discuss how to extrapolate the data points to energies where measurements are missing. As we have found, for example, in the ISR energy range of $\sqrt{s} = 23.5$ – $62.5 $ GeV the $H(x)$ scaling function is independent of $\sqrt{s}$ within errors. We show how to extrapolate data points to unmeasured energies, under the condition that in a given energy range $H(x)$ is independent of the collision energy, $H(x) \neq H(x,s) $. In general, such a feature has to be established or cross-checked experimentally.
This case is important, given that we have shown before, for example in Fig. \[fig:scaling-LHC-7-vs-2.76-Hx\], that $H(x)$ for $pp$ collisions stays energy-independent within errors between the LHC energies of $2.76 $ TeV $\le \sqrt{s}\le $ $7$ TeV. Furthermore, we have already shown that for $p\overline{p}$ collisions, $H(x) = H(x,s)$ in the energy range of 0.546 $\le \sqrt{s} \le 1.96$ TeV, as indicated in Fig. \[fig:scaling-antiprotons\]. Let us denote two different center-of-mass energies, between which $H(x)={\rm const}(\sqrt{s})$ within the experimental errors, as $\sqrt{s_1}$ and $\sqrt{s_2}$. Analogously, we denote the various observables as $B_i \equiv B(s_i)$, $\sigma_i \equiv \sigma_{el,i}\equiv \sigma_{el}(s_i)$, $x_i \equiv -t_i B_i$. The energy independence of the $H(x)$ scaling function can formally be written as $$H_1(x_1) = H_2(x_2) = H(x) \qquad \mbox{\rm if}\quad x_1 = x_2 \, .$$ This simple statement has tremendous experimental implications. The equality $x_1 = x_2$ means that the scaling function is the same if at the center-of-mass energy $\sqrt{s_1}$ it is measured at $t_1$ and at the energy $\sqrt{s_2}$ it is measured at $t_2$, such that $$t_1 B_1 = t_2 B_2 \qquad \mbox{\rm if}\quad x_1 = x_2 \, .$$ The equality $H_1(x_1) = H_2(x_2) = H(x)$ is expressed as $$\frac{1}{B_1 \sigma_1}\left. \frac{d\sigma}{dt} \right\rvert_{t_1 = -x/ B_1} = \frac{1}{B_2 \sigma_2}\left. \frac{d\sigma}{dt} \right\rvert_{t_2 = -x/ B_2} \,.$$ Putting these equations together, this implies that the experimental data can be scaled to other energies, in an energy range where $H(x)$ is found to be independent of $\sqrt{s}$, as follows: $$\left. \frac{d\sigma}{dt} \right\rvert_{t_1} = \frac{B_1 \sigma_1}{B_2 \sigma_2}\left. \frac{d\sigma}{dt} \right\rvert_{t_2 = t_1 B_1 / B_2} \, .
\label{e:dsdt-rescaling}$$ With the help of this equation, the data points of the differential cross sections can be scaled to various different colliding energies, if in a certain energy region the $H(x)$ scaling holds within the experimental errors. In other words, the differential cross section can be rescaled from $\sqrt{s_1}$ to $\sqrt{s_2}$ by rescaling the $|t|$-variable using the ratio $B_1/B_2=B(s_1)/B(s_2)$, and by multiplying the cross section with the ratio $\frac{B_1 \sigma_1}{B_2 \sigma_2}$. Results {#s:results} ======= In this section, we present our results and close the energy gap, as much as possible without a direct measurement, between the TOTEM data on elastic $pp$ collisions at $\sqrt{s} = 2.76$ and $7.0$ TeV and the D0 data on elastic $p\bar p$ collisions at $\sqrt{s} = 1.96 $ TeV. This section is based on the application of Eq. (\[e:dsdt-rescaling\]) in this energy range. After the rescaling procedure, the resulting data set at the new energy is compared with the measured data quantitatively, with the help of Eq. (\[e:chi2-data\]). We have used the rescaling equation, Eq. (\[e:dsdt-rescaling\]), first to test and to cross-check whether the rescaling of the $\sqrt{s} = 23.5 $ GeV ISR data to other ISR energies works or not. The left panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\] indicates that such a rescaling of the differential cross sections from the lowest ISR energy of $\sqrt{s} = 23.5$ GeV to the highest ISR energy of $62.5$ GeV indeed works well. The level of agreement of the rescaled 23.5 GeV $pp$ data with the measured 62.5 GeV $pp$ data has been evaluated with the help of Eq. (\[e:chi2-data\]). We found an agreement with a $\chi^2/{\rm NDF} = 111/100$, corresponding to a CL = 21.3 % and a difference at the level of 1.25$\sigma$ only.
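The rescaling of Eq. (\[e:dsdt-rescaling\]) used for these cross-checks can be sketched as follows (a hypothetical helper of ours; the measured $d\sigma/dt$ at the source energy is represented by a callable, in practice an interpolation of the data table):

```python
import math

def rescale_dsigma_dt(t1, B1, sigma1, B2, sigma2, dsigma_dt_2):
    """Eq. (e:dsdt-rescaling): rescale a differential cross section measured
    at energy s2 to energy s1, valid where H(x) is energy independent:
      dsigma/dt|_{t1, s1} = (B1*sigma1)/(B2*sigma2) * dsigma/dt|_{t2, s2},
    with t2 = t1 * B1 / B2 (equal scaling variable x = -t*B).

    B1, sigma1 : nuclear slope and elastic cross section at the target energy
    B2, sigma2 : the same observables at the source energy
    dsigma_dt_2: callable returning dsigma/dt at the source energy s2
    """
    t2 = t1 * B1 / B2
    return (B1 * sigma1) / (B2 * sigma2) * dsigma_dt_2(t2)
```

As a consistency check, a purely exponential cone $d\sigma/dt = B_2\sigma_2\exp(B_2 t)$ at the source energy is mapped exactly onto $B_1\sigma_1\exp(B_1 t_1)$, i.e. the cone at the target energy, as it must be.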
This result demonstrates that our rescaling method can also be used to get the differential cross sections at other energies, provided that the nuclear slope and the elastic cross sections are known at the new energy as well as at the energy from which we start the rescaling procedure. Subsequently, one can also rescale the TOTEM data at $\sqrt{s} = 2.76$ or $7$ TeV to $1.96$ TeV, given that $H(x)$ is (within errors) energy independent in the range of $2.76 - 7 $ TeV, corresponding to nearly a factor of 2.5 change in $\sqrt{s}$, while the change in $\sqrt{s}$ from $1.96$ to $2.76$ TeV is only a factor of 1.4. The right panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\] indicates that rescaling the differential elastic $pp$ cross section from $\sqrt{s} = 2.76$ to $1.96$ TeV also gives valuable results. We have evaluated the confidence level of the comparison of the rescaled 2.76 TeV $pp$ data with the 1.96 TeV $p\bar p$ data with the help of Eq. (\[e:chi2-data\]). As was already mentioned above, we have found a surprising agreement with a $\chi^2/{\rm NDF} = 18.1/11$, corresponding to a CL = 7.93 %, and a difference at the level of 1.75 $\sigma$ only. Another important result is illustrated in Fig. \[fig:rescaling-from-7-to-1.96TeV\]. This comparison indicates a difference between the elastic $pp$ differential cross-section [@Antchev:2011zz; @Antchev:2013gaa] rescaled from $\sqrt{s} = $ 7 TeV to $\sqrt{s} = $ 1.96 TeV and the corresponding $p\bar p$ data measured at $\sqrt{s}= 1.96 $ TeV [@Abazov:2012qb]. To obtain a first estimate, this difference is quantified with the help of Eq. (\[e:chi2-data\]), yielding a CL of $5.13\cdot 10^{-7}$ %. As this method adds the statistical and systematic errors in quadrature, it underestimates the actual significance of the difference between the two data sets.
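The first-estimate comparison of two binned data sets, with statistical and $|t|$-dependent systematic errors added in quadrature, can be sketched as follows. This is an illustrative stand-in for Eq. (\[e:chi2-data\]) (names are ours); point-to-point correlations are deliberately ignored here, which is exactly why this estimate underestimates the actual significance, as noted above.

```python
import numpy as np

def chi2_quadrature(y1, stat1, syst1, y2, stat2, syst2):
    """Chi^2 between two binned data sets, adding the statistical and
    |t|-dependent systematic errors of both sets in quadrature,
    bin by bin, with correlations between bins ignored."""
    var = (np.asarray(stat1)**2 + np.asarray(syst1)**2 +
           np.asarray(stat2)**2 + np.asarray(syst2)**2)
    return float(np.sum((np.asarray(y1) - np.asarray(y2))**2 / var))
```

Treating almost fully correlated systematics as if they fluctuated independently inflates the effective per-bin error, so this quadrature χ² is a conservative lower bound on the true incompatibility of the two data sets.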
Although this estimate already provides a significant, greater than 5$\sigma$ effect for the Odderon observation, corresponding to a 5.84$\sigma$ difference between the $pp$ dataset and the 1.96 TeV $p\overline p$ dataset, the evaluation of this significance does not yet take into account the rather large overall normalization error of 14.4 % that has been published by the D0 collaboration. The comparison of the differential cross-sections is sensitive to such type C errors, hence this effect has to be taken into account in the final significance analysis, or the significance has to be finalized using the $H(x) $ scaling function, where the type C errors of the absolute normalization cancel. It can be seen in Fig. \[fig:rescaling-from-7-to-1.96TeV\] that in the swing region, before the dip, the rescaled $pp$ differential cross section differs significantly from that of $p\bar p$ collisions. This $\chi^2$ analysis also took into account the horizontal errors of the TOTEM data discussed above. Although the estimates of statistical significances given in this Section are based on a $\chi^2$ test that includes the $|t|$-dependent statistical errors and the $|t|$-dependent systematic errors added in quadrature, the values of $\chi^2/$NDF and significances given above can still only be considered estimates. Indeed, although the $|t|$-dependent systematic errors on these $\sqrt{s} = 7$ TeV data are known to be almost fully correlated, the covariance matrix from the TOTEM measurement at $\sqrt{s} = 7$ TeV is not publicly available at the time of closing this manuscript. It is clear that the $\chi^2$ is expected to increase if the covariance matrix is taken into account, and this effect would increase the disagreement between the measured $p\bar p$ and the extrapolated $pp$ differential cross sections at $\sqrt{s} = 1.96$ TeV.
Note that the above estimate of significances does not yet take into account the overall correlated, $|t|$-independent vertical uncertainty in the differential cross section measurements. This uncertainty shifts all the data points up or down by a common, $|t|$-independent factor and may also decrease the significance of the difference between the measured $p\bar{p}$ and the extrapolated $pp$ cross sections at $\sqrt{s} = 1.96$ TeV. This indicates that we have to make the proposed rescaling method as conservative as possible, taking into account the statistical and $|t|$-dependent correlated systematic errors, as well as the $|t|$-independent correlated systematic errors. Such an analysis is presented in the next section, where we quantify the differences between the scaling functions $H(x)$ of elastic $pp$ and $p\bar p$ collisions using the fact that $H(x)$ is free of $|t|$-independent normalisation errors. A significant Odderon signal {#s:Odderon-significance} ============================ In this section, we summarize our discovery of an at least 6.55$\sigma $ Odderon signal, which we demonstrate below by comparing the $H(x)$ scaling functions of $pp$ and $p\bar p$ collisions. We have found a significant Odderon signal by comparing the $H(x)$ scaling functions of the differential cross section of elastic $pp$ collisions with $\sqrt{s} = 7$ TeV to that of $p\bar p$ collisions with $\sqrt{s} = 1.96$ TeV, as indicated in Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\]. The comparison is made in both possible ways, by comparing the $pp$ data to the $p\bar p$ data, and vice versa. The difference between these two datasets corresponds to at least a $\chi^2/{\rm NDF} = 84.6/17$, giving rise to a CL of $5.78 \times 10^{-9}$ % and to a 6.55$\sigma$ significance. The overall, $|t|$-independent normalization error of 14.4 % on the D0 data set cancels from this $H(x)$, and does not propagate to our conclusions.
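The $\chi^2/{\rm NDF}$, CL and $\sigma$ triples quoted throughout this section are related by the standard two-sided Gaussian convention: CL is the $\chi^2$ tail probability, and the significance is the Gaussian quantile of CL/2. A minimal sketch, assuming SciPy is available (the function name is ours):

```python
from scipy.stats import chi2, norm

def significance(chi2_value, ndf):
    """Convert a chi^2 value and its NDF into a confidence level (the
    chi^2 tail probability) and a two-sided Gaussian significance."""
    cl = chi2.sf(chi2_value, ndf)      # survival function = p-value
    nsigma = norm.isf(cl / 2.0)        # two-sided convention
    return cl, nsigma
```

For example, `significance(18.1, 11)` reproduces the CL $\approx 7.9$ % and $\approx 1.75\sigma$ quoted for the 2.76 TeV vs 1.96 TeV comparison, while `significance(84.6, 17)` reproduces the $\approx 6.55\sigma$ Odderon significance.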
These results are obtained for the $\sigma_{\rm el} = 17.6 \pm 1.1$ mb value of the elastic $p\bar p$ cross section at $\sqrt{s} = 1.96 $ TeV, and for the linear-exponential interpolation in $(x, H(x))$. With this method of interpolation, the nearest points were connected with a linear-exponential line, which corresponds to a straight line on a linear-logarithmic plot in $(x, H(x))$. For 7 TeV $pp$ elastic collisions, we have used the published values of the differential cross section $\frac{d\sigma}{dt}$, of the nuclear slope parameter $B$ and of the measured elastic cross section $\sigma_{\rm el}$. For the elastic cross section of $p\bar p$ collisions at $\sqrt{s} = 1.96$ TeV, we have adopted $\sigma_{\rm el} = 17.6 \pm 1.1$ mb, the value that corresponds to the trend published by the Particle Data Group, see Fig. 51.6, bottom panel, yellow line of Ref. [@Tanabashi:2018oca]. We have systematically checked the effect of variations in our method by switching from the linear-exponential interpolation in $(x, H(x))$ to a linear-linear one, and by changing the elastic $p\bar p$ cross section to $\sigma_{\rm el} = 20.2 \pm 1.4$ mb, obtained by numerically integrating the differential cross section with an exponential approximation at very low $|t|$. This is an unusually large value, but it equals, within the quoted 14.4 % systematic error, the $\sigma_{\rm el} = 17.6 \pm 1.1$ mb value. The input values of the nuclear slope parameter $B$ and the elastic cross-section $\sigma_{\rm el}$ are summarized in Table \[table:B-sigma-summary\].

  -------------- ------------------- ------------------- --------------------------------------------
  Energy         $\sigma_{el}$       $B$                 Reference
  (GeV)          (mb)                (GeV$^{-2}$)
  1960           17.6 $\pm$ 1.1                          Fig. 51.6 of Ref. [@Tanabashi:2018oca]
  ($p\bar{p}$)   20.2 $\pm$ 1.4                          from low $-t$ fit to data [@Abazov:2012qb]
                                     16.86 $\pm$ 0.2     [@Abazov:2012qb]
  2760           21.8 $\pm$ 1.4                          [@Nemes:2017gut]
  ($pp$)                             17.1 $\pm$ 0.3      [@Antchev:2018rec]
  7000           25.43 $\pm$ 1.02                        [@Antchev:2013iaa]
  ($pp$)                             19.89 $\pm$ 0.272   [@Antchev:2013gaa]
  -------------- ------------------- ------------------- --------------------------------------------

  : Summary table of the elastic cross-sections $\sigma_{\rm el}$, the nuclear slope parameters $B$, and their sources or references. []{data-label="table:B-sigma-summary"}

As part of our systematic studies, we have also changed the direction of the projection. The results are summarized in Table \[table:7-to-1.96-TeV-one-way-comparison\]. They indicate that the final version of Fig. \[fig:rescaling-from-7-to-1.96TeV\], shown as the top left panel of Fig. \[fig:rescaling-from-7-to-1.96TeV\] and evaluated with the help of our final $\chi^2$ definition of Eq. (\[e:chi2-final\]), corresponds to the most conservative case of Odderon observation based on the $\sqrt{s} = 7$ TeV TOTEM and the $\sqrt{s} = 1.96$ TeV D0 data sets. This panel indicates that the Odderon signal is observed in this comparison with at least a 6.55$\sigma$ significance, corresponding to a statistically [*significant* ]{} Odderon observation. The detailed figures that show the $\chi^2(\epsilon_b)$ functions for each of these cases are summarized in the left and right panels of Fig. \[fig:chi2-vs-epsilon-b-for-1960-vs-7000-GeV.png\] for the comparison of the 7 TeV TOTEM data set with the 1.96 TeV D0 data set. Each plot indicates a clear, nearly quadratic minimum. The values of $\chi^2$ at the minima are summarized in Table \[table:7-to-1.96-TeV-one-way-comparison\], together with other characteristics of significance, such as the confidence level and the significance in terms of standard deviations.
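The interpolation variants used in these systematic studies are simple to state precisely. Below is an illustrative sketch (our own naming, not the analysis code of this work) of how $H(x)$ is formed from a measured $d\sigma/dt$, namely $H(x) = \frac{1}{B\sigma_{\rm el}}\,\frac{d\sigma}{dt}$ at $x = B|t|$, and how the two interpolation options differ:

```python
import numpy as np

def h_scaling(t, dsdt, B, sigma_el):
    """Form the H(x) scaling function: x = B*|t|, H = (dsigma/dt)/(B*sigma_el)."""
    t = np.asarray(t, dtype=float)
    return B * np.abs(t), np.asarray(dsdt, dtype=float) / (B * sigma_el)

def interp_lin_exp(x_new, x, h):
    """Linear-exponential interpolation: straight lines in (x, ln H),
    i.e. exponential segments between neighbouring points."""
    return np.exp(np.interp(x_new, x, np.log(h)))

def interp_lin_lin(x_new, x, h):
    """Linear-linear interpolation, used as a systematic cross-check."""
    return np.interp(x_new, x, h)
```

For a purely exponential cone, $H(x) = e^{-x}$, the linear-exponential interpolation is exact at any intermediate $x$, while the linear-linear variant slightly overshoots between the nodes; this motivates linear-exponential as the default choice in the cone region.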
Similarly, the $\chi^2(\epsilon_b)$ functions for the comparison of the 2.76 TeV TOTEM data set with the 1.96 TeV D0 data set are summarized in Fig. \[fig:chi2-vs-epsilon-b-for-1960-vs-2760-GeV.png\]. The values of $\chi^2$ at the minima are given in Table \[table:2.76-to-1.96-TeV-one-way-comparison\], together with other relevant characteristics. ![image](figs/chi2-vs-eps-B-7000_1960_4curve_low.png){width="48.00000%"} ![image](figs/chi2-vs-eps-B-7000_1960_4curve_high.png){width="48.00000%"} ![image](figs/Final-significance-from-1960-to-7000-GeV-one-way-comparison-6p55-sigma.JPG){width="95.00000%"} As summarized in Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\], a significant Odderon signal is found in the comparison of the $H(x)$ scaling functions of the differential elastic $pp$ (at $\sqrt{s} = 7.0$ TeV) vs $p\bar p$ ($\sqrt{s} = 1.96$ TeV) cross sections. The horizontal error bars are indicated by a properly scaled horizontal line ($-$) at the data point. The statistical (type A, point-to-point fluctuating) errors are indicated by the size of the vertical error bars ($|$), while shaded boxes indicate the size of the (asymmetric) type B (point-to-point varying, correlated) systematic errors. The overall normalization errors ($|t|$-independent, type C errors) cancel from the $H(x)$ scaling functions since they multiply both the numerator and the denominator of $H(x)$ in the same way. The correlation coefficient of the $|t|$-dependent systematic errors, $\epsilon_b$, is optimized to minimize the $\chi^2$ based on Eq. (\[e:chi2-final\]), and the values indicated in Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\] correspond to the minimum of the $\chi^2(\epsilon_b)$. These $\chi^2$ values, as well as the numbers of degrees of freedom (NDFs) and the corresponding confidence levels (CLs) are indicated on both panels of Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\], for both projections. The $\chi^2(\epsilon_b)$ functions are summarized in Fig.
\[fig:chi2-vs-epsilon-b-for-1960-vs-7000-GeV.png\]. The 1.96 TeV $\rightarrow $ 7 TeV projection has a statistical significance of 6.55$\sigma$ for an Odderon signal, corresponding to a $\chi^2/{\rm NDF} = 84.6 / 17$ and CL = $5.78 \times 10^{-9}$ %. Thus the probability of Odderon observation in this analysis is $P = 1-{\rm CL} = 0.9999999999422$. Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\] summarizes the results of our systematic studies in four different panels, described as follows. The top-left panel of this figure uses a linear-exponential interpolation in the $(x, H(x))$ plane and uses the value of 17.6 $\pm$ 1.1 mb for the elastic $p\bar p$ cross section at $\sqrt{s} = 1.96$ TeV. This case gives the lowest (6.55$\sigma$) significance for the Odderon observation among the possible cases that we have considered. The top-right panel is similar, but for a linear-linear interpolation in $(x, H(x))$. The bottom-left panel is similar to the top-left panel, but now using 20.2 $\pm$ 1.4 mb for the elastic $p\bar p$ cross section at $\sqrt{s} = 1.96 $ TeV, still with a linear-exponential interpolation in $(x, H(x))$. The bottom-right panel is similar to the bottom-left panel, but using a linear-linear interpolation method. ![image](figs/final_42_7000_1960_17mb_exp.pdf){width="49.00000%"} ![image](figs/final_43_7000_1960_17mb_lin.pdf){width="49.00000%"} ![image](figs/final_44_7000_1960_20mb_exp.pdf){width="49.00000%"} ![image](figs/final_45_7000_1960_20mb_lin.pdf){width="49.00000%"} ![image](figs/chi2-vs-epsilon-b-for-1960-vs-2760-TeV.png){width="100.00000%"} The results of the scaling studies for a comparison of elastic $pp$ collisions at $\sqrt{s} = 2.76 $ TeV, measured by the TOTEM experiment at the LHC [@Antchev:2018rec], to that of $p\bar p$ collisions at $\sqrt{s} = 1.96 $ TeV, measured by D0 at the Tevatron [@Abazov:2012qb], are summarized in Fig. \[fig:rescaling-from-2.76-to-1.96TeV-and-back-17mb-lin-exp\].
The top-left panel uses $\sigma_{\rm el} = 17.6 \pm 1.1$ mb and a linear-exponential interpolation method in $(x, H(x))$. The top-right panel is the same as the top-left panel, but for a linear-linear interpolation in $(x, H(x))$. The bottom-left panel is nearly the same as the top-left panel, but for $\sigma_{\rm el} = 20.2 \pm 1.4$ mb. The bottom-right panel is the same as the bottom-left panel, but for a linear-linear interpolation in $(x, H(x))$. None of these comparisons shows a significant difference between the $H(x)$ scaling function of elastic $pp$ collisions at $\sqrt{s} = 2.76 $ TeV and that of $p\bar p$ collisions at $\sqrt{s} = 1.96 $ TeV. It seems that the main reason for this lack of significance is the acceptance limitation of the TOTEM dataset at $\sqrt{s} = 2.76$ TeV, which extends up to $x = - t B \approx 13$, in contrast to the acceptance of the 7 TeV TOTEM measurement, which extends up to $x = - tB \approx 20$. We have cross-checked this by limiting the 7 TeV data set to the same acceptance region of $4.4 < -Bt < 12.7$ as that of the 2.76 TeV data set. This artificial acceptance limitation has resulted in a profound loss of significance, down to a $\chi^2/{\rm NDF} = 25.7 / 11$, which corresponds to a CL = 0.71% and to a deviation at the 2.69 $\sigma$ level only. This result indicates that if we limit the acceptance of the 7 TeV TOTEM measurement to that of the 2.76 TeV TOTEM measurement, the significance of the Odderon observation decreases well below the 5$\sigma$ discovery threshold.
![image](figs/final_46_2760_1960_17mb_exp.pdf){width="49.00000%"} ![image](figs/final_47_2760_1960_17mb_lin.pdf){width="49.00000%"} ![image](figs/final_48_2760_1960_20mb_exp.pdf){width="49.00000%"} ![image](figs/final_49_2760_1960_20mb_lin.pdf){width="49.00000%"} ![image](figs/Final-significance-from-1960-to-2760-GeV-one-way-comparisons-0p03-sigma.JPG){width="95.00000%"} ![image](figs/Final-significance-from-1960-to-7000-two-way-comparison-more-than-8-sigma-effect.JPG){width="95.00000%"} ![image](figs/Final-significance-from-1960-to-2760-GeV-two-way-comparisons-4p39-sigma.JPG){width="95.00000%"} A summary of cross-checks {#s:cross-checks} ========================= In this section, we summarize some of the most important cross-checks that we performed using our methods and results. We have cross-checked what happens if one rescales the differential cross section of elastic $pp$ scattering from the lowest ISR energy of $\sqrt{s} = 23.5$ GeV to the top ISR energy of $\sqrt{s} = 62.5$ GeV. As can be expected based on the approximate equality of all the $H(x)$ scaling functions at the ISR energies, as indicated on the left panel of Fig. \[fig:rescaling-of-dsigma-dt-at-ISR-and-LHC\], the rescaled 23.5 GeV $pp$ data coincide with the measured 62.5 GeV $pp$ data. The resulting $\chi^2/{\rm NDF} = 111/100$ corresponds to a CL = 21.3 %, i.e. to a lack of significant difference (a 1.3$\sigma$ effect). In other words, our quantitative analysis indicates that the two data sets at the ISR energies of 23.5 and 62.5 GeV correspond to the same $H(x)$ scaling function. This indicates that the method that we applied to extrapolate the 2.76 and 7 TeV data sets to lower energies satisfied the cross-checks at the ISR energies, i.e. our method works well.
As one of the critical cross-checks of these calculations, two different co-authors coded the same formulae independently, in two different programming languages, and these codes were cross-checked against one another until both provided the same values of the significances. We have validated the PHENIX method of Ref. [@Adare:2008cg], implemented in the form of our final $\chi^2$ definition of Eq. (\[e:chi2-final\]) for the diagonalization of the covariance matrix, on fits to the $\sqrt{s} = 13 $ TeV TOTEM data of Ref. [@Antchev:2018edk]. This PHENIX method yielded, within one standard deviation, the same minimum, and hence the same significances, as the use of the full covariance matrix for $\sqrt{s} = 13 $ TeV elastic $pp$ collisions. At the lower LHC energies of $\sqrt{s} = 2.76$ and $7.0$ TeV, due to the lack of publicly available information on the covariance matrix, only the PHENIX method of Ref. [@Adare:2008cg] was available for our final significance analysis. We have also explored the main reason for the observation of a significant Odderon signal in the comparison of the $H(x)$ scaling functions of elastic $pp$ collisions at $\sqrt{s} = 7 $ TeV with that of the elastic $p\bar p$ collisions at $\sqrt{s} = 1.96 $ TeV. The question was rather intriguing as we have found no significant difference between the $H(x)$ scaling functions of elastic $pp$ collisions at $\sqrt{s}$ = 2.76 TeV and 7 TeV. At the same time, we have also seen that the comparison of the 2.76 TeV $pp$ dataset to the 1.96 TeV $p\bar{p}$ dataset does not indicate a significant Odderon effect. We have found that the Odderon signal vanishes from the comparison of the 7 TeV $pp$ and the 1.96 TeV $p\bar{p}$ datasets too, if we limit the acceptance of the 7 TeV dataset to the same acceptance in $x = -tB$ as that of the 2.76 TeV $pp$ dataset: the significance of the Odderon observation decreased from a 6.55$\sigma$ discovery effect to a 2.69$\sigma$ level agreement.
We may note that a similar observation was made already in Ref. [@Ster:2015esa], which pointed out a strong $|t|$ dependence of the Odderon contribution. Discussion {#s:discussion} ========== We have explored the scaling properties of the elastic differential cross sections at various energies, from the ISR up to the highest LHC energy. We have recalled that the earlier proposals for the $F(y)$ and $G(z)$ scaling functions were useful to explore whether elastic scattering of protons in the LHC energy range is already close to the black-disc limit. After investigating several possible new dimensionless scaling variables and scaling function candidates, we have realized that in order to look for scaling violations in the low $|t|$ kinematic range, corresponding to the diffractive cone, it is advisable to scale all the diffractive cones to the same dimensionless scaling function, $H(x) \approx \exp(-x)$. This function can be obtained as the differential cross section normalized to its value at the optical point, which, for nearly exponential distributions, equals the elastic cross section $\sigma_{\rm el}$ multiplied by the slope parameter $B$. Both are readily measurable in elastic $pp$ and $p\bar p$ collisions, while other scaling variables that we have investigated may depend on $t_{\rm dip}$, the location of the diffractive minimum, which is not readily accessible either in elastic $p\bar p$ collisions (where there is no significant dip) or in the acceptance-limited elastic $pp$ differential cross section (where the diffractive minimum may be located outside the acceptance of the experiment for that particular data set). Given that the scaling function $H(x)$ of elastic proton-(anti)proton scattering transforms out the energy dependence of the elastic slope $B(s)$, the real-to-imaginary ratio $\rho_0(s)$ as well as the total and elastic cross sections, $\sigma_{\rm tot}(s)$ and $\sigma_{\rm el}(s)$, Figs.
\[fig:rescaling-from-7-to-1.96TeV\] and  \[fig:rescaling-from-7-to-1.96TeV-and-back\] clearly indicate a crossing-odd component of the elastic scattering amplitude. At the $\sim$ 2 TeV energy scale, where the Reggeon contributions to the scattering amplitude are suppressed by their power-law decays, this is apparently a clear Odderon effect, a characteristic difference in the shape of the scaling function of elastic scattering between $pp$ and $p\bar p$ collisions at the logarithmically similar energies of 7 and 1.96 TeV, respectively. The effects due to the energy-induced difference between the TOTEM and D0 data sets can be estimated by the change of the $H(x)$ scaling function for $pp$ scattering between 2.76 TeV and 7 TeV, which is within the systematic errors of the TOTEM data sets. However, the $H(x)$ scaling function of elastic $pp$ scattering at $\sqrt{s} = 7.0$ TeV is significantly different from the corresponding result of elastic $p\bar p$ scattering at $\sqrt{s} = 1.96$ TeV. These qualitative and quantitative differences first show up below the diffractive minimum of $pp$ elastic scattering: the $H(x)$ function for $pp$ collisions indicates a strong “swing” or faster-than-exponential decrease effect before developing a characteristic diffractive minimum. In contrast, the D0 data on $p\bar p$ elastic scattering feature a structureless exponential decrease that in turn changes to a plateau or a shoulder-like structure at higher values of the scaling variable $x$. No clear indication of a diffractive maximum is seen in the $p\bar p$ elastic scattering data [@Abazov:2012qb], while the TOTEM data sets at each of the LHC energies of 2.76, 7 and 13 TeV clearly indicate a diffractive minimum followed by an increasing part of the differential cross section before the edge of the TOTEM acceptance is reached [@Antchev:2018rec; @Antchev:2011zz; @Antchev:2018edk].
These qualitative and quantitative differences between the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ scattering provide clear-cut and statistically significant evidence for a crossing-odd component in the scattering amplitude in the TeV energy range. This corresponds to the observation of the Odderon exchange in the $t$-channel of the elastic scattering. The Odderon in this context is a trajectory that at $J=1$ contains a $J^{\rm PC} = 1^{--}$ vector glueball as well as other glueball states with higher angular momentum. Hence, one of the implications of our result is that not only one but several glueball states should exist in Nature. Due to the presence of the faster-than-exponentially decreasing (swing) region in elastic $pp$ scattering, new high-statistics $pp$ elastic scattering data at $\sqrt{s} = 1.96 $ TeV may be taken as an additional measurement clearly closing the energy gap. However, the aperture limitation of the LHC accelerator is already resulting in a loss of significance in the comparison of the $H(x)$ scaling function at 2.76 TeV with that of the D0 data at 1.96 TeV. For this reason, we propose an additional measurement of the dip and bump region of elastic $pp$ collisions in the domain where the $H(x)$ scaling was shown to work, in between 2.76 TeV and 7 TeV, if that can be harmonized with the LHC running schedule and scenarios. The current TOTEM acceptance ends at $-t B \approx 12$ at $\sqrt{s} = 2.76$ TeV. Although more detailed acceptance studies are necessary, reaching $x = -t B\approx 8-9 $ seems to be sufficient, as the swing effect in this range already makes a substantial and qualitative difference between the $H(x) = (1/A) d\sigma/dt $ scaling functions of elastic $pp$ and $p\bar p$ collisions.
New elastic $pp$ scattering data around $\sqrt{s} \approx$ 4 – 5 TeV could be particularly useful to determine more precisely any possible residual dependence of these Odderon effects on $\sqrt{s}$. The current significance of the Odderon observation can be further increased from the 6.55$\sigma$ effect by a tedious experimental re-analysis of some of the already published data, for example, by separating the point-to-point uncorrelated statistical and systematic errors (type A errors) from the point-to-point correlated systematic errors in elastic $p\bar p$ collisions by D0, as well as by determining the covariance matrix of the elastic cross section measurement of $pp$ collisions at 2.76 and 7 TeV colliding energies by TOTEM. Summary and conclusions {#s:summary} ======================= We have introduced a new, straightforwardly measurable scaling function $H(x)$ of elastic proton-(anti)proton scattering. This scaling function transforms out the trivial energy-dependent factors, in particular, the effects due to the $s$-dependencies stemming from the elastic slope $B(s)$, from the real-to-imaginary ratio $\rho_0(s)$, as well as from the total and elastic cross sections, $\sigma_{\rm tot}(s)$ and $\sigma_{\rm el}(s)$, respectively. Figs. \[fig:rescaling-from-7-to-1.96TeV\] and  \[fig:rescaling-from-7-to-1.96TeV-and-back\] clearly indicate a difference between the scaling properties of the elastic $pp$ and $p\bar p$ collisions, corresponding to a crossing-odd component of the elastic scattering amplitude at the TeV energy scale.
As in this kinematic region the Reggeon contributions to the scattering amplitude are suppressed by their power-law decays, a significant characteristic difference between the $H(x)$ scaling functions of elastic $pp$ and $p\bar p$ collisions at the logarithmically similar energies of 7, 2.76 and 1.96 TeV is considered a clear-cut Odderon effect, because the trivial energy dependences of $\sigma_{\rm el}(s)$ and $B(s)$, as well as those of $\rho(s)$ and $\sigma_{\rm tot}(s)$, are scaled out from $H(x)$ by definition. A comparison in Fig. \[fig:rescaling-from-7-to-1.96TeV-and-back\] indicates a significant difference between the 7 TeV $pp$ data set rescaled down to 1.96 TeV and the corresponding $p\bar p$ data measured at $\sqrt{s} = 1.96 $ TeV. In the swing region, i.e. before the dip, this difference is quantified with a CL of $5\times 10^{-9}$ %. These re-analyzed D0 and TOTEM data, taken together with the verified energy independence of the $H(x)$ scaling function in the $\sqrt{s} = 2.76 - 7.0$ TeV energy range, amount to the closing of the energy gap between 2.76 and 1.96 TeV in a model-independent way, as much as reasonably possible without a direct measurement. At the same time, Fig. \[fig:scaling-LHC-7-vs-2.76-Hx\] indicates that the same 7 TeV data rescaled down to $\sqrt{s} = $ 2.76 TeV do not significantly differ from the TOTEM data measured at the same energy of 2.76 TeV, which is logarithmically close to 1.96 TeV, the highest available colliding energy of $p\bar p$ elastic collisions. We have thus utilized the observed energy independence of the $H(x)$ scaling function of elastic $pp$ collisions in the few TeV energy range. One of the new, qualitative Odderon effects that we have identified was the approximate energy independence of the $H(x)$ scaling function for elastic $pp$ collisions in the few TeV energy range, in contrast to a stronger energy dependence of the $H(x)$ scaling function for elastic $p\bar p$ collisions.
In conclusion, from a model-independent re-analysis of the scaling properties of the differential cross sections of already published D0 and TOTEM data sets, we find a statistically significant, more than 6.55$\sigma$ effect of $t$-channel Odderon exchange. Elastic $pp$ scattering data in the vicinity of $\sqrt{s}\approx $ 2 TeV, as well as in between 2.76 and 7 TeV, would definitely be most useful for confirming our significant Odderon signal. Our analysis indicates that the statistically significant Odderon signal is in the kinematic range of $10 \le x = -t B \le 20$, hence it is important to measure these elastic scattering cross-sections well beyond the kinematic domain of the diffractive cone. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge inspiring discussions with S. Giani, P. Grannis, W. Guryn, G. Gustafson, V. A. Khoze, E. Levin, L. Lönnblad, K. Österberg, C. Royon, M. Strikman and M. Šumbera. R.P. is partially supported by the Swedish Research Council grants No. 621-2013-4287 and 2016-05996, by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 668679), as well as by the Ministry of Education, Youth and Sports of the Czech Republic project LT17018 and by the NKFI grant K133046 (Hungary). T. Cs. and A. S. were partially supported by the NKFIH grants No. FK-123842 and FK-123959, by the NKFI grant K133046 and by the EFOP 3.6.1-16-2016-00001 grant (Hungary). The work has been performed in the framework of COST Action CA15213 “Theory of hot matter and relativistic heavy-ion collisions” (THOR).
--- abstract: 'The $V_{ud}$ element of the Cabibbo-Kobayashi-Maskawa quark mixing matrix has traditionally been determined from the analysis of data in nuclear superallowed $0^+\rightarrow 0^+$ transitions, neutron decay and pion beta decay. We show here that this element can independently be determined from nuclear mirror transitions. The extracted value, $|V_{ud}| = 0.9719\pm 0.0017$, is at 1.2 combined standard deviations from the value obtained in superallowed $0^+\rightarrow 0^+$ transitions and has a precision similar to that of the value obtained from neutron decay experiments. We discuss some prospects to improve its precision through experiments in nuclear mirror transitions.' author: - 'O. Naviliat-Cuncic' - 'N. Severijns' title: 'Determination of $|V_{ud}|$ from nuclear mirror transitions' --- The unitarity conditions of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix [@cabibbo63; @kobayashi73] provide sensitive means to test the consistency of the three-generation standard electroweak model and to search for new physics beyond it. A stringent test is obtained from the elements of the first row $$V_{ud}^2 + V_{us}^2 + V_{ub}^2 = 1 \label{eq:unitarity}$$ where $V_{uj}$ denotes the amplitude of the quark mass eigenstate $j$ in the quark weak eigenstate $d'$. The accuracy in the verification of this condition is largely determined by the dominant value and error of the $V_{ud}$ element, which is obtained from weak decay processes involving the lightest quarks. Three traditional sources to determine $|V_{ud}|$ from experiments have been considered during the past decades, namely, nuclear superallowed $0^+\rightarrow 0^+$ pure Fermi transitions, neutron decay and pion beta decay, and these have regularly been reviewed [@Towner98; @Towner03; @Hardy07]. The partial half-lives of nine nuclear superallowed $0^+\rightarrow 0^+$ transitions have been studied in great detail [@hardy05; @towner08].
Measurements of lifetimes, masses and branching ratios have reached precisions such that the required inputs for the calculation of the ${\cal F}t$ values were obtained at a level of a few parts in $10^{4}$, yielding the value [@towner08] $$|V_{ud}| = 0.97418(26) \ {\rm (superallowed \ 0^+\rightarrow 0^+)} . \label{eq:Vud_superallowed}$$ The value of $|V_{ud}|$ deduced from $0^+\rightarrow 0^+$ transitions has for many years [@Towner98] been dominated by uncertainties in theoretical corrections. Present experimental activities are oriented toward further reducing these uncertainties by testing the calculations in other transitions [@towner08]. Neutron decay involves both the vector and the axial-vector interactions, so that the determination of $|V_{ud}|$ from neutron decay data, although free of nuclear structure corrections, requires the analysis of at least two observables. The most precise determinations have so far been obtained by combining the neutron lifetime with the beta asymmetry parameter. The first determination of $|V_{ud}|$ using only neutron decay data [@Thompson90] yielded the value $|V_{ud}| = 0.9790(30)$. The present world average recommended value for the neutron lifetime, $\tau_n$ = 885.7(8) s [@amsler08], combined with the world average value for the beta asymmetry parameter, $A_n = -0.1173(13)$ [@amsler08], yields $$|V_{ud}| = 0.9746(19) \ {\rm (neutron \ decay)} . \label{eq:Vud_neutron}$$ The improvement by a factor of about 1.5 over almost two decades shows the difficulty of the associated experiments (see e.g. [@Nico05; @abele08]). Other results have however been reported [@abele08; @abele02] by taking selected values of the most precise experimental data. Many experimental projects are under way [@PPSN08] to improve the uncertainties on the neutron lifetime and on several of the correlation parameters. Finally, the absolute pion beta decay rate provides a clean observable for the determination of $|V_{ud}|$.
The main experimental difficulty arises here from the very weak ($10^{-8}$) branching of the beta decay channel. The most recent experimental determination yields [@pocanic04] $$|V_{ud}| = 0.9728(30) \ {\rm (pion \ decay)} \label{eq:Vud_pion} ,$$ which is less precise than the value from neutron decay. We consider here a new source to determine $|V_{ud}|$, namely, the beta decay transitions between $T=1/2$ isospin doublets in mirror nuclei. Such transitions are sometimes called “mirror decays” and, similarly to neutron decay (which is the simplest mirror transition), proceed via the vector and axial-vector interactions. The principle for extracting $|V_{ud}|$ from such transitions is then similar to that used in the analysis of neutron decay, except for the corrections associated with the nuclear system. The corrections for the determination of the ${\cal F}t$ values in mirror transitions have recently been surveyed [@Severijns08] and were obtained with sufficient precision to be used in the analysis reported here. We therefore use below the results of this new survey and adopt the definitions and notations given there, unless possible ambiguities require otherwise. The vector part of the corrected statistical decay rate function is given by [@Severijns08] $${\cal F}t \equiv f_V t(1 + \delta^\prime_R)(1 + \delta^V_{NS} - \delta^V_C) \label{eq:Ft1}$$ where $f_V$ is the uncorrected statistical rate function, $\delta^\prime_R$ denotes nuclear dependent radiative corrections obtained from QED calculations, $\delta^V_{NS}$ are nuclear structure corrections and $\delta^V_C$ are isospin symmetry breaking corrections for the vector contribution.
For mixed Fermi/Gamow-Teller transitions, ${\cal F}t$ is related to $V_{ud}$ by [@Severijns08] $${\cal F}t = \frac{K}{G^2_F V^2_{ud}} \frac{1}{C^2_V |M_F^0|^2 (1+\Delta^V_R)(1+f_A\rho^2/f_V)} \label{eq:Ft2}$$ where $K/(\hbar c)^6 = 2 \pi^3 ~\ln 2 ~ \hbar / (m_ec^2)^5 = 8120.278(4) \times 10^{-10}$ GeV$^{-4}$s, $G_F/(\hbar c)^3 = 1.16637(1) \times 10^{-5}$ GeV$^{-2}$ is the Fermi constant [@amsler08], $C_V = 1$ is the vector coupling constant, $\Delta_R^V$ is a transition-independent radiative correction [@marciano06], $f_A$ is the statistical rate function for the axial-vector part of the interaction, and $\rho$ is the Gamow-Teller to Fermi mixing ratio. This ratio is defined by [@Severijns08] $$\begin{aligned} \rho & = & \frac{C_A M_{GT}^0}{C_V M_F^0} \left[ \frac{(1 + \delta_{NS}^A - \delta_C^A)(1 + \Delta_R^A)} {(1 + \delta_{NS}^V - \delta_C^V)(1 + \Delta_R^V)} \right]^{1/2} \nonumber \\ & \approx & \frac{C_A M_{GT}^0}{C_V M_F^0} , \label{eq:rho}\end{aligned}$$ where the square root contains the nuclear structure, isospin symmetry breaking and radiative corrections for the vector and axial-vector contributions, $C_A$ is the axial-vector coupling constant ($C_A/C_V \approx -1.27$) and $M_F^0$ and $M_{GT}^0$ are the isospin symmetry limit values of the Fermi and Gamow-Teller matrix elements, with $|M_F^0|^2$ = 1 for the $T_i = T_f = 1/2$ mirror transitions. Using the corrected ${\cal F}t$ values from the recent compilation [@Severijns08], it is possible to extract $|V_{ud}|$ from Eq.(\[eq:Ft2\]) provided another observable, also a function of $\rho$, is known with sufficient precision. In the present analysis we consider transitions where the beta-neutrino angular correlation coefficient, $a_{\beta\nu}$, and the beta decay asymmetry parameter, $A_{\beta}$, have been measured in the past.
For $\beta^+$ mirror transitions, their expressions as a function of the mixing ratio $\rho$, in the limit of zero momentum transfer, are [@Severijns06] $$a_{\beta\nu}(0) = \left( 1-\rho^2/3 \right) / \left( 1+\rho^2 \right) \label{eq:a0}$$ and $$A_{\beta}(0) = \frac{\rho^2 - 2 \rho \sqrt{J(J+1)}}{(1+\rho^2)(J+1)} \label{eq:A0}$$ where $J$ denotes the spin of the initial and final states in the transition. At a precision level of about 1%, as is the case for the correlation coefficients we are dealing with here, the impact of recoil effects has, however, to be considered. To first order in recoil, assuming the absence of second class currents [@grenacs85] and time reversal invariance, one then has for a $\beta^+$ transition within a common isotopic multiplet [@holstein74] $$a_{\beta\nu} = f_2(E)/f_1(E) , \label{eq:a}$$ and $$A_{\beta} = f_4(E)/f_1(E) , \label{eq:A}$$ with the spectral functions $$\begin{aligned} f_1(E) & = & a^2 + c^2 - \frac{2E_0}{3M} (c^2 - cb) + \frac{2E}{3M}(3a^2 + \nonumber \\ & & + 5c^2-2cb) - \frac{2 m_e^2}{3E M} (c^2 -cb) ,\end{aligned}$$ $$\begin{aligned} f_2(E) = a^2 - \frac{1}{3} c^2 + \frac{2 E_0}{3M} (c^2 - cb) -\frac{4E}{3M} (3c^2 - cb) ,\end{aligned}$$ and $$\begin{aligned} f_4(E) & = & \left( \frac{J}{J+1} \right)^{1/2} \left[2ac - \frac{2 E_0}{3 M} (ac - ab) + \right. \nonumber \\ & & \left. + \frac{2 E}{3 M} (7ac - ab) \right] + \left( \frac{1}{J+1} \right) \left[c^2 + \frac{2 E_0}{3 M} (-c^2 + \right. \nonumber \\ & & \left. + cb) + \frac{E}{3 M} (-11c^2 + 5cb) \right] .\end{aligned}$$ Here $E$ and $E_0$ denote, respectively, the total and the maximal total positron energies, $M$ is the average mass of the mother and daughter isotopes, and $m_e$ is the electron mass.
In this notation [@holstein74] $a$, $b$ and $c$ designate, respectively, the Fermi, weak-magnetism and Gamow-Teller form factors $$a = g_V M_F , \ c = g_A M_{GT} ,$$ with $C_{i} = V_{ud} \ G_F \ g_{i}(q^2 \rightarrow 0)$, ($i = V,A$), $g_i$ being the vector and axial-vector form factors and $q$ the momentum transfer. According to the conserved-vector-current hypothesis [@holstein71; @holstein74] $$b = A \sqrt{(J+1)/J} M_F \ \mu ,$$ where $A$ is the mass number and $\mu = [\mu(T_3) - \mu(T_3^\prime)]/ ( T_3 - T_3^\prime )$ is the isovector contribution to the magnetic moment, with $T_3$ the third component of the isospin (in the convention where $T_3 = +1/2$ for a proton) and $\mu(T_3)$ and $\mu(T_3^\prime)$ the magnetic moments of the mother and daughter nuclei. The extraction of $|V_{ud}|$ then proceeds by solving Eqs.(\[eq:a\]) or (\[eq:A\]) for $\rho$ and inserting its value in Eq.(\[eq:Ft2\]) with the corresponding ${\cal F}t$ value from Ref. [@Severijns08], yielding $$V_{ud}^2 = \frac{K^\prime}{{\cal F}t (1 + f_A \rho^2/f_V)} ,$$ where $K^\prime = {K}/[G_F^2 \ C_V^2 \ ( 1 + \Delta_R^V )] = 5831.3(22)$ s, and $\Delta_R^V = 2.361(38)$% [@marciano06]. The error on $K^\prime$ is dominated by the uncertainty on the radiative correction $\Delta_R^V$. The sign of $\rho$ was taken to be the same as in Ref. [@Severijns08]. The data included in the present analysis are summarized in Table \[tab:data\]. The mirror transitions considered here are those in $^{19}$Ne, $^{21}$Na and $^{35}$Ar, for which the beta-neutrino angular correlation coefficient or the beta decay asymmetry parameter has been measured. The inclusion of recoil effects in the ${\cal F}t$ values was found to have a negligible effect on the resulting values for $|V_{ud}|$. Corrections for $\delta_R^\prime$, $\delta_{NS}$ and $\delta_C$ in the correlation coefficients cancel in the ratios of the spectral functions, Eqs.(\[eq:a\]) and (\[eq:A\]).
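For readers who wish to verify the arithmetic, the extraction formula above can be evaluated numerically. The following Python sketch is not part of the original analysis; it takes the central values of ${\cal F}t$, $\rho$ and $f_A/f_V$ from Table \[tab:data\], uses $K^\prime = 5831.3$ s, and ignores the propagation of uncertainties except for a standard inverse-variance combination of the quoted per-transition errors.

```python
import math

K_PRIME = 5831.3  # s; K' = K / [G_F^2 C_V^2 (1 + Delta_R^V)]

def vud(Ft, rho, fA_over_fV):
    """Central value of |V_ud| from V_ud^2 = K' / [Ft (1 + fA rho^2 / fV)]."""
    return math.sqrt(K_PRIME / (Ft * (1.0 + fA_over_fV * rho ** 2)))

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its standard uncertainty."""
    w = [1.0 / s ** 2 for s in sigmas]
    return (sum(wi * v for wi, v in zip(w, values)) / sum(w),
            1.0 / math.sqrt(sum(w)))

# Inputs (Ft [s], rho, fA/fV) from Table [tab:data]:
v19 = vud(1718.4, 1.5995, 1.01428)   # ^19Ne
v21 = vud(4085.0, -0.7136, 1.01801)  # ^21Na
v35 = vud(5688.6, -0.279, 0.98938)   # ^35Ar
# Combine with the quoted per-transition uncertainties:
print(weighted_mean([v19, v21, v35], [0.0022, 0.0036, 0.0038]))
```

The three central values agree with the $|V_{ud}|$ entries of Table \[tab:data\] to within the rounding of the inputs, and the combination reproduces $0.9719(17)$.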
Electromagnetic corrections [@holstein74], other than the dominant Coulomb effects contained in the energy-dependent Fermi function $F(Z, E)$ and included in the $f_{V,A}$ factors, were verified to be negligible at the present level of precision. The values for $E$ used in Eqs.(\[eq:a\]) and (\[eq:A\]) and listed in Table \[tab:data\] are average values determined from the experimental conditions.

                        $^{19}$Ne         $^{21}$Na        $^{35}$Ar
  ------------------- ----------------- ---------------- ----------------
  $a_{\beta\nu}$       —                 0.5502(60)       —
  $A_{\beta}$          $-0.0391(14)$     —                0.430(22)
  ${\cal F}t$ \[s\]    1718.4(32)        4085(12)         5688.6(72)
  $f_A/f_V$            1.01428           1.01801          0.98938
  $E_0$ \[MeV\]        2.72783(30)       3.03658(70)      5.45514(70)
  $E$ \[MeV\]          0.510999          1.614(1)         2.780(1)
  $M$ \[amu\]          19.0001417(7)     20.9957509(10)   34.9720551(14)
  $b$                  $-148.5605(26)$   82.6366(27)      $-8.5704(90)$
  $\rho$               1.5995(46)        $-0.7136(72)$    $-0.279(15)$
  $|V_{ud}|$           0.9716(22)        0.9696(36)       0.9755(38)
  ------------------- ----------------- ---------------- ----------------

  : Input data used to determine the values of $\rho$ and $|V_{ud}|$ from the mirror transitions in $^{19}$Ne, $^{21}$Na and $^{35}$Ar. \[tab:data\]

The beta asymmetry parameter in $^{19}$Ne decay was measured a couple of decades ago by the Princeton group [@Calaprice75; @Schreiber83]. Although the value reported in [@Schreiber83] has a better precision than the results quoted in [@Calaprice75], we do not include that input here since the result has never been published. From the value reported in [@Calaprice75], i.e. $A_{\beta} = -0.0391(14)$, the value $\rho = 1.5995(46)$ is extracted, leading to $|V_{ud}|(^{19}{\rm Ne}) = 0.9716(22)$. A recent measurement of the beta-neutrino angular correlation coefficient in $^{21}$Na decay produced the value $a_{\beta\nu} = 0.5502(60)$ [@Vetter08]. This result constitutes the most precise measurement of this coefficient in a mirror transition. The value of the mixing ratio extracted from this result is $\rho = -0.7136(72)$, leading to $|V_{ud}|(^{21}{\rm Na}) = 0.9696(36)$.
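For orientation, the zero-momentum-transfer expressions, Eqs. (\[eq:a0\]) and (\[eq:A0\]), can be inverted for $\rho$ in closed form. The Python sketch below neglects the recoil-order terms of Eqs. (\[eq:a\]) and (\[eq:A\]); the ground-state spins $J = 1/2$ for $^{19}$Ne and $J = 3/2$ for $^{35}$Ar are assumed here, and the sign of $\rho$ must be supplied externally, as in Ref. [@Severijns08].

```python
import math

def rho_from_a(a):
    """|rho| from Eq. (a0): a(0) = (1 - rho^2/3) / (1 + rho^2)."""
    return math.sqrt((1.0 - a) / (a + 1.0 / 3.0))

def rho_from_A(A, J):
    """Both real roots of Eq. (A0),
    A(0) = (rho^2 - 2 rho sqrt(J(J+1))) / ((1 + rho^2)(J + 1)),
    rewritten as (1 - A(J+1)) rho^2 - 2 sqrt(J(J+1)) rho - A(J+1) = 0."""
    s = math.sqrt(J * (J + 1.0))
    p = 1.0 - A * (J + 1.0)
    disc = math.sqrt(4.0 * s * s + 4.0 * p * A * (J + 1.0))
    return (2.0 * s + disc) / (2.0 * p), (2.0 * s - disc) / (2.0 * p)
```

With the inputs of Table \[tab:data\], $a_{\beta\nu} = 0.5502$ gives $|\rho| \approx 0.714$ for $^{21}$Na, and $A_\beta = -0.0391$ gives $\rho \approx 1.602$ for $^{19}$Ne, matching the zero-recoil values quoted in the text; the physical root of the quadratic must be selected by hand.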
Finally, in the decay of $^{35}$Ar, the beta asymmetry parameter has reliably been measured twice, with the results $A_\beta = 0.49(10)$ [@Garnett88] and $A_\beta = 0.427(23)$ [@Converse93]. The weighted mean of these (Table \[tab:data\]), which is dominated by the more precise of the two, yields a value $\rho = -0.279(15)$, leading to $|V_{ud}|(^{35}{\rm Ar}) = 0.9755(38)$. Except for $^{19}$Ne, the recoil corrections appeared not to have a significant impact on the determination of $\rho$. For $^{19}$Ne, Eq. (\[eq:A0\]) yields $\rho = 1.6015(44)$, which differs by about half a standard deviation from the value quoted above. For $^{21}$Na and $^{35}$Ar the values of $\rho$ obtained from Eqs. (\[eq:a0\]) and (\[eq:A0\]) are identical to those given above. The values of $\rho$ and $|V_{ud}|$ are also summarized in Table \[tab:data\]. The results obtained from $^{21}$Na and $^{35}$Ar have comparable uncertainties, which are a factor of 1.7 larger than the uncertainty on the value obtained from $^{19}$Ne. The weighted mean of the three values is $$|V_{ud}| = 0.9719(17) \ {\rm (nuclear~mirror~transitions)} . \label{eq:Vud_mirrors}$$ This result is consistent within 1.2 combined standard deviations with the value obtained from nuclear superallowed $0^+\rightarrow 0^+$ transitions, Eq.(\[eq:Vud\_superallowed\]), and has an uncertainty comparable to that obtained from neutron decay, Eq.(\[eq:Vud\_neutron\]). This shows that nuclear mirror transitions provide an independent sensitive source for the determination of $|V_{ud}|$ and therefore deserve further theoretical studies and experimental investigations to improve the required inputs. The survey in Ref. [@Severijns08] reports, for each decay, the contributions of the five inputs ($f_V$, lifetime, branching ratio, $\delta_R$ and $\delta_C - \delta_{NS}$) to the error on the ${\cal F}t$ values. For $^{19}$Ne the uncertainty is dominated by the experimental inputs to determine $f_V$ and by the lifetime.
The situation is similar in $^{21}$Na where, in addition, the uncertainty on the branching ratio contributes in third place. Except for $^{35}$Ar, where all five inputs are known with a relative uncertainty below $10^{-3}$, all other transitions ranging from $^{3}$He to $^{45}$V have uncertainties dominated by the experimental inputs. Improvements in the determination of $|V_{ud}|$ therefore offer new opportunities for precision experiments in mirror transitions. However, the uncertainties on the values of $|V_{ud}|$ given in Table \[tab:data\] are dominated by those on $\rho$, so that improvements of these values call in priority for new measurements of correlation coefficients. We consider here in particular the impact of a new measurement of $a_{\beta\nu}$ in $^{19}$Ne and $^{35}$Ar with a precision similar to that achieved for $^{21}$Na [@Vetter08]. For this purpose we use Eqs.(\[eq:a0\]) and (\[eq:A0\]) instead of Eqs.(\[eq:a\]) and (\[eq:A\]). Figure \[fig:ne19\] shows the sensitivity of $a_{\beta\nu}$ and $A_\beta$ to $\rho$ for $^{19}$Ne decay. The solid lines are the differences between Eq.(\[eq:A0\]) and the experimental values (Table \[tab:data\]) at $\pm 1\sigma$. The dotted lines are the differences between Eq.(\[eq:a0\]) and the values of $a_{\beta\nu}$ calculated with Eq.(\[eq:a0\]) using the value of $\rho$ given in Table \[tab:data\] and assuming a relative precision of $\pm 1$% on $a_{\beta\nu}$. The two solid and the two dotted lines are superimposed on the left panel. The right panel in Fig.\[fig:ne19\] shows a zoom into the intersection region of the two curves with zero. It is seen that a measurement of $a_{\beta\nu}$ for $^{19}$Ne with a 1% relative uncertainty makes it possible to reduce the uncertainty on $\rho$ by a factor of about 3. The sensitivities of the two observables to $\rho$ are comparable.
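The factor-of-3 statement can be checked by first-order error propagation on Eq. (\[eq:a0\]), $\sigma_\rho \approx \sigma_a / |da_{\beta\nu}/d\rho|$. In the sketch below, the closed-form derivative $da_{\beta\nu}(0)/d\rho = -(8\rho/3)/(1+\rho^2)^2$ follows directly from Eq. (\[eq:a0\]); the hypothetical 1% measurement and the current $\sigma_\rho = 0.0046$ are the assumptions stated in the text and in Table \[tab:data\].

```python
def a0(rho):
    """Zero-recoil correlation coefficient, Eq. (a0)."""
    return (1.0 - rho ** 2 / 3.0) / (1.0 + rho ** 2)

def da0_drho(rho):
    """Closed-form derivative of Eq. (a0): -(8 rho / 3) / (1 + rho^2)^2."""
    return -(8.0 * rho / 3.0) / (1.0 + rho ** 2) ** 2

rho = 1.5995                                # ^19Ne mixing ratio
sigma_a = 0.01 * abs(a0(rho))               # hypothetical 1% measurement
sigma_rho = sigma_a / abs(da0_drho(rho))    # first-order propagation
print(sigma_rho, 0.0046 / sigma_rho)
```

The propagated uncertainty comes out as $\sigma_\rho \approx 0.0012$, i.e. a factor of 3-4 below the present value, consistent with the estimate above.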
![Left panel: sensitivity of the angular correlation coefficient and of the decay asymmetry parameter to the mixing ratio in $^{19}$Ne decay. Right panel: intersection between the two curves. The dotted lines indicate the region allowed by a measurement of $a_{\beta\nu}$ with a relative uncertainty of $\pm 1$%.[]{data-label="fig:ne19"}](fig1_Ne19_1.eps "fig:"){height="63mm"} ![Left panel: sensitivity of the angular correlation coefficient and of the decay asymmetry parameter to the mixing ratio in $^{19}$Ne decay. Right panel: intersection between the two curves. The dotted lines indicate the region allowed by a measurement of $a_{\beta\nu}$ with a relative uncertainty of $\pm 1$%.[]{data-label="fig:ne19"}](fig1_Ne19_2.eps "fig:"){height="64mm"}

Figure \[fig:ar35\] shows the same analysis for $^{35}$Ar decay. The improvement here is very moderate since the intersection of the two curves with zero occurs in the region where the sensitivity of $A_{\beta}$ to $\rho$ is largest. In fact, among all mirror transitions considered in Ref. [@Severijns08], it appears that $a_{\beta\nu}$ shows the largest sensitivity to $\rho$ in $^{19}$Ne decay and the smallest in $^{35}$Ar.

![Same as Fig. \[fig:ne19\] but for $^{35}$Ar decay.[]{data-label="fig:ar35"}](fig2_Ar35_1.eps "fig:"){height="63mm"} ![Same as Fig. \[fig:ne19\] but for $^{35}$Ar decay.[]{data-label="fig:ar35"}](fig2_Ar35_2.eps "fig:"){height="63mm"}

Based on the values of ${\cal F}t$ and $\rho$ listed in Ref. [@Severijns08], we find that measurements of $a_{\beta\nu}$ provide better prospects than $A_\beta$ for improving the value of $|V_{ud}|$ from mirror transitions, the highest sensitivities being obtained for $^{3}$He, $^{17}$F, $^{19}$Ne and $^{41}$Sc. In the same context, measurements of $A_\beta$ appear to be of interest only in $^{19}$Ne decay, the sensitivity then being similar to that of $a_{\beta\nu}$ in the same decay, as shown above. In conclusion, we have deduced the value of the CKM matrix element $|V_{ud}| = 0.9719\pm 0.0017$ using only data from transitions in $^{19}$Ne, $^{21}$Na and $^{35}$Ar. This demonstrates that nuclear mirror transitions provide an independent sensitive source for the determination of $|V_{ud}|$. Further theoretical studies as well as precise determinations of the experimental inputs, in particular of the correlation coefficients, are desirable. [99]{} N. Cabibbo, Phys. Rev. Lett. **10**, 531 (1963). M. Kobayashi and T. Maskawa, Prog. Theor. Phys. **49**, 652 (1973). I.S. Towner and J.C. Hardy, arXiv:nucl-th/9809087v1, and in Proc. of the 5$^{th}$ Int. WEIN Symp. [*Physics Beyond the Standard Model*]{}, P. Herczeg, C.M. Hoffman and H.V. Klapdor eds. (World Scientific, Singapore, 1999). I.S. Towner and J.C. Hardy, J. Phys. G: Nucl. Part. Phys. **29**, 197 (2003). J.C. Hardy, arXiv:hep-ph/0703165v1. J.C. Hardy and I.S. Towner, Phys. Rev. C **71**, 055501 (2005). I.S. Towner and J.C. Hardy, Phys. Rev. C **77**, 025501 (2008). D. Thompson, J. Phys. G: Nucl. Part. Phys. **16**, 1423 (1990). C. Amsler et al. (Particle Data Group), Phys.
Lett. B **667**, 1 (2008). J.S. Nico and W.M. Snow, Annu. Rev. Nucl. Part. Sci. **55**, 27 (2005). H. Abele, Prog. Part. Nucl. Phys. **60**, 1 (2008) and references therein. H. Abele et al., Phys. Rev. Lett. **88**, 211801 (2002). See contributions to Proc. Int. Workshop on Particle Physics with Slow Neutrons, Grenoble, May 2008, to appear in Nucl. Instr. and Meth. in Phys. Res. A. D. Pocanic et al., Phys. Rev. Lett. **93**, 181803 (2004). N. Severijns, M. Tandecki, T. Phalet and I.S. Towner, arXiv:nucl-ex/08072201v1, submitted to Phys. Rev. C. W.J. Marciano and A. Sirlin, Phys. Rev. Lett. **96**, 032002 (2006). See e.g. N. Severijns, M. Beck and O. Naviliat-Cuncic, Rev. Mod. Phys. **78**, 991 (2006). For a review see L. Grenacs, Annu. Rev. Nucl. Part. Sci. **35**, 455 (1985). B.R. Holstein, Rev. Mod. Phys. **46**, 789 (1974) and **48**, 653 (1976). B.R. Holstein and S.B. Treiman, Phys. Rev. **3**, 1921 (1971). P.A. Vetter, J.R. Abo-Shaeer, S.J. Freedman and R. Maruyama, Phys. Rev. C **77**, 035502 (2008). F.P. Calaprice, S.J. Freedman, W.C. Mead and H.C. Vantine, Phys. Rev. Lett. **35**, 1566 (1975). J.D. Garnett, E.D. Commins, K.T. Lesko and E.B. Norman, Phys. Rev. Lett. **60**, 499 (1988). A. Converse et al., Phys. Lett. B **304**, 60 (1993). G. Audi, O. Bersillon, J. Blachot and H. Wapstra, Nucl. Phys. A **729**, 3 (2003). N.J. Stone, At. Data and Nucl. Data Tabl. **90**, 75 (2005). D.F. Schreiber, Ph.D. thesis, Princeton University (1983).
--- abstract: 'We establish the optimal diversity-multiplexing (DM) tradeoff for coherent selective-fading multiple-access MIMO channels and provide corresponding code design criteria. As a byproduct, on the conceptual level, we find an interesting relation between the DM tradeoff framework and the notion of dominant error event regions, first introduced in the AWGN case by Gallager, *IEEE Trans. IT*, 1985. This relation allows us to accurately characterize the error mechanisms in MIMO fading multiple-access channels. In particular, we find that, for a given rate tuple, the maximum achievable diversity order is determined by a single outage event that dominates the total error probability exponentially in SNR. Finally, we examine the distributed space-time code construction proposed by Badr and Belfiore, *Int. Zurich Seminar on Commun.*, 2008, using the code design criteria derived in this paper.' author: - '[^1]' bibliography: - 'biblio.bib' title: 'Selective-Fading Multiple-Access MIMO Channels: Diversity-Multiplexing Tradeoff and Dominant Outage Event Regions' --- Introduction ============ The diversity-multiplexing (DM) tradeoff framework introduced by Zheng and Tse makes it possible to efficiently characterize the information-theoretic performance limits of communication over multiple-input multiple-output (MIMO) fading channels both in the point-to-point [@ZheTse02] and in the multiple-access (MA) case [@TVZ04]. For coherent point-to-point flat-fading channels, DM tradeoff optimal code constructions have been reported in [@YaoWor03; @GamCaiDam04; @BelRekVit05; @TW05]. The optimal DM tradeoff in point-to-point selective-fading MIMO channels was characterized in [@pco07]. In the MA case, the optimal DM tradeoff is known only for flat-fading channels [@TVZ04]. A corresponding DM tradeoff optimal code construction was reported in [@NamGam07].
*Contributions.* The aim of this paper is to characterize the DM tradeoff in selective-fading MIMO multiple-access channels (MACs) and to derive corresponding code design criteria. As a byproduct, on the conceptual level, we find an interesting relation between the DM tradeoff framework and the notion of dominant error event regions, first introduced in the case of additive white Gaussian noise (AWGN) MACs by Gallager [@Gallager85]. This relation leads to an accurate characterization of the error mechanisms in MIMO fading MACs. Furthermore, we extend the techniques introduced in [@pco07] for computing the DM tradeoff in point-to-point selective-fading channels to the MA case. Finally, we examine the distributed space-time code construction proposed in [@BadBel08] using the code design criteria derived in this paper. *Notation.* $\mt$ and $\mr$ denote, respectively, the number of transmit antennas for each user and the number of receive antennas. The set of all users is $\mc{U}=\{1,2,\ldots, U\}$, $\setS$ is a subset of $\mc{U}$ with $\bar{\setS}$ and $|\setS|$ denoting its complement in $\mc{U}$ and its cardinality, respectively. The superscripts ${}^T$ and ${}^H$ stand for transposition and conjugate transposition, respectively. $\mat{A}\otimes \mat{B}$ and $\mat{A}\odot \mat{B}$ denote, respectively, the Kronecker and Hadamard products of the matrices $\mat{A}$ and $\mat{B}$. If $\mat{A}$ has the columns $\mat{a}_k$ ($k \negmedspace = \negmedspace 1, 2, \ldots, m$), $\vecop{\mat{A}}=[\mat{a}_1^T \: \mat{a}_2^T \: \cdots \: \mat{a}_m^T]^T$. ${\left\lVert\mat{a}\right\rVert}$ and ${\left\lVert\mat{A}\right\rVert}_\mathrm{F}$ denote, respectively, the Euclidean norm of the vector $\mat{a}$ and the Frobenius norm of the matrix $\mat{A}$.
For index sets $\mc{I}_1 \subseteq \sizecurly{1, 2, \ldots, n}$ and $\mc{I}_2 \subseteq \sizecurly{1, 2, \ldots, m}$, $\mat{A}(\mc{I}_{1},\mc{I}_{2})$ stands for the (sub)matrix consisting of the rows of $\mat{A}$ indexed by $\mc{I}_{1}$ and the columns of $\mat{A}$ indexed by $\mc{I}_{2}$. The eigenvalues of the $n\times n$ Hermitian matrix $\mat{A}$, sorted in ascending order, are denoted by $\lambda_k(\mat{A})$, $k=1,2,\ldots, n$. The Kronecker delta function is defined as $\delta_{n,m}=1$ for $n=m$ and zero otherwise. If $X$ and $Y$ are random variables (RVs), $X\sim Y$ denotes equivalence in distribution and $\mathbb{E}_X$ is the expectation operator with respect to (w.r.t.) the RV $X$. The random vector $\mat{x} \sim \cn{\mat{0}}{\mat{C}}$ is zero-mean jointly proper Gaussian (JPG) with $\mean{\mat{x}\mat{x}^H}=\mat{C}$. $f(x)$ and $g(x)$ are said to be exponentially equal, denoted by $f(x)\doteq g(x)$, if $\lim_{x\rightarrow \infty} \frac{\log f(x)}{\log x} = \lim_{x\rightarrow \infty} \frac{\log g(x)}{\log x}$. Exponential inequality, indicated by $\:\dotgeq$ and $\dotleq$, is defined analogously. Channel and signal model\[Sec.Model\] ===================================== We consider a selective-fading MAC where $U$ users, with $\mt$ transmit antennas each, communicate with a single receiver with $\mr$ antennas. The corresponding input-output relation is given by $$\mat{y}_n = \sqrt{\frac{\snr}{\mt}}\sum_{u=1}^U \mat{H}_{u,n}\mat{x}_{u, n} + \mat{z}_n, \quad n=0, 1, \ldots, N-1,\label{Eq.SigModel}$$ where the index $n$ corresponds to a time, frequency, or time-frequency slot and SNR denotes the per-user signal-to-noise ratio at each receive antenna. The vectors $\mat{y}_n$, $\mat{x}_{u,n}$, and $\mat{z}_n$ denote, respectively, the $\mr \times 1$ receive signal vector, the $\mt \times 1$ transmit signal vector corresponding to the $u$th user, and the $\mr \times 1$ zero-mean JPG noise vector satisfying $\mean{\mat{z}_n\mat{z}_{n' }^H}= \delta_{n,n'}\: \mat{I}_{\mr}$, all for the $n$th slot.
We assume that the receiver has perfect knowledge of all channels and the transmitters do not have channel state information (CSI) but know the channel law. We restrict our analysis to spatially uncorrelated Rayleigh fading channels so that, for a given $n$, $\mat{H}_{u,n}$ has i.i.d. $\cn{0}{1}$ entries. The channels corresponding to different users are assumed to be statistically independent. We do, however, allow for correlation across $n$ for a given $u$, and assume, for simplicity, that all scalar subchannels have the same correlation function so that $\mean{\mat{H}_{u,n}(i,j)\: (\mat{H}_{u',n'}(i,j))^*}=\mat{R}_\mbb{H}(n,n') \delta_{u,u'}$, for $i=1, 2, \ldots, \mr$ and $ j=1, 2, \ldots, \mt$. The covariance matrix $\mat{R}_\mbb{H}$ is obtained from the channel’s time-frequency correlation function [@Bello63]. In the sequel, we let $\rho\triangleq\rank{\mat{R}_\mbb{H}}$. For any set $\setS=\{u_1, \ldots, u_{|\setS|}\}$, we stack the corresponding users’ channel matrices for a given slot index $n$ according to $$\mat{H}_{\setS,n} = [\mat{H}_{u_1,n} \: \cdots \: \mat{H}_{u_{|\setS|},n}].\label{Eq.Hsn}$$ With this notation, it follows that $$\mean{\vecop{\mat{H}_{\setS,n}}\left(\vecop{\mat{H}_{\setS,n'}}\right)^H} = \mat{R}_\mbb{H}(n,n')\: \mat{I}_{|\setS|\mt\mr}.\label{Eq.CovModel}$$ Preliminaries ============= Assuming that all users employ i.i.d. Gaussian codebooks[^2], the set of achievable rate tuples $(R_1,R_2, \ldots, R_U)$ for a given channel realization $\{\mat{H}_{\mc{U},n}\}_{n=0}^{N-1}$ is given by $$\begin{split} \mc{R}&= \Bigg\{(R_1, R_2, \ldots, R_U): \forall \setS \subseteq\mathcal{U}, \\&R(\setS)\leq \frac{1}{N} \sum_{n=0}^{N-1}\mutinfexp{}{\frac{\snr}{\mt}\:\mat{H}_{\setS,n}\mat{H}_{\setS,n}^H} \Bigg\}\label{Eq.ART} \end{split}$$ where $R(\setS) = \sum_{u\in \setS} R_u$. If a given rate tuple $(R_1, R_2, \ldots, R_U)\notin \mc{R}$, we say that the channel is in outage w.r.t. this rate tuple.
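The outage test just defined amounts to checking the $2^U - 1$ sum-rate constraints of the region above. The following Python sketch is illustrative only: it specializes to $\mr = 1$, where each $\log\det$ reduces to a scalar $\log_2$, and the channel values in the example are hypothetical.

```python
import itertools
import math

def in_outage(rates, h, snr, mt):
    """mr = 1 special case: for each nonempty subset S of users, compare
    R(S) with (1/N) sum_n log2(1 + (snr/mt) * sum_{u in S} ||h_{u,n}||^2),
    where h[u][n] is the list of mt channel gains of user u in slot n."""
    U = len(rates)
    N = len(h[0])
    for size in range(1, U + 1):
        for S in itertools.combinations(range(U), size):
            mi = sum(
                math.log2(1.0 + (snr / mt) *
                          sum(abs(x) ** 2 for u in S for x in h[u][n]))
                for n in range(N)) / N
            if sum(rates[u] for u in S) > mi:
                return True   # the channel is in S-outage
    return False

# Hypothetical single-antenna, single-slot example (U = 2, mt = 1, N = 1):
h = [[[1.0]], [[1.0]]]
print(in_outage([1.0, 1.0], h, 3.0, 1))   # False: tuple inside the region
print(in_outage([1.5, 1.5], h, 3.0, 1))   # True: sum-rate constraint violated
```

In the example, each single-user constraint allows 2 bits per slot, while the sum-rate is limited to $\log_2 7 \approx 2.81$ bits, so the tuple $(1.5, 1.5)$ fails only the sum-rate constraint.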
Denoting the corresponding outage event as $\outage$, we have $$\prob{\outage} = \prob{\bigcup_{\setS\:\subseteq\:\mathcal{U}} \outage_\setS} \label{Eq.P1}$$ where the $\setS$-outage event $\outage_\setS$ is defined as $$\label{Eq.Os} \begin{split} \outage_\setS &\triangleq \Bigg\{\{\mat{H}_{\setS,n}\}_{n=0}^{N-1}: \\ &\hspace{3mm}\frac{1}{N} \sum_{n=0}^{N-1}\mutinfexp{}{\frac{\snr}{\mt}\: \mat{H}_{\setS,n}\mat{H}_{\setS,n}^H} < R(\setS)\Bigg\}. \end{split}$$ Our goal is to characterize $\prob{\outage}$ as a function of the rate tuple $(R_1, R_2, \ldots, R_U)$ in the high-SNR regime and to establish sufficient conditions on the users’ codebooks to guarantee that the corresponding error probability is exponentially (in SNR) equal to $\prob{\outage}$. To this end, we employ the DM tradeoff framework [@ZheTse02], which, in its MA version [@TVZ04], will be briefly summarized next. In the DM tradeoff framework, the data rate of user $u$ scales with SNR as $R_u(\snr) = r_u \log\snr$, where $r_u$ denotes the multiplexing rate. Consequently, a sequence of codebooks $\codebook_{r_u}(\snr)$, one for each SNR, is required. We say that this sequence of codebooks constitutes a family of codes $\codebook_{r_u}$ operating at multiplexing rate $r_u$. The family $\codebook_{r_u}$ is assumed to have block length $N$. At any given SNR, $\codebook_{r_u}(\snr)$ contains codewords $\mat{X}_u= [\mat{x}_{u,0}\: \mat{x}_{u,1}\: \cdots \:\mat{x}_{u,N-1}]$ satisfying the per-user power constraint $${\left\lVert\mat{X}_u\right\rVert}_\mathrm{F}^2 \leq \mt N, \; \forall\:\mat{X}_u \in \codebook_{r_u}.\label{Eq.PC}$$ In the remainder of the paper, we will say “the power constraint (\[Eq.PC\])” to mean that (\[Eq.PC\]) has to be satisfied for $u=1,2,\ldots, U$. The overall family of codes is given by $\codebook_{\bm{r}}=\codebook_{r_1}\times \codebook_{r_2}\times \cdots \times \codebook_{r_U}$, where $\bm{r}=(r_1, r_2, \ldots, r_U)$ denotes the multiplexing rate tuple[^3].
At a given SNR, the corresponding codebook $\codebook_{\bm{r}}(\snr)$ contains $\snr^{Nr(\mc{U})}$ codewords with $r(\mc{U})=\sum_{u=1}^U r_u$. The DM tradeoff realized by $\codebook_{\bm{r}}$ is characterized by the function $$d(\codebook_{\bm{r}})=-\lim_{\snr\rightarrow\infty}\frac{\log P_e(\codebook_{\bm{r}})}{\log\snr}$$ where $P_e(\codebook_{\bm{r}})$ is the *total* error probability obtained through maximum-likelihood (ML) decoding, that is, the probability for the receiver to decode at least one user in error. The optimal DM tradeoff curve $d^\star\mspace{-2.0mu}(\bm{r}) = \sup_{\codebook_{\bm{r}}} d(\codebook_{\bm{r}})$, where the supremum is taken over all possible families of codes satisfying the power constraint (\[Eq.PC\]), quantifies the maximum achievable diversity order as a function of the multiplexing rate tuple $\bm{r}$. Since the outage probability $\prob{\outage}$ is a lower bound (exponentially in SNR) on the error probability of any coding scheme [@TVZ04 Lemma 7], we have $$d^\star\mspace{-2.0mu}(\bm{r}) \leq - \lim_{\snr\rightarrow\infty}\frac{\log\prob{\outage}}{\log\snr}\label{Eq.OutLB}$$ where the outage event $\outage$, defined in (\[Eq.P1\]) and (\[Eq.Os\]), is w.r.t. the rates $R_u(\snr)=r_u \log\snr$, $\forall u$. As an extension of the corresponding result for the flat-fading case [@TVZ04], we shall show that (\[Eq.OutLB\]) holds with equality also for selective-fading MACs. However, just like in the case of point-to-point channels, a direct characterization of the right-hand side (RHS) of (\[Eq.OutLB\]) for the selective-fading case seems analytically intractable since one has to deal with the sum of correlated (recall that the $\mat{H}_{u,n}$ are correlated across $n$) terms in (\[Eq.Os\]).
In the next section, we show how the technique introduced in [@pco07] for characterizing the DM tradeoff of point-to-point selective-fading MIMO channels can be extended to the MA case. Computing the optimal DM tradeoff curve ======================================= Lower bound on $\prob{\outage_\setS}$ ------------------------------------- First, we derive a lower bound on the individual terms $\prob{\outage_\setS}$. We start by noting that for any set $\setS\subseteq \mc{U}$, Jensen’s inequality provides the following upper bound: $$\frac{1}{N} \sum_{n=0}^{N-1} \mutinfexp{}{\frac{\snr}{\mt}\mat{H}_{\setS,n}\mat{H}_{\setS, n}^H} \leq \log\det\sizeparentheses{\mat{I}+\frac{\snr}{\mt N}\channelH_\setS \channelH_\setS^H} \triangleq \jensen_\setS(\snr) \label{Eq.Jensen} $$ where the “Jensen channel” [@pco07] is defined as $$\channelH_\setS = \begin{cases} [\mat{H}_{\setS ,0} \; \mat{H}_{\setS ,1}\; \cdots \; \mat{H}_{\setS ,N-1}], & \text{if $\mr\leq|\setS|\mt$,}\\ [\mat{H}_{\setS ,0}^H \; \mat{H}_{\setS ,1}^H\; \cdots \; \mat{H}_{\setS ,N-1}^H], & \text{if $\mr > |\setS|\mt$.} \end{cases}$$ Consequently, $\channelH_\setS$ has dimension $\minant(\setS) \times N \maxant(\setS)$, where $$\begin{aligned} \minant(\setS)&\triangleq\min(|\setS|\mt,\mr)\\ \maxant(\setS)&\triangleq\max(|\setS|\mt,\mr).\end{aligned}$$ In the following, we say that the event $\joutage_\setS$ occurs if the Jensen channel $\channelH_\setS$ is in outage w.r.t. the rate $r(\setS)\log\snr$, where $r(\setS) = \sum_{u\in \setS} r_u$, i.e., $\joutage_\setS\triangleq\sizecurly{\jensen_\setS(\snr)<r(\setS)\log\snr}$. From (\[Eq.Jensen\]), we can conclude that $\prob{\joutage_\setS}\leq\prob{\outage_\setS}$. We next characterize the Jensen outage probability analytically.
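As a quick sanity check, the Jensen bound (\[Eq.Jensen\]) is just concavity of $\log\det$ in the Gram matrix; for $\mr = 1$ it reduces to scalar concavity of the logarithm. The following sketch (hypothetical parameters, not part of the paper's derivation) verifies it numerically for a random channel realization.

```python
import math
import random

# Check, for mr = 1 and random ||h_{S,n}||^2, that
# (1/N) sum_n log2(1 + (snr/mt) g_n)  <=  log2(1 + snr/(mt N) sum_n g_n).
random.seed(0)
N, mt, S_mt = 4, 2, 4        # slots, per-user antennas, |S|*mt (illustrative)
snr = 10.0
g = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(S_mt))
     for _ in range(N)]      # squared norm of the slot-n channel row
lhs = sum(math.log2(1.0 + (snr / mt) * gn) for gn in g) / N
rhs = math.log2(1.0 + snr / (mt * N) * sum(g))
print(lhs <= rhs)
```

Equality holds only when all slot gains coincide, i.e. when the channel is effectively flat across slots.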
Recalling , we start by expressing $\channelH_\setS$ as $\channelH_\setS = \channelHw (\mat{R}^{T/2} \otimes \mat{I}_{\maxant(\setS)})$, where $\mat{R}=\mat{R}_\mbb{H}$, if $\mr \leq |\setS|\mt$, and $\mat{R}=\mat{R}_\mbb{H}^T$, if $\mr > |\setS|\mt$, and $\channelHw$ is an i.i.d. $\cn{0}{1}$ matrix with the same dimensions as $\channelH_\setS$ given by $$\channelHw = \begin{cases} \; [\mat{H}_{w ,0} \; \mat{H}_{w ,1}\; \cdots \; \mat{H}_{w ,N-1}], & \text{if $\mr\leq|\setS|\mt$,} \\ [\mat{H}_{w ,0}^H \; \mat{H}_{w ,1}^H\; \cdots \; \mat{H}_{w ,N-1}^H], & \text{if $\mr > |\setS|\mt$.}\end{cases} \label{Eq.Hwn}$$ Here, $\mat{H}_{w ,n}$ denotes i.i.d. $\cn{0}{1}$ matrices of dimension $\mr\times |\setS|\mt$. Using $\channelHw \mat{U} \sim \channelHw$, for any unitary $\mat{U}$, and $\lambda_n(\mat{R}_\mbb{H})=\lambda_n(\mat{R}_\mbb{H}^T)$ for all $n$, we get $$\channelH_\setS\channelH_\setS^H \sim \channelHw (\mat{\Lambda}\otimes \mat{I}_{\maxant(\setS)}) \channelHw ^H$$ where $\mat{\Lambda}=\diag{\lambda_1(\mat{R}_\mbb{H}), \lambda_2(\mat{R}_\mbb{H}), \ldots, \lambda_\rho(\mat{R}_\mbb{H}), 0,\ldots, 0}$. 
Setting $\channelHwbar = \channelHw([1:\minant(\setS)], [1:\rho\maxant(\setS)])$, it was shown in [@pco07] that $\prob{\joutage_\setS}$ is nothing but the outage probability of an effective MIMO channel with $\rho\maxant(\setS)$ transmit and $\minant(\setS)$ receive antennas and satisfies $$\begin{aligned} \prob{\joutage_\setS} &\doteq \prob{\log\det\sizeparentheses{\mat{I}_{}+ \snr \:\channelHwbar\channelHwbar^H}\negmedspace< r(\setS) \log\snr}\notag\\ &\doteq \snr^{-d_\setS(r(\setS))}\label{Eq.Outprob2}\end{aligned}$$ where we infer from the results in [@ZheTse02] that $d_\setS(r)$ is the piecewise linear function connecting the points $(r,d_\setS(r))$ for $r=0, 1, \ldots, \minant(\setS)$, with $$d_\setS(r) = (\minant(\setS)-r)(\rho\maxant(\setS)-r).\label{Eq.JensenCurve}$$ Since, as already noted, $\prob{\outage_\setS}\geq\prob{\joutage_\setS}$, it follows from that $$\prob{\outage_\setS} \dotgeq \snr^{-d_\setS(r(\setS))} \label{Eq.LBOs}$$ which establishes the desired lower bound. Error event analysis -------------------- Following [@Gallager85; @TVZ04], we decompose the total error probability into $2^U-1$ disjoint error events according to $$P_e(\codebook_{\bm{r}}) = \sum_{\setS\:\subseteq\;\mc{U}} \prob{\mc{E}_\setS}\label{Eq.PeDecomposition}$$ where the $\setS$-error event $\mc{E}_\setS$ corresponds to *all* the users in $\setS$ being decoded in error and the remaining users being decoded correctly. More precisely, we have $$\label{Eq.SError} \mc{E}_\setS \triangleq \sizecurly{(\hat{\mat{X}}_u\neq \mat{X}_u, \forall u \in \setS)\; \land\; (\hat{\mat{X}}_u= \mat{X}_u, \forall u \in \bar{\setS})}$$ where ${\mat{X}}_u$ and $\hat{\mat{X}}_u$ are, respectively, the transmitted and ML-decoded codewords corresponding to user $u$. We note that, in contrast to the outage events $\outage_\setS$ defined in , the error events $\mc{E}_\setS$ are disjoint.
The following result establishes the DM tradeoff optimal code design criterion for a specific error event $\mc{E}_\setS$. \[Th.CDC\] For every $u\in \setS$, let $\codebook_{r_u}$ have block length $N\geq\rho|\setS|\mt$. Let the nonzero[^4] eigenvalues of $\mat{R}_\mbb{H}^T \odot (\sum_{u\in\setS} \mat{E}_u^H\mat{E}_u)$, where $\mat{E}_u=\mat{X}_u-\mat{X}_u'$ and $\mat{X}_u$, $\mat{X}_u' \in \codebook_{r_u}(\snr)$, be given—in ascending order—at every SNR level by $\uplambda_n(\snr)$, $n=1, 2, \ldots, \rho|\setS|\mt$. Furthermore, set $$\Lambda_{\minant(\setS)}^{\rho|\setS|\mt}(\snr) \triangleq \min_{\begin{subarray}{c} \{\mat{E}_u=\mat{X}_u-\mat{X}_u'\}_{u\in \setS}\\\mat{X}_u,\mat{X}_u' \:\in\: \codebook_{r_u}(\snr)\end{subarray}} \quad \prod_{k=1}^{\minant(\setS)} \uplambda_k(\snr). \label{Eq.DefLambda}$$ If there exists an $\epsilon>0$ independent of $\snr$ and $r$ such that $$\Lambda_{\minant(\setS)}^{\rho|\setS|\mt}(\snr)\dotgeq \snr^{-(r(\setS)-\epsilon)},\label{Eq.ThCDC}$$ then, under ML decoding, $\prob{\mc{E}_\setS} \dotleq \snr^{-d_\setS(r(\setS))}$. We start by deriving an upper bound on the average (w.r.t. the random channel) pairwise error probability (PEP) of an $\setS$-error event. From , we note that $\mat{E}_{u}=[\mat{e}_{u,0} \: \mat{e}_{u,1} \: \cdots \: \mat{e}_{u,N-1}]$, with $\mat{e}_{u,n}=\mat{x}_{u,n}-\mat{x}_{u,n}'$, is nonzero for $u\in \setS$ but $\mat{E}_u=\mat{0}$ for any $u\in \bar{\setS}$.
Assuming, without loss of generality, that $\setS=\{1, \ldots, |\setS|\}$, the probability of the ML decoder mistakenly deciding in favor of the codeword $\mat{X}'$ when $\mat{X}$ was actually transmitted can be upper-bounded in terms of $\mat{X}-\mat{X}'= [\mat{E}_{1}\:\cdots\:\mat{E}_{|\setS|}\: \mat{0}\:\cdots\:\mat{0}]$ as $$\begin{split} &\prob{\mat{X}\rightarrow\mat{X}'}\\ &\negmedspace\leq \mathbb{E}_{\{\mat{H}_{\setS,n}\}_{n=0}^{N-1}}\sizecurly{\negthinspace\expf{\negthinspace-\frac{\snr}{4\mt}\sum_{n=0}^{N-1}\trace{\mat{H}_{\setS,n}\mat{e}_n\mat{e}_n^H\mat{H}_{\setS,n}^H}\negmedspace}\negmedspace}\label{p1} \end{split}$$ where $$\trace{\mat{H}_{\setS,n}\mat{e}_n\mat{e}_n^H\mat{H}_{\setS,n}^H}={\left\lVert\sum_{u\in\setS}\mat{H}_{u,n} \mat{e}_{u,n}\right\rVert}^2$$ with $\mat{H}_{\setS,n}$ defined in and $\mat{e}_n=[\mat{e}_{u_1,n}^T\:\cdots\:\mat{e}_{u_{|\setS|},n}^T]^T$. Setting $\mat{H}_{\setS}=[\mat{H}_{\setS,0}\;\mat{H}_{\setS,1}\;\cdots\;\mat{H}_{\setS,N-1}]$, we get from $$\begin{aligned} &\prob{\mat{X}\rightarrow\mat{X}'} \notag \\ &\leq \mathbb{E}_{\mat{H}_\setS}\sizecurly{\expf{-\frac{\snr}{4\mt}\trace{\mat{H}_{\setS}\:\diag{\mat{e}_n\mat{e}_n^H}_{n=0}^{N-1}\mat{H}_{\setS}^H}}}\notag\\ &= \mathbb{E}_{\mat{H}_w}\sizecurly{\expf{-\frac{\snr}{4\mt}\trace{\mat{H}_{w}\mat{\Upsilon}\mat{\Upsilon}^H\mat{H}_{w}^H}}}\label{p2}\end{aligned}$$ where we have used $\mat{H}_\setS=\mat{H}_w (\mat{R}_\mbb{H}^{T/2} \otimes \mat{I}_{|\setS|\mt})$ with $\mat{H}_w$ an $\mr \times N |\setS|\mt$ matrix with i.i.d. $\cn{0}{1}$ entries and $$\mat{\Upsilon}=(\mat{R}_{\mbb{H}}^{T/2} \otimes \mat{I}_{|\setS|\mt}) \:\diag{\mat{e}_n}_{n=0}^{N-1}.\label{Eq.WhiteMat}$$ We note that $\mat{\Upsilon}^H\mat{\Upsilon}=\mat{R}_\mbb{H}^T \odot (\sum_{u\in\setS}\mat{E}_u^H\mat{E}_u)$, where we have $\rank{\sum_{u\in\setS}\mat{E}_u^H\mat{E}_u}\leq |\setS|\mt$ because $\mat{E}_u$ has dimension $\mt\times N$ and $N\geq |\setS|\mt$ by assumption.
Recalling that $\rank{\mat{R}_\mbb{H}}=\rho$ and using the property $\rank{\mat{A}\odot\mat{B}}\leq\rank{\mat{A}}\rank{\mat{B}}$, it follows that $\rank{\mat{\Upsilon}^H\mat{\Upsilon}}\leq\rho|\setS|\mt$, which is to say that $\mat{\Upsilon}^H\mat{\Upsilon}$ has at most $\rho|\setS|\mt$ eigenvalues that are not identically equal to zero for all SNRs. We stress, however, that these eigenvalues may decay to zero as a function of SNR. Next, using the fact that for any matrix $\mat{A}$ the nonzero eigenvalues of $\mat{A}\mat{A}^H$ equal the nonzero eigenvalues of $\mat{A}^H\mat{A}$, the assumption (made in the statement of the theorem) that $\mat{R}_\mbb{H}^T \odot (\sum_{u\in\setS}\mat{E}_u^H\mat{E}_u)$ has $\rho|\setS|\mt$ eigenvalues that are not identically equal to zero for all SNRs implies that so does $\mat{\Upsilon \Upsilon}^H$. The remainder of the proof proceeds along the lines of the proof of [@pcoj07 Th. 1]. In particular, we split and subsequently bound the $\setS$-error probability as $$\begin{aligned} \prob{\mc{E}_\setS} &= \prob{\mc{E}_\setS, \joutage_\setS} + \prob{\mc{E}_\setS, \bar{\joutage}_\setS}\notag\\ &= \prob{\joutage_\setS} \underbrace{\prob{\mc{E}_\setS| \joutage_\setS}}_{\leq 1} \notag\\ &{\hspace{7mm}}+ \underbrace{\prob{\bar{\joutage}_\setS}}_{\leq 1} \prob{\mc{E}_\setS| \bar{\joutage}_\setS}\notag\\ &\leq \prob{\joutage_\setS} +\prob{\mc{E}_\setS | \bar{\joutage}_\setS}.\label{Eq.UpBoundErrorProb}\end{aligned}$$ As detailed in the proof for the point-to-point case given in [@pcoj07], the code design criterion yields the following upper bound on the second term in : $$\begin{aligned} \label{Eq.UnionBound} \prob{\mc{E}_\setS| \bar{\joutage}_\setS} &\dotleq \snr^{Nr(\setS)} \expf{- \frac{\snr^{\epsilon/{\minant(\setS)}}}{4\mt}}.\end{aligned}$$ In contrast to the Jensen outage probability which satisfies $\prob{\joutage_\setS} \doteq \snr^{-d_\setS(r(\setS))}$, the RHS of decays exponentially in SNR. 
Hence, upon inserting into , we get $\prob{\mc{E}_\setS}\dotleq \prob{\joutage_\setS}$, and can therefore conclude that $\prob{\mc{E}_\setS} \dotleq \snr^{-d_\setS(r(\setS))}$. In summary, for every $\mc{E}_\setS$, constitutes a sufficient condition on $\{\codebook_{r_u}: u\in \setS\}$ for $\prob{\mc{E}_\setS}$ to be exponentially upper-bounded by $\prob{\joutage_\setS}$. This condition is nothing but the DM tradeoff optimal code design criterion for a point-to-point channel with $|\setS|\mt$ transmit antennas and $\mr$ receive antennas presented in [@pcoj07]. In order to satisfy this condition, the users’ codebooks have to be designed jointly. We stress, however, that this does not require cooperation among users at the time of communication. We are now ready to establish the optimal DM tradeoff for the selective-fading MAC and provide corresponding design criteria on the overall family of codes $\codebook_{\bm{r}}$. Optimal code design ------------------- We start by noting that implies $\prob{\outage} \geq \prob{\outage_\setS}$ for any $\setS\subseteq \mc{U}$, which combined with gives rise to $2^U-1$ lower bounds on $\prob{\outage}$. For a given multiplexing rate tuple $\bm{r}$, the tightest lower bound (exponentially in SNR) corresponds to the set $\setS$ that yields the smallest SNR exponent $d_\setS(r(\setS))$. More precisely, the tightest lower bound is $$\prob{\outage} \dotgeq \snr^{-d_{\setS^\star}(r(\setS^\star))} \label{Eq.LB}$$ with the dominant outage event given by $\outage_{\setS^\star}$, where $$\setS^\star \triangleq \underset{\setS\:\subseteq\:\mc{U}}{\arg\min} \; d_\setS(r(\setS))\label{Eq.TEE}$$ is the dominant outage set. Next, we show that, for any multiplexing rate tuple, the total error probability $P_e(\codebook_{\bm{r}})$ can be made exponentially equal to the RHS of by appropriate design of the users’ codebooks.
As a direct consequence thereof, using , , and $P_e(\codebook_{\bm{r}}) \dotgeq \prob{\outage}$ [@TVZ04 Lemma 7], we then obtain that $d_{\setS^\star}(r(\setS^\star))$ constitutes the optimal DM tradeoff of the selective-fading MIMO MAC. Before presenting this result, let us define the function $r_{\scriptscriptstyle\setS}(d)$ as the inverse of $d_\setS(r)$, i.e., $d=d_\setS\big(r_{\scriptscriptstyle\setS}(d)\big)$ and $r=r_{\scriptscriptstyle\setS}\big(d_\setS(r)\big)$. We note that $r_{\scriptscriptstyle\setS}(d)$ is a decreasing function of $d$ and $d_\setS(r)$ is a decreasing function of $r$. \[Th.Proc\] The optimal DM tradeoff of the selective-fading MIMO MAC in is given by $d^\star(\bm{r})=d_{\setS^\star}(r(\setS^\star))$, that is $$\label{Eq.ThOptDMT} d^\star(\bm{r})=(\minant(\setS^\star)-r(\setS^\star))(\rho\maxant(\setS^\star)-r(\setS^\star)).$$ Moreover, if the overall family of codes $\codebook_{\bm{r}}$ satisfies for the dominant outage set $\setS^\star$ and, for every $\setS\neq\setS^\star$, there exists $\epsilon>0$ such that $$\label{Eq.NewCDC} \Lambda_{\minant(\setS)}^{\rho|\setS|\mt}(\snr)\dotgeq \snr^{-(\gamma_\setS-\epsilon)}$$ where $$0\leq \gamma_\setS \leq r_\setS(d_{\setS^\star}(r(\setS^\star)))\label{Eq.gamma}$$ then $$\label{Eq.ThOptCode} d(\codebook_{\bm{r}}) =d^\star(\bm{r}).$$ Using , we write $$\begin{aligned} \label{tmp1} P_e(\codebook_{\bm{r}}) = \prob{\mc{E}_{\setS^\star}} + \sum_{\setS\neq\setS^\star} \prob{\mc{E}_{\setS}}.\end{aligned}$$ We bound the terms in the sum on the RHS of separately.
By assumption, $\codebook_{\bm{r}}$ satisfies for $\setS^\star$ and, hence, it follows from Theorem \[Th.CDC\] and that $$\label{tmp2} \prob{\mc{E}_{\setS^\star}} \dotleq \snr^{-d_{\setS^\star}(r(\setS^\star))} \doteq \prob{\joutage_{\setS^\star}}.$$ Next, we consider the terms $\prob{\mc{E}_\setS}$ for $\setS\neq \setS^\star$ and use to write $$\begin{aligned} \prob{\mc{E}_\setS}&\leq \prob{\joutage_\setS} + \prob{\mc{E}_\setS| \bar{\joutage}_\setS}\notag\\ &\doteq \snr^{-d_\setS(\gamma_\setS)}\label{tmp3}\end{aligned}$$ where is obtained by the same reasoning as used in the proof of Theorem \[Th.CDC\] with the users’ codebooks $\{\mc{C}_{r_u}:u\in\setS\}$ satisfying instead of . Inserting and into yields $$\begin{aligned} P_e(\codebook_{\bm{r}}) &\dotleq \snr^{-d_{\setS^\star}(r(\setS^\star))} + \sum_{\setS\neq\setS^\star} \snr^{-d_\setS(\gamma_\setS)}\label{tmp4}\\ &\doteq \snr^{-d_{\setS^\star}(r(\setS^\star))}\label{tmp5}\end{aligned}$$ where follows from the fact that implies $d_\setS(\gamma_\setS) \geq d_{\setS^\star}(r(\setS^\star))$, for all $\setS\neq \setS^\star$, and consequently, the dominant outage event dominates the upper bound on the total error probability. With $P_e(\codebook_{\bm{r}}) \dotgeq \prob{\outage}$ [@TVZ04 Lemma 7], combining and yields $$P_e(\codebook_{\bm{r}}) \doteq \prob{\outage} \doteq \snr^{-d_{\setS^\star}(r(\setS^\star))}\label{tee1}.$$ Since, by definition, $d(\codebook_{\bm{r}}) \leq d^\star(\bm{r})$, using , we can finally conclude from that $$d(\codebook_{\bm{r}}) = d^\star(\bm{r})= d_{\setS^\star}(r(\setS^\star)).\label{tee5}$$ As a consequence of Theorem \[Th.Proc\], the optimal DM tradeoff is determined by the tradeoff curve $d_{\setS^\star}(r(\setS^\star))$, which is simply the SNR exponent of the Jensen outage probability $\prob{\mathcal{J}_{\setS^\star}}$ corresponding to the dominant outage set. 
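The minimization defining the dominant outage set can be carried out by brute force over the $2^U-1$ nonempty user subsets. The following Python sketch (function and variable names are ours, not from the paper) evaluates the exponent at integer multiplexing rates, where the piecewise-linear tradeoff curve coincides with the product formula $d_\setS(r) = (\minant(\setS)-r)(\rho\maxant(\setS)-r)$:

```python
from itertools import combinations

def dominant_outage_set(rates, mt, mr, rho):
    """Return (S_star, d) minimizing d_S(r(S)) over nonempty subsets S,
    i.e., the argmin defining the dominant outage set. Integer per-user
    rates are assumed, so the product form of d_S applies directly."""
    U = len(rates)
    best = None
    for size in range(1, U + 1):
        for S in combinations(range(U), size):
            rS = sum(rates[u] for u in S)
            m = min(size * mt, mr)        # minant(S)
            M = max(size * mt, mr)        # maxant(S)
            dS = (m - rS) * (rho * M - rS)
            if best is None or dS < best[1]:
                best = (S, dS)
    return best
```

For instance, with $\mt=3$, $\mr=4$, $\rho=2$ and rates $(2,1)$, the single-user event of user 1 dominates with exponent $6$, consistent with the example discussed below.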
By virtue of , , and the fact that the relations $\prob{\outage_\setS}\leq\prob{\outage}$ and $\prob{\joutage_\setS}\leq\prob{\outage_\setS}$ hold for every $\setS$ and, a fortiori, for the dominant outage set $\setS^\star$, we get $$\prob{\mathcal{O}_{\setS^\star}} \dotleq \prob{\mathcal{O}} \doteq \prob{\mathcal{J}_{\setS^\star}} \dotleq \prob{\mathcal{O}_{\setS^\star}}$$ which is to say that $$\prob{\mathcal{O}_{\setS^\star}} \doteq \snr^{-d_{\setS^\star}(r(\setS^\star))}.$$ Hence, as in the point-to-point case [@pcoj07], the Jensen upper bound on mutual information yields a lower bound on the outage probability which is exponentially tight (in SNR). In order to achieve DM tradeoff optimal performance, the families of codes $\{\codebook_{r_u}, u\in\mc{U}\}$ are required to satisfy for the dominant outage set $\setS^\star$ and, in addition, the probability $\prob{\mc{E}_{\setS}}$ corresponding to the sets $\setS\neq\setS^\star$ should decay at least as fast as $\prob{\outage_{\setS^\star}}=\prob{\joutage_{\setS^\star}}$, a requirement that is guaranteed when is satisfied for every $\setS\neq\setS^\star$. Note that this code design criterion is less stringent than requiring all the terms $\prob{\mc{E}_\setS}$ to satisfy condition , as originally proposed in [@CGB08 Th. 2]. We conclude by pointing out that the code design criterion in Theorem \[Th.Proc\] was shown to be necessary and sufficient for DM tradeoff optimality in Rayleigh flat-fading MACs in [@AB09]. We stress, however, that there exist codes—at least in the two-user flat-fading case—that satisfy in Theorem \[Th.CDC\] for all $\setS\subseteq\mc{U}$ as we will show in Section \[Sec.CodeEx\]. Dominant outage event regions ----------------------------- The following example illustrates the application of Theorem \[Th.Proc\] to the two-user case, and reveals the existence of multiplexing rate regions dominated by different outage events. 
Remarkably, although the error mechanism at play here (outage) is different from the one in [@Gallager85], the dominant outage event regions we obtain have a striking resemblance to the dominant error event regions found in [@Gallager85]. ### Example {#example .unnumbered} We assume $\mt=3$, $\mr=4$, and $\rank{\mat{R}_\mbb{H}}=\rho=2$. For $U=2$, the $2^2-1 = 3$ possible outage events are denoted by $\mc{O}_1$ (user 1 is in outage), $\mc{O}_2$ (user 2 is in outage) and $\mc{O}_3$ (the channel obtained by concatenating both users’ channels into an equivalent point-to-point channel is in outage). The SNR exponents of the corresponding outage probabilities are obtained from as $$\begin{aligned} d_u(r_u) &= (3-r_u)(8-r_u), \quad u=1,2,\label{Eq.Ex}\\ d_3(r_1+r_2) &= \big(4-(r_1+r_2)\big)\big(12-(r_1+r_2)\big)\label{Eq.Ex2}.\end{aligned}$$ Based on and , we can now explicitly determine the dominant outage event for every multiplexing rate tuple $\bm{r}=(r_1, r_2)$. In Fig. \[Fig.TEE\], we plot the rate regions dominated by the different outage events. Note that the boundaries $r_1<3$, $r_2<3$, and $r_1+r_2 <4$ are determined by the ergodic capacity region. In the rate region dominated by $\mc{O}_1$, we have $d_1(r_1)<d_2(r_2)$ and $d_1(r_1)<d_3(r_1+r_2)$, implying that the SNR exponent of the total error probability equals $d_1(r_1)$, i.e., the SNR exponent that would be obtained in a point-to-point selective-fading MIMO channel with $\mt=3$, $\mr=4$, and $\rho=2$. The same reasoning applies to the rate region dominated by $\mc{O}_2$ and, hence, we can conclude that, in the sense of the DM tradeoff, the performance in regions $\mc{O}_1$ and $\mc{O}_2$ is not affected by the presence of the respective other user. In contrast, in the area dominated by $\mc{O}_3$, we have $d_3(r_1+r_2)<d_u(r_u)$, $u=1,2$, which is to say that multiuser interference does have an impact on the DM tradeoff and reduces the diversity order that would be obtained if only one user were present. 
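The partition of the rate plane shown in Fig. \[Fig.TEE\] can be reproduced directly from the exponent expressions above; a minimal Python sketch (names ours), valid at rate pairs where the stated formulas apply:

```python
def dominant_event(r1, r2):
    """Classify a rate pair for the example mt = 3, mr = 4, rho = 2,
    using the outage-probability SNR exponents d_1, d_2, d_3."""
    assert 0 <= r1 < 3 and 0 <= r2 < 3 and r1 + r2 < 4  # capacity region
    d = {"O1": (3 - r1) * (8 - r1),
         "O2": (3 - r2) * (8 - r2),
         "O3": (4 - r1 - r2) * (12 - r1 - r2)}
    return min(d, key=d.get)   # event with the smallest SNR exponent
```

Sweeping this function over a grid of $(r_1, r_2)$ pairs recovers the three regions of the figure: single-user events dominate when one rate is large relative to the other, and the joint event dominates near the sum-rate boundary.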
![\[Fig.TEE\] Dominant outage event regions for a two-user MA MIMO channel with $\mt=3$, $\mr=4$, and $\rho=2$.](reg1){width=".7\textwidth"} Fig. \[Fig.TEE2\] shows the dominant outage event regions for the same system parameters as above but with one additional receive antenna, i.e., $\mr=5$. We observe not only that larger sum multiplexing rates are achievable, i.e., $r_1+r_2 \leq 5$, but also that the area where $\mc{O}_3$ dominates the total error probability, and hence where multiuser interference reduces the achievable diversity order, is significantly smaller relative to the area dominated by the single user outage events $\mc{O}_1$ and $\mc{O}_2$. This effect can be attributed to the fact that increasing $\mr$ yields more spatial degrees of freedom at the receiver and, consequently, alleviates the task of resolving multiuser interference. ![\[Fig.TEE2\] Dominant outage event regions for a two-user MA MIMO channel with $\mt=3$, $\mr=5$, and $\rho=2$.](reg2){width=".7\textwidth"} Multiplexing rate region at a given diversity level --------------------------------------------------- The dominant outage event determines the maximum achievable diversity order as a function of the multiplexing rate tuple $\bm{r}$. Conversely, one may also be interested in finding the region $\mc{R}(d)$ of achievable multiplexing rates at a minimum diversity order $d\in[0,\rho \mt\mr]$ associated with the total error probability. This is accomplished by designing an overall family of codes that is DM tradeoff optimal and satisfies $$\label{cmp1} d_{\setS}(r(\setS))\geq d, \quad \forall \setS\subseteq\mc{U}$$ which upon application of $r_{\scriptscriptstyle\setS}(\cdot)$ to both sides is found to be equivalent to $$r(\setS) \leq r_{\scriptscriptstyle\setS}(d), \quad \forall \setS\subseteq\mc{U}.$$ We just proved the following extension of [@TVZ04 Th. 2] to selective-fading MA MIMO channels.
\[Cor.MRR\] Consider an overall family of codes $\codebook_{\bm{r}}$ that achieves the optimal DM tradeoff in the sense of Theorem \[Th.Proc\]. Then, the region of multiplexing rates for which the total error probability decays with SNR exponent at least equal to $d$ is characterized by $$\mc{R}(d) \triangleq \bigg\{\bm{r}: r(\setS) \leq r_{\scriptscriptstyle\setS}(d), \forall \setS\subseteq\mc{U}\bigg\}\label{eq:MRR}$$ where $r_{\scriptscriptstyle\setS}(d)$ is the inverse function of $d_\setS(r)$. To illustrate the concept of a multiplexing rate region [@TVZ04], consider the two-user case with $\mt=3$, $\mr=4$, and $\rho=2$. Fig. \[Fig.mrr\] shows the multiplexing rate regions $\mc{R}(d)$ corresponding to several diversity order levels, i.e., $d \in \{0,2,4,8,16\}$. The region $\mc{R}(0)$ is the pentagon described by the constraints $r_1\leq 3$, $r_2\leq 3$, and $r_1+r_2\leq \min(2\mt,\mr)=4$. Higher diversity order can be achieved at the expense of tighter constraints on the achievable multiplexing rates $r_1$ and $r_2$. For instance, for a diversity order requirement of $d\geq 8$, the achievable multiplexing rate region is given by the pentagon $0\mathrm{ABCD}$. Increasing the minimum required diversity order results in multiplexing rate regions that shrink towards the origin. Note that to realize a diversity order requirement of $d\geq 16$, the allowed multiplexing rate region is a square; in this case, performance (in the sense of the DM tradeoff) is not affected by the presence of a second user. Intuitively, the required diversity order is so high that users can only communicate at very small multiplexing rates and multiuser interference does not dominate the total error probability. 
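Membership in $\mc{R}(d)$ can be tested without inverting $d_\setS(r)$ explicitly, since $r(\setS)\leq r_{\scriptscriptstyle\setS}(d)$ is equivalent to $d_\setS(r(\setS))\geq d$ together with $r(\setS)\leq\minant(\setS)$. A sketch in Python (helper names ours), using the piecewise-linear tradeoff curve for non-integer rates:

```python
import math
from itertools import combinations

def pl_exponent(r, m, M, rho):
    """Piecewise-linear curve through (k, (m-k)(rho*M-k)), k = 0..m."""
    if r >= m:
        return 0.0
    k = math.floor(r)
    p = lambda j: (m - j) * (rho * M - j)
    t = r - k
    return (1 - t) * p(k) + t * p(k + 1)

def in_region(rates, d, mt, mr, rho):
    """Check rates in R(d): d_S(r(S)) >= d for every nonempty subset S."""
    U = len(rates)
    for size in range(1, U + 1):
        for S in combinations(range(U), size):
            rS = sum(rates[u] for u in S)
            m, M = min(size * mt, mr), max(size * mt, mr)
            if rS > m or pl_exponent(rS, m, M, rho) < d:
                return False
    return True
```

With $\mt=3$, $\mr=4$, $\rho=2$, the pair $(1.5, 1.5)$ lies in $\mc{R}(8)$ while $(2, 1)$ does not, matching the pentagon $0\mathrm{ABCD}$ of Fig. \[Fig.mrr\].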
![\[Fig.mrr\] Multiplexing rate regions as a function of the diversity order $d\in\sizecurly{0,2,4,8,16}$ corresponding to the total error probability ($\mt=3$, $\mr=4$, and $\rho=2$).](r){width=".7\textwidth"} Analysis of a code construction for the two-user flat-fading case {#Sec.CodeEx} ================================================================= In this section, we study the algebraic code construction proposed recently in [@BadBel08] for flat-fading MACs with two single-antenna users and an arbitrary number of antennas at the receiver. We examine whether this code satisfies the code design criteria of Theorem \[Th.Proc\] and focus on the case of a two-antenna receiver, for simplicity. We start by briefly reviewing the code construction described in [@BadBel08] for a system with $\mt=1$, $\mr=2$, $U=2$, $N=2$, and $\rho=1$ (i.e., flat fading). For each user $u$, let $\mc{A}_u$ denote a QAM constellation with $2^{R_u'(\snr)}$ points carved from $\mbb{Z}[i]=\{k+il:k,l\in\mbb{Z}\}$, where $i=\sqrt{-1}$ and $R_u'(\snr)=(r_u-\epsilon)\log\snr$ for some $\epsilon>0$, i.e., $$\label{constellation} \mc{A}_u = \sizecurly{(k+il): \frac{-2^{R_u'(\snr)/{2}}}{{2}}\leq k, l\leq\frac{2^{R_u'(\snr)/2}}{{2}}, \;k, l\in\mbb{Z}}.$$ The proposed code spans two slots so that the vector of information symbols corresponding to user $u$ is given by $\mat{s}_u = [s_{u,1}\; s_{u,2}]$, where $s_{u,1}, s_{u,2} \in \mc{A}_u$. 
The vector $\mat{s}_u$ is then encoded using the unitary transformation matrix $\mat{U}$ underlying the Golden Code [@BelRekVit05] according to $$\label{Eq.GoldenTransform} \tilde{\mat{x}}_u^T = \mat{U}\:\mat{s}_u^T = \begin{bmatrix}{x}_u \\ \sigma({x}_u) \end{bmatrix}\; \text{with }\mat{U}=\frac{1}{\sqrt{5}}\begin{bmatrix} \alpha & \alpha\varphi \\ \bar{\alpha} & \bar{\alpha}\bar{\varphi}\end{bmatrix}$$ where $\varphi=\frac{1+\sqrt{5}}{2}$ denotes the Golden number with corresponding conjugate $\bar{\varphi}=\frac{1-\sqrt{5}}{2}$, $\alpha=1+i-i\varphi$ and $\bar{\alpha}=1+i-i\bar{\varphi}$. By construction, $x_u$ belongs to the quadratic extension $\mbb{Q}(i,\sqrt{5})$ over $\mbb{Q}(i)=\sizecurly{k+il : k,l\in \mbb{Q}}$. Here, $\sigma$ denotes the generator of the Galois group of $\mbb{Q}(i,\sqrt{5})$ given by $$\label{Eq.Gen} \begin{array}{cccc} \sigma: & \mbb{Q}(i,\sqrt{5}) & \rightarrow & \mbb{Q}(i,\sqrt{5})\\ & a+ b \sqrt{5} & \mapsto & a -b\sqrt{5}. \end{array}$$ Moreover, one of the users, say user 2, multiplies the symbol transmitted in the first slot by a constant $\gamma\in \mbb{Q}(i)$, resulting in the transmit codeword $$\label{Eq.Codeword} \tilde{\mat{X}} = \begin{bmatrix}{x}_{1}& \sigma({x}_{1}) \\ \gamma{x}_{2} & \sigma({x}_2) \end{bmatrix}.$$ Depending on the choice of the parameter $\gamma$, the codeword difference matrices arising from this construction have a nonzero determinant. For completeness, we shall next provide a proof of this statement which was originally made in [@BadBel08].
\[Th3\] For any $\gamma \neq \pm 1$ and any two $\tilde{\mat{X}}, \tilde{\mat{X}}'$ according to , it holds that $\det(\mat{\Delta})\neq 0$, where $\mat{\Delta}=\tilde{\mat{X}}-\tilde{\mat{X}}'$. Proceeding along the same lines as [@BadBel08], we start by proving that the determinant corresponding to any codeword $\tilde{\mat{X}}$ in is nonzero for any $\gamma\neq\pm1$, and hence, by the linearity of the mapping $\sigma$ over $\mbb{Q}(i,\sqrt{5})$, the determinant of any codeword difference matrix is also nonzero. Note that $$\begin{aligned} \det(\tilde{\mat{X}}) &= x_1\sigma(x_2)-\gamma x_2\sigma(x_1)\notag\\ &= x-\gamma \sigma(x) \label{rn}\end{aligned}$$ where the last step follows from setting $x=x_1\sigma(x_2)$, noting that $\sigma(\sigma(x))=x$ for any $x\in\mbb{Q}(i,\sqrt{5})$, and using the property $\sigma(x\cdot y) = \sigma(x)\cdot\sigma(y)$ for every $x, y\in\mbb{Q}(i,\sqrt{5})$. Hence, $\det(\tilde{\mat{X}})$ is zero if and only if $\gamma$ satisfies $\gamma = x/\sigma(x)$. In this case, recalling that $\gamma \in \mbb{Q}(i)$, we must have $x \in \mbb{Q}(i)$, or $x \in \sqrt{5}\mbb{Q}(i)=\sizecurly{\sqrt{5}(k+il) : k,l\in \mbb{Q}}$. These constraints yield, respectively, $\gamma=x/\sigma(x)= 1$ and $\gamma=x/\sigma(x) = -1$, from which we can infer that $\det(\tilde{\mat{X}})=0$ $\iff$ $\gamma=\pm 1$. Hence, any $\gamma\in\mbb{Q}(i)\backslash\{\pm1\}$ guarantees $\det(\tilde{\mat{X}})\neq 0$ for $x_1, x_2 \in \mbb{Q}(i,\sqrt{5})$. We are now ready to examine whether this construction satisfies the code design criteria for DM tradeoff optimality given in Theorem \[Th.Proc\]. For simplicity, we assume $\gamma=i$ in the following. We start by considering the cases $\setS=\{1\}$ and $\setS=\{2\}$. Assume that $\mc{A}_u$ is chosen according to .
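The nonvanishing-determinant claim of Theorem \[Th3\] can be spot-checked numerically. The sketch below (helper names ours) enumerates all symbol-difference pairs with entries in $\{-1,0,1\}+\{-1,0,1\}i$, restricting to the joint error event where both users' differences are nonzero, and uses floating-point arithmetic rather than exact algebraic computation:

```python
import itertools
import math

SQRT5 = math.sqrt(5.0)
PHI, PHIB = (1 + SQRT5) / 2, (1 - SQRT5) / 2       # Golden number, conjugate
ALPHA, ALPHAB = 1 + 1j - 1j * PHI, 1 + 1j - 1j * PHIB

def golden_pair(d1, d2):
    """Map a symbol-difference pair (d1, d2) to (e, sigma(e)) via the
    Golden-code transform e = alpha*(d1 + phi*d2)/sqrt(5)."""
    e  = ALPHA  * (d1 + PHI  * d2) / SQRT5
    se = ALPHAB * (d1 + PHIB * d2) / SQRT5
    return e, se

def min_abs_det(gamma):
    """min |det(Delta)| = |e1*sigma(e2) - gamma*e2*sigma(e1)| over
    nonzero difference pairs for both users (the joint error event)."""
    zi = [complex(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]
    best = float("inf")
    for d1a, d1b, d2a, d2b in itertools.product(zi, repeat=4):
        if (d1a == 0 and d1b == 0) or (d2a == 0 and d2b == 0):
            continue
        e1, se1 = golden_pair(d1a, d1b)
        e2, se2 = golden_pair(d2a, d2b)
        best = min(best, abs(e1 * se2 - gamma * e2 * se1))
    return best
```

Consistent with the theorem, the minimum is bounded away from zero for $\gamma=i$, while for the excluded value $\gamma=1$ choosing identical differences for both users yields a vanishing determinant.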
By and the fact that $\mat{U}$ is unitary, we obtain $$\begin{aligned} \max_{\begin{subarray}{c}\tilde{\mat{x}}_u\: : \: \tilde{\mat{x}}_u \: = \: {\mat{s}}_u\mat{U}^T \\ {s}_{u,1}, s_{u,2} \:\in\: \mc{A}_u\end{subarray}} {\left\lVert\tilde{\mat{x}}_u\right\rVert}^2 &= \max_{{s}_{u,1}, s_{u,2} \:\in\: \mc{A}_u}{\left\lVert{\mat{s}}_u\right\rVert}^2\notag\\ &=2\:\sizeparentheses{\frac{2^{R_u'(\snr)}}{2}}\label{cmp6}\end{aligned}$$ for $u=1,2$. In order to satisfy the power constraint , we scale the transmit vector corresponding to user $u$ as $$\label{Eq.scaling} \mat{x}_u =\sizeparentheses{\frac{2^{R_u'(\snr)}}{2}}^{-1/2} \:\tilde{\mat{x}}_u$$ so that, using , we get $$\begin{aligned} \max_{\mat{x}_u \:\in\: \codebook_{r_u}(\snr)}{\left\lVert\mat{x}_u\right\rVert}^2 &= \sizeparentheses{\frac{2^{R_u'(\snr)}}{2}}^{-1} \max_{\begin{subarray}{c}\tilde{\mat{x}}_u\: : \: \tilde{\mat{x}}_u \: = \: {\mat{s}}_u\mat{U}^T \\ {s}_{u,1}, s_{u,2} \:\in\: \mc{A}_u\end{subarray}} {\left\lVert\tilde{\mat{x}}_u\right\rVert}^2\notag\\ &= 2.\label{cmp2}\end{aligned}$$ For user 2, we note that remains valid after multiplying the first entry of $\mat{x}_2$ by $\gamma=i$. Next, we note that in the flat-fading case $\mat{R}_\mbb{H}^T\odot(\mat{e}_u^H\mat{e}_u)= \mat{e}_u^H\mat{e}_u$, where $\mat{e}_u=\mat{x}_u-\mat{x}_u'$ for $\mat{x}_u,\mat{x}_u' \in \codebook_{r_u}(\snr)$ and $u=1,2$. 
Considering user 1, i.e., $\setS=\{1\}$, we have $|\setS|=1$ and $\minant(\setS)=1$ so that the quantity defined in is simply the smallest squared norm of the first row in and satisfies $$\begin{aligned} \Lambda_1^{1}(\snr) &= \min_{\begin{subarray}{c} \mat{x}_1,\mat{x}_1' \: \in\: \codebook_{r_1}(\snr)\end{subarray}} \quad {\left\lVert\mat{x}_1-\mat{x}_1'\right\rVert}^2\notag\\ &= 2^{1-R_1'(\snr)}\:\min_{\begin{subarray}{c}\tilde{\mat{x}}_1\: : \: \tilde{\mat{x}}_1 \: = \: {\mat{s}}_1\mat{U}^T\:;\: \tilde{\mat{x}}_1'\: : \: \tilde{\mat{x}}_1' \: = \: {\mat{s}}_1'\mat{U}^T\\{s}_{1,1}, \: s_{1,2}, \:{s}_{1,1}', \: s_{1,2}'\:\in\: \mc{A}_1\end{subarray}} \quad {\left\lVert\tilde{\mat{x}}_1-\tilde{\mat{x}}_1'\right\rVert}^2\label{cmp4}\\ &= 2^{1-R_1'(\snr)}\: \min_{{s}_{1,1}, \: s_{1,2}, \:{s}_{1,1}', \: s_{1,2}'\:\in\: \mc{A}_1} \underbrace{||\mat{s}_1-\mat{s}_1'||^2}_{\geq \:d_{\min}^2}\label{cmp5}\end{aligned}$$ where follows from , and is a consequence of $\tilde{\mat{x}}_1^T = \mat{U}\mat{s}_1^T$ and the unitarity of $\mat{U}$. From , we note that $d_{\min}=1$, i.e., the minimum distance in $\mc{A}_1$ is independent of SNR, and invoking $R_1'(\snr)=(r_1-\epsilon) \log\snr$, we can conclude from that $$\Lambda_1^{1}(\snr)\doteq \snr^{-(r_1-\epsilon)}.$$ For user 2, a similar argument[^5] shows that $\Lambda_1^{1}(\snr)\doteq \snr^{-(r_2-\epsilon)}$ and, hence, the code satisfies the criteria arising from for $\setS=\{1\}$ and $\setS=\{2\}$. Next, we consider the case $\setS=\{1,2\}$. 
The overall transmit codeword is now given by $$\label{Eq.TxCodeword} \mat{X}= \sqrt{2} \begin{bmatrix} 2^{-R_1'(\snr)/2} \:{x}_{1}& 2^{-R_1'(\snr)/2} \:\sigma({x}_{1}) \\ 2^{-R_2'(\snr)/2} \:i{x}_{2} & 2^{-R_2'(\snr)/2} \:\sigma({x}_2) \end{bmatrix}$$ and satisfies the power constraint , i.e., $$\begin{aligned} \max_{\mat{X}\:\in\:\codebook_{\bm{r}}(\snr)}{\left\lVert\mat{X}\right\rVert}_{\mathrm{F}}^2 &= \max_{\mat{X}\:\in\:\codebook_{\bm{r}}(\snr)}\trace{\mat{XX}^H}\\ &=\max_{\mat{x}_1 \:\in\: \codebook_{r_1}(\snr)}{\left\lVert\mat{x}_1\right\rVert}^2+\max_{\mat{x}_2 \:\in\: \codebook_{r_2}(\snr)}{\left\lVert\mat{x}_2\right\rVert}^2\\ &= 4.\end{aligned}$$ From and the linearity of the mapping $\sigma$ over $\mbb{Q}(i,\sqrt{5})$, the codeword difference matrix is obtained as $$\label{Eq.MatE} \mat{E}= \sqrt{2} \begin{bmatrix} 2^{-R_1'(\snr)/2} \:{e}_{1}& 2^{-R_1'(\snr)/2} \:\sigma({e}_{1}) \\ 2^{-R_2'(\snr)/2} \:i {e}_{2} & 2^{-R_2'(\snr)/2} \:\sigma({e}_2) \end{bmatrix}$$ where $e_u = x_u-x_u'$ and hence $e_u \in \mbb{Q}(i,\sqrt{5})$, $u=1,2$. Recall that in the flat-fading case $\mat{R}_\mbb{H}^T\odot(\mat{E}^H\mat{E})= \mat{E}^H\mat{E}$. Next, note that $|\setS|=2$ and $\minant(\setS)=2$ so that $\Lambda_2^{2}(\snr)=\min_\mat{E}{\left\lvert\det(\mat{E})\right\rvert}^2$. From , we readily get $$\label{new1} \min_{\begin{subarray}{c}\mat{E}={\mat{X}}-{\mat{X}}'\\ \mat{X},\mat{X}'\:\in\:\codebook_{\bm{r}}(\snr)\end{subarray}}{\left\lvert\det(\mat{E})\right\rvert}^2 =2^{1-(R_1'(\snr)+R_2'(\snr))} \:\min_{\begin{subarray}{c}\mat{\Delta}=\tilde{\mat{X}}-\tilde{\mat{X}}'\\ \tilde{\mat{X}} = f(\mat{s}_1, \mat{s}_2)\:,\:\tilde{\mat{X}}' = f(\mat{s}_1', \mat{s}_2')\\ s_{u,1}, s_{u,2}, s_{u,1}', s_{u,2}' \:\in\: \mc{A}_u,\: u=1,2\end{subarray}}{\left\lvert\det(\mat{\Delta})\right\rvert}^2$$ where we have used the notation $\tilde{\mat{X}} = f(\mat{s}_1, \mat{s}_2)$ to express the fact that $\tilde{\mat{X}}$ is obtained from $\mat{s}_1$ and $\mat{s}_2$ using and .
We recall that $\det(\mat{\Delta})\neq 0$ for $\mat{\Delta}$ arising from any combination of vectors $\mat{s}_u, \mat{s}_u'$ ($u=1,2$) with entries in $\mbb{Z}[i]$. Therefore, for every SNR, there must exist an $\omega(\snr)>0$ such that $$\label{new} \min_{\begin{subarray}{c}\mat{\Delta}=\tilde{\mat{X}}-\tilde{\mat{X}}'\\ \tilde{\mat{X}} = f(\mat{s}_1, \mat{s}_2)\:,\:\tilde{\mat{X}}' = f(\mat{s}_1', \mat{s}_2')\\ s_{u,1}, s_{u,2}, s_{u,1}', s_{u,2}' \:\in\: \mc{A}_u,\: u=1,2\end{subarray}}{\left\lvert\det(\mat{\Delta})\right\rvert}^2 = \omega(\snr)$$ which, upon inserting into and using $R_u'(\snr)=(r_u-\epsilon) \log\snr$ ($u=1,2$), yields $$\begin{aligned} \Lambda_2^{2}(\snr)&=\min_{\begin{subarray}{c}\mat{E}={\mat{X}}-{\mat{X}}'\\ \mat{X},\mat{X}'\:\in\:\codebook_{\bm{r}}(\snr)\end{subarray}}{\left\lvert\det(\mat{E})\right\rvert}^2 \notag\\ &\doteq \snr^{-(r_1+r_2-2\epsilon)} \:\omega(\snr).\label{new2}\end{aligned}$$ It follows from (\[new\])—by inspection—that $\omega(\snr)$ is a nonincreasing function of $\snr$. Unfortunately, Theorem \[Th3\] does not allow us to conclude that $\omega(\snr)$ is bounded away from zero, in which case we could conclude from and that the code is DMT-optimal for all multiplexing rate tuples. Therefore, characterizing the decay of $\omega(\snr)$ as a function of SNR is key to proving or disproving the DM tradeoff optimality of the code construction. Unfortunately, we have not been able to determine how $\omega(\snr)$ decays with SNR[^6]. Characterizing this decay rate seems very difficult and is likely to require advanced algebraic concepts. We can, however, distinguish between three different possibilities. If $\omega(\snr)$ decays exponentially with SNR, the criteria for DM tradeoff optimality provided in this paper are not met. 
In the case of a subpolynomial decay, i.e., $$\lim_{\snr\rightarrow\infty} \frac{\log \omega(\snr)}{\log\snr}=0$$ we would get $\Lambda_2^{2}(\snr) \doteq \snr^{-(r_1+r_2-2\epsilon)}$ and, hence, such a decay would be sufficient to guarantee that satisfies the code design criterion for $\setS=\{1,2\}$ and any tuple $(r_1,r_2)$ in the multiplexing rate region. Finally, we consider the case of $\omega(\snr)$ exhibiting polynomial decay, assuming that $\omega(\snr) \doteq \snr^{-\delta}$, $\delta>0$. In this case, it would follow from that $$\Lambda_2^{2}(\snr) \doteq\snr^{-(r_1+r_2+\delta-2\epsilon)}.$$ The quantity $\Lambda_2^{2}(\snr)$ would then decay faster than required by . In other words, the code construction would not be DM tradeoff optimal in the sense of Theorem \[Th.Proc\] when the dominant outage set is $\setS^\star=\{1,2\}$. However, when the dominant outage set is either $\setS^\star=\{1\}$ or $\setS^\star=\{2\}$, the relaxed (compared to the code design criteria proposed in [@CGB08]) code design criteria provided in would still be met for any multiplexing rate tuple $(r_1, r_2)$ satisfying $$r_1+r_2+\delta \leq r_\setS(d_{\setS^\star}(r(\setS^\star))).$$ We conclude this section by noting that a DM tradeoff optimal code construction for flat-fading MACs was reported in [@NamGam07]. Specifically, it is shown in [@NamGam07] that lattice-based space-time codes achieve the optimal DM tradeoff with lattice decoding. As a consequence of the code design criterion in Theorem \[Th.Proc\] being necessary and sufficient for DM tradeoff optimality in Rayleigh flat-fading MACs [@AB09], the code construction reported in [@NamGam07] necessarily satisfies these design criteria. The systematic construction of DM tradeoff optimal codes for selective-fading MA MIMO channels seems, however, largely unexplored. 
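The three decay regimes for $\omega(\snr)$ distinguished above can be illustrated numerically. The following sketch (Python; the specific $\omega$ models and the value $\delta = 0.3$ are purely illustrative assumptions, not properties of the code construction) evaluates a finite-SNR proxy for the exponential order $\lim_{\snr\rightarrow\infty} \log\omega(\snr)/\log\snr$, which is the quantity entering the SNR exponent of $\Lambda_2^{2}(\snr)$:

```python
import math

def snr_exponent(omega, snr=1e30):
    # Finite-SNR proxy for lim log(omega(snr)) / log(snr) as snr -> infinity.
    return math.log(omega(snr)) / math.log(snr)

delta = 0.3  # hypothetical polynomial decay rate (illustrative only)

subpolynomial = lambda snr: 1.0 / math.log(snr)  # slower than any power of snr
polynomial    = lambda snr: snr ** (-delta)      # omega(snr) ~ snr^(-delta)

# Subpolynomial decay: the exponent tends to 0, so Lambda_2^2(snr) keeps the
# exponent -(r1 + r2 - 2*eps) and the code design criterion is met.
# Polynomial decay: the exponent tends to -delta, degrading Lambda_2^2(snr).
# (Exponential decay, omega(snr) = exp(-snr), drives the exponent to minus
# infinity and is handled analytically rather than in floating point.)
print(snr_exponent(subpolynomial))  # close to 0
print(snr_exponent(polynomial))     # approximately -delta
```

Increasing the evaluation point `snr` drives the subpolynomial proxy toward zero, mirroring the limit in the text.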
Conclusion\[Sec.Conclusion\] ============================ We characterized the optimum DM tradeoff for selective-fading MA MIMO channels and studied corresponding code design criteria. Our results show that, for a prescribed multiplexing rate tuple, the optimal DM tradeoff is determined by the dominant outage event. The systematic design of DM tradeoff optimal codes for the (selective-fading) MIMO MAC remains an important open problem. Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank C. Akçaba for insightful remarks on the code design criterion in Theorem \[Th.Proc\] and for stimulating discussions on its proof. We would furthermore like to thank Prof. E. Viterbo for pointing out the SNR dependence of $\omega(\snr)$ in and, in particular, a resulting problem with the arguments previously used to claim that the code proposed in [@BadBel08] satisfies our code design criteria for DM tradeoff optimality. Finally, we would like to thank Prof. J. C. Belfiore for very helpful discussions. [^1]: This work was supported in part by the Swiss National Science Foundation (SNF) under grant No. 200020-109619 and by the STREP project No. IST-026905 MASCOT within the Sixth Framework Programme of the European Commission. Parts of this work were presented at the *IEEE Int. Symp. Inf. Theory (ISIT)*, Toronto, ON, Canada, July 2008. [^2]: A standard argument along the lines of that used to obtain [@ZheTse02 Eq. 9] shows that this assumption does not entail a loss of optimality in the high SNR regime, relevant to the DM tradeoff. [^3]: Throughout the paper, we consider multiplexing rate tuples lying within the boundaries determined by the ergodic capacity region. [^4]: Here, we actually mean the eigenvalues that are not identically equal to zero for all SNR values. This fine point will be made clear in the proof.
[^5]: The multiplication of the first component of $\tilde{\mat{x}}_2$ by $\gamma=i$ does not affect the Euclidean norm. [^6]: We would like to take this opportunity to point out that, despite the claim we made in [@CGB08], we do not have a proof establishing the DM tradeoff optimality of the code construction in [@BadBel08].
--- author: - | Paul Langacker$^{(a)}$, Gil Paz$^{(a)}$, Lian-Tao Wang$^{(b)}$ and Itay Yavin$^{(b)}$\ *[(a) School of Natural Sciences, Institute for Advanced Study, Einstein Drive Princeton, NJ 08540]{}\ *[(b) Department of Physics, Princeton University, Princeton NJ. 08544]{}** bibliography: - 'ref-triProd.bib' title: 'A T-odd observable sensitive to CP violating phases in squark decay' --- Introduction ============ In the past decade it became clear that the Standard Model cannot account for electroweak baryogenesis (for recent reviews, see [@Trodden:1998ym; @Riotto:1999yt; @Bernreuther:2002uj; @Dine:2003ax]). The CP violating phases (the CKM phase and the strong phase) are too small, and with the Higgs mass constrained by LEP2 to be $m_{H} > 115{~\mathrm{GeV}}$ the electroweak phase transition is not strong enough to suppress the sphaleron process. However, it is still possible that baryogenesis is connected with electroweak scale physics and has to do with the other missing ingredient from the Standard Model, which is the mechanism for stabilizing the electroweak scale. If that mechanism is supersymmetry, then many new possibilities open up. The new supersymmetric sectors contain, in principle, many new phases. Also, there are many more contributions to the Higgs potential, and the electroweak phase transition can be made stronger. In the context of the MSSM, there is only a small window in parameter space left to accommodate electroweak baryogenesis [@Carena:1996wj; @Carena:1997ki; @Carena:2000id; @Carena:2002ss]. One difficulty arises because there are no cubic terms in the tree-level Higgs potential, and the bounds from LEP2 on the Higgs and stop masses severely restrict the loop-induced contributions and disfavor a strong first-order phase transition. 
Nonetheless, there seems to be no difficulty in generating a strong first order transition in slightly more general supersymmetric models, such as those involving extra singlet Higgs in which there can be cubic terms at tree-level [@Pietroni:1992in; @Davies:1996qn; @Huber:1998ck; @Huber:2000mg; @Kang:2004pp; @Menon:2004wv; @Huber:2006wf]. The other difficulty is in ensuring sufficiently large CP violating phases, given the stringent bounds on certain combinations of such phases from electron and neutron electric dipole moment (EDM) experiments [@Pokorski:1999hz; @Ibrahim:1999af; @Barger:2001nu; @Pospelov:2005pr]. Since the operator responsible for EDM involves a helicity flip, these bounds mostly constrain the phase combination involving the $\mu$-parameter and the wino $M_2$ phases. The same phase combination is responsible for the dominant electroweak baryogenesis mechanisms in the MSSM, leading to tension with the EDM constraints. This may also be relaxed in extended models involving additional phases [@Pietroni:1992in]-[@Huber:2006wf],[@Demir:2003ke]. Motivated by the possible abundance of CP violating phases in supersymmetric extensions of the Standard Model [@Brhlik:1998gu], their relevance to baryogenesis, and their impact on Higgs and sparticle spectra [@Mrenna:1999ai; @Kane:2000aq] it is important to explore them in a variety of ways. In particular, there have been a number of suggestions for their direct detection in collider experiments [@Donoghue:1987ax; @Im:1993ur; @Dawson:1995wg; @Arkani-Hamed:1997km; @Nachtmann:2003hg; @Bartl:2004jr; @Valencia:2005cx; @Kiers:2006aq; @Szynkman:2007uc]. The MSSM soft lagrangian contains many different phases, however, not all are physical. As emphasized in [@Chung:2003fi], the physical phases are those which are invariant under both $U_R(1)$ and $U_{PQ}(1)$ symmetries. 
There are several obvious such invariants: (i) the phases in the off-diagonal elements of the soft scalar masses; (ii) the relative phases between the gaugino masses; (iii) the relative phases between the different $A$ parameters. The phases which are affected by the $R$ and $PQ$ rotations are $\phi_\mu$ and $\phi_b$ (the phases of the $\mu$ and $b$ terms), $\phi_{M_a}$ (the gaugino mass phases) and $\phi_{\tilde{A}_f}$ (the overall phases of the $A$ parameters). One can then build reparameterization invariant linear combinations of these phases which appear in physical processes (for one such parameterization see [@Lebedev:2002wq]). EDM experiments are sensitive to linear combinations of phases involving $\phi_\mu$, since the dominant diagram involves chargino exchange (see for example [@Barger:2001nu]). In this paper we present a new observable which is sensitive to a different combination of the phases than is probed in the EDM experiments. It involves the phases in the couplings of the neutralinos to the sleptons, and in a typical limiting case depends on the difference between the phases of the bino and wino mass parameters $M_1$ and $M_2$ (which can differ for nonuniversal gaugino masses). Since it requires no higgsino insertion, it is insensitive to the phase of the $\mu$-parameter. We propose an asymmetry parameter related to the usual, T-violating, triple product $\langle \vec{p}_1\cdot (\vec{p}_2\times \vec{p}_3)\rangle$, with $\vec{p}_i$ being three independent momenta in a given reaction. This quantity constitutes a direct measurement of T-invariance violation, which, in the context of CPT invariant theories, is a measurement of CP violation. As can be expected on general grounds, any such asymmetry parameter is the result of interfering diagrams. As such, it is usually suppressed with respect to the leading non-interfering contribution.
In the case of a reaction which proceeds through an on-shell cascade decay, interference terms are suppressed with respect to the on-shell amplitudes by a factor of the width. However, if some part of the reaction is forced to proceed off-shell, the interference terms are comparable to the leading amplitudes. We exploit this fact in the decay of a squark into a quark and two leptons $\tilde{q} \rightarrow q + \tilde{N}_2 \rightarrow q + l^+ + l^- + \tilde{N}_1$, where $\tilde{N}_{1,2}$ are neutralinos and we assume that $\tilde N_1$ is the lightest supersymmetric particle (LSP). We show that in regions of parameter space where the decay of $\tilde{N}_{2}$ is through an off-shell slepton the asymmetry parameter, $\eta$ (defined below), can be very large, $\eta \sim \mathcal{O}(1)$. The number of events required for determining $\eta$ scales as $N \propto 1/\eta^2$ (assuming Gaussian statistics). This calls for as precise a theoretical estimate of $\eta$ as possible, since a factor of $3$ reduction can translate into an order of magnitude more events. Clearly, the actual number of events needed is affected by many experimental considerations as well. Reduction of the signal due to mis-tagging, detector resolution, tagging efficiency, etc., will inevitably increase the number of events required. After presenting the theoretical results we attempt to estimate the signal reduction due to experimental limitations and present the number of events needed to determine the existence of a CP asymmetry. The paper is organized as follows: In section \[sec:GeneralDiscussion\] we present a general discussion of the T, CP, and P-violating triple product and the relevant observables. In section \[sec:on-shell\] we consider the reaction $\tilde{t} \rightarrow t + l^+ + l^- + \tilde{N}_1$ and show that it contains appropriate observables sensitive to certain CP-phases in the MSSM. 
However, we find that if the reaction proceeds through an on-shell cascade decay, the signal is too small to be measured. It is also possible for the reaction to proceed via an off-shell cascade. In that case, as we shall see in section \[sec:off-shell\], the effect is greatly enhanced and the signal may be large enough to be detectable. In section \[sec:ExpLim\] we attempt to estimate the reduction in the signal strength due to experimental limitations and section \[sec:conclusions\] contains our conclusions. General discussion {#sec:GeneralDiscussion} ================== In this paper we will concentrate on CP-violating effects present in cascade decays of heavy supersymmetric particles. When an unstable particle decays through a reaction involving 4 other particles, it is possible that the expression for the rate contains a contribution of the form[^1] $\epsilon_{\mu\nu\alpha\beta} p_0^\mu p_1^\nu p_2^\alpha p_3^\beta$, where $p_i$ are four independent momenta. In the rest frame of the decaying particle, $p_0=(M_0,0,0,0)$, this term is the usual P and T-odd observable given by the expectation value of the triple product $$\label{eqn:TripleProduct} {\cal T} =-M_0 ~\vec{p}_1 \cdot \left(\vec{p}_2 \times \vec{p}_3\right),$$ where $M_0$ is the mass of the decaying particle. As is well known [@Gasiorowicz], any measurement of a non-zero expectation value $\langle \cal T \rangle$ implies both P violation and either T violation [*or*]{} a “strong phase”. Assuming that CPT invariance is unbroken, T violation is equivalent to the non-conservation of CP, and is manifested by a CP-violating phase in the Lagrangian. A strong phase refers to a CP-conserving phase, due, e.g., to a strong or electromagnetic final state interaction or the phase associated with the width in the propagator of an unstable particle. 
The two effects can in principle be separated by measuring the expectation values of both $\cal T$ and $\overline {\cal T}$, where $\overline {\cal T} = -M_0\vec{p}_1\!^c \cdot \left(\vec{p}_2\!^c \times \vec{p}_3\!^c\right)$ is the corresponding triple product for the decay of the antiparticle, and $p_i\!^c$ are the physical momenta of the antiparticles in the final state. Since $\cal T$ is both P and T-odd a non-zero $\langle \cal T \rangle$ requires the interference of two contributions to the amplitude involving different phases and different parity, i.e., the amplitude in the rest frame $\vec p_0=0$ must contain $$\label{eqn:two_amps} \langle \vec p_i | H | \vec p_0\rangle =A(\vec p_i) e^{i(\rho_A + \phi_A)} + B(\vec p_i) e^{i(\frac{\pi}{2}+\rho_B + \phi_B)},$$ where $\vec p_i$ refers collectively to the final momenta, and, $$\begin{aligned} \label{eqn:ABparity} A(-\vec p)&=A(\vec p) \\\nonumber B(-\vec p)&=-B(\vec p). \end{aligned}$$ $\phi_{A,B}$ and $\rho_{A,B}$ are respectively the so-called weak and strong phases, which do (do not) change sign in the CP-conjugate process[^2]. The phase $\pi/2$ in the second term could have been absorbed in $\rho_B$, but is instead pulled out for convenience. It always occurs in the relevant interference term between the parity even and odd amplitudes for $\langle \cal T \rangle$ when one sums over the spins [@Gasiorowicz]. For the example considered in this paper, it is just the explicit factor of $i$ occurring in the trace of $\gamma^5 \gamma_\mu \gamma_\nu \gamma_\alpha \gamma_\beta$. Using (\[eqn:two\_amps\]), it is then straightforward to show that $$\label{eqn:exp_value} \langle{ \cal T} \rangle = 2 {\cal K} \left[ \sin(\rho_A-\rho_B) \cos(\phi_A-\phi_B)+ \cos(\rho_A-\rho_B) \sin(\phi_A-\phi_B) \right],$$ where ${\cal K} $ is proportional to the phase space integral $ \int dPS({\cal T} A B)$. The first (second) term in (\[eqn:exp\_value\]) requires a nonzero difference between the strong (weak) phases. 
Using CP, the corresponding amplitude for the antiparticle decay is $$\label{eqn:conj_amps} \langle \vec p_i\!^c | H | \vec p_0\!^c \rangle =A(-\vec p_i\!^c) e^{i(\rho_A - \phi_A)} + B(-\vec p_i\!^c) e^{i(\frac{\pi}{2}+\rho_B - \phi_B)},$$ up to an irrelevant overall phase associated with our CP conventions. Comparing with (\[eqn:two\_amps\]) and using the parities of $A$ and $B$ in (\[eqn:ABparity\]), one sees that $\langle\overline{ \cal T} \rangle$ differs from $\langle \cal T \rangle$ by an overall sign (due to the fact that the observable is P-odd) and by $\phi_{A,B} \rightarrow - \phi_{A,B} $, $$\label{eqn:conj_value} \langle{\overline{ \cal T}} \rangle = -2 {\cal K} \left[\sin(\rho_A-\rho_B) \cos(\phi_A-\phi_B)- \cos(\rho_A-\rho_B) \sin(\phi_A-\phi_B) \right].$$ In particular, the T-odd term can be isolated by summing the particle and antiparticle asymmetries $$\label{eqn:sum_value} \langle{ \cal T} \rangle +\langle{\overline{\cal T}} \rangle= 4 {\cal K} \cos(\rho_A-\rho_B) \sin(\phi_A-\phi_B).$$ In this paper we will present a manifestation of these general considerations in the particular supersymmetric cascade decays $$\tilde{t} \rightarrow t + \tilde{N}_a \rightarrow t + l^+ + l^- + \tilde{N}_1,$$ and their CP conjugates $\tilde{t}^c \rightarrow t^c + \tilde{N}_a \rightarrow t^c + l^- + l^+ + \tilde{N}_1$, where $\tilde{N}_a$ (which we usually take to be $\tilde N_2$) is assumed to be on-shell. These reactions have enough independent momenta and, as we shall show below, lead to non-vanishing expectation values for the triple product, Eq. (\[eqn:TripleProduct\]), made of the top and di-lepton momenta. $\cal T$ and $\overline{\cal T}$ refer respectively to the observables $-M_{\tilde{t}}~\vec{p}_t \cdot \left(\vec{p}_{l^+} \times \vec{p}_{l^-}\right)$ and $-M_{\tilde{t}}~\vec{p}_{t^c} \cdot \left(\vec{p}_{l^-} \times \vec{p}_{l^+}\right)$. It proves useful to formulate the discussion in terms of a dimensionless parameter embodying the CP asymmetry. 
This parameter is closely related to the triple product with the added advantage of allowing for a straightforward evaluation of the number of events needed. As shown in Fig. \[fig:z-plane-angle\], in the rest frame of the on-shell $\tilde{N}_a$, the incoming $\tilde{t}$ and outgoing top define a $z$-axis. Momentum conservation forces the outgoing anti-lepton, lepton and the LSP to define a plane. A non-zero expectation value of $\cal T$ implies a non-zero average angle between the plane and the $z$-axis. We therefore define the asymmetry parameter $$\label{eqn:eta_def} \eta = \frac{N_+ - N_-}{N_+ + N_-} = \frac{N_+ - N_-}{N_{total}},$$ where $$\label{eqn:N+N-} N_+ = \int_0^{1} \frac{d\Gamma}{d\cos\theta} d\cos\theta, \qquad N_- = \int_{-1}^{0} \frac{d\Gamma}{d\cos\theta} d\cos\theta.$$ ![The reaction geometry in the rest frame of $\tilde{N}_a$ for $\tilde{t} \rightarrow t + l^+ + l^- + \tilde{N}_1$. $\theta$ is the angle between $\vec p_t$ and $ \vec{p}_{l^+} \times \vec{p}_{l^-}$. For $\tilde{t}^c \rightarrow t^c + l^- + l^+ + \tilde{N}_1$, $\theta$ is the angle between $ \vec{p}_{t^c}$ and $ \vec{p}_{l^-} \times \vec{p}_{l^+}$.[]{data-label="fig:z-plane-angle"}](z-plane-angle.eps) Below, we derive an exact expression for $\eta$ in the neutralino’s rest frame. Clearly, $\eta$ is not a relativistically invariant variable. If the LSP escapes detection it is impossible to reconstruct the neutralino’s rest frame. Therefore, $\eta $ can only be constructed in the lab frame (i.e., detector frame), which inevitably affects the signal. 
In section \[sec:ExpLim\] we show that this lack of knowledge of the correct frame gives rise to a dilution factor $\mathcal{D}$, similar to that encountered in $B$ physics experiments, $$\eta_{exp} = \mathcal{D} \eta_{th}.$$ Assuming Gaussian statistics, the number of events needed to make a statistically significant measurement is given by $$\label{eqn:nstat} N= \frac{1}{\eta_{exp}^2} = \frac{1}{\mathcal{D}^2 \eta_{th}^2}.$$ The dilution factor, as its name implies, leads to an increase in the number of events needed. In what follows we evaluate the asymmetry parameter $\eta_{th}$ and show that in certain kinematical regimes it can be rather large $\mathcal{O}(1)$. Also, we find that the dilution factor $\mathcal{D}$ need not be very small and in fact does not present a very serious obstacle. CP-violation in the stop cascade decay via an on-shell slepton {#sec:on-shell} ============================================================== In this section we compute the expectation value of $\epsilon_{\mu\nu\alpha\beta} p_{\tilde{t}}^\mu p_{t}^\nu p_{l^+}^\alpha p_{l^-}^\beta$ for the cascade decay $\tilde{t} \rightarrow t + l^+ + l^- + \tilde{N}_1$ via the two diagrams shown in Fig. \[fig:Mab\] and written explicitly in the appendix. ![The Feynman diagrams corresponding to the matrix elements $i\mathcal{M}_a$ (left) and $i\mathcal{M}_b$ (right). The full matrix element is $i\mathcal{M} = i\mathcal{M}_a + i\mathcal{M}_b$.[]{data-label="fig:Mab"}](Mab.eps) The relativistically invariant expectation value is given by $$\label{eqn:EVdef} \langle\epsilon_{\mu\nu\alpha\beta}~ p_{\tilde{t}}^\mu ~p_{t}^\nu p_{l^+}^\alpha ~ p_{l^-}^\beta ~\rangle =\frac{ \int d\Gamma \epsilon_{\mu\nu\alpha\beta}~ p_{\tilde{t}}^\mu ~p_{t}^\nu~ p_{l^+}^\alpha~ p_{l^-}^\beta~}{\int d\Gamma},$$ where $p_{\tilde{t}}$, $p_{t}$, $p_{l^+}$ and $p_{l^-}$ are the four-momenta of the squark, quark, anti-lepton and lepton, respectively, and $d\Gamma$ is the differential decay width. 
Assuming that the neutralino $\tilde{N}_a$ is on-shell the expectation value can be evaluated exactly; the details are presented in the appendix. Considering only flavor diagonal interactions, the numerator in Eq. (\[eqn:EVdef\]) is $$\begin{aligned} \int d\Gamma \epsilon_{\mu\nu\alpha\beta}~ p_{\tilde{t}}^\mu ~p_{t}^\nu~ p_{l^+}^\alpha~ p_{l^-}^\beta~ &= \frac{1}{3} \frac{M_{\tilde{N}_a}^4 ~|\vec{p}_t|^2}{256\pi^3} \left(\frac{M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right) \left( \int\frac{dPS_2}{2M_{\tilde{t}}} \right) \\\nonumber &\times\left(|g_L^{qa}|^2 - |g_R^{qa}|^2\right)\\\nonumber &\times \left(2~\textrm{Im}\left(g_R^{la*}g_L^{la*}g_R^{l1}g_L^{l1} \right)+\frac{M_{\tilde{N}_1}}{M_{\tilde{N}_a}} \textrm{Im}\left[ \left(g_R^{la*}\right)^2 ~\left(g_R^{l1}\right)^2 + R\leftrightarrow L \right]\right) \\\nonumber &\times \int dx_+ dx_- f(x_+,x_-),\end{aligned}$$ where $$|\vec{p}_t|^2 = \frac{1}{4M_{\tilde{N}_a}^2} \left( (M_{\tilde{t}}^2-M_{\tilde{N}_a}^2-M_t^2)^2 - 4M_{\tilde{N}_a}^2 M_t^2 \right).$$ The dimensionless function $f(x_+,x_-)$ is given by $$\begin{aligned} f(x_+,x_-) &= \left( 1 -\mu_1-x_+ - x_-+x_+x_-\right)\left(x_+ + x_-+\mu_1-1 \right) \\\nonumber &\times \frac{\left((1-\mu_{\tilde{l}} -x_+)(1-\mu_{\tilde{l}} -x_-) + \mu_{\tilde{l}}^2 \gamma_{\tilde{l}} \right) }{\left((1-\mu_{\tilde{l}} -x_+)^2 + \mu_{\tilde{l}}^2 \gamma_{\tilde{l}}\right)\left((1-\mu_{\tilde{l}} -x_-)^2 + \mu_{\tilde{l}}^2 \gamma_{\tilde{l}}\right) },\end{aligned}$$ where $$\mu_{\tilde{l}} = \frac{M_{\tilde{l}}^2}{M_{\tilde{N}_a}^2} , \quad \mu_1 = \frac{M_{\tilde{N}_1}^2}{M_{\tilde{N}_a}^2} \quad \text{and} \quad \gamma_{\tilde{l}} = \frac{\Gamma_{\tilde{l}}^2}{M_{\tilde{l}}^2}.$$ To arrive at an expression for the expectation value we need the total width of the reaction. In the narrow-width limit the computation is straightforward and can be carried out analytically. 
The result is given by $$\begin{aligned} \label{eqn:totalW} \int d\Gamma &=& \frac{M_{\tilde{N}_a}^2}{256\pi^2}\frac{1}{\Gamma_{\tilde{l}} M_{\tilde{l}}} \left( \frac{ M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right)\left(\int\frac{dPS_2}{2M_{\tilde{t}}}\right) ~~ \frac{1}{\mu_{\tilde{l}}} \left(\mu_{\tilde{l}}-\mu_1\right)^2 ~\left(1-\mu_{\tilde{l}}\right)^2 \\\nonumber \\\nonumber &\times& ~ \left(|g_L^{l1}|^2 + |g_R^{l1}|^2 \right) ~\left(|g_L^{la}|^2 + |g_R^{la}|^2 \right) \\\nonumber \\\nonumber &\times& \left( (M_{\tilde{t}}^2-M_{\tilde{N}_a}^2-M_t^2)\left(|g_L^{qa}|^2 + |g_R^{qa}|^2\right) + 4 M_t M_{\tilde{N}_a} \textrm{Re}(g_L^{qa} g_R^{qa*})\right).\end{aligned}$$ Clearly, the general expression for the expectation value is fairly complicated. It is instructive to consider some limits where it simplifies. For example, if we assume there is no suppression from the kinematical factors (e.g. $(1-\mu_{\tilde{l}})\sim1$) in Eq. (\[eqn:totalW\]), and considering the limit where $\tilde{N}_a$ is a pure wino and $\tilde{N}_1$ a pure bino, the final expression is, $$\begin{aligned} \label{eqn:EpsilonRes} \langle\epsilon_{\mu\nu\alpha\beta}~ p_{\tilde{t}}^\mu ~p_{t}^\nu~ p_{l^+}^\alpha ~ p_{l^-}^\beta ~\rangle &=& \frac{1}{24}M_{\tilde{t}}^2M_{\tilde{N}_a}^2~\left(\frac{1}{\pi}\frac{\Gamma_{\tilde{l}}}{M_{\tilde{l}}}\right) \sqrt{\mu_1} \int dx_+ dx_- f(x_+,x_-) \\\nonumber\\\nonumber &\times& \sin\left(2\varphi_L^{l1}-2\varphi_L^{la}\right)\end{aligned}$$ where we have written the complex couplings as $g_L^{l1}=|g_L^{l1}|\exp(i\varphi_L^{l1})$, etc. This expression is suppressed by the width of the slepton and one additional phase-space factor. This is simply a reflection of the fact that the signal is the ratio of an off-shell process to an on-shell one, i.e., the two diagrams in Fig. \[fig:Mab\] cannot simultaneously be on-shell except for a set of measure zero. The integral over $f(x_+,x_-)$ is of order unity and cannot enhance the signal. 
If the LSP is mostly a bino then the slepton decay width is roughly $$\label{eqn:sleptonWidth} \frac{1}{\pi} \frac{\Gamma_{\tilde{l}}}{M_{\tilde{l}} }\sim \frac{\alpha_e}{\pi} \sim \frac{1}{300}.$$ In order to reliably estimate the number of events needed to reach experimental sensitivity one must form a dimensionless quantity, such as the asymmetry variable presented in the previous section, Eq. (\[eqn:eta\_def\]). The dimensionful phase-space factors in Eq.(\[eqn:EpsilonRes\]) roughly cancel out in such an observable. Therefore we expect that the quantity in (\[eqn:sleptonWidth\]) gives us a good order of magnitude estimate for the number of events needed. Without even taking experimental limitations into account we need at least $10^5$ events to reach statistical significance. However, if one is close to the decay threshold such that $M_{\tilde{N}_a}^2-M_{\tilde{l}}^2 \lesssim M_{\tilde{l}}\Gamma_{\tilde{l}}$, the decay rate is suppressed and the asymmetry is enhanced. A more likely possibility is a spectrum where the slepton is forced to be off-shell. In this case there is no width suppression. We explore this possibility in the next section and show that indeed the signal is greatly enhanced. Stop cascade decay via an off-shell slepton {#sec:off-shell} =========================================== In this section we consider the case where $M_{\tilde{l}}>M_{\tilde{N}_a}$. The neutralino may decay through the 3-body channel $\tilde{N}_a\rightarrow l^+ + l^- + \tilde{N}_1$ via an off-shell slepton. While most of the results hold for general $\tilde{N}_a$, to simplify the discussion we will often take $\tilde{N}_a$ to be approximately wino. In general, there may be additional decay paths to consider, and in particular the neutralino can decay directly into $\tilde{N}_1$ and a $Z$-boson. If $M_{\tilde{N}_a}-M_{\tilde{N}_1}>M_Z$ then the $Z$ is on-shell and this channel dominates over the 3-body mode. 
However, if $M_{\tilde{N}_a}-M_{\tilde{N}_1}< M_Z$ then the $Z$ is off-shell and this reaction might compete with the diagram involving an off-shell slepton. Which is dominant is a detailed question depending on the spectrum. The coupling $ \bar{\tilde{N}}_a \tilde{N}_1 Z$ is a result of mixing with the higgsino component. For $a=2$ it is therefore governed by the size of the $\mu$ term compared with the gaugino masses $M_1$ and $M_2$ and is decoupled in the limit of large $\mu$. Our major goal is to illustrate the possibility of measuring a CP-violating effect rather than to exhaustively examine all of parameter space. In what follows we will therefore ignore the possible contribution of the $ \bar{\tilde{N}}_a \tilde{N}_1 Z$ vertex. In the more general case the interference with the $Z$ diagram could enhance or reduce the effect. We expect the signal to have no parametric suppression as in the case of an on-shell decay discussed in the previous section. Also, to evaluate the number of events needed we concentrate on the asymmetry parameter $\eta$ defined in Eq. (\[eqn:eta\_def\]). The details are very similar to the previous section except that the interference terms in the width cannot be neglected. Therefore, for integrated luminosity $\mathcal{L}$ the total number of events is given by $$\label{eqn:Total} \frac{N_{\rm{total}}}{\mathcal{L}} = \int \frac{dPS_4}{2M_{\tilde{t}}} \left(|\mathcal{M}_a|^2 + |\mathcal{M}_b|^2 + 2\textrm{Re}(\mathcal{M}_a\mathcal{M}_b^*) \right).$$ The evaluation of the phase-space integrals is presented in the appendix. 
The difference between the number of events in the upper and lower hemispheres is given by $$\begin{aligned} \label{eqn:Difference} &\frac{N_+ - N_-}{\mathcal{L}} = \frac{1}{256\pi^3}\left(\frac{M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right) \left(\int\frac{dPS_2}{2M_{\tilde{t}}} \right)\left(\frac{|\vec{p}_t|}{M_{\tilde{N}_a}}\right)\left(|g_R^{q}|^2 - |g_L^{q}|^2 \right)\\\nonumber &\times \left( M_{\tilde{N}_a} M_{\tilde{N}_1} \textrm{Im}\left[ \left(g_R^{la*}\right)^2 \left(g_R^{l1}\right)^2 + R\leftrightarrow L\right]+ 2M_{\tilde{N}_a}^2\textrm{Im}\left( g_R^{la*}g_L^{la*}g_R^{l1}g_L^{l1}\right) \right) \\\nonumber &\times \int dx_+ dx_- \frac{\Bigl(\left(1-\mu_1-x_+ - x_-+x_+x_-\right)\left(x_+ + x_- +\mu_1-1 \right)\Bigr)^{1/2}} {\left(1-x_+-\mu_{\tilde{l}}\right)\left(1-x_--\mu_{\tilde{l}}\right)}.\end{aligned}$$ The integrals evaluate to $$\begin{aligned} \label{eqn:notIntTerms} \int \frac{dPS_4}{2M_{\tilde{t}}} \left(|\mathcal{M}_a|^2 + |\mathcal{M}_b|^2\right) &= \frac{1}{256\pi^3}\left(\frac{M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right) \left(\int\frac{dPS_2}{2M_{\tilde{t}}} \right) \\\nonumber &\times \left(|g_L^{l1}|^2 + |g_R^{l1}|^2 \right) \left(|g_L^{la}|^2 + |g_R^{la}|^2 \right)\\\nonumber &\times \left( \left(M_{\tilde{t}}^2-M_{\tilde{N}_a}^2-M_t^2 \right)\left(|g_L^{qa}|^2 + |g_R^{qa}|^2 \right) + 4M_t M_{\tilde{N}_a}~\textrm{Re}\left(g_L^{qa} g_R^{qa*}\right)\right) \\ \nonumber &\times \int_0^{1-\mu_1} dx \frac{x^2(1-x-\mu_1)^2}{1-x}\frac{1}{(1-x-\mu_{\tilde{l}})^2} \end{aligned}$$ and $$\begin{aligned} \label{eqn:IntTerms} \int \frac{dPS_4}{2M_{\tilde{t}}} \left(2\textrm{Re}(\mathcal{M}_a\mathcal{M}_b^*) \right) &= \frac{1}{256\pi^3}\left(\frac{M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right) \left(\int\frac{dPS_2}{2M_{\tilde{t}}} \right) \\\nonumber &\times \left(\left(M_{\tilde{t}}^2-M_{\tilde{N}_a}^2-M_t^2 \right) \left(|g_L^{qa}|^2 + |g_R^{qa}|^2 \right) - 4M_t M_{\tilde{N}_a}~\textrm{Re}\left(g_L^{qa} g_R^{qa*}\right) \right) \\\nonumber 
&\times \int dx_+ dx_- \frac{1}{(1-x_+-\mu_{\tilde l})(1-x_--\mu_{\tilde l})} \\\nonumber &\times \left( \sqrt{\mu_1}\left(\mu_1+x_++x_- - 1\right) \textrm{Re}\left(g_L^{l1}g_L^{l1} g_L^{la*}g_L^{la*} + g_R^{l1}g_R^{l1} g_R^{la*}g_R^{la*} \right) \right. \\\nonumber &~~~~ -\left. 2(1-x_+-x_-+x_+x_- - \mu_1) \textrm{Re}\left(g_L^{l1}g_L^{l1} g_R^{la*}g_R^{la*}\right) \right).\end{aligned}$$ The entire expression is relativistically invariant, except for the limits in Eq. (\[eqn:N+N-\]) used to derive Eq. (\[eqn:Difference\]), which are computed in the rest frame of $\tilde{N}_a$. However, since $N_+$ ($N_-$) involves an integration over the entire upper (lower) hemisphere, these expressions are still invariant under boosts in the stop’s direction which do not flip the direction of the top. In particular, the asymmetry parameter is unmodified when boosting to the rest frame of the stop. This is an important fact. It implies that the signal one constructs in the lab is only degraded by one’s ignorance of the initial boost of the stop in the lab frame. In the case of an off-shell slepton the asymmetry variable $\eta$ is an $\mathcal{O}(1)$ number. The exact expression is given by the ratio of Eq. (\[eqn:Difference\]) to Eq. (\[eqn:Total\]). There are several limiting cases where the final result is extremely simple. 
In particular, in the case where $\tilde{N}_a$ is a pure wino and $\tilde{N}_1$ is a pure bino, assuming there are no strong kinematical suppressions and $\mu_{\tilde{l}} \gg 1$, the expression simplifies to $$\begin{aligned} \label{eqn:SimpleEta} \eta = \frac{\sqrt{\mu_1}}{2}\left(\frac{F(\mu_1)}{G_1(\mu_1) + G_2(\mu_1) \cos(2\Delta\varphi)}\right) ~\sin(2\Delta\varphi),\end{aligned}$$ where we expressed the complex couplings as $g_L^{la} = |g_L^{la}| ~\exp(i\varphi_L^{la})$ and $$\Delta \varphi = \varphi_L^{l1}-\varphi_L^{la}.$$ In the approximation of ignoring slepton mixings this is just the difference between the original phases of the neutralino masses before they were absorbed into the couplings. The kinematic functions in Eq. (\[eqn:SimpleEta\]) are given by $$\begin{aligned} F(\mu_1) &= \int dx_+ dx_- \Bigl( \left(1 -\mu_1-x_+-x_- +x_+x_-\right)\left(x_++x_-+\mu_1 -1\right) \Bigr)^{1/2} \\ G_1(\mu_1) &= \frac{1}{6}\left( 1-8\mu_1 + 8\mu_1^3 - \mu_1^4\right) - \mu_1^2 \log(\mu_1^2) \\ G_2(\mu_1) &= \frac{\sqrt{\mu_1}}{6} \Bigl((1-\mu_1)(1+10\mu_1+\mu_1^2) + 6\mu_1(1+\mu_1) \log(\mu_1) \Bigr).\end{aligned}$$ In Fig. \[fig:etaVsmu1\] we plot the asymmetry parameter $\eta$ vs. $\mu_1$ for several choices of $\Delta \varphi$. As claimed above, when the slepton is off-shell the asymmetry $\eta$ can be very large, proportional to the phase times an $\mathcal{O}(1)$ number as shown in Eq. (\[eqn:SimpleEta\]). For example, if we take $M_{\tilde{N}_1}/M_{\tilde{N}_a} \gtrsim 0.7$ we find that, ignoring experimental limitations, the number of events needed to make a determination of CP-violation in this cascade decay is approximately $$N = \frac{1}{\eta_{th}^2} \sim \frac{100}{\sin^2(2\Delta\varphi)}.$$ In Table \[tbl:rate\] we present the $\tilde{t}_L\tilde{t}_L^c$ production cross-section for several choices of the stop mass.
We also show the actual number of $t~\ell^+~\ell^-$ events, taking into account the branching ratio for the reaction $\tilde{t} \rightarrow t + \tilde{N}_a \rightarrow t + l^+ + l^- + \tilde{N}_1$, for an integrated luminosity of $\mathcal{L} = 300 ~fb^{-1}$ and for a possible upgrade with $\mathcal{L} = 1~ab^{-1}$. For a stop mass below $800{~\mathrm{GeV}}$ the prospects for such a measurement look promising.

  $M_{\tilde{t}_L}$      $\sigma~(fb)$   $N_{t\ell^+\ell^-}$
  ---------------------- --------------- ---------------------
  $500{~\mathrm{GeV}}$   $300$           $7300 ~~(24000)$
  $800{~\mathrm{GeV}}$   $20$            $560 ~~(1800)$
  $1{~\mathrm{TeV}}$     $4$             $120 ~~(400)$
  $1.2{~\mathrm{TeV}}$   $1$             $30 ~~(100)$

  : The production cross-section for $\tilde{t}_L{\tilde{t}}_L^c$ is shown in the middle column. The branching ratio for the reaction $\tilde{t} \rightarrow t + \tilde{N}_a \rightarrow t + l^+ + l^- + \tilde{N}_1 $ was calculated using $M_{\tilde{l}} = 300{~\mathrm{GeV}}$, $M_{\tilde N_2}=140{~\mathrm{GeV}}$, $M_{\tilde N_1}=100{~\mathrm{GeV}}$, and assuming that the gluino and squarks are sufficiently heavy to have little effect. (Under these assumptions and wino/bino dominated $\tilde N_{2,1}$ the branching ratio for $\tilde t \rightarrow t \tilde N_2$ is slightly less than $1/3$ because of the top’s mass, and those for $ \tilde N_2\rightarrow e^+ e^- \tilde N_1$ or $\mu^+ \mu^- \tilde N_1$ are about $1/6$ each.) The number of $t~ \ell^+~\ell^-$ events is then presented in the last column for two different integrated luminosities. The effective number of events is doubled if one combines the $t \ell^+ \ell^-$ and $t^c \ell^- \ell^+$ asymmetries.[]{data-label="tbl:rate"}

In the next section we take into account the experimental difficulties in making such a determination. We propose ways of overcoming these limitations and try to evaluate the corresponding reduction in signal sensitivity.
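The event counts in Table \[tbl:rate\] can be roughly cross-checked from the cross-sections and the approximate branching fractions quoted in the caption. The sketch below is back-of-the-envelope only: the branching values are assumptions read off the caption, and no acceptance or reconstruction effects are included.

```python
def n_events(sigma_fb, lumi_fb, br_stop=0.3, br_dilepton=1.0 / 3.0):
    """Rough count of t l+ l- cascades from stop pair production:
    N ~ sigma * L * BR(stop -> t N2) * BR(N2 -> l+ l- N1).
    br_stop ~ "slightly less than 1/3" (table caption); br_dilepton ~ 1/6
    each for e+e- and mu+mu-, so ~1/3 combined.  Acceptance is ignored."""
    return sigma_fb * lumi_fb * br_stop * br_dilepton

# M_stop = 500 GeV at L = 300 fb^-1 (cross-section from the table)
n_500 = n_events(300.0, 300.0)
```

This gives roughly $9000$ events, the same ballpark as the $7300$ quoted in the table; the difference is consistent with the "slightly less than $1/3$" top branching fraction.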
![The asymmetry parameter $\eta$ is plotted against $\mu_1 = M_{\tilde{N}_1}^2/M_{\tilde{N}_a}^2$ for several choices of the CP-phase $\Delta \varphi$. As expected, $\eta$ diminishes as the CP-phase decreases.[]{data-label="fig:etaVsmu1"}](etaVsMu1.eps) Experimental limitations {#sec:ExpLim} ======================== In this section we consider the degradation of the signal due to experimental limitations. First, we address an issue already mentioned above, namely that the triple product is measured in the lab frame, and if the LSP escapes detection there is no way to reconstruct the rest frame of the stop. In other words, the asymmetry parameter $\eta$ was computed in the neutralino frame, which cannot be reconstructed. Let’s imagine an event where the momenta are such that in the neutralino’s rest frame we have $$\vec{p}_{t} \cdot (\vec{p}_{l^+}\times\vec{p}_{l^-}) = p_{t}^z \left(p_{l^+}^x p_{l^-}^y - p_{l^+}^y p_{l^-}^x\right) > 0.$$ This is a contribution to $N_+$. This quantity is still positive even in the stop’s rest frame since the boost is only along the $z$-axis and it cannot flip the direction of the top momentum. If the stop were produced at rest in the lab frame the signal would be unaltered. However, the stop itself is in general boosted with respect to the lab frame. An arbitrary boost can turn this contribution to $N_+$ in the stop’s frame into a contribution to $N_-$ in the lab frame. It can either flip the sign of $\vec p_{t}$ or it can change the transverse orientation of $\vec p_{l^+}$ with respect to $\vec p_{l^-}$. The stop–anti-stop pair is produced mainly via gluon fusion and so, owing to the gluon distribution function, the stops are produced very close to threshold. However, the overall center of mass can be quite boosted with respect to the lab. Therefore, while the stop has very little transverse momentum, it does carry a non-negligible momentum along the beam direction. In Fig.
\[fig:stop\_beta\] we used Pythia [@Sjostrand:2006za] to produce a plot of the distribution of stop longitudinal velocity in the lab frame for several choices of stop mass. ![The stop velocity distribution in the lab frame for several choices of the stop mass. All distributions are normalized to unit area. As the stop’s mass increases the distributions peak at a lower $\beta$, in agreement with expectations based on the PDF.[]{data-label="fig:stop_beta"}](stopBetaDist.eps) Therefore, we must account for the possible flip of $N_+$ into $N_-$ (and vice-versa) due to the initial boost of the stop. If we denote by $w$ the flip probability, then the asymmetry parameter $\eta$ in the lab frame is given by $$\eta_{lab} = \frac{N_+^{(lab)} - N_-^{(lab)}}{N_+^{(lab)} + N_-^{(lab)} }= \mathcal{D}\left( \frac{N_+^{(\tilde{N})} - N_-^{(\tilde{N})}}{N_+^{(\tilde{N})} + N_-^{(\tilde{N})} }\right),$$ where the dilution factor $\mathcal{D}$ is simply $$\mathcal{D} = 1- 2 w.$$ One might expect that $w\rightarrow 0$ as the stop’s mass increases since the initial boost is diminished. However, this limit holds true only if the mass difference $m_{\tilde{t}} - m_{\tilde{N}_a}$ remains fixed. When this mass difference increases, $w$ increases as well. To understand this point, notice that when the difference $m_{\tilde{t}} - m_{\tilde{N}_a}$ increases, all the momenta defining $N_+$ are on average increased. So, while it is true that it becomes harder to change the sign of $\vec p_{t}$, it is easier to change the orientation of $\vec p_{l^+}$ with respect to $\vec p_{l^-}$. In Fig. \[fig:dilution\_fac\] we plot the probability for a flip, $w$, as a function of the stop’s mass (keeping $m_{\tilde{t}} - m_{\tilde{N}_a}$ fixed) as well as a function of the mass difference itself (keeping $m_{\tilde{t}}$ fixed). From Fig. \[fig:dilution\_fac\] it is clear that the dilution factor does not present a very serious problem.
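To make the statistical cost of the dilution concrete, here is a minimal numerical sketch; the value $w = 0.33$ is a representative flip probability read off Fig. \[fig:dilution\_fac\], not a prediction.

```python
def dilution(w):
    """Dilution factor D = 1 - 2w for flip probability w."""
    return 1.0 - 2.0 * w

w = 0.33                       # representative flip probability (assumed)
D = dilution(w)                # D = 0.34
event_penalty = 1.0 / D**2     # multiplicative increase in required events
```

A flip probability of about one third gives $\mathcal{D}^2 \approx 0.1$, i.e. roughly an order of magnitude more events are required.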
Unless there is a very large splitting between the stop and neutralino masses, the flip probability is at most about $w \approx 0.33$. This translates into a dilution factor of about $\mathcal{D}^2 \approx 0.1$, which represents an increase in the number of events needed of about an order of magnitude. ![In the left pane the probability $w$ of $N_+$ in the neutralino’s rest frame to flip into $N_-$ in the lab frame (or $N_-$ into $N_+$) is plotted as a function of the stop mass keeping $m_{\tilde{t}} - m_{\tilde{N}_a}$ fixed. In the right pane, $w$ is plotted as a function of the mass difference, keeping $m_{\tilde{t}}$ fixed.[]{data-label="fig:dilution_fac"}](flipVsMstop.eps "fig:") ![In the left pane the probability $w$ of $N_+$ in the neutralino’s rest frame to flip into $N_-$ in the lab frame (or $N_-$ into $N_+$) is plotted as a function of the stop mass keeping $m_{\tilde{t}} - m_{\tilde{N}_a}$ fixed. In the right pane, $w$ is plotted as a function of the mass difference, keeping $m_{\tilde{t}}$ fixed.[]{data-label="fig:dilution_fac"}](flipVsDM.eps "fig:") There are several related issues concerning the efficiency of identifying the top or anti-top, determining its charge[^3] and momentum, determining whether the $l^+ l^-$ from the decay is associated with the initial $\tilde t$ or the $\tilde t^c$ (assuming they are pair produced), separating the cascade leptons from others in the process, etc. These questions, and how they affect the number of identified events, will require an extensive and careful numerical simulation. While we do not attempt such a study here but leave it for future research, we would like to make a number of comments relevant for such a study. A favorable situation is that in which $\tilde N_1$ and $\tilde{N}_2$ are dominantly bino and wino, respectively, and the gluino is heavy. Then the dominant decays of the $\tilde t_L$ should be into $b~ \tilde C^+$ and $t~ \tilde N_2$, with relative rates $\sim 2 : 1$.
When one of the stops decays to a chargino and the other cascades, it may optimistically be possible to identify the $t$ or $t^c$ and determine its charge by tagging on the charge of the lepton from the chargino decay (especially if it is of a different flavor from the dilepton). In this case, one can combine the asymmetries from $\tilde t$ and $\tilde t^c$ decays; i.e., in analogy to Eq. (\[eqn:eta\_def\]) define $$\label{eqn:etasum_def} \eta_{sum} = \frac{N_+(\tilde t)+N_+(\tilde t^c) - N_-(\tilde t)-N_-(\tilde t^c) }{N_+(\tilde t)+N_+(\tilde t^c) + N_-(\tilde t)+N_-(\tilde t^c) },$$ where the $\theta$ angles for the $\tilde t$ and $\tilde t^c$ events are defined in the caption to Fig. \[fig:z-plane-angle\]. It is easy to show that the theoretical expectation for $\eta_{sum}$ is the same as for $\eta$ (except for removing the strong phase term), while effectively doubling the number of available events, i.e., the required $({\mathcal{D}^2 \eta_{th}^2})^{-1}$ is the total number of $t ~\ell^+\ell^-$ and $t^c \ell^-\ell^+$ cascades. Events in which the associated top decays semi-leptonically provide another handle on its identification and charge, but at the expense of possible additional confusion about which lepton is from the top, and missing the neutrino momentum, which renders the reconstruction of the top’s momentum impossible. None of the observables above really require the top’s momentum, but only its direction in the lab frame. If the top is highly boosted, it may not be necessary to fully reconstruct its momentum to infer its direction in the lab frame. More generally, the reduction in signal due to an imprecise determination of the top’s direction is a detailed numerical question involving an event simulator which we leave for future investigation. There may also be a non-negligible number of events in which the $\tilde t$ decays to $t~\tilde N_2$ and the $\tilde t^c$ to $t^c \tilde N_1$, or vice versa. 
Assuming that one determines the charge of $t$ or $t^c$ by its leptonic decay, there is still the ambiguity of whether the dilepton is correlated with the top or anti-top. One possibility is to simply include the wrong pairing (calculating $\cal T$ ($\overline{\cal T}$) for the pairing with the $t$ ($t^c$)), and argue that on average it does not contribute to any CP violating observable. This effectively sums the asymmetries from the $t~ \ell^+\ell^-$ and $t^c \ell^-\ell^+$ cascades, but without gaining the factor two statistical advantage discussed above in the case of one chargino decay. It is possible that the combinatorics can be resolved with a more sophisticated analysis involving isolation cuts, energy cuts, etc. For example, if the anti-stop decays directly into an anti-top and the LSP, the anti-top is on average more energetic than the top coming from the other branch. Ordering the jets according to their $p_T$ may reduce the combinatorics and help identify the correct pairing. Again, such schemes will require a careful numerical simulation. Another possibility is to use bottom quarks instead of tops, i.e., consider the reaction $\tilde{b} \rightarrow b + \tilde{N}_2 \rightarrow b + l^+ + l^- + \tilde{N}_1$. In this case one could reconstruct the full $b$ momentum. However, the efficiency of directly determining the $b$ charge is low. These issues could be resolved if the opposite side $\tilde b$ or $\tilde b^c$ decays to $t~\tilde C^-$ or $t^c~\tilde C^+$, but it may be difficult to know whether one began with stops or sbottoms if they are close in mass. For example, if one observed $t~ b^c l^{\prime -} \ell^+\ell^-$, the $\ell^+\ell^-$ could be associated either with $\tilde{t} \rightarrow t ~\ell^+~\ell^-$ or with $\tilde{b}^c\rightarrow b^c~\ell^-~\ell^+$. The asymmetries expected from each possibility would be the same for a wino-dominated $\tilde{N}_2$. 
Similar to the discussion of $t^c\tilde{N}_1$ above, one could count each event twice, once for each possibility, assuming that the wrong pairing does not contribute to the asymmetry. The required sum of identified $t$ and $b^c$ cascades is twice the expression in Eq. (\[eqn:nstat\]), but this number could include the $t^c$ and $b$ cascades if the $t^c b \ell^{\prime +} \ell^-\ell^+$ events are combined appropriately. A third possibility is to use the asymmetry between quarks and anti-quarks in the parton distribution function (PDF) of the proton. When considering stop pair-production the dominant mode is gluon fusion since there are practically no tops in the proton PDF. In this case, stops and anti-stops are produced in equal amounts. However, when considering the production of $\tilde{u}$’s and $\tilde{d}$’s there are many more relevant channels. The valence quarks play a significant role and associated production ($g+q\rightarrow \tilde{g}+\tilde{q}$) can dominate. Since valence quarks in a proton-proton collider are mostly quarks and not anti-quarks, it is considerably more likely to produce a squark than an anti-squark. Therefore, in effect, we know that we are observing the reaction and not its CP conjugate. There are several problems with such an approach. First, we must include the contribution from all partons (mostly $u$’s and $d$’s). This is in principle easy to incorporate into the present calculation and amounts to a trivial addition of the different contributions. The more serious problem is the existence of multijets in the event and a reduction of the signal due to combinatorics. There is also the problem of the strong phases discussed in Section \[sec:GeneralDiscussion\], which in principle contribute to the expectation value of the triple-product. One source involves the exchange of a photon between the two leptons in the cascade. However, this is of $\mathcal{O}(\alpha/\pi)$, and not competitive unless the CP-violating phases are small. 
Another is associated with the phases in the slepton propagators from their finite width. This leads to a non-trivial effect in the on-shell case for fixed lepton energies, but vanishes when integrated over their energies, as commented in the appendix. It is negligible in the more interesting off-shell case. In general, the strong phase effect can be eliminated by combining the asymmetries for $\tilde t$ and $\tilde t^c$, as in Eq. (\[eqn:sum\_value\]) or (\[eqn:etasum\_def\]). However, if one relies on the asymmetry between quarks and anti-quarks in the PDF, one cannot form such a combination. In this case, one must rely on the theoretical estimate that the strong-phase effects are small. Conclusions {#sec:conclusions} =========== The triple product and the related asymmetry parameter observable presented in this paper are sensitive to CP-phases in the cascade decay of stops, $\tilde{t} \rightarrow t + l^+ + l^- + \tilde{N}_1$. The phase combination that appears in this reaction involves the phase difference between the wino-slepton-lepton and bino-slepton-lepton (more generally, $\tilde N_2$ and $\tilde N_1$) couplings. As pointed out in the introduction, this phase combination is not bounded directly by EDM experiments since it does not require a higgsino insertion and is therefore independent of the $\mu$-parameter’s phase. In the case of an on-shell cascade decay, the signal is too small to be observable, requiring more than $10^5$ events to reach experimental sensitivity. However, if the spectrum is such that the reaction proceeds via an off-shell slepton, the signal is greatly enhanced. We find that about $(10^2-10^3)/\sin^2(2\Delta\varphi)$ events are needed to constrain the CP-phase $\Delta\varphi$. This number may improve dramatically if some experimental difficulties discussed in the text are resolved or may increase if these turn out to be more severe.
At any rate, this number is low enough to be taken seriously as a viable observable for probing some of the MSSM’s CP-phases at the LHC. Rough estimates for the required number of events are given in Table \[tbl:rate\]. For large CP phases the effect may be observable for stop masses as large as $800{~\mathrm{GeV}}$ prior to a luminosity upgrade and even higher thereafter. A possible luminosity upgrade and a favorable spectrum will place the signal well within the experimental sensitivity and help probe a combination of the phases which is currently inaccessible via the EDM experiments. Our goal has been to illustrate the general possibility and point out the difficulties, not to examine the full parameter space. A more systematic study, including a full numerical simulation of the events and detector performances, would be very useful. Similar effects might also be observable in other channels which may cover different regions of parameter space. **Acknowledgments**: We would like to thank G. Kane and T. Han for useful discussions in the early stages of this project. The work of L.W. and I.Y. is supported by the National Science Foundation under Grant No. 0243680 and the Department of Energy under grant \# DE-FG02-90ER40542. P.L. is supported by the Friends of the IAS and by the NSF grant PHY-0503584. The work of G.P. was supported in part by the Department of Energy \# DE-FG02-90ER40542 and by the United States-Israel Bi-national Science Foundation grant \# 2002272. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. A derivation of the asymmetry parameter $\eta$ in the neutralino’s rest frame {#app:EtaComp} ============================================================================= In this appendix we give the details of the calculation of the asymmetry $\eta$ in the neutralino’s rest frame (defined in Eq. (\[eqn:eta\_def\])).
A computation of the expectation value $\langle \epsilon_{\mu\nu\alpha\beta}\; p_{\tilde t}^\mu p_t^\nu p_{l^+}^\alpha p_{l^-}^\beta~\rangle$ is a straightforward modification of the derivation below. The differential decay width for the reaction is $$d\Gamma = \frac{\sum_{spin}|\mathcal{M}|^2}{2M_{\tilde t}} dPS_4,$$ where $\mathcal{M}$ is the invariant amplitude. The Feynman rule for the sfermion-fermion-neutralino coupling is $$\vcenter{\includegraphics[scale=0.75]{rule.eps}} \hspace{-28em} \quad = \quad\quad\quad i\left(G^{f_L}_{isl}\,P_L+G^{f_R}_{isl}\,P_R\right).$$ Our notation follows that of [@Drees:2004jm], where the explicit expressions for the $G$’s in terms of the mixing matrices can be found. The invariant amplitude consists of two parts, corresponding to the Feynman diagrams of Fig. \[fig:Mab\]. (There are two diagrams contributing to the process as a result of the Majorana nature of the neutralinos). Ignoring the masses of the external leptons and allowing them to have different flavors the two parts are $$\begin{aligned} \label{eqn:MaMb} i\mathcal{M}_a &=& i \bar{u}(p_t) \left(G^{t_R*}_{t\tilde{t}a}P_L + G^{t_L*}_{t\tilde{t}a}P_R \right) \left(\frac{-\slashed{q} + M_{\tilde{N}_a}}{q^2-M_{\tilde{N}_a}^2} \right) \left(G^{\,l_R*}_{jka}P_L + G^{\,l_L*}_{jka}P_R \right) v(p_{l^-_j}) \nonumber\\&& \left(\frac{1}{k_1^2 - M_{\tilde{l}_k}^2}\right) \bar{u}(p_{\tilde N_1}) \left(G^{\,l_L}_{ik1}P_L + G^{\,l_R}_{ik1}P_R \right) v(p_{l^+_i}) \nonumber\\ i\mathcal{M}_b &=& -i \bar{u}(p_t) \left(G^{t_R*}_{t\tilde{t}b}P_L + G^{t_L*}_{t\tilde{t}b}P_R \right) \left(\frac{-\slashed{q} + M_{\tilde{N}_b}}{q^2-M_{\tilde{N}_b}^2} \right) \left(G^{\,l_L}_{ilb}P_L + G^{\,l_R}_{ilb}P_R \right) v(p_{l^+_i})\nonumber \\&& \left(\frac{1}{k_2^2 - M_{\tilde{l}_l}^2}\right) \bar{u}(p_{\tilde N_1}) \left(G^{\,l_R*}_{jl1}P_L + G^{\,l_L*}_{jl1}P_R \right) v(p_{l^-_j}),\end{aligned}$$ where $q=p_{\tilde{t}}-p_t$, $k_1=p_{l^+_i}+p_{\tilde N_1}$, $k_2=p_{l^-_j}+p_{\tilde N_1}$, and the 
masses are taken to be real (i.e., all the complex phases are absorbed into the couplings). In deriving \[eqn:MaMb\] we have used some of the identities of Appendix C of [@Chung:2003fi]. The asymmetry $\eta$ depends on the terms proportional to the Levi-Civita tensor $\epsilon_{\mu\nu\alpha\beta}$. Terms in the reaction containing a non-vanishing $\epsilon_{\mu\nu\alpha\beta}$ can only come from the interference terms and must contain 4 independent vectors. We find $$\begin{aligned} &&2\,\textrm{Re}\sum_{\rm spin} \mathcal{M}_a \mathcal{M}_b^* \supset 4\, \textrm{Im}\,\left(a_R - a_L\right) \epsilon_{\mu\nu\alpha\beta}~ p_{\tilde{t}}^\mu ~p_t^\nu~ p_{l^+_i}^\alpha~ p_{l^-_j}^\beta~\\ &&\times\left[ \frac{1}{q^2-M_{\tilde{N}_a}^2-i\Gamma_{\tilde{N}_a} M_{\tilde{N}_a}}\cdot \frac{1}{q^2-M_{\tilde{N}_b}^2+i\Gamma_{\tilde{N}_b} M_{\tilde{N}_b}}\cdot \frac{1}{k_1^2-M_{\tilde{l}_k}^2-i\Gamma_{\tilde{l}_k} M_{\tilde{l}_k}}\cdot \frac{1}{k_2^2-M_{\tilde{l}_l}^2+i\Gamma_{\tilde{l}_l} M_{\tilde{l}_l}} + {\rm c.c} \right],\nonumber\end{aligned}$$ where $a_R$ is given by $$\begin{aligned} a_R &=&-q^2\left(G^{t_L*}_{t\tilde{t}a}G^{t_L}_{t\tilde{t}b}G^{\,l_R*}_{jka} G^{\,l_L*}_{ilb}G^{\,l_L}_{ik1}G^{\,l_R}_{jl1}\right)\\ \nonumber &+&M_{\tilde{N}_a}M_{\tilde N_1}\left(G^{t_R*}_{t\tilde{t}a}G^{t_R}_{t\tilde{t}b}G^{\,l_R*}_{jka}G^{\,l_R*}_{ilb} G^{\,l_R}_{ik1}G^{\,l_R}_{jl1}\right)\\ \nonumber &+& M_{\tilde{N}_b}M_{\tilde N_1}\left(G^{t_R*}_{t\tilde{t}a}G^{t_R}_{t\tilde{t}b}G^{\,l_L*}_{jka}G^{\,l_L*}_{ilb} G^{\,l_L}_{ik1}G^{\,l_L}_{jl1}\right)\\ \nonumber &+&M_{\tilde{N}_a}M_{\tilde{N}_b}\left(G^{t_R*}_{t\tilde{t}a}G^{t_R}_{t\tilde{t}b}G^{\,l_R*}_{jka}G^{\,l_L*}_{ilb} G^{\,l_L}_{ik1}G^{\,l_R}_{jl1}\right),\end{aligned}$$ and $a_L$ is simply given by $a_R$ with $L\leftrightarrow R$. In principle, there is also a term proportional to $\textrm{Re}\ (a_R - a_L)$ from the finite width part of the slepton propagators (see Section \[sec:conclusions\]). 
In the flavor diagonal case this term is proportional to the difference between the $l^+$ and $l^-$ energies, and vanishes when integrated over phase space. If we neglect all the off-diagonal mixing matrix elements, and take the neutralino $\tilde{N}_a=\tilde{N}_b$ to be on-shell ($q^2=M_{\tilde{N}_a}^2$), the expression simplifies to $$\begin{aligned} \label{eqn:Im_diag} \textrm{Im}\,(a_R-a_L) &= 2M_{\tilde{N}_a}^2\left(|g^{qa}_R|^2 - |g^{qa}_L|^2 \right) \textrm{Im}\,\left( g^{la*}_Rg^{la*}_L g^{l1}_Rg^{l1}_L \right)\\\nonumber & + M_{\tilde{N}_a} M_{\tilde N_1} \left(|g^{qa}_R|^2 - |g^{qa}_L|^2 \right) ~ \textrm{Im}\,\left[\left(g^{la*}_R \right)^2 \left(g_R^{l1}\right)^2 +\left(g^{la*}_L \right)^2 \left(g_L^{l1}\right)^2\right],\end{aligned}$$ where we have defined $$G^{t_R}_{t\tilde{t}a}\equiv g^{qa}_R,\quad G^{\,l_R}_{ika}\equiv g^{la}_R,\quad G^{\,l_R}_{ik1}\equiv g_R^{l1},$$ and similarly for the left-handed couplings. Writing each coupling constant as $g=|g|e^{i\varphi}$, Eq. (\[eqn:Im\_diag\]) can be expressed in terms of the phases $\varphi$ as $$\begin{aligned} \textrm{Im}\,(a_R-a_L) &= 2M_{\tilde{N}_a}^2\left(|g_R^{qa} |^2 - |g_L^{qa} |^2 \right) | g_R^{la}|\, | g_L^{la}|\, |g_R^{l1}|\, |g_L^{l1}| \sin\left( \varphi_R^{l1} + \varphi_L^{l1} -\varphi_R^{la} - \varphi_L^{la}\right) \\\nonumber & + M_{\tilde{N}_a} M_{\tilde N_1} \left(|g_R^{qa} |^2 - |g_L^{qa} |^2 \right) \left[|g_R^{la}|^2\, |g_R^{l1}|^2\sin\left(2(\varphi_R^{l1} -\varphi_R^{la})\right) + (R\rightarrow L) \right].\end{aligned}$$ The event geometry in the neutralino’s rest frame is depicted in Fig. \[fig:z-plane-angle\]. The incoming stop and outgoing top define a $z$-axis with the top pointing in the positive direction. This $z$-axis is in an arbitrary orientation with respect to the lab frame’s beam-pipe axis (since the stop is a scalar its decay is isotropic). Momentum conservation in the neutralino’s rest frame forces the di-lepton and the LSP to lie in the same plane.
In other words, the di-lepton pair defines a normal to the plane, $$\hat{n} = \hat{p}_{l^+} \times \hat{p}_{l^-},$$ where $\hat{n}$ itself is oriented with respect to the $z$-axis, $$\hat{n}\cdot \hat{p}_t = \cos\theta.$$ We say that $\hat{n}$ is in the upper hemisphere ($N_+$) if $\cos\theta > 0$ or the lower hemisphere ($N_-$) if $\cos\theta < 0$. A non-zero expectation value for $\vec{p}_t \cdot (\vec{p}_{l^+}\times\vec{p}_{l^-})$ translates into a non-zero expectation value for $N_+ - N_-$. As far as this difference is concerned the only relevant part of the amplitude is the one involving the $\epsilon_{\mu\nu\alpha\beta}$ piece. We are left with evaluating the integral $$\begin{aligned} N_+&\propto& \int_{\theta=0}^{\pi/2} dPS_4 \left[ 4\, \textrm{Im}\,\left(a_R - a_L\right) \epsilon_{\mu\nu\alpha\beta}~p_{\tilde{t}}^\mu ~p_t^\nu~ p_{l^+}^\alpha~ p_{l^-}^\beta\right] \\\nonumber &\times& \frac{1}{(q^2-M_{\tilde{N}_a}^2)^2+\Gamma_{\tilde{N}_a}^2M_{\tilde{N}_a}^2}\, {\rm Re} \left( \frac{1}{k_1^2-M_{\tilde{l}}^2-i\Gamma_{\tilde{l}} M_{\tilde{l}}}\cdot \frac{1}{k_2^2-M_{\tilde{l}}^2+i\Gamma_{\tilde{l}} M_{\tilde{l}}} \right).\end{aligned}$$ A similar expression holds for $N_-$, only with the limits on the integrals being $(\pi/2, \pi)$. In the neutralino’s rest frame $\vec{p}_{\tilde t} = \vec{p}_t$, and therefore $$\begin{aligned} \epsilon_{\mu\nu\alpha\beta}~ p_{\tilde t}^\mu ~p^\nu_t~ p_{l^+}^\alpha~ p_{l^-}^\beta &= E_{\tilde t}\, \vec{p}_t \cdot \left( \vec{p}_{l^+}\times \vec{p}_{l^-}\right) - E_t\, \vec{p}_{\tilde t} \cdot \left( \vec{p}_{l^+}\times \vec{p}_{l^-}\right) \\ \nonumber &= M_{\tilde{N}_a}\, \vec{p}_t \cdot \left( \vec{p}_{l^+}\times \vec{p}_{l^-}\right) \\ \nonumber &= M_{\tilde{N}_a}\, |p_t|\, |p_{l^+}|\, |p_{l^-}| \cos\theta \sin\phi,\end{aligned}$$ where $\cos\theta$ is defined as above and $\phi$ is the angle between $\vec{p}_{l^+}$ and $\vec{p}_{l^-}$.
The four-body phase space can be written as $$\begin{aligned} dPS_4(p_{\tilde t} \rightarrow p_t+p_{l^+}+p_{l^-}+p_{\tilde N_1}) &= dPS_2(p_{\tilde t}\rightarrow p_t + q)~ \frac{dq^2}{2\pi} \\ \nonumber &\times dPS_3(q\rightarrow p_{l^+}+p_{l^-}+p_{\tilde N_1}),\end{aligned}$$ where the 2-body phase space integral is given by $$dPS_2(p_{\tilde t}\rightarrow p_t +q) = (2\pi)^4\delta^{(4)}(p_{\tilde t}- p_t -q) \frac{d^3p_t}{(2\pi)^32E_t}\frac{d^3q}{(2\pi)^32E_q}$$ The 3-body phase-space integral can be written in terms of dimensionless variables (see for example [@Barger:1987nn]) $$dPS_3(q \rightarrow p_{l^+}+p_{l^-}+p_{\tilde N_1}) = \frac{M_{\tilde{N}_a}^2}{256\pi^3} dx_+ dx_- ~d\cos\theta,$$ where $$x_+ = \frac{2 E_{l^+}}{M_{\tilde{N}_a}} \quad \text{and} \quad x_- = \frac{2 E_{l^-}}{M_{\tilde{N}_a}},$$ and the limits of integration are $$0<x_-<1-\mu_1,\quad 1-\mu_1-x_-<x_+<1-\frac{\mu_1}{1-x_-},$$ where $\mu_1 = M_{\tilde N_1}^2/M_{\tilde{N}_a}^2$. Despite its appearance, the integration domain is symmetric over $x_+ \leftrightarrow x_-$. In the narrow-width approximation, the neutralino’s propagator is $$\frac{1}{(q^2-M_{\tilde{N}_a}^2)^2+\Gamma_{\tilde{N}_a}^2M_{\tilde{N}_a}^2} \rightarrow \frac{\pi}{\Gamma_{\tilde{N}_a} M_{\tilde{N}_a}} \delta (q^2 - M_{\tilde{N}_a}^2),$$ and the $dq^2$ integration can be done trivially, setting $q^2 = M_{\tilde{N}_a}^2$ everywhere else in the expression. 
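The claimed $x_+ \leftrightarrow x_-$ symmetry of the integration domain can be checked numerically. The sketch below samples the unit square and verifies that the indicator function of the region defined by the limits above is unchanged under the swap (the value $\mu_1 = 0.3$ and the sample size are arbitrary test choices).

```python
import random

def in_domain(xp, xm, mu1):
    """Dalitz-region indicator: 0 < x_- < 1-mu1 and
    1-mu1-x_- < x_+ < 1-mu1/(1-x_-).  Equivalent to the symmetric
    conditions x_+ + x_- + mu1 > 1 and (1-x_+)(1-x_-) > mu1."""
    return (0.0 < xm < 1.0 - mu1
            and 1.0 - mu1 - xm < xp < 1.0 - mu1 / (1.0 - xm))

random.seed(12345)
mu1 = 0.3
mismatches = sum(
    in_domain(xp, xm, mu1) != in_domain(xm, xp, mu1)
    for xp, xm in ((random.random(), random.random()) for _ in range(100000))
)
# mismatches stays 0: the region is symmetric despite the asymmetric limits
```

The symmetry is also visible analytically: the stated limits are equivalent to $x_+ + x_- + \mu_1 > 1$ together with $(1-x_+)(1-x_-) > \mu_1$, both manifestly symmetric.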
Conservation of momentum fixes the angle between $\vec{p}_{l^+}$ and $\vec{p}_{l^-}$ to be $$x_+ x_- \sin\phi = 2\left((1-\mu_1-x_+-x_- +x_+x_-)(x_++x_-+\mu_1-1) \right)^{1/2}.$$ Noting that $k_1^2 = (q-p_{l^-})^2 = M_{\tilde{N}_a}^2(1-x_-)$ and $k_2^2 = (q-p_{l^+})^2 = M_{\tilde{N}_a}^2(1-x_+)$, the expression for $N_+$ simplifies to $$\begin{aligned} N_+ &\propto& \left(\int\frac{dPS_2}{2M_{\tilde t}}\right) \frac{1}{256\pi^3}\frac{M_{\tilde{N}_a} |\vec{p}_t|}{\Gamma_{\tilde{N}_a}M_{\tilde{N}_a}}\,\textrm{Im}\,(a_R-a_L)\\ \nonumber &\times & \int_{\cos\theta = 0}^{1}\cos\theta d\left(\cos\theta\right) \int dx_+ dx_- \frac{x_+ x_- \sin\phi}{\left(1-x_+-\mu_{\tilde{l}}\right)\left(1-x_--\mu_{\tilde{l}}\right)},\end{aligned}$$ where $\mu_{\tilde l} = M_{\tilde l}^2/M_{\tilde{N}_a}^2$. Integrating over $\cos\theta$, we arrive at the result quoted in the text for the difference $N_+ - N_-$ for a given integrated luminosity $\mathcal{L}$, $$\begin{aligned} \frac{N_+ - N_-}{\mathcal{L}} &= \frac{1}{256\pi^3}\left(\frac{M_{\tilde{N}_a}}{\Gamma_{\tilde{N}_a}}\right) \left(\int\frac{dPS_2}{2M_{\tilde t}} \right) \left(\frac{|\vec{p}_t|}{M_{\tilde{N}_a}}\right) \textrm{Im}\,(a_R-a_L) \\\nonumber &\times \int dx_+ dx_- \frac{\Bigl((1-\mu_1-x_+-x_-+x_+x_-)(x_++x_-+\mu_1-1) \Bigr)^{1/2}} {\left(1-x_+-\mu_{\tilde{l}}\right)\left(1-x_--\mu_{\tilde{l}}\right)}.\end{aligned}$$ For the total number of events $N_+ + N_-$ we must include the entire amplitude $$\frac{N_{\rm total}}{\mathcal{L}} = \int \frac{dPS_4}{2M_{\tilde t}} \left(|\mathcal{M}_a|^2 + |\mathcal{M}_b|^2 + 2\textrm{Re}(\mathcal{M}_a\mathcal{M}_b^*) \right).$$ Following the same path outlined above, it is straightforward to arrive at the result quoted in Eqs. (\[eqn:notIntTerms\]) and (\[eqn:IntTerms\]). [^1]: Our conventions are $g_{\mu\nu}={\rm diag} (1,-1,-1,-1)$ and $\epsilon_{0123}=-1$. [^2]: For a more thorough discussion of these phases see the excellent textbooks [@Branco:1999fs; @Bigi:2000yz].
[^3]: Averaging over both the top and anti-top processes, which correspond to $\langle{ \cal T} \rangle -\langle{\overline{\cal T}}\rangle$ in Section \[sec:GeneralDiscussion\], cancels the CP-violating effects and just leaves the strong phase contribution.
--- abstract: 'There is an observational correlation between astrophysical shocks and non-thermal particle distributions extending to high energies. As a first step toward investigating the possible feedback of these particles on the shock at the microscopic level, we perform particle-in-cell (PIC) simulations of a simplified environment consisting of uniform, interpenetrating plasmas, both with and without an additional population of cosmic rays. We vary the relative density of the counterstreaming plasmas, the strength of a homogeneous parallel magnetic field, and the energy density in cosmic rays. We compare the early development of the unstable spectrum for selected configurations without cosmic rays to the growth rates predicted from linear theory, for assurance that the system is well represented by the PIC technique. Within the parameter space explored, we do not detect an unambiguous signature of any cosmic-ray-induced effects on the microscopic instabilities that govern the formation of a shock. We demonstrate that an overly coarse distribution of energetic particles can artificially alter the statistical noise that produces the perturbative seeds of instabilities, and that such effects can be mitigated by increasing the density of computational particles.' author: - Thomas Stroman - Martin Pohl - Jacek Niemiec - Antoine Bret bibliography: - 'refs.bib' title: 'COULD COSMIC RAYS AFFECT INSTABILITIES IN THE TRANSITION LAYER OF NONRELATIVISTIC COLLISIONLESS SHOCKS?' --- Introduction {#sec:p3int} ============ Shocks form in a wide variety of astrophysical environments, from planetary bow shocks in the heliosphere to colliding clusters of galaxies. 
The presence of nonthermal particle populations in some of these environments is inferred from radio, X-ray, and $\gamma$-ray observations, and a number of theoretical mechanisms link strong shocks (such as those found in young supernova remnants) to particle acceleration processes [e.g., @1983RPPh...46..973D; @1983ApJ...270..537W]. An understanding of the nonlinear coupling between the shock, the energetic particles, and the spectrum of any excited waves is essential to the proper interpretation of the observational signatures of shocked systems as well as to a correct understanding of the origin of cosmic rays. In most cases the density of the shocked medium is sufficiently low for collisions between particles to be infrequent; such collisionless shocks are mediated through collective electromagnetic effects. The thickness of the shock transition region and the nature of the microphysics occurring therein may play a key role in the injection efficiency of thermal particles into the acceleration processes thought to accelerate cosmic rays to high energy [@1978MNRAS.182..147B; @1978MNRAS.182..443B; @2002PhPl....9.4293S]. In addition, efficient particle acceleration associated with some shock environments raises the question of the extent to which a significant cosmic-ray contribution to the local energy density can influence the structure of the shock front itself. This may be of particular importance in starburst galaxies, where massive stars and frequent supernovae increase the availability of sites for particle acceleration. The emission of TeV $\gamma$-rays from some nearby starburst galaxies suggests an abundance of cosmic rays that significantly exceeds the values observed locally [@2009Natur.462..770V]. A number of mechanisms exist whereby cosmic rays may play a role in shaping the shock and its environment.
A significant cosmic-ray contribution to the pressure upstream of a shock is understood to result in substantial modification to the shock environment [@1984ApJ...277..429E]; this effect is expected to produce an effective shock compression ratio that is perceived as being larger by cosmic rays of higher energy, possibly hardening the local cosmic-ray source spectrum [@2001RPPh...64..429M]. Under some conditions, the current carried by cosmic-ray ions diffusing ahead of the shock may excite instabilities in the upstream interstellar medium that lead to large-amplitude magnetic turbulence [@2004MNRAS.353..550B; @2011MNRAS.417.1148L; @2011ApJ...736..157R; @2011ApJ...738...93N] or other substantial alterations to the upstream environment, such as the heating of thermal electrons [e.g., @2008ApJ...684..348R]. However, these effects occur on spatial scales much larger than the thickness of the subshock and may operate quite independently of the processes occurring therein. In this paper, we turn our attention to those processes within the subshock. In order to quantify or constrain the effect of a “spectator” population of cosmic rays (that is, any cosmic rays present that have already been accelerated, locally or elsewhere) on the formation and evolution of nonrelativistic collisionless subshocks, we design simulations to explore the initial linear and subsequent nonlinear growth of instabilities that contribute to shaping the shock transition region. We restrict our focus to the interpenetration layer, in which two counterstreaming plasma shells overlap in space. This is fertile ground for the growth of well-known and well-studied instabilities [@2003ApJ...596L.121S; @2004ApJ...608L..13F; @2009ApJ...699..990B], and the two-plasma system will respond accordingly. 
We then repeat our simulations with an added cosmic-ray component, a plasma consisting of highly relativistic particles, so we may compare the systems at both the early and late stages of their evolution and identify whether the presence of the cosmic rays has any effect. The scale of the instabilities dictates the computational approach to modeling them. Hydrodynamical models of shocks are incapable of accurately resolving length scales much smaller than a particle mean free path and must therefore approximate subshocks as discontinuities, which is inadequate for our purposes. A self-consistent kinetic approach is necessary for a more complete exploration of the relevant small-scale physical processes, but simulating the entire subshock thickness is prohibitively costly. We therefore elect to simulate only a homogeneous portion of the subshock interior, and consequently cannot account for a number of instabilities that arise from spatial gradients or particle reflection. Here we use particle-in-cell (PIC) simulations in two spatial dimensions to model the interaction of counterstreaming plasma flows in the presence of a third, hot plasma of energetic particles. The PIC technique is a particle-based kinetic approach to modeling the self-consistent evolution of an arbitrary distribution of charged particles and electromagnetic fields, and thus is well suited to the nonlinear development of unstable plasma systems. In order to isolate the subshock instabilities from larger-scale spatial effects such as those arising from systematic charge separation, all distribution functions are initially homogeneous with respect to position. We consider the simplified case in which two electron–ion plasmas, not necessarily of equal density, move nonrelativistically through each other. This movement may also be parallel to a uniform magnetic field. 
We first observe the evolution of the drift velocities, field amplitudes, particle distributions, and wave spectra in the case when cosmic rays are negligible. We then include cosmic rays whose energy density is an order of magnitude below the kinetic energy in the bulk flow of the plasmas. We also explore the case of an artificially high energy density in cosmic rays, exceeding the value of equipartition with the bulk flows, to gain insight into effects that may be too small to arise in the case intended to represent a more realistic environment but that may nevertheless appear in regions where energetic particles are unusually abundant. However, our results suggest that even a high abundance of cosmic rays is not sufficient on its own to produce a significant deviation from the usual evolution of the instabilities shaping a shock. Objectives and approach ======================= To constrain the extent to which cosmic rays might influence the physics of the shock transition layer, we perform a series of simulations. Using the benchmarks of system behavior outlined in Section \[ssec:p3benchmarks\], we characterize the response of the counterstreaming-plasma system to the cascade of instabilities arising from interactions among the various plasma components in the absence of cosmic rays. We then repeat the simulations with cosmic rays present, so as to facilitate the side-by-side comparison of the various benchmarks. The parameter choices for the systems that we explore in this manner are described in detail in Section \[ssec:p3config\], and our simulation model and its implementation are described in Section \[ssec:p3setup\]. Then, a test of selected parameter configurations against the predictions from analytical studies of related systems is described in Section \[ssec:p3comp\]. Following a brief discussion, we conclude in Section \[sec:p3conc\]. 
Parameter-space configurations {#ssec:p3config} ------------------------------ The physical system under consideration is modeled as one homogeneous electron–ion plasma moving relative to another. These plasmas may in general have different densities. In addition, a uniform magnetic field may be aligned with the flow direction. Finally, a population of cosmic rays may be present, at rest in bulk in the center-of-momentum frame of the two plasmas. This is the reference frame chosen for the simulation, so the two plasmas flow in opposite directions; we adopt the nomenclature of “stream” and “counterstream” to distinguish between them. By our convention, the stream’s velocity is in the $-x$-direction, antiparallel to the guiding magnetic field when one is present. Our primary simulations are sensitive to variations in three parameters: the density ratio between the stream and the counterstream, the strength of the guiding magnetic field, and the energy density in the cosmic rays. The inter-plasma density ratio $w\equiv n_{s}/n_{\rm cs}$ takes one of three values, with the following designations: the “symmetric” case $w=1$, the “intermediate” case $w=0.3$, and the “dilute” case $w=0.1$. The velocity of the stream is fixed at ${\bf v_s}=-0.2c {\bf \hat{x}}$, while the velocity of the counterstream obeys the relation $n_{\rm cs}{\bf v_{cs}}+n_s{\bf v_s}=0$; in the dilute-stream case, the relative flow speed is therefore $0.22c$. Likewise, the density of the counterstream is fixed, so the stream density alone varies; the total electron density $n_e=n_{s}+n_{\rm cs}$, and thus the electron plasma frequency $\omega_{\rm pe}=\sqrt{e^2 n_e/\epsilon_0 m_e}$ (where $\epsilon_0$ is the vacuum permittivity), are largest in the symmetric case, with $\omega_{\rm pe}$ reduced by a factor $\sqrt{1.1/2}$ in the dilute case. 
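These kinematic relations are simple to verify. The sketch below (our own illustration in plain Python; the function and variable names are ours, not taken from the simulation code) computes the counterstream velocity from the zero-net-momentum condition and the plasma-frequency scaling for each density ratio:

```python
import math

def counterstream_velocity(w, v_s=-0.2):
    """Zero net momentum, n_cs*v_cs + n_s*v_s = 0, with w = n_s/n_cs (units of c)."""
    return -w * v_s

for w in (1.0, 0.3, 0.1):
    v_cs = counterstream_velocity(w)
    rel_speed = v_cs - (-0.2)                 # relative flow speed between the plasmas
    wpe_ratio = math.sqrt((1.0 + w) / 2.0)    # omega_pe relative to the symmetric case
    print(f"w={w}: v_cs={v_cs:+.3f}c, relative speed={rel_speed:.2f}c, "
          f"omega_pe/omega_pe(sym)={wpe_ratio:.3f}")
```

For $w=0.1$ this recovers the $0.22c$ relative flow speed and the $\sqrt{1.1/2}\approx0.74$ reduction of $\omega_{\rm pe}$ quoted above.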
The magnetic field $B_{0,x}$ may be absent, or present at either of two amplitudes given by the ratio of the electron cyclotron frequency $\Omega_e= e B_{0,x}/m_e$ to the electron plasma frequency, $b\equiv \Omega_e/\omega_{\rm pe}$. The “absent” magnetic field refers to $b=B_{0,x}=0$. The values designated “weak” and “strong” correspond to $b=0.01$ and $b=0.1$, respectively; in a plasma with electron density $n_e \sim 1$ cm$^{-3}$, for example, a magnetic field of $\sim 300$ $\mu$G is necessary for $b=0.1$. Since the speed of Alfvén hydromagnetic waves $v_A\equiv b\,c/\sqrt{1+m_i/m_e}$ is at most $0.014c$ for our choice of $m_i/m_e=50$, all of the plasma collisions we consider have Alfvénic Mach numbers significantly larger than unity (up to $\infty$ in the case when $b=0$). Note that because the electron plasma frequency is not independent of the density ratio $w$ in our simulations, neither is the absolute magnetic-field amplitude corresponding to a particular value of $b$. All simulations include cosmic-ray particles consisting of electrons and ions, initialized according to an isotropic distribution function and a single speed (Lorentz factor $\gamma_{\rm CR}=50$) whose statistical weight $w_{\rm CR}\equiv n_{\rm CR}/n_{s}$ is adjusted to three levels: “negligible” when $w_{\rm CR}\gamma_{\rm CR}=10^{-8}$, “present” when $w_{\rm CR}\gamma_{\rm CR}=10^{-3}$, and “abundant” when $w_{\rm CR}\gamma_{\rm CR}=10$. Since $w_{\rm CR}$ is defined in terms of the stream density $n_s$, the absolute energy density in cosmic rays also varies with the density ratio $w$, being 10 times larger in the symmetric case than the dilute case for each value of $w_{\rm CR}$. 
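As a sanity check on the magnetization figures, the following sketch (our own helper with an assumed name, not part of TRISTAN) evaluates the Alfvén speed for both field strengths at the simulated mass ratio, and the Alfvénic Mach number at the dilute-case relative flow speed of $0.22c$:

```python
import math

def alfven_speed(b, mi_over_me=50):
    """v_A/c = b/sqrt(1 + m_i/m_e), with b = Omega_e/omega_pe."""
    return b / math.sqrt(1.0 + mi_over_me)

for label, b in (("weak", 0.01), ("strong", 0.1)):
    v_A = alfven_speed(b)
    mach = 0.22 / v_A   # dilute-case relative flow speed of 0.22c
    print(f"{label} field: v_A = {v_A:.4f}c, Alfvenic Mach number ~ {mach:.0f}")
```

Even in the strong-field case, $v_A\approx0.014c$, so the Mach number stays well above unity, as stated.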
Neglecting the contribution of electrons, the bulk kinetic energy in the stream plus counterstream is of order $n_{\rm cs}m_i v_s^2 w(1+w)/2$, while the cosmic-ray energy density is of order $n_{\rm CR}\gamma_{\rm CR}m_i c^2=25w_{\rm CR}\gamma_{\rm CR}w n_{\rm cs}m_i v_s^2$, where we have used the relation $v_s/c=0.2$. Thus, the ratio of cosmic-ray energy density to bulk kinetic energy density is $50 w_{\rm CR}\gamma_{\rm CR}/(1+w)$: considerably larger than unity for the “abundant” case and a few percent in the “present” case. Simulation setup {#ssec:p3setup} ---------------- Our simulations employ a modified version of the relativistic electromagnetic PIC code TRISTAN [@1993cspp.book......M], updated for parallel use with MPI, operating in two spatial dimensions, with three-component velocity and field vectors (2D3V), with periodic boundary conditions. The charge-conserving current deposition routine of @2003CoPhC.156...73U and the field update algorithm with fourth-order accuracy from @2004JCoPh.201..665G are the most prominent additions, as well as digital filtering of electric currents to suppress small-scale noise via an iterative smoothing algorithm. The primary set of simulations, 27 in total, was conducted on a spatial grid in the $x$–$y$ plane of size $280\lambda_{\rm se}\times 180\lambda_{\rm se}$ (periodic boundary conditions in $x$ and $y$, with elongation in the flow direction $x$), where $\lambda_{\rm se}\equiv c/\omega_{\rm pe}=10\Delta$ is the electron skin depth, set to 10 grid cells of length $\Delta$. The electron plasma frequency $\omega_{\rm pe}$ is determined from the sum of the stream and counterstream electron densities only and thus depends on $w$ but not on $w_{\rm CR}$. Supplementary high-resolution simulations in which $\lambda_{\rm se}=30\Delta$ were performed on a grid of more cells but representing a smaller physical region, $128\lambda_{\rm se}\times 96\lambda_{\rm se}$. 
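The cosmic-ray-to-bulk energy ratio $50\,w_{\rm CR}\gamma_{\rm CR}/(1+w)$ estimated just before this subsection can be tabulated quickly for the three cosmic-ray weights (a sketch with names of our choosing):

```python
def cr_to_bulk_energy_ratio(wcr_gamma, w):
    """50 * w_CR*gamma_CR / (1 + w), valid for v_s = 0.2c and ion-dominated energetics."""
    return 50.0 * wcr_gamma / (1.0 + w)

for label, wcr_gamma in (("negligible", 1e-8), ("present", 1e-3), ("abundant", 10.0)):
    for w in (1.0, 0.1):
        r = cr_to_bulk_energy_ratio(wcr_gamma, w)
        print(f"{label:>10}, w={w}: energy ratio = {r:.3g}")
```

The "abundant" configuration yields ratios of a few hundred, and the "present" configuration a few percent, consistent with the statements above.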
The time step $\delta t$ was chosen such that $\omega_{\rm pe}^{-1}\approx 22\delta t$ for the $\lambda_{\rm se}=10\Delta$ simulations, or $\omega_{\rm pe}^{-1}\approx 66\delta t$ for the high-resolution $\lambda_{\rm se}=30\Delta$ simulations. Six separate particle populations from three plasmas are modeled: the “stream” moving in the $-x$-direction, the “counterstream” moving in the $+x$-direction, and the energetic particles representing cosmic rays. Within each plasma, ions and electrons of charge $\pm e$ and mass ratio $m_i/m_e=50$ have equal charge density and a common drift velocity so that the entire setup has no net current and no charge imbalance. The artificially low mass ratio expedites the simulations, but may change some modes [@2007GeoRL..3414109H]. Quite a few of these possibly modified modes are not included here, partly because we do not consider the spatial structure of subshocks, partly because they do not fit onto the computational grid, and partly because we do not simulate situations involving an oblique or a perpendicular large-scale magnetic field. In any case, the analytical treatment of instabilities presented in Section \[ssec:p3comp\] accounts for the small mass ratio, and in that sense our approach is consistent, at least for the linear phase. Each cell in a primary (high-resolution supplementary) simulation is initialized with a total of 90 (120) computational particles: 20 stream ions, 20 counterstream ions, and 5 (20) cosmic-ray ions; and an electron for each ion. The physical density of each plasma is manipulated through the assignment of the appropriate statistical weights, $w$ and $w_{\rm CR}$, to the various particle species. 
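The grid and time-step choices above fix the Courant number of the field solver. The arithmetic below (ours, not from TRISTAN) shows that both resolutions share the same margin; comparing it against the standard two-dimensional FDTD stability bound $c\,\delta t/\Delta < 1/\sqrt{2}$ is our own check, not a figure quoted in the paper:

```python
import math

# lambda_se in cells and omega_pe^-1 in time steps fix the Courant number:
# c*dt/Delta = (c/omega_pe / Delta) * (dt / omega_pe^-1) = cells_per_skin/steps
for cells_per_skin, steps_per_plasma_time in ((10, 22), (30, 66)):
    courant = cells_per_skin / steps_per_plasma_time
    print(f"lambda_se = {cells_per_skin} cells: c*dt/Delta = {courant:.3f}")

print(f"2D Yee stability bound: 1/sqrt(2) = {1/math.sqrt(2):.3f}")
```

Both configurations give $c\,\delta t/\Delta\approx0.45$, comfortably inside the bound.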
The stream and counterstream, viewed from their respective rest frames at $v_s=-0.2c\hat{x}$ and $v_{\rm cs}=w\times 0.2 c \hat{x}$, are described by a Maxwell–Boltzmann distribution in which the electrons’ most probable speed is given by $v_{{\rm th},e}=0.01c$ and the ions are in equilibrium with the electrons. The cosmic rays, whose rest frame is the simulation frame $v_{\rm CR}=0$, are isotropic and each is initialized with Lorentz factor $\gamma_{\rm CR}=50$, regardless of whether it is an ion or an electron. Placing the cosmic-ray population at rest in the center-of-momentum frame minimizes streaming in the collision zone, and therefore reduces known cosmic-ray-driven instabilities, which can be independently studied [e.g., @2008ApJ...684.1174N; @2009MNRAS.397.1402L; @2009ApJ...698..445O; @2009ApJ...706...38S; @2009ApJ...694..626R; @2010ApJ...711L.127G]. We are interested in whether or not cosmic rays can modify instabilities operating at subshocks, and therefore suppress cosmic-ray streaming instabilities. Behavioral benchmarks {#ssec:p3benchmarks} --------------------- To provide a basis for comparison among different cosmic-ray densities $w_{\rm CR}$ for each combination of plasma density ratio $w$ and magnetic-field amplitude $b$, we select the following attributes of the system for study: the drift velocity of each particle population, the instantaneous root-mean-square amplitude of the parallel and perpendicular components of the electric and magnetic field over the entire simulation domain, the effective temperature of each stream or counterstream particle species, and the spectrum of excited wave modes in the magnetic field. As there is no initial bulk motion in the perpendicular directions, only the parallel component $V_x$ of drift velocity is considered. 
The electric field amplitudes are presented in units of the scaling electric field $E_\omega\equiv\omega_{\rm pe}\,c\,m_e/e$ (equivalent to the field at which the electric energy density is half the electrons’ rest-mass energy density, $\epsilon_0 E^2/2=n_e m_e c^2/2$); the magnetic field multiplied by $c$ is expressible in the same units. The particle distributions may not remain strictly Maxwellian throughout the duration of the simulation. As a surrogate for temperature, therefore, the mean random kinetic energy of the electrons and ions of the stream and counterstream is calculated by determining the systematic velocity component within a $10\Delta\times 10\Delta$ region and eliminating this local bulk motion via an appropriate Lorentz transformation; the mean post-transformation Lorentz factor $\gamma'$ corresponds only to the random motion. Finally, we will explore the effect of cosmic rays on the time evolution of the spectral decomposition of the perpendicular (out-of-plane) magnetic field $B_z$ into its spatial Fourier components, both parallel (wave number $k_x=k_\parallel$) and perpendicular to the drift (wave number $k_y=k_\perp$). Comparison with analytical beam–plasma predictions {#ssec:p3comp} ================================================== As a test that our simulation results were consistent with theory, we applied the methods of @2009ApJ...699..990B to selected stream–counterstream configurations without cosmic rays. Whether magnetized or not, beam–plasma systems (in which a fast, dilute “beam” plays a role comparable to that of our stream, with the dense “plasma” representing our counterstream) are susceptible to a host of both electrostatic and electromagnetic instabilities. For flow-aligned wave vectors, electrostatic modes such as two-stream or Buneman are likely to grow. In the direction normal to the flow, the filamentation instability (sometimes referred to as “Weibel”) is usually excited as well. 
Finally, modes with wave vectors oriented obliquely are likewise unstable, so that the unstable spectrum is eventually at least two dimensional. The full spectrum was first evaluated by solving the exact dispersion equation in the cold approximation, thus accounting both for a guiding magnetic field and for finite-mass ions. It turns out that for the present configuration, ions play a very limited role (in the linear phase) and are not responsible for any unstable modes that would not be excited if they were infinitely massive. Unlike settings exhibiting nonresonant modes, for example [@2004MNRAS.353..550B; @2009ApJ...706...38S], where a single proton beam is considered without electrons moving [*at the same speed*]{}, we are here dealing with a plasma-shell collision, where protons and electrons are comoving. As a result, the effect of finite-mass ions is simply a first-order correction to the electronic spectrum. The dispersion equation displays the very same branches, and the growth rate is altered only at order $\mathcal{O}(m_e/m_i)$. The dispersion equation for arbitrarily oriented modes with the flow along the $x$-axis reads [@bret:120501] $$(\omega^2\epsilon_{xx}-k_y^2c^2)(\omega^2\epsilon_{yy}-k_x^2c^2)- (\omega^2\epsilon_{xy}-k_yk_xc^2)^2 =0,$$ where the tensor elements $\epsilon_{\alpha\beta}$ are given by $$\label{eq:epsi_general} \epsilon_{\alpha \beta }(\mathbf{k},\omega) = \delta _{\alpha \beta } +\sum_j\frac{\omega_{pj}^2}{\omega^2}\int d^3p \, \frac{p_{\alpha }} {\gamma(\mathbf{p}) }\frac{\partial f_j^0}{\partial p_{\beta }} +\sum_j\frac{\omega_{pj}^2}{\omega^2}\int d^3p\, \frac{p_{\alpha }p_{\beta }}{\gamma(\mathbf{p})^2 } \frac{\mathbf{k}\cdot \left(\frac{\partial f_j^0} {\partial \mathbf{p}}\right)}{m_j\omega -\mathbf{k}\cdot \mathbf{p}/\gamma(\mathbf{p}) } ,$$ and the sum runs over the species involved in the system. 
For each species, the distribution function $f_j^0$ includes both the stream and the counterstream, and the corresponding plasma frequency $\omega_{pj}$ is calculated using the sum of their densities. The results in the cold-plasma limit are evaluated using Dirac delta distribution functions. Though lengthy, the calculations are straightforward. Setting $k_y=k_\perp=0$ in the equations above allows us to derive the dispersion equation for flow-aligned modes such as the two-stream ones. Setting $k_x=k_\parallel=0$ gives the dispersion equation for the filamentation modes. We present analytical results for the symmetric and the dilute-beam cases. While such expressions can be derived considering either $w=1$ or $w\ll 1$, expressions valid for any density ratio $w$ are much more involved (when they exist). This explains why the results below for $w=1$ cannot be derived from those with $w\ll 1$. For flow-aligned wave vectors, the most unstable wave vector $k_{\parallel,m}$ and maximum growth rate $\gamma_m$ read (with numerical evaluation for the configuration displayed in Figure \[fig:p3bret\]) $$\begin{aligned} k_{\parallel,m}\lambda_{\rm se}\frac{\Delta v}{c} & = & \frac{\sqrt{3}\sqrt{1+m_e/m_i}}{\sqrt{2} \Gamma_s^{3/2}}=1.19,\nonumber \\ \frac{\gamma_m}{\omega_{\rm pe}} &=& \frac{\sqrt{1+m_e/m_i}}{\sqrt{2} \Gamma_s^{3/2}}=0.68,~~\mathrm{symmetric~case},\end{aligned}$$ and $$\begin{aligned} k_{\parallel,m}\lambda_{\rm se}\frac{\Delta v}{c}&=& \sqrt{1+w}=1.04,~~w\ll 1\nonumber \\ \frac{\gamma_m}{\omega_{\rm pe}} &=& \frac{\sqrt{3(1+w)}}{2^{4/3}}\frac{w^{1/3}(1+m_e/m_i)^{1/3}}{\Gamma_s}=0.31,~~\mathrm{dilute~case},\end{aligned}$$ where $\Gamma_s=\left(1-{v_s}^2/c^2\right)^{-1/2}$ is the bulk Lorentz factor of the stream, which moves in the simulation frame with speed $v_s=0.417c/(1+w)$. As seen in Figure \[fig:p3bret\], panels (a) and (f), oblique modes dominate for the mildly relativistic conditions of the simulation. 
The full-spectrum maximum growth rate is thus slightly larger than the numerical values calculated above for modes propagating along the flow. For wave vectors normal to the flow, the growth rate reaches its maximum for $k_\perp=\infty$, with $$\frac{\gamma_m}{\omega_{\rm pe}}=2\frac{v_s}{c}\sqrt{\frac{1+m_e/m_i}{\Gamma_s}}=0.41,~~\mathrm{symmetric~case},$$ and $$\frac{\gamma_m}{\omega_{\rm pe}}= \frac{v_s}{c}\sqrt{\frac{w(1+w)(1+m_e/m_i)}{\Gamma_s}}=0.12,~~\mathrm{dilute~case}.$$ These filamentation growth rates have been calculated neglecting the magnetic field, which is small ($\Omega_e=0.01\omega_{\rm pe}$) in the present setup. Electrostatic unstable modes propagating along the flow are rigorously insensitive to the flow-aligned magnetic field. As long as $m_e/m_i\ll 1$, the two-dimensional linear spectra computed with or without finite-mass protons are indistinguishable. The hot spectra, accounting for the $v_{{\rm th},e}=0.01c$ thermal spread of the electrons, were then calculated assuming infinitely heavy protons and using the fluid approximation described in @2006PhPl...13d2106B. The predicted growth rates for the two-dimensional $k_\parallel$,$k_\perp$ wave-vector space are plotted in Figure \[fig:p3bret\], panels (b) and (g) for the dilute and symmetric streams, respectively, when the stream velocity is $0.417c$ (corresponding to a bulk Lorentz factor $\Gamma_{\rm rel}=1.1$) [*relative to*]{} the counterstream. To match the parameters available for the calculations, the simulations performed for this comparison were adjusted slightly, with an increased ion mass and, in the dilute case, a larger relative drift speed between the plasmas. The calculations also make predictions for modes at very large wavelengths in both spatial dimensions. 
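The closed-form maxima quoted above are straightforward to evaluate. The script below (our own, with assumed variable names) uses the comparison parameters $m_i/m_e=100$ and relative speed $0.417c$; it recovers the quoted values to within rounding (e.g., it yields $0.687$ where $0.68$ is quoted, and $1.049$ where $1.04$ is quoted, suggesting those figures were truncated):

```python
import math

me_mi = 1.0 / 100.0   # comparison mass ratio m_e/m_i
v_rel = 0.417         # relative stream-counterstream speed, units of c

def stream_frame(w):
    """Stream speed in the simulation frame, v_s = 0.417c/(1+w), and its Lorentz factor."""
    vs = v_rel / (1.0 + w)
    return vs, 1.0 / math.sqrt(1.0 - vs**2)

# symmetric case, w = 1: two-stream maximum and filamentation maximum
vs, G = stream_frame(1.0)
k_sym = math.sqrt(3.0) * math.sqrt(1.0 + me_mi) / (math.sqrt(2.0) * G**1.5)
g_sym = math.sqrt(1.0 + me_mi) / (math.sqrt(2.0) * G**1.5)
f_sym = 2.0 * vs * math.sqrt((1.0 + me_mi) / G)

# dilute case, w = 0.1
w = 0.1
vs, G = stream_frame(w)
k_dil = math.sqrt(1.0 + w)
g_dil = math.sqrt(3.0 * (1.0 + w)) / 2.0**(4.0/3.0) * (w * (1.0 + me_mi))**(1.0/3.0) / G
f_dil = vs * math.sqrt(w * (1.0 + w) * (1.0 + me_mi) / G)

print(f"two-stream:    symmetric k={k_sym:.3f}, gamma={g_sym:.3f}; "
      f"dilute k={k_dil:.3f}, gamma={g_dil:.3f}")
print(f"filamentation: symmetric gamma={f_sym:.3f}; dilute gamma={f_dil:.3f}")
```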
We therefore repeated the early stage of our simulation with the comparison parameters $m_i/m_e=100$ and $\Gamma_{\rm rel}=1.1$, on an enlarged grid of $3840\Delta\times3840\Delta$, and $\lambda_{\rm se}=30\Delta$. To make a comparison with the analytical predictions, we extract the two-dimensional ${\bf k}$ spectrum at times separated by only a short interval intended to capture the earliest linear growth of the instabilities, and compute the average growth rate, which is plotted in Figure \[fig:p3bret\] panels (c)–(e) and (h)–(j) for the dilute and symmetric cases, respectively. The agreement is satisfactory, both qualitatively and quantitatively, and the dominant modes are correctly rendered. As expected, the cold theoretical spectrum saturates at high $k_\perp$ while the hot version displays a local extremum for an oblique wave vector, as the kinetic pressure prevents the pinching of high-$k_\perp$ small filaments [@2002PhPl....9.2458S]. Moreover, for high $k_\perp$, the PIC spectrum describes wavelengths only a few cells long, small enough to be affected by the smoothing algorithm. This explains why these modes’ growth is slower than expected. An electron skin depth of several hundred cells would almost certainly provide sufficient separation from the filtering length for the plots to agree better with the theoretical ones in this region, but such a simulation could not be large enough, or run long enough, to observe the later evolution without prohibitive computational expense. Modes with $k_\perp=0$ are purely electrostatic, and produce no magnetic field. This is why their growth rate is much better rendered when measuring the $E$ spectrum rather than the $B$ spectrum. Conversely, modes with $k_\parallel=0$ are mostly electromagnetic with a very small phase velocity, which explains why their growth is only evident in the $B$ spectrum. 
Results {#sec:p3res} ======= The qualitative behavior of the counterstreaming-plasma system in the absence of cosmic rays exhibits a dependence on the density ratio $w$ particularly in the earliest stages of evolution, and at later times the impact of the magnetic field $b$ becomes prominent. Although the details differ from one simulation to the next, the general behavior is similar to that seen in the three-dimensional simulations of @2004ApJ...608L..13F, in which the collision of electron–ion plasmas is characterized by the formation of current channels first in the electrons, and later in the ions, which proceed to merge and grow. In our experiments, this growth of the channels and associated structure in the magnetic field eventually reaches a size comparable to the simulation domain, at which point the imposed periodic boundary conditions prevent further enlargement. However, the time necessary for this to occur—hundreds or thousands of $\omega_{\rm pe}^{-1}$—may exceed the residence time of a given particle in the subshock. The benchmark behavior for our simulations of negligible-cosmic-ray energy density is plotted in Figure \[f-na\] (magnetic field absent), Figure \[f-nw\] (weak magnetic field), and Figure \[f-ns\] (strong magnetic field); the three columns of each plot correspond to the symmetric, intermediate, and dilute density ratios $w$. The role of the density ratio is prominent in the action of the two-stream instability on the drift velocity of the counterstreaming electron populations [@1999ApJ...526..697M]. For all considered values of magnetic field $b$, there is an abrupt deceleration of the electrons, both the stream and counterstream, when $20 < \omega_{\rm pe}t < 30$. In the $w=1$ symmetric case, this initial deceleration strips the electrons of nearly 90% of their relative drift, but in the dilute case, less than half the drift is removed. 
The electric and magnetic fields are amplified substantially at this point, but the reduced electron drift suppresses further immediate amplification. The ions are slower to respond, and their evolution differs qualitatively from that of the electrons: they form spatially alternating, long-lived current channels. As in lanes of vehicular traffic, ions moving in one direction become spatially segregated from those moving in the opposite direction, and this greatly lengthens the time for the ion drift velocities to converge, though considerable heating of the ions takes place prior to any significant systematic deceleration. The convergence of ion drift velocities reveals the primary effect of the magnetic field: the ion speeds remain well separated throughout the simulation lifetime in the absent or weak magnetic-field cases, but the strong magnetic field brings them together within roughly $10^4 \omega_{\rm pe}^{-1}$ for all density ratios. It may be that at least in this two-dimensional simulation, the transverse motion necessary for separating the ions into current channels is slightly inhibited by the guiding magnetic field, increasing the extent to which the counterstreaming ion populations are forced to interact. However, it is worth noting once more that the late-term behavior of the simulations is suspect on account of the structure size becoming comparable to the domain boundaries, an artificial upper limit. Behavior including cosmic rays {#ssec:p3cr} ------------------------------ For the configurations we considered, the presence of cosmic rays does not appear to result in any significant deviations from the behavior observed in their absence. When their energy density is given a value intended to represent conditions typical of the Galactic disk, no differences from the negligible-cosmic-ray configuration are observed. 
When cosmic rays are given an exaggerated abundance, subtle effects do appear in the simulation, but upon inspection they are disregarded for one of two reasons: they can be dismissed as numerical effects arising from the finite number of cosmic-ray particles employed in our simulation, or else they result in minor quantitative changes in the evolution that are of greatest prominence only when the effect of the periodic boundaries is already non-negligible. When the cosmic-ray weight is comparable to that of the plasma particles, their speed (approximately $c$) maximizes the amplitude of current-density fluctuations resulting from the statistically expected local departures from homogeneity. In order to verify that the observed differences resulted from our choice of representation and not from some underlying physics, we repeated an “abundant” simulation with a tenfold increase in computational particles representing the same physical cosmic-ray density. Figure \[fig:p3crcp\] illustrates via the electromagnetic field amplitudes that the statistical noise levels arising at the earliest times saturate at a level $\sqrt{10}$ lower when cosmic rays are represented by 50 particles per cell instead of 5, bringing both the noise level and the detailed time evolution into better agreement with the “negligible” case. A further significant increase in the computational particle count is too expensive for direct comparison with the simulations in Figure \[fig:p3crcp\]. Using a smaller computational grid, a simulation with 500 cosmic-ray particles per cell (leaving the other plasmas at their original 20 per cell) illustrates the continuation of the trend observed with 50 per cell. Nevertheless, the remaining difference in non-noise behavior is already nearly imperceptible at just 50 computational particles per cell. 
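The $\sqrt{10}$ drop in the early-time noise floor is the expected shot-noise scaling: with the physical cosmic-ray density held fixed, each computational particle carries weight $\propto 1/N$, so the rms of a sum of $N$ randomly directed unit currents scales as $\sqrt{N}/N = 1/\sqrt{N}$. A toy Monte Carlo (ours, not the PIC code) illustrates the scaling between 5 and 50 particles per cell:

```python
import math, random

def rms_current_noise(n_per_cell, trials=10000, seed=12345):
    """rms x-current from n isotropic unit-speed particles in a cell,
    each carrying weight 1/n so the physical density is held fixed."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # the x-component of an isotropic 3D unit vector is uniform on [-1, 1]
        jx = sum(rng.uniform(-1.0, 1.0) for _ in range(n_per_cell)) / n_per_cell
        total += jx * jx
    return math.sqrt(total / trials)

ratio = rms_current_noise(5) / rms_current_noise(50)
print(f"noise(5 ppc)/noise(50 ppc) = {ratio:.2f}, sqrt(10) = {math.sqrt(10):.2f}")
```

The measured ratio approaches $\sqrt{10}\approx 3.16$ as the number of trials grows, mirroring the reduction seen in Figure \[fig:p3crcp\].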
This effect is paralleled in the other aspects of the system’s evolution in which abundant cosmic rays appeared to result in minor differences, such as drift velocities and wave spectra. Discussion and conclusions {#sec:p3conc} ========================== Motivated by an interest in possible effects of cosmic rays on the physics governing the development of collisionless astrophysical shocks, we have performed several multidimensional PIC simulations of counterstreaming plasmas with various density ratios and magnetic-field strengths, both with and without a background population of energetic cosmic rays. This initially homogeneous environment is intended to represent the interior of the subshock, or shock transition layer. Before cosmic rays are added to the picture, the system resembles the subject of numerous beam–plasma or interpenetrating-plasma studies, where the initial behavior of the system can be understood in terms of known instabilities. Most prominently, the counterstreaming electron populations are the first to interact, via symmetric or asymmetric two-stream instabilities, as seen in, e.g., @1999ApJ...526..697M and @2004ApJ...608L..13F. In particular, the three-dimensional simulations of unmagnetized electron–ion plasma collisions by @2004ApJ...608L..13F demonstrate the formation of merging and growing current filaments in first the electrons and subsequently the ions, and the nuanced relationships among the various populations. One limitation of our approach is the reduced electron–ion mass ratio, $m_e/m_i=1/50$. @2010PhPl...17c2109B have explored the effect of the mass ratio on the hierarchy of unstable modes in beam–plasma systems. By requiring that a reduced mass ratio leave the nature of the most unstable mode unchanged from that at the physical value of 1/1836, they articulated a criterion for the largest acceptable mass ratio. In the present case, the unstable linear spectrum depends very weakly on the mass ratio when no cosmic rays are introduced. 
As stated in Section \[ssec:p3comp\], finite-mass ions do not add any extra unstable branches to the dispersion equation. As a result, the Bret–Dieckmann criterion is necessarily fulfilled in our configuration without cosmic rays. Turning now to the simulations including cosmic rays, we found no significant cosmic-ray effects even with the reduced mass ratio applied to the cosmic-ray population. It is thus likely that a real mass ratio would result in an even weaker effect, leaving our conclusions unchanged. Even at just 50 times the electron mass, however, the behavior of the ions is markedly different. For the most part, the electrons produce and respond to turbulent small-scale electromagnetic fields that serve to mix their distribution functions; the electrons act in concert as a single population while the ions of the stream are still easily distinguished from the ions of the counterstream on account of the filaments. Only when the filamentary structures have enlarged until relatively few repetitions are contained within the simulation plane do the ions begin to converge, and in some of our simulations that convergence remains speculative on account of their finite duration. While it would be possible to extend the simulations ostensibly to observe the eventual convergence, this would be of limited value without simultaneously increasing the dimensions of the simulation domain to mitigate the effect of the periodic boundary conditions. An appreciable increase in size, however, may require us to abandon the simplicity of our present configuration by taking into account large-scale spatial variations, perhaps involving a clear distinction between upstream and downstream regions and an unstable charge-separation layer resulting from differences in the electrons’ and ions’ evolution. 
When we compare the system evolution in the presence of cosmic rays with that in their absence, we find that for the physical configurations studied, cosmic rays do not introduce a statistically significant departure from the unperturbed results described in Section \[sec:p3res\]. This may be a consequence of the comparatively large mean free path and characteristic timescale for evolution of the cosmic-ray distribution: even with the amplification of electric and magnetic fields within the transition layer, cosmic rays of modest energy apparently do not couple to the dynamics of thermal electrons and ions in any appreciable way. We surmise that at least for unmagnetized or parallel subshocks, the impact of cosmic rays—even when their energy density is unusually large—on the instabilities mediating the subshock transition is negligible. Cosmic rays may still indirectly affect various properties of the shock, by modifying the upstream environment from its quiescent characteristics [@2009ApJ...706...38S]: a shock will of necessity propagate differently through a turbulent, heated upstream medium—perhaps with a greatly amplified magnetic field—from the comparatively clean case of a uniform, cold interstellar medium in a gently fluctuating Galactic magnetic field. Irrespective of the cosmic-ray abundance, the rapid development of turbulence in the shock transition layer and the associated heating of the electrons in particular may provide an enlarged pool of candidates for injection into the standard diffusive shock acceleration mechanism. The combinations of parameters we explored did not yield any appreciable effects that could be attributed to the presence of cosmic rays. While we chose parameters intended to be relevant to nonrelativistic astrophysical shocks in environments where the presence of cosmic rays is suspected, it is nevertheless possible that in other, more exotic environments, cosmic rays may yet play some unforeseen role in the subshock microphysics. 
Future simulations in three dimensions, assigning additional degrees of freedom to the magnetic field and the cosmic-ray population, may uncover effects that eluded the present analysis. However, a departure from the quasiparallel-shock configuration will also introduce effects not observed here, such as the development of magnetosonic waves, which can be more sensitive to the values of the mass ratio or other simulation parameters; this in turn may affect the dissipation of excited unstable modes [see, e.g., @2007GeoRL..3414109H]. In any case, the dearth of differences between the system with a grossly exaggerated cosmic-ray energy density and the one in which it was negligible provides a sense of reassurance that the physics of perhaps a majority of astrophysical shock-forming instabilities can in principle be understood without invoking some direct microscopic interference by a spectator population of cosmic rays. This research was supported in part by the National Science Foundation both through TeraGrid resources provided by NCSA [@Teragrid] and under Grant No. PHY05-51164. The work of A.B. is supported by projects ENE2009-09276 of the Spanish Ministerio de Educación y Ciencia and PEII11-0056-1890 of the Consejería de Educación y Ciencia de la Junta de Comunidades de Castilla-La Mancha. The work of J.N. is supported by MNiSW research project N N203 393034, and The Foundation for Polish Science through the HOMING program, which is supported by a grant from Iceland, Liechtenstein, and Norway through the EEA Financial Mechanism. M.P. acknowledges support through grant PO 1508/1-1 of the Deutsche Forschungsgemeinschaft. ![image](fig1){width="6.2in"} ![image](fig2){width="6in"} ![Selected results from the “negligible-weak” family of simulations. The initial magnetic field is set such that the electron cyclotron frequency is $\Omega_e\equiv e\,B/m_e=0.01\omega_{\rm pe}$.
See the caption to Figure \[f-na\] for a detailed description of each plot.[]{data-label="f-nw"}](fig3){width="6in"} ![Selected results from the “negligible-strong” family of simulations. The initial magnetic field is set such that the electron cyclotron frequency is $\Omega_e=0.1\omega_{\rm pe}$. See the caption to Figure \[f-na\] for a detailed description of each plot.[]{data-label="f-ns"}](fig4){width="6in"} ![image](fig5){width="6.2in"}
--- abstract: 'We show that trial-to-trial variability in sensory detection of a weak visual stimulus is dramatically diminished when rather than presenting a fixed stimulus contrast, fluctuations in a subject’s judgment are matched by fluctuations in stimulus contrast. This attenuation of fluctuations does not involve a change in the subject’s psychometric function. The result is consistent with the interpretation of trial-to-trial variability in this sensory detection task being a high-level meta-cognitive control process that explores for something that our brains are so used to: subject-object relational dynamics.' author: - | [^1]  and Avner Wallach\ ** bibliography: - 'PsyRC.bib' title: | Relational Dynamics in Perception:\ Impacts on trial-to-trial variation --- Trial-to-trial variation in responses to repeated presentations of the same weak sensory object is noticeable in practically every cognitive modality. These fluctuations, which have been labeled “internal”, “unexplained” or “inherent” noise, are correlated over extended timescales [@WERTHEIMER:1953kx; @Gilden:2001vn; @Monto:2008ys] and inversely related to the degree of stimulus-determinability [@CONKLIN:1956zr]. Over the years since the inception of psychophysics, there has been a shift in how the source of trial-to-trial variation at threshold is explained. At present, the concept of noisy neural response dynamics that “poses a fundamental problem for information processing” is dominant [@Faisal:2008ly]. Yet there is something very unnatural in the way traditional psychophysical studies of sensory detection - studies that expose extensive trial-to-trial variation at threshold - are set up. In real-life situations, when encountering a weak sensory stimulus that deserves attention, we try to “do something about it”. Consider the set of operations performed by the average man over 50 confronted with a barely detectable printed text: tilting the page, exposing it to enhanced light conditions, etc.
The stimulus itself becomes dynamic. If the barely detected stimulus originates from another subject, we (for instance) might lean forward or ask that other person to raise his voice or to present the object in a more favorable manner. Again, the stimulus itself becomes dynamic. Indeed, in real-life situations, our attempts to “do something about” the barely detected stimulus *impact the stimulus dynamics*, although not necessarily our capacity to detect it. Thus, natural perception involves an expectation of the perceiver for an ongoing coupling between his actions and the threshold-level stimulus dynamics. In that sense, natural perception is relational. This is real life, but in standard psychophysical experiments the situation is different: in these experiments much effort is invested by the experimentalist to control the conditions so that the threshold-level stimulus remains static. The subject might actively explore various features of the stimulus (“active sensing”), but any given stimulus feature in these standard psychophysical designs remains the same regardless of the subject’s behavior. Hence no feedback between the subject’s actions and the stimulus dynamics is involved, and perception becomes non-relational. In view of the above, and encouraged by old and recent analyses that reveal rich temporal structure and non-independence in response fluctuations at threshold over extended timescales [@CONKLIN:1956zr; @Gilden:2001vn; @Monto:2008ys; @Marom:2010ve and references therein], we turned to examine the possibility that trial-to-trial variation in responses to repeated presentations of the same weak sensory object does not reflect an inherent noise that constrains sensory acuity and information processing. Rather, we hypothesize that most of the observed variability in responses to weak stimuli is due to an active cognitive exploratory process, seeking a coupling between the stimulus dynamics and the subject’s behavior.
To test this hypothesis we have used a generic feedback loop control algorithm, endowing a visual stimulus in a detection task with the capacity to match its contrast on-line to the subject’s performance, while “clamping” the performance at a predefined (mostly 0.5) probability of detection. We show that once such relations are established (i.e., as long as the control algorithm is active), trial-to-trial variability is dramatically diminished, breaking the apparent limits of inherent noise, while keeping detection threshold and sensitivity (as reflected in the psychometric function) unchanged. This result points at the possibility of trial-to-trial variability in sensory detection of weak stimuli being a high-level meta-cognitive control process that explores for something that life trained us to expect: subject-object relational (or, coupled) dynamics.

Methods {#methods .unnumbered}
=======

All the experiments and their analyses were performed within Wolfram’s *Mathematica 7.0* environment; the software package is available on request from S.M.

Psychophysical detection task {#psychophysical-detection-task .unnumbered}
-----------------------------

Fourteen healthy volunteers (six females), graduate students and post-docs aged 27-40 years, were the subjects of this study. Unless indicated otherwise, the basic visual detection task is as follows (see Figure \[Task\]): A random 500x500 background raster of black and white pixels, occupying 135x135 millimeters, was presented in the center of a flat Apple 24 inch screen. A single session was composed of 500 presentation trials of the raster, randomized in each trial. The raster remained on screen for half a second in each trial. A smaller foreground raster of 70x70 randomized grayscale pixels was embedded within the background raster area.
The gray-level (denoted $x$) of the $i$-th pixel in the $n$-th trial was determined by a uniformly distributed random number ($0\leq r_{i,n} \leq 1$) such that $$x_{i,n} = \left\{ \begin{array}{l l} 0 \text{ (white)} & \quad \text{for $r_{i,n} < 0.5$}\\ 1 \text{ (black)} & \quad \text{for $r_{i,n} > C_{n}$}\\ r_{i,n} & \quad \text{for $0.5 \le r_{i,n} \le C_{n}$}\\ \end{array} \right.$$ where $C_{n}$, referred to as “contrast”, was calculated on a trial-by-trial basis as described in the next section. With this procedure the general pattern of background black-and-white scatter is present also within the foreground, while the range $[0.5,C_{n}]$ of foreground grayscale serves as a control variable. Of the 500 trials in a session, 50 randomly introduced sham trials did not include any foreground object. The position of the foreground object in each trial was randomized. After a trial, subjects were asked to press one of two keys, signifying whether or not they had detected the foreground object. Each trial immediately followed the subject’s response to the preceding trial; no time limit was set for the subject to produce an answer. ![[]{data-label="Task"}](PsyRCFigure1.pdf){width="6"}

Stimulus control algorithm {#stimulus-control-algorithm .unnumbered}
--------------------------

The experimental design was adopted from a recently introduced Response Clamp methodology for analysis of neural fluctuations (Wallach et al., *Arxiv preprint arXiv:1008.1410*), with modifications that enabled its application to the present behavioral setting. A Proportional-Integral-Derivative (PID) controller was realized in Wolfram’s *Mathematica 7.0* environment. The input to the controller is the error signal, $$e_{n}=P_{n}^{*}-\widetilde{P}_{n}$$ where $P_{n}^{*}$ and $\widetilde{P}_{n}$ are the desired and actual detection probabilities (calculated as explained below) at the $n$-th trial, respectively.
The output of the controller is generally composed of three expressions, $$y_{n}=g_{P}e_{n}+g_{I}\sum_{i=0}^{n}e_{i}+g_{D}(e_{n}-e_{n-1})$$ where $g_{P},g_{I}$ and $g_{D}$ are the proportional, integral and derivative gains, respectively; $g_{P}$ was set to 1.0, $g_{I}$ to 0.02, and $g_{D}$ to either 0.02 or 0 (with no appreciable effect). Finally, the contrast $C_{n}$ equals the controller’s output plus some baseline: $$C_{n}=y_{n}+C_{baseline},$$ where $C_{baseline}=0.5$.

Calculation of detection probability {#calculation-of-detection-probability .unnumbered}
------------------------------------

Response probability was estimated on-line as follows: Let $s_{n}$ be an indicator function, so that $s_{n}=1$ if the subject detected the $n$-th foreground stimulus and $s_{n}=0$ otherwise. We define $\pi(n)$ as the probability that the subject detects a foreground stimulus at trial $n$. We can estimate this probability using all past responses $\{s_{i}\}_{i=1}^{n}$, by integrating them with an exponential kernel, $$\widetilde{P}_{n}=\widetilde{P}_{0} e^{-\frac{n}{\tau}}+\sum_{i=1}^{n}s_{i}(1-e^{-\frac{1}{\tau}})e^{-\frac{n-{i}}{\tau}}\text{,}$$ where $\tau$ is the kernel’s decay-constant. To compute this on-line, we used the recursive formula: $$\widetilde{P}_{n}=s_{n}(1-e^{-\frac{1}{\tau}})+\widetilde{P}_{n-1}e^{-\frac{1}{\tau}}\text{,}$$ setting $\tau=10$ trials, and $\widetilde{P}_{0}=0.5$.

Closed-Loop, Replay and Fixed contrast modes {#closed-loop-replay-and-fixed-contrast-modes .unnumbered}
--------------------------------------------

In the basic design, each subject was exposed to three experimental sessions denoted *closed-loop*, *replay* and *fixed*. The first session was always a *closed-loop* session, whereas the second and third were *replay* and *fixed* sessions, introduced in an alternating order to different subjects. A 10-minute break was given after the first and second sessions.
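For concreteness, the trial machinery described above (the foreground pixel-assignment rule, the recursive probability estimate, and the PID contrast update) can be sketched in a few lines of code. The original implementation was in *Mathematica 7.0*; the Python below is our own illustrative translation of the stated formulas, and all function and variable names are ours, not taken from the original software.

```python
import math

import numpy as np

TAU = 10.0                     # kernel decay constant tau, in trials
DECAY = math.exp(-1.0 / TAU)   # e^(-1/tau)


def foreground_pixels(c_n, shape=(70, 70), rng=None):
    """Gray levels for the foreground raster at contrast C_n.

    Piecewise rule from the text: r < 0.5 -> 0 (white),
    r > C_n -> 1 (black), otherwise the gray level r itself.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.uniform(0.0, 1.0, size=shape)
    return np.where(r < 0.5, 0.0, np.where(r > c_n, 1.0, r))


def update_probability(p_prev, s_n):
    """One step of the recursive exponential-kernel estimate of P~_n."""
    return s_n * (1.0 - DECAY) + p_prev * DECAY


class PIDContrast:
    """Trial-by-trial contrast controller with the gains quoted above."""

    def __init__(self, g_p=1.0, g_i=0.02, g_d=0.02,
                 p_target=0.5, c_baseline=0.5):
        self.g_p, self.g_i, self.g_d = g_p, g_i, g_d
        self.p_target = p_target
        self.c_baseline = c_baseline
        self.integral = 0.0    # running sum of error terms
        self.e_prev = 0.0

    def next_contrast(self, p_est):
        e_n = self.p_target - p_est            # error signal e_n
        self.integral += e_n
        y_n = (self.g_p * e_n + self.g_i * self.integral
               + self.g_d * (e_n - self.e_prev))
        self.e_prev = e_n
        return y_n + self.c_baseline           # C_n = y_n + C_baseline
```

Note the sign convention: if the estimated probability falls below target, the error is positive, the contrast is pushed above baseline, and the grayscale range $[0.5,C_{n}]$ of the foreground widens, making the object easier to detect. This is the anti-correlation of $C_{n}$ and $\widetilde{P}_{n}$ visible in Figure \[MainObservation\](a).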
In the *closed-loop* session, the desired response probability ($P_{n}^{*}$) was kept constant ($P_{n}^{*}= 0.5$, unless indicated otherwise in the main text) and the control algorithm operated as explained above, updating the contrast ($C_{n}$) of the foreground object from one trial to the next based on the error signal ($e_{n}$). The series of 450 $C_{n}$ values produced in this *closed-loop* session (500 trials minus the 50 sham trials) served for the generation of the foreground objects in the *replay* session. Thus, in the *replay* session the control algorithm was disconnected, yet we were able to record the responses of the subject to exactly the same series of contrasts presented in the *closed-loop* session, but now in an open-loop context, detached from the trial-by-trial coupling between observer and observed dynamics. In the *fixed* session, the average contrast calculated from the series of above mentioned contrasts was used for all presentations, thus omitting stimulus variance altogether. This *fixed* session allowed us to estimate the impact of stimulus fluctuations on response dynamics.

Results {#results .unnumbered}
=======

The nature of the detection task is demonstrated in Figure \[Task\]: The left panel shows a background raster only. The middle panel shows a barely detectable foreground object (indicated in the top-left field). The right-hand panel demonstrates an obvious foreground object. The probability of false positive detection, calculated from responses of all subjects to the 50 sham trials, was negligible (0.017, SD $=0.029$, $n=8$), indicating that the subjects did not tend to report detection when they did not really see something. The average response time was around 1 second per trial, slightly longer in the *closed-loop* session (1.03 sec, SD $=1.3$) compared to the *replay* and *fixed* sessions (0.89 sec, SD $=1.6$ and 0.81 sec, SD $=1.0$, respectively).
Response time distributions in all three sessions had a long right tail (coefficient of skewness: 11, 17 and 19, respectively). ![[]{data-label="MainObservation"}](PsyRCFigure2.pdf){width="5"} The main observation is shown in Figure \[MainObservation\], where the responses of one subject (left column) and the group of eight subjects (right column) that were tested in *closed-loop* (dark blue, top row), *replay* (purple, middle row) and *fixed* (a kind of yellow, bottom row) sessions are plotted. Let us start with panel (a) of Figure \[MainObservation\], describing the results obtained in the *closed-loop* session of one individual. The desired detection probability $P_{n}^{*}$ (see methods) was set to 0.5, and the control algorithm updated the contrast $C_{n}$, trial-by-trial, as indicated by black dots (righthand y-axis of Figure \[MainObservation\](a)). The resulting estimated detection probability ($\widetilde{P}_{n}$, dark blue, lefthand y-axis) of that individual gradually approached the desired value, albeit fluctuating about it. Also note the expected anti-correlation of $C_{n}$ and $\widetilde{P}_{n}$. Panel (b) of Figure \[MainObservation\] shows the average performance (and standard deviation) of all eight subjects that participated in such a *closed-loop* session, showing that the controller converges within ca. 100 trials, and succeeds in “clamping” the detection probability at around 0.5, as preset. Figure \[MainObservation\](c) shows the performance of the same individual whose data is shown in panel (a), but now in the *replay* session, where the $C_{n}$ series obtained in the *closed-loop* session (black dots) is “replayed”, regardless of the subject’s responses. Under these conditions the control algorithm is shut down and stimulus contrast is completely decoupled from the subject’s behavior. 
Note the emergence of large slow fluctuations around the preset 0.5 detection probability; despite the fact that the stimulus series is practically identical to that shown in panel (a), the performance in (c) is very different. Panel (d) of Figure \[MainObservation\] shows the average performance (and standard deviation) of all eight subjects that participated in the *replay* session. And finally, as demonstrated in Figure \[MainObservation\](e), when the average $C_{n}$ of the subject - whose data are shown in panels (a) and (c) - is used in the *fixed* session, where all 450 trials have identical contrast (black line of panel (e)), large slow fluctuations and drift are observed. Panel (f) of Figure \[MainObservation\] shows the average performance (and standard deviation) of all eight subjects that participated in the *fixed* session. ![[]{data-label="Overall"}](PsyRCFigure3.pdf){width="5"} The group statistics of Figure \[MainObservation\](b, d and f) are summarized in Figure \[Overall\](a). Clearly, the best performance is obtained when relational dynamics are allowed between $\widetilde P_{n}$ (the performance of the observer) and $C_{n}$ (the contrast series). One possible explanation for this result is that under these different experimental conditions there is a change in the sensitivity of the subject to the stimulus. Figure \[Overall\](b) shows the psychometric functions for the *closed-loop* and *replay* sessions, calculated by averaging the responses (1’s and 0’s), for each of the eight subjects, in different contrast ($C_{n}$) bins. (Note that the average response thus calculated is not the same quantity as the detection probability $\widetilde P_{n}$; the latter takes into account the *temporal order* of responses.) Clearly, these psychometric functions show that both threshold and sensitivity extracted from the responses of all the subjects are practically identical in *closed-loop* and *replay*.
However, the richness of the dynamics and the marked differences between *closed-loop* and *replay* modes seen in Figure \[MainObservation\] are practically averaged out when the data are collapsed to standard psychometric functions of the kind shown in Figure \[Overall\](b). The importance of instantaneous coupling between the observer’s behavior and the stimulus dynamics is demonstrated in Figure \[Overall\](c), where a *closed-loop* mode is instantly switched to a *fixed* mode by disconnecting the controller and using a constant contrast value (average of the $C_{n}$ series over a time segment depicted by a black bar). Figure \[Overall\](c) shows data of a single subject, at three different preset $P^{*}_{n}$ values (0.25, 0.5 and 0.75). As soon as the coupling of observer’s-observed dynamics is disconnected, the variance of detection probability markedly increases and slow correlations seem to emerge, at all $P^{*}_{n}$ tested. Interestingly, as shown in the averaged histograms of Figure \[Overall\](d), within the *fixed* phase of the experiment of panel (c) the detection probabilities seem to have a “binary” preference towards 1 or 0 (for the cases of $P^{*}_{n}=0.75$ and $P^{*}_{n}=0.25$, respectively). This preference stands in contrast to the symmetric case of $P^{*}_{n}=0.5$ (e.g. Figure \[Overall\](a)).

Concluding remarks {#concluding-remarks .unnumbered}
==================

Two basic observations are presented here. The *first* is that trial-to-trial variability in sensory detection of a weak visual stimulus is dramatically diminished when, rather than presenting a stimulus contrast that is independent of the subject’s ongoing actions, fluctuations in a subject’s judgment are matched by fluctuations in stimulus contrast. Clearly, this result reaffirms that trial-to-trial fluctuations are not “noise” in the strict sense of being independent of each other.
Moreover, the significant difference between features of fluctuations measured when dynamic observer-observed relations exist and those measured in the absence of such coupled dynamics calls for re-examination of the way psychophysical experiments are conducted. Indeed, measuring temporal fluctuations of a psychophysical function under open-loop conditions, where there is no relation between the subject’s performance and the sensory object’s contrast dynamics, is a most unnatural setting. Here we implemented an adaptive algorithm (PID) borrowed from control theory in order to couple the observer-observed dynamics. While the PID control algorithm has theoretical advantages in the present context, there exist many other adaptive psychophysical procedures [@Treutwein:1995qf] that are actually in use when experimentalists attempt to identify points of interest on psychometric functions. We propose to substantially extend their use in order to expose the dynamics of perception under more natural experimental conditions. The *second* basic observation is that the above diminishing of trial-to-trial fluctuations by coupling of observer-observed dynamics is not accompanied by a change in sensory sensitivity to the input. Taken together, the two basic observations suggest that trial-to-trial variability in sensory detection of weak stimuli might reflect a high-level control process. As pointed out by Wertheimer (1953), trial-to-trial variation at threshold was generally attributed, in the early days of psychophysics, to uncontrolled experimental conditions, with the assumption that the subject is stable. Response fluctuations, however, were soon shown to be non-independent [@verplanck1952nonindependence]; that is, successive responses to repeated presentations of the same threshold stimulus (in auditory, visual and somatosensory modalities) are correlated over timescales ranging from seconds to days [@WERTHEIMER:1953kx].
These long-range trial-to-trial response correlations were then interpreted as reflecting a meta-cognitive *guessing* process that is active where there is no possibility for stimulus-determination; the assumption being that “guesses are more likely to be influenced by preceding responses (success and failures) than are sensory judgments” [@CONKLIN:1956zr]. Later years brought with them a more reductionistic focus on neural sources of “noise” and “short-term plasticity” that may account for observed trial-to-trial response variability [@Faisal:2008ly]. Viewed from this historical angle, the results presented here pull the pendulum back towards the meta-cognitive pole, offering an interpretation according to which - when facing a weak stimulus - subjects vary their response patterns, seeking to establish relations between their actions and the dynamics of stimulus features. This interpretation, in its broader sense, goes far beyond the psychophysics of weak stimulus detection; it touches upon what psychologists have been trying to tell us over the past fifty years about the developing mind, when we care to listen. [^1]: Corresponding author (shimon.marom@gmail.com)
--- author: - Ping Nang Ma - Lode Pollet - Matthias Troyer title: 'Measuring the equation of state of trapped ultracold bosonic systems in an optical lattice with in-situ density imaging' ---
--- abstract: 'Super-Earths belong to a class of planet not found in the Solar System, but which appear common in the Galaxy. Given that some super-Earths are rocky, while others retain substantial atmospheres, their study can provide clues as to the formation of both rocky planets and gaseous planets, and - in particular - they can help to constrain the role of photo-evaporation in sculpting the exoplanet population. GJ 9827 is a system already known to host 3 super-Earths with orbital periods of 1.2, 3.6 and 6.2 days. Here we use new HARPS-N radial velocity measurements, together with previously published radial velocities, to better constrain the properties of the GJ 9827 planets. Our analysis cannot place a strong constraint on the mass of GJ 9827 c, but does indicate that GJ 9827 b is rocky with a composition that is probably similar to that of the Earth, while GJ 9827 d almost certainly retains a volatile envelope. Therefore, GJ 9827 hosts planets on either side of the radius gap that appears to divide super-Earths into predominantly rocky ones that have radii below $\sim 1.5 R_\oplus$, and ones that still retain a substantial atmosphere and/or volatile components, and have radii above $\sim 2 R_\oplus$. That the less heavily irradiated of the 3 planets still retains an atmosphere may indicate that photoevaporation has played a key role in the evolution of the planets in this system.' author: - | K. Rice$^{1,2}$[^1], L. Malavolta$^{3,4}$, A. Mayo$^{5,6,7}$, A. Mortier$^{8,9}$, L.A. Buchhave$^{10}$, L. Affer$^{11}$, A. Vanderburg$^{12,13,14}$, M. Lopez-Morales$^{13}$, E.  Poretti$^{15,16}$, L. Zeng$^{17}$, A.C. Cameron$^9$, M. Damasso$^{18}$, A. Coffinet$^{19}$, D. W. Latham$^{13}$, A.S. Bonomo$^{18}$, F. Bouchy$^{19}$, D. Charbonneau$^{13}$, X. Dumusque$^{19}$, P. Figueira$^{20,21}$, A.F. Martinez Fiorenzano$^{15}$, R.D. Haywood$^{13,14}$, J. Asher Johnson$^{13}$, E. Lopez$^{23,24}$, C. Lovis$^{19}$, M. Mayor$^{19}$, G. Micela$^{11}$, E.
Molinari$^{15,22}$, V. Nascimbeni$^{4,3}$, C. Nava$^{13}$, F. Pepe$^{19}$, D. F. Phillips$^{13}$, G. Piotto$^{4,3}$, D. Sasselov$^{13}$, D. Ségransan$^{19}$, A. Sozzetti$^{18}$, S. Udry$^{19}$, C. Watson$^{25}$\ \ $^1$[SUPA, Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH93HJ, UK]{}\ $^2$[Centre for Exoplanet Science, University of Edinburgh, Edinburgh, UK]{}\ $^3$INAF - Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, 35122 Padova, Italy\ $^4$Dipartimento di Fisica e Astronomia “Galileo Galilei", Universita’di Padova, Vicolo dell’Osservatorio 3, 35122 Padova, Italy\ $^5$Astronomy Department, University of California, Berkeley, CA 94720, USA\ $^6$National Science Foundation Graduate Research Fellow\ $^7$Fulbright Fellow\ $^8$Astrophysics group, Cavendish Laboratory, University of Cambridge, J.J. Thomson Avenue, Cambridge CB3 0HE, UK\ $^9$Centre for Exoplanet Science, SUPA, School of Physics and Astronomy, University of St Andrews, St Andrews KY169SS, UK\ $^{10}$DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, DK-2800 Lyngby, Denmark\ $^{11}$INAF - Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, I-90134 Palermo, Italy\ $^{12}$Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712\ $^{13}$Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138, USA\ $^{14}$NASA Sagan Fellow\ $^{15}$INAF - Fundación Galileo Galilei, Rambla José Ana Fernandez Pérez 7, 38712 Breña Baja, Spain\ $^{16}$INAF - Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate, Italy\ $^{17}$Department of Earth and Planetary Sciences, Harvard University, Cambridge, MA 02138, USA\ $^{18}$INAF - Osservatorio Astrofisico di Torino, Via Osservatorio 20, I-10025 Pino Torinese, Italy\ $^{19}$Observatoire de Genève, Université de Genève, 51 ch. 
des Maillettes, 1290 Sauverny, Switzerland\ $^{20}$European Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago, Chile\ $^{21}$Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal\ $^{22}$INAF - Osservatorio di Cagliari, via della Scienza 5, 09047 Selargius, CA, Italy\ $^{23}$NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA\ $^{24}$GSFC Sellers Exoplanet Environments Collaboration, NASA GSFC, Greenbelt, MD 20771\ $^{25}$Astrophysics Research Centre, School of Mathematics and Physics, Queen’s University Belfast, Belfast BT7 1NN, UK\ bibliography: - 'gj9827.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: 'Masses and radii for the three super-Earths orbiting GJ 9827, and implications for the composition of small exoplanets' --- \[firstpage\] Stars: individual: GJ 9827 (2MASS J23270480-0117108, EPIC 246389858, HIP 115752) - Planets and satellites: fundamental parameters - Planets and satellites: composition - Planets and satellites: general - Planets and satellites: detection - Techniques: radial velocities Introduction {#intro} ============ One of the most exciting recent exoplanet results is the discovery that the most common type of exoplanet, with a period less than $\sim 100$ days, is one with a radius between that of the Earth ($1 R_\oplus$) and that of Neptune ($\sim 4 R_\oplus$) [@howard12; @batalha13; @fulton17; @fulton18]. Known as super-Earths, these appear to be common in the Galaxy, but are not found in our Solar System. It also appears that the transition from being preferentially rocky/terrestrial to having a substantial gaseous atmosphere occurs within this size range [@rogers15]. Recent studies [@fulton17; @zeng17; @vaneylen18] have suggested that there is in fact a gap in the radius distribution between 1.5 and $2 R_\oplus$, as predicted by @owenwu13 and @lopez13. 
Planets with radii below $\sim 1.5 R_\oplus$ tend to be predominantly rocky, while those that sustain a substantial gaseous envelope typically have radii above $2 R_\oplus$. Super-Earths are, therefore, an important population as they may provide clues as to both the formation of gas giants and the formation of rocky, terrestrial planets. In particular, they can help us to better understand the role that photo-evaporation plays in sculpting the exoplanet population. It has been suggested that super-Earths probably formed with gas envelopes that make up at least a few percent of their mass [@rogers11; @lopezfort14; @wolfgang15]. Those that are sufficiently strongly irradiated could then have lost their atmospheres via photo-evaporation [@lopez12; @owenwu13; @ehrenreich15], while those that have not been sufficiently strongly irradiated retain their atmospheres. This could then explain the observed gap in the radius distribution [@owenwu17; @fulton17; @lopez18; @vaneylen18]. There may, however, be alternative explanations for this observed radius gap, such as late giant impacts [@inamdar15] or the atmosphere being stripped by the cooling rocky core [@ginzburg16; @ginzburg18]. This means that systems that have super-Earths on either side of this radius gap are particularly interesting. In this paper we present an analysis of one such system, the $K2$ target GJ 9827 (also known as K2-135, EPIC 246389858, or HIP 115752). It is already known to host three super-Earths with radii between $1$ and $\sim 2 R_\oplus$, and with orbital periods of 1.21 days, 3.65 days, and 6.21 days [@niraula17; @rodriguez18]. @rodriguez18 and @niraula17 suggest that GJ 9827 b has a radius of $\sim 1.6 R_\oplus$, GJ 9827 c has a radius of $\sim 1.3 R_\oplus$, while GJ 9827 d has a radius of about $2 R_\oplus$.
This means that these planets have radii that approximately bracket the radius gap detected by @fulton17, which, as already suggested, makes this a particularly interesting system for studying the origin of this gap. However, neither @rodriguez18 nor @niraula17 could independently estimate the planets' masses and so used mass-radius relations [@weiss14; @chen17]. A recent radial velocity analysis [@teske18] has, however, presented mass estimates for the GJ 9827 planets. This analysis was unable to place strong constraints on the masses of GJ 9827 c and d, but suggests that GJ 9827 b has a mass of $\sim 8.2 \pm 1.53 M_\oplus$. With a radius of $\sim 1.64 R_\oplus$ [@rodriguez18; @niraula17], this result would make GJ 9827 b one of the densest known super-Earths. This mass and radius would suggest that GJ 9827 b has an iron core that makes up a significant fraction of its mass, and could indicate that it has undergone a mantle-stripping collision with another body of a similar mass [@Marcus10]. A more recent analysis [@prieto-arranz18], however, suggests that the mass of GJ 9827 b is not as high as suggested by @teske18 and that, in fact, it may have a composition similar to that of the Earth. This analysis also suggests that GJ 9827 c is rocky, but that GJ 9827 d may retain a substantial, extended atmosphere. Here we repeat the lightcurve analysis of GJ 9827 using the $K2$ data, which we present in Section \[sec:lightcurveanalysis\]. We also use the $K2$ lightcurve to constrain the stellar activity (Section \[sec:stellaractivity\]). We then carry out a radial velocity analysis using the same radial velocity data as used by @teske18, @niraula17 and @prieto-arranz18, but with an additional 41 new radial velocities from the HARPS-N spectrograph [@cosentino12; @cosentino14].
As we will discuss in Section \[sec:discussion\], we were able to constrain the masses of GJ 9827 b and d to better than 10% and about 20% respectively, but were not able to place a strong constraint on the mass of GJ 9827 c. We also discuss what these results imply about the typical composition of planets below the radius gap [@fulton17], a particular science goal of the HARPS-N Collaboration.

Radial Velocity Observations {#sec:RVs}
============================

HARPS-N spectroscopy {#sec:HNspect}
--------------------

We collected a total of 43 radial velocity (RV) spectra of GJ 9827 with the HARPS-N spectrograph (${\rm R} = 115000$) installed on the 3.6-m Telescopio Nazionale Galileo (TNG) at the Observatorio del Roque de los Muchachos in La Palma, Spain [@cosentino12; @cosentino14]. We observed GJ 9827 between August 2017 and December 2017 as part of the HARPS-N Collaboration’s Guaranteed Time Observations (GTO) program. Our observational strategy consisted of taking one or two observations per night, separated by 2-3 hours, for several consecutive nights in order to properly sample the RV curve of all the transiting planets. All the observations had an exposure time of 1800 s. We eliminated one observation, taken on BJD=2458048.36, as it had an anomalously low signal-to-noise ratio (S/N) of less than 20, and another, taken on BJD=2457991.62, was rejected by the data reduction software because of an abnormal flux correction. GJ 9827 has a V-band magnitude of V = 10.25, so, with the exception of the two observations that were eliminated, we obtained spectra with signal-to-noise ratios in the range S/N = 37 - 121 (average S/N = 70) at 550 nm in 30-minute exposures, resulting in an average RV precision of 1.9 m s$^{-1}$. The spectra were reduced with version 3.8 of the HARPS-N Data Reduction Software (DRS), which includes corrections for color systematics introduced by variations in seeing [@cosentino14].
The radial velocities were computed using a numerical weighted mask based on the synthetic spectrum of a K5 dwarf, following the methodology outlined in @baranne96 and @pepe02. The HARPS-N data are presented in Table \[tab:harpsndata\] and the radial velocities are shown in Figure \[fig:HN\_RVs\]. Table \[tab:harpsndata\] also includes some stellar activity indicators: specifically, the full width at half maximum (FWHM) of the cross-correlation function (CCF), the line Bisector Inverse Slope (BIS), and an activity index derived from the Calcium H and K lines (${\rm S_{HK}}$).

![The HARPS-N radial velocities, first presented here, plotted against time.[]{data-label="fig:HN_RVs"}](Figures/RV_HN.png){width="9.0cm"}

Previously published HARPS-N and HARPS spectroscopy {#sec:HSHN}
---------------------------------------------------

In our analysis we also use the HARPS-N and HARPS spectra first presented in @prieto-arranz18. This additional dataset includes 23 HARPS-N RV spectra, taken between July 2017 and December 2017, and 35 HARPS spectra taken between August 2017 and October 2017. The HARPS instrument is installed on the 3.6m ESO telescope at La Silla and is very similar to the HARPS-N instrument, already discussed in Section \[sec:HNspect\]. The HARPS and HARPS-N spectra were reduced with the same DRS as the HARPS-N spectra presented in Section \[sec:HNspect\]. The HARPS-N RVs presented by @prieto-arranz18 have an average precision of 1.6 m s$^{-1}$ and signal-to-noise ratios in the range 33 - 95 (average S/N = 68), while the HARPS RVs have an average precision of 1.4 m s$^{-1}$ and signal-to-noise ratios in the range 47 - 100 (average S/N = 76). The @prieto-arranz18 HARPS and HARPS-N data, including stellar activity indicators, can be found in their Tables 2 and 3, respectively. Their HARPS-N and HARPS RVs, together with the new HARPS-N RVs presented here, are shown in the top panel of Figure \[fig:HNHNHSPF\_RVs\].
The filled circles show the new HARPS-N RVs from this study, the open circles show the HARPS-N RVs from @prieto-arranz18, and the open squares show their HARPS RVs. We also correct for the RV offset, but assume that all the HARPS-N data has the same offset[^2], while allowing the HARPS and HARPS-N data to have different offsets. This is discussed in more detail in Section \[sec:rvanalysis\]. Previously published Magellan/PFS and NOT/FIES spectroscopy {#sec:PFSMagSpect} ----------------------------------------------------------- In addition to the HARPS-N and HARPS spectra, we also include in our analysis the Magellan/PFS observations first presented by @teske18. Thirty-six PFS observations were taken between January 2010 and August 2016, using the Planet Finder Spectrograph (PFS: @crane06) on the [*Magellan*]{} II (Clay) Telescope. The resolution was $\sim 80000$ and the exposure times were between 457 s and 900 s. More details can be found in @teske18, and the resulting radial velocities are shown in their Table 1. Similarly, we also include the 7 high-resolution ($R \sim 67000$) spectra taken using the FIbre-fed Échelle Spectrograph (FIES: @telting14) on the 2.6m Nordic Optical Telescope (NOT) of the Roque de los Muchachos Observatory (La Palma, Spain). More details can be found in @niraula17, in which these observations were first presented, and the resulting radial velocities are shown in their Table 2. The bottom panel of Figure \[fig:HNHNHSPF\_RVs\] shows the PFS (blue squares) and FIES (red triangles) radial velocities, together with the new HARPS-N RVs from this study (black filled circles) and the HARPS-N and HARPS RVs presented by @prieto-arranz18 (black open circles and black open squares respectively). These are all corrected for offsets between the datasets based on the best RV fit for the combined datasets discussed in Section \[sec:rvanalysis\]. 
![[*Top panel:*]{} The new HARPS-N RVs from this study (black filled circles) together with the HARPS-N and HARPS RVs presented by @prieto-arranz18 (black open circles and black open squares respectively). [*Bottom panel:*]{} PFS (blue squares) and FIES (red triangles) RVs, together with all the HARPS-N and HARPS RVs (black filled circles, black open circles, and black open squares), all corrected for offsets between the datasets.[]{data-label="fig:HNHNHSPF_RVs"}](Figures/RV_HNHNHSPF.png){width="9.5cm"} Stellar parameters {#sec:stellarparameters} ================== Taking advantage of the high S/N, high-resolution spectra obtained using HARPS-N, we re-determined the stellar parameters of GJ 9827 using the Stellar Parameter Classification pipeline (SPC; @buchhave14). The high S/N needed to extract precise RVs means that these spectra are more than adequate for deriving stellar parameters. Using the spectra obtained through the HARPS-N GTO program, we ran the SPC analysis on each individual spectrum, with a prior on the surface gravity from the YY isochrone models [@spada13]. This SPC analysis yielded: $T_{\rm eff} = 4305 \pm 49$ K, $\log g = 4.72 \pm 0.10$ (cgs), \[m/H\] $= -0.50 \pm 0.08$ and $v \sin i < 2$ km s$^{-1}$. The formal uncertainties also take into account the model uncertainties, which primarily stem from model systematics in the ATLAS Kurucz stellar models and degeneracies between the derived parameters when trying to compare observed spectra to model spectra [see e.g., @buchhave12; @buchhave14]. To determine the mass, $M_\star$, and radius, $R_\star$, of GJ 9827, we used the [isochrones]{} Python package [@morton15], which uses both the Mesa Isochrones and Stellar Tracks (MIST: @dotter16) and the Dartmouth Stellar Evolution Database [@dotter08]. 
In addition to $T_{\rm eff}$, $\log g$ and \[m/H\], we included as priors the AAVSO Photometric All-Sky Survey $B$ and $V$ magnitudes [@henden15], the 2MASS $J$ and $K$ magnitudes [@skrutskie06], the WISE$2$ and $3$ magnitudes [@cutri14], and the [*Gaia*]{} parallax from Data Release 2 [@gaia16; @gaia18]. We also repeat this analysis using the [*Hipparcos*]{} parallax [@vanleeuwen07], to see how this impacts the resulting mass and radius estimates. We used both the MIST and Dartmouth model grids. Posterior sampling was performed using [MultiNest]{} [@feroz08; @feroz09; @feroz13]. In order to investigate the systematic errors on $M_*$ and $R_*$ introduced by the spectrally-derived stellar parameters when dealing with late K and cooler dwarfs, we repeated the analysis using the stellar atmosphere parameters from @niraula17, @teske18, and @rodriguez18. @niraula17 and @teske18 used SpecMatch-Emp [@yee17], the results of which are shown in their Table 3 and Table 2 respectively. The stellar parameters used by @rodriguez18 are shown in their Table 1 and are taken from @houdebine16, who used principal component analysis. The results of our analysis are shown in Figure \[fig:mass\_radius\]. The thin lines show the results using the [*Hipparcos*]{} parallax as a prior, while the thick lines are the results obtained using the [*Gaia*]{} parallax as a prior. We then produce final estimates for each parameter by taking the median and $15.865^{\rm th}/84.135^{\rm th}$ percentiles of the posterior samplings for all of the sets of stellar parameters, and for both the analysis using the [*Gaia*]{} parallax, and the analysis using the [*Hipparcos*]{} parallax. The mass and radius obtained using the [*Hipparcos*]{} parallax as a prior are, $M_\star = 0.60^{+0.03}_{-0.02} M_\odot$ and $R_\star = 0.59^{+0.02}_{-0.02} R_\odot$, while using the [*Gaia*]{} parallax returns $M_\star = 0.606^{+0.020}_{-0.014} M_\odot$ and $R_\star = 0.602^{+0.005}_{-0.004} R_\odot$. 
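The summary statistics quoted here (the median with 15.865th/84.135th percentile uncertainties, i.e. the central 68.27% interval) can be sketched as follows; the sample values below are synthetic and purely illustrative, not drawn from our actual posteriors:

```python
# Sketch of the posterior summary used in this work: median and
# 15.865th/84.135th percentiles. The samples are synthetic.
import random

def summarize(samples):
    """Return (median, lower error, upper error) from posterior samples."""
    s = sorted(samples)
    n = len(s)

    def percentile(q):  # simple nearest-rank percentile
        idx = min(n - 1, max(0, round(q / 100.0 * (n - 1))))
        return s[idx]

    med = percentile(50.0)
    lo = percentile(15.865)
    hi = percentile(84.135)
    return med, med - lo, hi - med

random.seed(1)
# Fake posterior for a stellar mass in solar units (illustrative only)
samples = [random.gauss(0.606, 0.017) for _ in range(20000)]
m, minus, plus = summarize(samples)
print(f"M* = {m:.3f} -{minus:.3f} +{plus:.3f} Msun")
```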
It is clear that the more precise [*Gaia*]{} parallax produces results that are more tightly constrained than those obtained using the [*Hipparcos*]{} parallax. Consequently, we use the results obtained with the [*Gaia*]{} parallax for the rest of the analysis presented here. Similarly, using the [*Gaia*]{} parallax, the [isochrones]{} analysis returns $T_{\rm eff} = 4340^{+48}_{-53}$ K, ${\rm [m/H]} = -0.26 \pm 0.09$, $\log g = 4.66^{+0.015}_{-0.010}$ (cgs), and $A_v = 0.22 \pm 0.11$ for the effective temperature, metallicity, surface gravity, and interstellar reddening, respectively. The $T_{\rm eff}$ and $\log g$ results are consistent with the results from our SPC analysis, but the metallicity is discrepant at $2 \sigma$. It is, however, consistent with some earlier metallicity estimates [@niraula17; @teske18]. The $T_{\rm eff}$, $A_v$ and $R_*$ results are also consistent with results from [*Gaia*]{} Data Release 2 [@gaia16; @gaia18]. The [isochrones]{} analysis also indicates that the star probably has an age of about 10 Gyr, with a lower limit (15.87th percentile) of 5 Gyr. ![Mass ($M_*$) against radius ($R_*$) from our analysis using the [isochrones]{} Python package. It includes results using our SPC analysis (purple), but also shows results using stellar parameters from @niraula17 (red), @teske18 (blue), and @rodriguez18 (yellow). We used both the MIST (squares) and Dartmouth (circles) model grids. We also use both the [*Gaia*]{} (thick lines) and [*Hipparcos*]{} (thin lines) parallaxes as priors. The black squares, with black error bars, show the mean $M_*$ and $R_*$. It is clear that the [*Gaia*]{} parallax, which is more precise than the [*Hipparcos*]{} parallax, produces results that are more tightly constrained.
For the rest of the analysis presented here, we will use the mean $M_*$ and $R_*$ determined using the [*Gaia*]{} parallax.[]{data-label="fig:mass_radius"}](Figures/mass_radius_BVJKW234.png){width="9.25cm"} [lll]{} Parameter & Description & Value\ Other & EPIC 246389858 &\ identifiers & HIP 115752 &\ & 2MASS J23270480-0117108 &\ $B$ & APASS Johnson $B$ mag & $11.569 \pm 0.034$\ $V$ & APASS Johnson $V$ mag & $10.250 \pm 0.138$\ $J$ & 2MASS $J$ mag & $7.984 \pm 0.02$\ $K$ & 2MASS $K$ mag & $7.193 \pm 0.02$\ $WISE2$ & $WISE2$ mag & $7.155 \pm 0.02$\ $WISE3$ & $WISE3$ mag & $7.114 \pm 0.017$\ $v \sin i$ & Rotational Velocity (SPC) & $< 2$ km s$^{-1}$\ ${\rm [m/H]}$ & Metallicity (SPC) & $-0.5 \pm 0.08$\ $T_{\rm eff}$ & Effective Temperature (SPC) & $4305 \pm 49$ K\ $\log g$ & Surface Gravity (SPC) & $4.72 \pm 0.1$ (cgs)\ $\pi_{Hip}$ & [*Hipparcos*]{} Parallax (mas) & $32.98 \pm 1.76$\ $\pi_{GAIA}$ & [*Gaia*]{} Parallax (mas) & $33.68 \pm 0.06$\ ${\rm [m/H]}$ & Metallicity ([isochrones]{}) & $-0.26 \pm 0.09$\ $T_{\rm eff}$ & Effective temperature & $4340^{+40}_{-53}$ K\ & ([isochrones]{}) &\ $\log g$ & surface gravity ([isochrones]{}) & $4.66^{+0.015}_{-0.010}$ (cgs)\ $M_*$ & Mass ([isochrones]{}) & $0.606^{+0.020}_{-0.014}$ M$_\odot$\ $R_*$ & Radius ([isochrones]{}) & $0.602^{+0.005}_{-0.004}$ R$_\odot$\ Stellar kinematics {#sec:kinematic} ------------------ Stars presently near the Sun may come from a wide range of Galactic locations. Therefore, stellar space velocity, as a clue to the origin of a star in the Galaxy, is very important. The accurate [*Gaia*]{} parallax (see Table \[tab:stellarparameters\]), combined with the proper motions and the stellar radial velocity, make it possible to derive reliable space velocities for GJ9827. 
The calculation of the space velocity with respect to the Sun is based on the procedure presented by @johnson87, corrected for the effect of differential galactic rotation [@scheffler87], adopting a solar Galactocentric distance of 8.5 kpc and a circular velocity of 220 km s$^{-1}$. The correction of the space velocity to the Local Standard of Rest is based on a solar motion[^3] of $(U, V, W)_{\sun}=(10.0, 5.2, 7.2)$ km s$^{-1}$, as derived from [*Hipparcos*]{} data by @dehnen88. The peculiar space velocity $S$, given by $S=(U^2+V^2+W^2)^{1/2}$, is quoted with all the kinematic data in Table \[tab:kinematics\] (with the exception of the [*Gaia*]{} parallax, which is included in Table \[tab:stellarparameters\]). GJ 9827 shows kinematic properties typical of the thin-disk population. We have calculated the probabilities that the star belongs to a specific population, thick disk (TD), thin disk (D) or stellar halo (H), following the method used by @bensby04. Based on these probabilities, we find for GJ 9827 a thick-disk to thin-disk probability ratio of $TD/D=0.05$, implying that the star is clearly identified as a thin-disk object (the typical threshold for assignment to the thin disk being TD/D less than 0.1).

  Parameter                           GJ9827
  ----------------------------------- -----------------
  $\mu_{\alpha}$\[mas/yr\]$^{(1)}$    376.02$\pm$0.06
  $\mu_{\beta}$\[mas/yr\]$^{(1)}$     216.07$\pm$0.07
  $U_{LSR}$ \[km s$^{-1}$\]$^{(2)}$   -49.4$\pm$0.4
  $V_{LSR}$ \[km s$^{-1}$\]$^{(2)}$   22.9$\pm$0.9
  $W_{LSR}$ \[km s$^{-1}$\]$^{(2)}$   -18.6$\pm$1.1
  S \[km s$^{-1}$\]$^{(2)}$           57.5$\pm$0.6

  : Kinematic data.[]{data-label="tab:kinematics"}

References. $^{(1)}$ Gaia Collaboration et al. 2016, 2018; $^{(2)}$ This work (see text).
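As a quick consistency check, the peculiar space velocity follows directly from the tabulated velocity components:

```python
# Consistency check of the peculiar space velocity in Table [tab:kinematics]:
# S = (U^2 + V^2 + W^2)^(1/2).
import math

U_lsr, V_lsr, W_lsr = -49.4, 22.9, -18.6  # km/s, from Table [tab:kinematics]
S = math.sqrt(U_lsr ** 2 + V_lsr ** 2 + W_lsr ** 2)
print(f"S = {S:.1f} km/s")  # ~57.5 km/s, matching the tabulated value
```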
$K2$ Photometry and light curve analysis {#sec:lightcurveanalysis}
========================================

After the failure of the second of its four reaction wheels, the $Kepler$ spacecraft was re-purposed for an extended $K2$ mission to obtain high-precision photometry on a set of fields near the ecliptic. GJ 9827 was observed from UT 2016 December 16 until UT 2017 March 04, as part of K2 campaign 12. Our data reduction and analysis techniques are very similar to those described in Sections 2.2 and 4.1 of @mayo18. We also provide a summary of our methods here. We first applied the method developed by @vanderjohns14 and @vanderburg16b in order to remove the roll systematics introduced by the periodic thruster firing of the [*Kepler Space Telescope*]{}. Next, we removed low-frequency variations from the light curve via a basis spline. Then we used the BATMAN transit model [@kreidberg15] to simultaneously fit the transits of all three planets, assuming non-interaction and circular orbits. The latter assumption seems reasonable, given that the system is old enough for tidal circularisation to have occurred [@barnes17], and that systems similar to GJ 9827 do tend to have low eccentricities [@vaneylen15]. Additionally, as will be discussed in Section \[sec:rvanalysis\], the RV analysis is also consistent with the planets having circular orbits. The model included four global parameters: baseline flux level, a noise parameter, and two quadratic limb darkening coefficients parameterized according to @kipping13. Unlike @mayo18 we also impose a stellar density prior of $3.92 \pm 0.014$ g cm$^{-3}$, determined using the stellar mass and radius determined in Section \[sec:stellarparameters\]. When imposing this prior, we also assume that the three planets each have circular orbits. Additionally, each planet had five parameters: the initial epoch (i.e.
time of first transit), the period, the inclination, the ratio of planetary to stellar radius ($R_p/R_*$), and the semi-major axis normalized to the stellar radius ($a/R_*$). All parameters were given a uniform prior except for each planet’s $R_p/R_*$, for which we assumed a log-uniform prior. We estimated these model transit parameters using [emcee]{} [@foreman13], a Python package which performs Markov chain Monte Carlo (MCMC) simulations with an affine-invariant ensemble sampler [@goodman10]. Using 38 walkers (i.e. twice the number of model parameters), we ran the MCMC process until convergence, which we defined as the point at which the scale-reduction factor [@gelman92] dropped below 1.1 for every parameter. The systematics corrected, normalised, and phase folded lightcurves are shown in Figure \[fig:lightcurveanalysis\]. The results of our lightcurve analysis are shown in Table \[tab:lightcurveanalysis\]. For completeness, the baseline flux level is $1.000 \pm 0.000002$, the noise parameter is $\log(jitter) = -10.11 \pm 0.002$, and the quadratic limb darkening parameters are $q_1 = 0.3999^{+0.2403}_{-0.1642}$ and $q_2 = 0.4372^{+0.3004}_{-0.2173}$. Our results agree well with those in @rodriguez18 and @niraula17, and suggest that GJ 9827 b and d, with radii of $R_{p,b} = 1.577^{+0.027}_{-0.031}$ and $R_{p,d} = 2.022^{+0.046}_{-0.043}$, roughly lie on either side of the radius gap detected by @fulton17. The derived quantities in Table \[tab:lightcurveanalysis\] ($R_p$ and $a$) were determined by sampling the posterior distributions of the dependent quantities, and presenting the median of the resulting distribution with the uncertainties being the difference between this median value and the 16$^{\rm th}$ and 84$^{\rm th}$ percentile values. ![[*Top panel:*]{} $K2$ lightcurve after removing the roll systematics introduced by the periodic thruster fires of the [*Kepler Space Telescope*]{}, but without the removal of the low-frequency variations. 
[*Middle panel:*]{} $K2$ lightcurve after also removing the low-frequency variations. [*Bottom panel:*]{} Phase-folded lightcurves for planets b, c, and d. The model is shown in red and the residuals are shown in the lower parts of the panel.[]{data-label="fig:lightcurveanalysis"}](Figures/pretty_transit_plot_ep246389858-3.pdf){width="8.5cm"} [lllcc]{} Parameter & Description & GJ 9827 b & GJ 9827 c & GJ 9827 d\ $P$ & Period (days) & $1.20898190^{+0.00000693}_{-0.00000714}$ & $3.6480957^{+0.0000633}_{-0.0000621}$ & $6.2014698^{+0.0000626}_{-0.0000611}$\ $R_p/R_*$ & Radius of the planet in stellar radii & $0.02396^{+0.00037}_{-0.00044}$ & $0.01887^{+0.00034}_{-0.00037}$ & $0.03073^{+0.00065}_{-0.00060}$\ $R_p$ & Radius of the planet (R$_\oplus$)$^a$ & $1.577^{+0.027}_{-0.031}$ & $1.241^{+0.024}_{-0.026}$ & $2.022^{+0.046}_{-0.043}$\ $T_C$ & Time of Transit (BJD-2454833) & $2905.82586^{+0.00026}_{-0.00026}$ & $2909.19930^{+0.00072}_{-0.00073}$ & $2907.96115^{+0.00044}_{-0.00045}$\ $T_{\rm 14}$ & Transit duration (days) & $0.05270^{+0.00093}_{-0.00083}$ & $0.07604^{+0.00154}_{-0.00154}$ & $0.05095^{+0.00147}_{-0.00122}$\ $b$ & Impact parameter & $0.4602^{+0.0352}_{-0.0443}$ & $0.4428^{+0.0415}_{-0.0483}$ & $0.8927^{+0.0071}_{-0.0090}$\ $i$ & Inclination & $86.07^{+0.41}_{-0.34}$ & $88.19^{+0.21}_{-0.18}$ & $87.443^{+0.045}_{-0.045}$\ $a/R_*$ & Semimajor axis in stellar radii & $6.719^{+0.080}_{-0.086}$ & $14.035^{+0.172}_{-0.171}$ & $20.003^{+0.230}_{-0.254}$\ $a$ & Semimajor axis (au)$^b$ & $0.01880^{+0.00020}_{-0.00014}$ & $0.03925^{+0.00042}_{-0.00029}$ & $0.05591^{+0.00059}_{-0.00041}$\ [**Notes.**]{}$^{a}$ Radii are derived using our estimate for the stellar radius, $R_*=0.602^{+0.005}_{-0.004} {R_{\rm \odot}}$, and the ratios $R_{\rm planet}/R_{\rm star}$ determined here. 
$^b$ Semimajor axes are determined assuming that $M_{\rm s}+m_{\rm p} \cong M_{\rm s}$ and using $a \cong [(M_{\rm s}\cdot G)^{\frac{1}{3}}\cdot P_{\rm p}^{\frac{2}{3}}]/(2\pi)^{\frac{2}{3}} $, where $G$ is the gravitational constant. Stellar activity {#sec:stellaractivity} ================ Characterizing the activity level of the host star, and eventually modelling the activity contribution to the RV, is mandatory for accurate mass determination of small planets, even when the star is just moderately active (e.g., @haywood2018). The $K2$ light curve shows a strong modulation with peak-to-peak amplitude of $\simeq 0.003$ mag, suggesting a non-negligible level of activity for this star. Previous analyses have estimated GJ 9827’s rotation period, but the results are not consistent. @niraula17 suggests a rotation period of $\sim 17$ days, while @rodriguez18 and @teske18 suggest a rotation period of 31 days. Correcting for activity induced signals in the radial velocity data requires an accurate estimate of the star’s rotational period. We use the combined HARPS and HARPS-N dataset to carry out a periodogram analysis of the BIS and the FWHM of the CCF, as computed by the DRS (see Section \[sec:HNspect\]), and the activity index derived from the Calcium H and K lines (${\rm S_{HK}}$, see Table \[tab:harpsndata\] and Tables 2 and 3 in @prieto-arranz18). Specifically, we used the Bayesian formalism for the generalised Lomb-Scargle periodogram first presented by @mortier15. The spectral window of the HARPS and HARPS-N data shows a peak at $\sim$27 days due to the Moon’s sidereal month. This hampers our ability to best exploit our data to derive a reliable measure of the stellar rotational period. When analysing the activity indices, a significant signal is, however, found in the ${\rm S_{HK}}$ data at $\sim$34 days with another peak at around $\sim 15$ days. We also consider correlations between the activity indices and the RVs. 
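Such a correlation check can be sketched with a simple rank correlation coefficient. The data below are synthetic (the actual indicators are in Table \[tab:harpsndata\]), and the classic no-ties Spearman formula is assumed:

```python
# Minimal Spearman rank-correlation sketch of the kind used to compare
# activity indicators (e.g. S_HK, BIS, FWHM) with the RVs.
# Synthetic data; no ties, so rho = 1 - 6*sum(d_i^2)/(n*(n^2-1)) applies.

def ranks(x):
    """Rank of each element (1 = smallest), assuming no ties."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0] * len(x)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Perfectly monotonic data -> rho = 1
print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # 1.0
```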
The Spearman rank correlation coefficients are all below 0.3. Therefore, we also carry out a frequency analysis of the combined HARPS and HARPS-N RV data using the Iterative Sine-Wave Fitting (ISWF) method [@vanicek71]. The power spectrum shows clear peaks at $f_b$=0.827 d$^{-1}$ (corresponding to the orbital period of GJ 9827 b, $P_b$=1.209 days) and at $f_d$ = 0.161 d$^{-1}$ (corresponding to the orbital period of GJ 9827 d, $P_d = 6.21$ days). The low-amplitude signal due to GJ 9827 c can be seen in the power spectrum, but it does not stand out above the noise. This frequency analysis also shows peaks at $f$=0.0325 d$^{-1}$, 2$f$, 3$f$. The frequency $f$ corresponds to a period of 30.8 days and is clearly related to the stellar rotation period. This would seem to indicate that the $\sim 15$ day signal seen in the ${\rm S_{HK}}$ data is probably the first harmonic of the stellar rotation period. To better quantify the stellar activity, we carry out an analysis using the $K2$ light curve (see top panel of Figure \[fig:lightcurveanalysis\]) but after removing the points affected by transits. We initially determined the autocorrelation of the $K2$ light curve data, computed as described in @mcquillan13[^4]. This converges to a rotational period of 29 days, which is closer to the $31$ days presented in @rodriguez18 and @teske18 than to the $\sim 17$ days suggested by @niraula17. It has, however, been suggested [@angus18] that a Gaussian process (GP) with a quasi-periodic covariance kernel function is a more reliable method to determine the rotational period of active stars. We therefore performed an additional analysis using [PyORBIT]{}[^5] [@malavolta16], a package for modelling planetary and activity signals. This implements the GP quasi-periodic kernel through the [george]{} package [@ambikasaran15]. For the hyper-parameters we follow the mathematical definition introduced by @grunblatt15.
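For reference, a quasi-periodic kernel of this type combines a periodic term (rotation period $P_{\rm rot}$, coherence scale $w$) with a squared-exponential decay (active-region lifetime $\lambda$). Conventions for the decay term differ between implementations, so the sketch below is illustrative only; the hyper-parameter values are those reported in Table \[tab:stellaractivity\]:

```python
# Sketch of a quasi-periodic covariance kernel of the type used for the
# GP activity model. Conventions for the decay term vary between papers;
# this follows one common form and is illustrative only.
import math

def quasi_periodic_kernel(tau, h, lam, w, p_rot):
    """Covariance between two points separated by tau days.

    h     : covariance amplitude
    lam   : active-region decay timescale (days)
    w     : coherence scale
    p_rot : stellar rotation period (days)
    """
    periodic = math.sin(math.pi * tau / p_rot) ** 2 / (2.0 * w ** 2)
    decay = tau ** 2 / (2.0 * lam ** 2)
    return h ** 2 * math.exp(-(periodic + decay))

# Hyper-parameter values from Table [tab:stellaractivity]
h, lam, w, p_rot = 0.00081, 33.17, 0.146, 28.72
print(quasi_periodic_kernel(0.0, h, lam, w, p_rot))    # h^2 at zero lag
print(quasi_periodic_kernel(p_rot, h, lam, w, p_rot))  # reduced by the decay term
```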
Hyper-parameter optimization was performed using the differential evolution code [pyDE]{}[^6], which provided the starting values for the affine-invariant ensemble sampler [emcee]{} [@foreman13]. We followed the same methodology as described in [@malavolta18]. Since the GP regression typically scales with the third power of the number of data points, we binned the $K2$ light curve every 5 points, while ensuring that this did not alter the overall shape and did not change the autocorrelation result. Since there is a data gap between BJD - 2450000 = 7786 and BJD - 2450000 = 7791, we also allow for different offsets and jitters for the two data segments. The GP analysis then suggested a rotational period of $P_{\rm rot} = 28.72^{+0.18}_{-0.22}$ days, a decay timescale of the active regions of $\lambda = 33.17^{+5.90}_{-6.26}$ days, and a coherence scale of $w = 0.146 \pm 0.006$. We also find a covariance amplitude in the $K2$ light curve data of $h_{K2} = 0.00081^{+0.00013}_{-0.00010}$ mag. These values are also presented in Table \[tab:stellaractivity\]. The GP regression therefore produces a result that is consistent with that from the autocorrelation of the $K2$ light curve data and with that presented in @rodriguez18 and @teske18. The isochrone analysis also suggests that this star has an age of $\sim 10$ Gyr, with a lower limit of 5 Gyr. The stellar kinematics, reported in Section \[sec:kinematic\], indicates that GJ 9827 belongs to the Galactic thin disk, but the low metallicity (${\rm [m/H]} = -0.26 \pm 0.09$) is consistent with this being an older member of that population. A rotation period of $\sim 30$ days is consistent with what would be expected for a star of this age [@reiners12]. This would all seem to indicate that the rotation period of GJ 9827 is more likely $\sim 30$ days than the $\sim 17$ days suggested by @niraula17.
Therefore, we will use the results of the GP regression to correct for the stellar-activity-induced signals in the radial velocity data.

[lll]{} Parameter & Description & Value\
$\sigma_{1, \rm jit, K2}$$^a$ \[mag\] & Jitter & $0.000006^{+0.000003}_{-0.000003}$\
$\sigma_{2, \rm jit, K2}$$^a$ \[mag\] & Jitter & $0.000003^{+0.000003}_{-0.000002}$\
$\gamma_{1, \rm K2}$$^a$ \[mag\] & Offset & $1.000237^{+0.000263}_{-0.000260}$\
$\gamma_{2, \rm K2}$$^a$ \[mag\] & Offset & $0.999427^{+0.000273}_{-0.000268}$\
$P_{\rm rot}$ \[days\] & Rotational period & $28.72^{+0.18}_{-0.22}$\
$\lambda$ \[days\] & Active region & $33.17^{+5.90}_{-6.26}$\
& decay timescale &\
$w$ \[mag\] & Coherence Scale & $0.146^{+0.006}_{-0.006}$\
$h_{K2}$ & Covariance amplitude & $0.00081^{+0.00013}_{-0.00010}$\

$^a$ The terms $\sigma_{1, \rm jit, K2}$ and $\gamma_{1, \rm K2}$ are for the K2 data segment ending at BJD-2450000=7786.075, while $\sigma_{2, \rm jit, K2}$ and $\gamma_{2, \rm K2}$ are for the K2 data segment starting at BJD-2450000=7791.377.

RV Analysis {#sec:rvanalysis}
===========

The $K2$ light curve analysis, the ISWF analysis of the HARPS and HARPS-N RVs, and the ${\rm S_{HK}}$ index clearly suggest that the stellar activity of GJ 9827 may have non-negligible effects on the RVs. The approach that we have taken is to assume that the light curve variations and activity signals in the RVs can be described by a GP with the same kernel and with common hyper-parameters, except for the covariance amplitude, $h$, which is specific to each dataset. This approach has been quite successful in confirming and improving mass determinations of rocky planets (e.g., @haywood14, @grunblatt15), and it has delivered results consistent with those of alternative approaches for stellar activity modelling (e.g. @malavolta18). In the context of the GP analysis, we take the combined HARPS and HARPS-N RVs to be a single dataset, with the PFS RVs and FIES RVs making up two other datasets.
We do, however, allow for an offset between the HARPS and HARPS-N RVs and for independent jitter terms. We carry out the RV analysis using the [PyORBIT]{} code and, as in Section \[sec:stellaractivity\], assume that the quasi-periodic kernel is the best choice to model RV variations. When modelling activity signals in RVs with the help of Gaussian processes in a Bayesian framework, imposing priors obtained from the $K2$ lightcurve on the hyper-parameters of the GP produces statistically indistinguishable results when compared to modelling the RVs and the lightcurve simultaneously [e.g., @malavolta18]. Hence, rather than modelling the $K2$ light curve and the RVs simultaneously, we use the results from Section \[sec:stellaractivity\] to set priors on the hyper-parameters, with the exception of the amplitude of the covariance $h$. Since the RV intensity of stellar activity depends on the wavelength range of the instrument and the RV extraction technique [e.g., @zechmeister18], for each dataset (combined HARPS and HARPS-N RVs, PFS RVs, and FIES RVs) we used an independent covariance amplitude $h$. For each dataset we also include a jitter term, to compensate for uncorrelated noise not included in the error estimate, and a RV offset. As mentioned above, although we treat the combined HARPS and HARPS-N RVs as a single dataset, we do allow for different offsets and jitter values for the HARPS and HARPS-N RVs. We use uniform priors for both the jitter and the RV offset. We ran two main analyses, one in which the results from Section \[sec:stellaractivity\] exactly define the Gaussian priors on the hyper-parameters, and one in which they guide our choice of priors, but do not precisely define them. Specifically, in the second analysis we use Gaussian priors on $P_{\rm rot}$, $\lambda$, and $w$, with $P_{\rm rot}=35 \pm 10$ days, $\lambda = 36 \pm 15$ days, and $w = 0.15 \pm 0.005$. 
In the second analysis, we set the $P_{\rm rot}$ prior to the value we would have used if only spectroscopic activity indexes had been used to estimate the stellar rotation period ($\sim 35$ days from the ${\rm S_{HK}}$ index, see Section \[sec:stellaractivity\]), but we make the range wide enough to also incorporate the results from the analysis using the $K2$ lightcurve and to account for the photometry and RVs not being simultaneous in time. In our model we also assume that the orbits of all 3 planets are circular (eccentricity $e = 0$). In multi-planet systems of close-in planets, the eccentricity evolution depends on both tidal interactions and on eccentricity pumping from planet-planet interactions [@bolmont13]. However, given the age of the system ($> 5$ Gyr) there has probably been sufficient time for the orbits of these close-in planets to have been tidally circularised [@barnes17], and there are indications that systems like GJ 9827 do tend to have low eccentricities [@vaneylen15]. We also impose a log-uniform prior on the time of transit centre and a uniform prior on the orbital periods of the 3 planets, taken from the results of the analysis discussed in Section \[sec:lightcurveanalysis\] (see Table \[tab:lightcurveanalysis\]). Our results are shown in Table \[tab:rvanalysis\]. The table shows the quantities derived from the RVs (radial velocity semi-amplitude, $K$, planet mass, $M_p$, and mean density, $\rho$) and also shows the resulting stellar activity indicators, the uncorrelated jitter, and the RV offset for each dataset. The posterior distributions of some of the fitted parameters from Analysis 1 are shown in Figure \[fig:corner\]. For the sake of readability, only the RV semi-amplitude of the planets and the GP hyper parameters are reported. 
The confidence intervals of the posteriors are computed by taking the 15.87$^{\rm th}$ and 84.14$^{\rm th}$ percentiles of the distribution, except for $K_{\rm c}$ and $h_{\rm FIES}$, for which we report the median and the 84.14$^{\rm th}$ percentile. As discussed above, the two analyses were one in which the activity priors were set by the results of Section \[sec:stellaractivity\], and one in which we used the results of Section \[sec:stellaractivity\] to set the region where we’d expect the activity parameters to lie, but set the priors to have a much broader range than that suggested by the results presented in Section \[sec:stellaractivity\]. Table \[tab:rvanalysis\] shows that the results of these two analyses are consistent. Since the stellar activity indicators derived from the $K2$ lightcurve, presented in Section \[sec:stellaractivity\], probably best represent the stellar activity, we will focus primarily on the results from Analysis 1. [lcc]{} Parameter &\ & Analysis 1$^a$ & Analysis 2$^b$\ **Stellar activity GP model.** & &\ (Does not include those from the & &\ $K2$ analysis presented in Table \[tab:stellaractivity\].)
& &\ $h_{\rm HARPS-N, HARPS}$ \[ms$^{-1}$\] & 2.49$^{+0.48}_{-0.39}$ & $2.85^{+0.66}_{-0.51}$\ $h_{\rm FIES}$ \[ms$^{-1}$\] & 1.76$^{+2.67}_{-1.21}$ & $2.09^{+3.64}_{-1.46}$\ $h_{\rm PFS}$ \[ms$^{-1}$\] & 3.73$^{+0.93}_{-1.03}$ & $3.88^{+0.95}_{-0.95}$\ $\lambda$ \[days\] & 34.77$^{+5.57}_{-5.64}$ & $30.97^{+12.47}_{-11.79}$\ $w$ & 0.147$\pm$0.006 & $0.196^{+0.041}_{-0.038}$\ $P_{\rm rot}$ \[days\] & $28.72^{+0.19}_{-0.19}$ & $30.13^{+11.11}_{-2.02}$\ **Uncorrelated jitter** & &\ $\sigma_{\rm jit, HARPS-N}$ \[ms$^{-1}$\] & 0.59$^{+0.40}_{-0.37}$ & $0.59^{+0.40}_{-0.38}$\ $\sigma_{\rm jit, HARPS}$ \[ms$^{-1}$\] & 0.80$^{+0.42}_{-0.44}$ & 0.81$^{+0.41}_{-0.44}$\ $\sigma_{\rm jit, FIES}$ \[ms$^{-1}$\] & 1.23$^{+1.58}_{-0.85}$ & $1.24^{+1.65}_{-0.86}$\ $\sigma_{\rm jit, PFS}$ \[ms$^{-1}$\] & 2.32$^{+1.28}_{-1.18}$ & $2.17^{+1.17}_{-1.04}$\ **RV offset** & &\ $\gamma_{\rm HARPS-N}$ \[ms$^{-1}$\] & 31949.335$^{+0.775}_{-0.753}$ & $31949.473^{+0.970}_{-0.916}$\ $\gamma_{\rm HARPS}$ \[ms$^{-1}$\] & 31948.292$^{+0.907}_{-0.876}$ & 31948.556$^{+1.103}_{-1.027}$\ $\gamma_{\rm FIES}$ \[ms$^{-1}$\] & 31775.640$^{+1.969}_{-1.988}$ & $31775.623^{+2.222}_{-2.192}$\ $\gamma_{\rm PFS}$ \[ms$^{-1}$\] & 0.447$^{+0.984}_{-0.979}$ & $0.533^{+1.019}_{-0.988}$\ **Quantities derived from RVs** & &\ $K_{\rm b}$ \[ms$^{-1}$\] & 4.11$^{+0.40}_{-0.40}$ & $4.10^{+0.37}_{-0.37}$\ $K_{\rm c}$$^1$ \[ms$^{-1}$\] & $0.49 (< 0.87)$ & $0.39 (< 0.74)$\ $K_{\rm d}$ \[ms$^{-1}$\] & 1.97$^{+0.40}_{-0.40}$ & $1.80^{+0.43}_{-0.48}$\ $M_{\rm p,b}$ (${M_\oplus}$) & 4.91$^{+0.49}_{-0.49}$ & $4.90^{+0.45}_{-0.45}$\ $M_{\rm p,c}$$^1$ (${M_\oplus}$) & $0.84 (< 1.50)$ & $0.67 (< 1.27)$\ $M_{\rm p,d}$ (${M_\oplus}$) & 4.04$^{+0.82}_{-0.84}$ & $3.71^{+0.90}_{-0.99}$\ $\rho_{\rm b}$$^2$ \[gcm$^{-3}$\] & $6.93^{+0.82}_{-0.76}$ & $6.90^{+0.76}_{-0.71}$\ $\rho_{\rm c}$$^{1,2}$ \[gcm$^{-3}$\] & $2.42 (< 4.35) $ & $1.93 (< 3.66) $\ $\rho_{\rm d}$$^2$ \[gcm$^{-3}$\] & $2.69^{+0.58}_{-0.57}$ & $2.46^{+0.63}_{-0.66}$\ $^a$ 
analysis in which we set priors on $P_{\rm rot}$, $\lambda$, and $w$ from the stellar activity analysis using the $K2$ lightcurve only. See Section \[sec:stellaractivity\] and Table \[tab:stellaractivity\]. $^b$ analysis in which we use the results of the activity analysis described in \[sec:stellaractivity\] to guide our choice of priors, rather than using these results exactly. Specifically, we impose Gaussian priors on $P_{\rm rot}$, $\lambda$, and $w$ with $P_{\rm rot} = 35 \pm 10$ days, $\lambda = 36 \pm 15$ days, and $w = 0.15 \pm 0.05$. $^1$ For upper limits, we report the median and the 84$^{\rm th}$ percentile. $^2$ The density was determined by sampling the posterior distributions for the mass and radius, and presenting the median, 16$^{\rm th}$ and 84$^{\rm th}$ percentiles of the resulting distribution. ![image](Figures/EP246389858_corner_20181018.pdf){width="17.0cm"} Figure \[fig:rvHPF\] shows the orbital solutions and RV residuals from Analysis 1, for GJ 9827 b (top panel), GJ 9827 c (middle panel), and GJ 9827 d (lower panel), phased on the period of the corresponding planet and after removing the RV contributions from stellar activity and from the other planets. GJ 9827 b has an RV semi-amplitude of $K_b = 4.11 \pm 0.40$ , suggesting a mass of $M_{p,b} = 4.91 \pm 0.49$ M$_\oplus$. The RV semi-amplitude, $K_c$, for GJ 9827 c is small and suggests a mass of $M_{p,c} = 0.84$ M$_\oplus$ with an upper limit of $1.50$ M$_\oplus$, while for GJ 9827 d the RV semi-amplitude is $K_d = 1.97 \pm 0.40$ with a resulting mass estimate of $M_{p,d} = 4.04^{+0.82}_{-0.84}$ M$_\oplus$. The mass estimate for GJ 9827 b therefore has a precision of better than 10%, while that for GJ 9827 d is close to 20%. ![Orbital solutions and RV residuals for GJ 9827 b (top panel), GJ 9827 c (middle panel), and GJ 9827 d (lower panel), phased on the period of the corresponding planet and with the RV contributions from the other planets removed.
The details are discussed in Section \[sec:rvanalysis\], and these figures show the results from Analysis 1.[]{data-label="fig:rvHPF"}](Figures/RV_phase_b_GP_K2priors.png "fig:"){width="9.0cm"} ![Orbital solutions and RV residuals for GJ 9827 b (top panel), GJ 9827 c (middle panel), and GJ 9827 d (lower panel), phased on the period of the corresponding planet and with the RV contributions from the other planets removed. The details are discussed in Section \[sec:rvanalysis\], and these figures show the results from Analysis 1.[]{data-label="fig:rvHPF"}](Figures/RV_phase_c_GP_K2priors.png "fig:"){width="9.0cm"} ![Orbital solutions and RV residuals for GJ 9827 b (top panel), GJ 9827 c (middle panel), and GJ 9827 d (lower panel), phased on the period of the corresponding planet and with the RV contributions from the other planets removed. The details are discussed in Section \[sec:rvanalysis\], and these figures show the results from Analysis 1.[]{data-label="fig:rvHPF"}](Figures/RV_phase_d_GP_K2priors.png "fig:"){width="9.0cm"} Figure \[fig:RV\_GPs\] shows the HARPS-N (filled and open circles), HARPS (open squares), and FIES (red triangles) RVs, together with the best-fit model which includes the planets’ signals and the GP model of the correlated stellar noise (light blue curve). Also shown is the GP solution (dashed blue curve) and its associated uncertainty range (grey shaded region). We don’t, however, show the PFS RVs in Figure \[fig:RV\_GPs\]. What Figure \[fig:rvHPF\] shows is that the RV residuals for some of the PFS data are considerably larger than that for the other datasets. This is most likely because the PFS data covers a long time interval and, during some periods, is insufficiently well sampled to constrain the stellar activity. To test the consequences of this, we carried out two more analyses, both using the same activity priors as used by Analysis 1 in Table \[tab:rvanalysis\]. 
In one we excluded PFS data that appeared to be insufficiently well sampled to constrain the stellar activity, and in the other we used HARPS-N and HARPS data only. In the first of these analyses, we retained the 6 PFS RVs between BJD=2455428.80 and BJD=2455439.82, the 12 PFS RVs between BJD=2455785.72 and BJD=2455485.70, and the 3 PFS RVs between BJD=2456139.86 and BJD=2456150.83. In both cases, the results were consistent with, and of a similar precision to, those presented in Table \[tab:rvanalysis\]. Consequently, we conclude that the PFS sampling does not significantly influence our results, in terms of both the best estimates and their precision. We also carried out one additional analysis, using the same activity priors as in Analysis 1, in which we relax the constraint that the planet eccentricities are all zero. We do, however, constrain the eccentricities to be less than 0.2, based on N-body simulations using [mercury6]{}, which indicate that this is required for stability. The results from this analysis do allow for the planets to have small eccentricities, but the resulting RV semi-amplitudes, and planet masses, are very close to those produced by the equivalent analysis with circular orbits. The resulting eccentricities are also consistent with $e = 0$ at $2.45\sigma$, which suggests that this result is not significant [@lucy71]. There is therefore no strong evidence to indicate that the orbits are non-circular. ![HARPS-N (filled and open circles), HARPS (open squares), and FIES (red triangles) RVs, together with the best-fit model, which includes the planets’ signals and the GP model of the correlated stellar noise (light blue curve).
Also shown is the GP solution (dashed blue curve) and its associated uncertainty range (grey shaded region).[]{data-label="fig:RV_GPs"}](Figures/HNHSF_RV_GP_comparison.png){width="9.25cm"} Discussion {#sec:discussion} ========== Our analysis has allowed us to estimate the masses of GJ 9827 b and d with a precision of better than 10% and close to 20%, respectively. We can’t, however, put a strong constraint on the mass of GJ 9827 c. Our analysis suggests an upper limit (84%) for GJ 9827 c’s RV semi-amplitude of $< 1$ . If we assume an Earth-like composition ($M_p \simeq 1.9 M_\oplus$, similar to Kepler-78b, @pepe13), the RV semi-amplitude for this planet would be $\simeq 1$ . This might suggest that GJ 9827 c is unlikely to have an Earth-like composition. Figure \[fig:mass\_radius\_diagram\] shows GJ 9827 b, c, and d on a mass-radius diagram which also includes all planets with a measured mass and radius from the Extrasolar Planets Encyclopaedia[^7]. The data points are shaded according to the precision of their mass estimate and are color-coded according to their incident flux, relative to that of the Earth. The dashed lines show different compositions, taken from @zeng16, plus one as yet unpublished track for a planet in which H$_2$ makes up 1% of its mass. The figure also shows the Earth and Venus, for reference, and indicates the approximate location of the radius gap [@fulton17]. GJ 9827 b is consistent with having a rocky, terrestrial (Earth-like) composition. The result for GJ 9827 c suggests that it is not consistent with being rocky, and that water could still make up a substantial fraction of its mass. There are, however, indications that non-detections, like that of GJ 9827 c, could return RV semi-amplitudes that are biased low with respect to the real RV semi-amplitudes [e.g., @damasso18]. Therefore, our results cannot be interpreted as strong evidence for GJ 9827 c not being rocky.
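The back-of-envelope check above (an Earth-like-composition GJ 9827 c would induce $K \simeq 1$ m s$^{-1}$) follows from the standard circular-orbit RV semi-amplitude formula, $K = (2\pi G/P)^{1/3}\, M_p \sin i\, (M_* + M_p)^{-2/3} (1-e^2)^{-1/2}$. The sketch below reproduces it; the stellar mass ($\approx 0.61\,M_\odot$) and orbital period of planet c ($\approx 3.65$ d) are assumed values not stated in this section.

```python
import numpy as np

G = 6.674e-11              # m^3 kg^-1 s^-2
M_SUN = 1.989e30           # kg
M_EARTH = 5.972e24         # kg
DAY = 86400.0              # s

def rv_semi_amplitude(m_p_earth, p_days, m_star_sun, e=0.0, sin_i=1.0):
    """RV semi-amplitude K in m/s for a planet of m_p_earth Earth masses."""
    m_p = m_p_earth * M_EARTH
    m_s = m_star_sun * M_SUN
    return ((2.0 * np.pi * G / (p_days * DAY))**(1.0 / 3.0) * m_p * sin_i
            / ((m_s + m_p)**(2.0 / 3.0) * np.sqrt(1.0 - e**2)))

# Assumed values (not stated in this section): M_* ~ 0.61 M_sun, P_c ~ 3.65 d
K_c = rv_semi_amplitude(1.9, 3.65, 0.61)   # comes out near 1.1 m/s
```

A quick sanity check: a Jupiter-mass planet on a one-year circular orbit around a solar-mass star gives the textbook $K \approx 28.4$ m s$^{-1}$.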
GJ 9827 b, on the other hand, would seem to be composed mostly of silicates and iron. The best estimate suggests that its iron core makes up about 25% of its mass, similar to that for the Earth and Venus, and the density estimate would seem to rule out GJ 9827 b having H/He on its surface, or the presence of a thick envelope of volatiles. The bulk density of GJ 9827 d, and its location in the mass-radius diagram (Figure \[fig:mass\_radius\_diagram\]), suggests that it probably does retain a reasonably substantial atmosphere, with water potentially making up a substantial fraction of its mass. GJ 9827 would therefore appear to host a super-Earth that is probably rocky (GJ 9827 b) and one (GJ 9827 d) that probably retains a substantial atmosphere. These two planets appear to bracket the radius gap suggested by @fulton17 and @vaneylen18. The stellar fluxes received by GJ 9827 b and c are about 316 and 73 times that received by the Earth, respectively. If they are both rocky, they may still have formed with a composition similar to that of GJ 9827 d, but may have since lost their atmospheres through photo-evaporation [@lopezfort14]. If, however, water still makes up a substantial fraction of GJ 9827 c’s mass, then this could have implications for the formation of this system. It could suggest that GJ 9827 c and d both formed beyond the snowline, with GJ 9827 b forming inside the snowline. Migration could then have produced the configuration we see today. That the system is in a near 1:3:5 resonance [@prieto-arranz18] might be consistent with this scenario. In such a scenario GJ 9827 c could still retain a water-rich atmosphere even at its current level of irradiation [@lopez17]. On the other hand, the stellar flux received by GJ 9827 d is about 36 times that received by the Earth, which may not be sufficient for GJ 9827 d to have lost much of its primordial atmosphere [@owenwu13; @lopez13], whether water-rich or predominantly H/He [@lopez17].
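The relative incident fluxes quoted above can be recovered approximately from the stellar luminosity and the orbital distances via $F_p/F_\oplus = (L_*/L_\odot)/(a/\mathrm{AU})^2$. In the sketch below, the stellar radius, effective temperature, and semi-major axes are assumed illustrative values not stated in this section; with them the computed fluxes land near the quoted 316, 73, and 36 $F_\oplus$.

```python
def insolation_rel_earth(r_star_rsun, teff_k, a_au, teff_sun=5772.0):
    """Incident flux relative to Earth: (L_*/L_sun) / (a/AU)^2,
    with L_*/L_sun = (R_*/R_sun)^2 (T_eff/T_sun)^4."""
    l_rel = r_star_rsun**2 * (teff_k / teff_sun)**4
    return l_rel / a_au**2

# Assumed stellar/orbital parameters (illustrative, not from this section):
# R_* ~ 0.60 R_sun, T_eff ~ 4340 K, a_b ~ 0.0189 AU, a_c ~ 0.0394 AU, a_d ~ 0.0555 AU
f_b = insolation_rel_earth(0.60, 4340.0, 0.0189)   # ~ 320 F_earth
f_c = insolation_rel_earth(0.60, 4340.0, 0.0394)   # ~ 74 F_earth
f_d = insolation_rel_earth(0.60, 4340.0, 0.0555)   # ~ 37 F_earth
```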
This system may therefore be consistent with photo-evaporation playing a key role in generating the radius gap suggested by @owenwu13 and @lopez13, and first detected by @fulton17. In fact, if those planets above the radius gap typically retain a H/He atmosphere, then a prediction of the photo-evaporation model is that planets just above, and just below, the radius gap should have similar masses, since the envelope should make up only a small fraction of the mass of those just above the gap [@lopez18]. The similar masses of GJ 9827 b and d are intriguingly consistent with this prediction. There are, however, alternative explanations. For example, the luminosity of the cooling core could completely erode light envelopes, while having little impact on heavier envelopes [@ginzburg18]. This would produce a deficit of intermediate-mass planets and, hence, may also explain the observed radius gap [@fulton17]. Systems like GJ 9827 will therefore play a key role in determining which of these scenarios most likely explains this radius gap. ![Mass-Radius diagram for GJ 9827 b, GJ 9827 c, and GJ 9827 d together with all planets with a measured mass and radius from the Extrasolar Planets Encyclopaedia. The dashed lines show different compositions, taken from @zeng16, plus one additional as yet unpublished track for a planet in which H$_2$ makes up 1% of its mass. The data points are shaded according to the precision of their mass estimate and are color-coded according to their incident flux. Also shown are Earth and Venus, for reference, and we indicate the approximate location of the radius gap [@fulton17].[]{data-label="fig:mass_radius_diagram"}](Figures/pt23_figure08_larger_labels.pdf){width="8.5cm"} The composition of planets below the radius gap {#sec:composition} ----------------------------------------------- One of the goals of the HARPS-N Collaboration is to try to determine the typical composition of planets with radii similar to that of the Earth. 
In particular, are planets below the radius gap first clearly mapped by @fulton17 rocky? It has already been suggested [@rogers15] that most planets above this gap still retain significant envelopes of volatiles, but it’s not yet clear if most planets below the gap are primarily composed of silicates and iron. In Figure \[fig:rescaledM\] we plot the same data as in Figure \[fig:mass\_radius\_diagram\], but scale the planet masses according to the minimum mass they would need, given their radius, in order to be rocky (see composition curves in Figure \[fig:mass\_radius\_diagram\]). As in Figure \[fig:mass\_radius\_diagram\], the data points are shaded according to the precision of their mass estimate and are color-coded according to their incident flux, relative to the Earth. We also show the approximate location of the radius gap [@fulton17]. What Figure \[fig:rescaledM\] shows quite clearly is that those with radii above the gap, including GJ 9827 d, tend to have masses below that required for them to be rocky, while those below the gap tend to have masses above the mass at which they would be rocky. Our estimate for GJ 9827 b suggests that it is clearly rocky. The upper limit for GJ 9827 c suggests that it is not rocky and that it may still retain a reasonable amount of water, and other volatiles. However, as highlighted in @damasso18, there are indications that a result like that for GJ 9827 c could be biased low, so we really can’t rule out that GJ 9827 c is indeed rocky. However, what Figure \[fig:rescaledM\] also shows is that the only other known planet below the radius gap that is inconsistent with being rocky at $1 \sigma$ is Trappist-1f. Trappist-1f is, however, around a very-low-mass star and has a low bolometric irradiation ($M_* \sim 0.08 M_\odot$ and $F_p/F_\oplus \sim 0.382$, @gillon17). It is quite strongly irradiated in the XUV [@wheatley17; @bolmont17], but probably does still retain a volatile-rich envelope [@quarles17].
If GJ 9827 c does indeed still retain a substantial gaseous envelope, then it would be one of the most heavily irradiated planets below the radius gap to do so, and the only one orbiting an FGK star. ![Similar to Figure \[fig:mass\_radius\_diagram\], except the masses are scaled according to the minimum mass they would need, given their radius, to be rocky. As in Figure \[fig:mass\_radius\_diagram\], the data points are shaded according to the precision of their mass estimate and color-coded according to their incident flux, relative to that of the Earth. It also illustrates the location of the radius gap [@fulton17].[]{data-label="fig:rescaledM"}](Figures/pt23_figure09_larger_labels.pdf){width="8.5cm"} Conclusions {#sec:conclusions} =========== Here we present the results of our analysis of the GJ 9827 planetary system, a system already known to contain 3 super-Earths [@niraula17; @rodriguez18]. We repeat the $K2$ lightcurve analysis and recover planetary radii that are consistent with these earlier analyses. We then carry out an RV analysis using the Magellan/PFS and FIES radial velocities first presented by @teske18 and @niraula17, respectively, the HARPS and HARPS-N radial velocities presented by [@prieto-arranz18], and 41 additional new RV observations from HARPS-N [@cosentino12]. Although our RV analysis can’t provide a strong constraint on the mass of GJ 9827 c, we can estimate the masses of GJ 9827 b and d with precisions of better than 10% (b) and close to 20% (d). We find that GJ 9827 b is probably rocky, with an iron core, but is unlikely to have a mass as high as suggested by @teske18. GJ 9827 d, on the other hand, almost certainly retains a significant envelope of volatiles. Using HARPS, HARPS-N and FIES RVs, @prieto-arranz18 also estimated the masses of the planets in the GJ 9827 system.
They conclude that GJ 9827 b is probably rocky, with an iron core, and that GJ 9827 d still retains an envelope of volatiles, which is consistent with the results presented here. However, our estimates for the mass of GJ 9827 b and GJ 9827 d are inconsistent with their estimates at the 1$\sigma$ level. Our analysis suggests that both GJ 9827 b and GJ 9827 d have higher masses than suggested by @prieto-arranz18. Their estimate for GJ 9827 b is still consistent with an Earth-like composition, but their estimate for GJ 9827 d would seem to suggest a much lower density than is suggested by our analysis. @prieto-arranz18 also claim a 2$\sigma$ detection for the mass of GJ 9827 c, while we can only really set an upper limit. Although our upper limit is consistent, at 1$\sigma$, with their result, their analysis suggests that GJ 9827 c may well be rocky, whereas ours suggests that it probably is not. It would seem quite important to understand this difference, since the composition of GJ 9827 c could constrain where the planets in this system formed. If water makes up a significant fraction of its mass, then that might suggest that the outer planets in this system formed beyond the snowline. If not, then [*in situ*]{} formation is still a possibility [@chiang2013]. It is possible, however, that our non-detection has returned RV semi-amplitudes that are biased low [@damasso18]. GJ 9827 is a particularly interesting system since it hosts a rocky super-Earth near the lower boundary of the radius gap detected by @fulton17 and one that retains a substantial atmosphere near the upper boundary of this gap. Consequently, this system could be consistent with the innermost planet being sufficiently strongly irradiated to have lost its atmosphere via photo-evaporation [@lopez17; @owenwu17]. If GJ 9827 d retains a low-mass H/He envelope, rather than a water-rich atmosphere, then GJ 9827 b and d having similar masses is also consistent with the photoevaporation model.
However, we can’t yet exclude alternative explanations, such as the luminosity of the cooling core eroding the lighter envelopes [@ginzburg16; @ginzburg18]. Therefore, understanding systems like GJ 9827 will help to determine which scenario is most likely. Our results also have implications for the typical composition of planets below the radius gap detected by @fulton17. Most planets with well-constrained masses below this radius gap have compositions consistent with them being rocky. This is indeed the case for GJ 9827 b, but our analysis can’t rule out that GJ 9827 c still retains a water-rich atmosphere. However, if this is the case, GJ 9827 c would be one of the most heavily irradiated super-Earths below the radius gap that still retains a substantial volatile envelope. Given the faintness of the star (V=10.3) and the expected RV amplitude ($1$  for a rocky composition), it seems likely that only the next generation of high-precision velocimeters on large telescopes, such as ESPRESSO [@pepe10] or G-CLEF [@szentgyorgyi12], will allow a mass determination that has sufficient precision (better than $\sim 20$%) to uncover its internal composition. As already highlighted by @rodriguez18 and @niraula17, GJ 9827 is bright, and cool, and hence is a potential target for atmospheric characterisation via transit spectroscopy [@seager00]. The expected signal can be calculated from the planet’s and star’s radii, and the scale-height of the planet’s atmosphere [@vanderburg16c]. Our analysis suggests that if both GJ 9827 d and GJ 9827 c have predominantly H/He envelopes, the atmospheric signal could be as high as a few hundred ppm, which could be detected by the [*Hubble Space Telescope*]{}. However, if their atmospheres are predominantly water, a detection may require the [*James Webb Space Telescope*]{}.
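The expected transmission signal mentioned above is commonly estimated as $\sim 2 n R_p H / R_*^2$, with $H = k_B T_{\rm eq}/(\mu m_H g)$ the atmospheric scale height and $n$ a few scale heights of opaque atmosphere. The sketch below evaluates this for GJ 9827 d; all planetary and stellar values, the mean molecular weight, and the number of scale heights are illustrative assumptions rather than quantities given in this section.

```python
K_B = 1.380649e-23        # J/K, Boltzmann constant
M_H = 1.6726e-27          # kg, hydrogen (proton) mass
G = 6.674e-11             # m^3 kg^-1 s^-2
R_EARTH, M_EARTH = 6.371e6, 5.972e24
R_SUN = 6.957e8

def transmission_signal_ppm(rp_earth, mp_earth, t_eq, r_star_sun,
                            mu=2.3, n_scale=5):
    """Approximate transit-depth modulation 2 n R_p H / R_*^2, in ppm,
    for an atmosphere of mean molecular weight mu spanning n_scale
    scale heights."""
    r_p = rp_earth * R_EARTH
    g = G * mp_earth * M_EARTH / r_p**2          # surface gravity
    h_scale = K_B * t_eq / (mu * M_H * g)        # scale height
    return 2.0 * n_scale * r_p * h_scale / (r_star_sun * R_SUN)**2 * 1e6

# Assumed values for GJ 9827 d (illustrative): R_p ~ 2.0 R_earth,
# M_p ~ 4.0 M_earth, T_eq ~ 680 K, R_* ~ 0.60 R_sun, H/He envelope (mu ~ 2.3)
signal_d = transmission_signal_ppm(2.0, 4.0, 680.0, 0.60)
```

Under these assumptions the signal comes out of order 200 ppm, consistent with the "few hundred ppm" quoted for an H/He envelope; a water-dominated atmosphere ($\mu \approx 18$) would shrink it by roughly a factor of eight.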
That GJ 9827 probably hosts a rocky super-Earth and one that probably retains a substantial atmosphere, and that these two planets bracket the radius gap detected by [@fulton17], makes it a particularly interesting target. [lllllll]{} BJD$_{\rm UTC}$ & RV & $\sigma_{\rm RV}$ & BIS$_{\rm span}$ & FWHM & S$_{\rm HK}$ & $\sigma_{\rm S_{\rm HK}}$\ (d) & (m s$^{-1}$) & (m s$^{-1}$) & (m s$^{-1}$) & (km s$^{-1}$) & (dex) & (dex)\ 2457972.581025 & 31949.19 & 1.66 & 54.56 & 6.16146 & 0.760934 & 0.012149\ 2457973.598897 & 31948.29 & 2.48 & 47.05 & 6.08261 & 0.703956 & 0.018964\ 2457989.650249 & 31941.31 & 1.36 & 44.43 & 6.13164 & 0.690250 & 0.008613\ 2457992.585770 & 31954.88 & 1.25 & 47.19 & 6.13195 & 0.712646 & 0.007336\ 2457993.670628 & 31955.25 & 1.57 & 52.67 & 6.13304 & 0.734660 & 0.010788\ 2457994.574853 & 31953.10 & 2.31 & 44.91 & 6.13740 & 0.716458 & 0.018600\ 2457995.576615 & 31943.43 & 1.37 & 47.27 & 6.13503 & 0.738125 & 0.008665\ 2457996.564509 & 31946.06 & 2.71 & 62.28 & 6.14124 & 0.703724 & 0.025410\ 2457999.535982 & 31958.53 & 1.39 & 46.43 & 6.14647 & 0.744546 & 0.009172\ 2458000.538638 & 31957.38 & 1.47 & 50.08 & 6.15661 & 0.758144 & 0.010064\ 2458001.680612 & 31952.82 & 1.81 & 43.66 & 6.15685 & 0.754857 & 0.013624\ 2458019.448280 & 31940.92 & 2.99 & 43.19 & 6.12897 & 0.680988 & 0.030242\ 2458021.448466 & 31952.15 & 2.14 & 50.83 & 6.12614 & 0.684284 & 0.018574\ 2458021.635511 & 31947.30 & 3.24 & 55.74 & 6.12959 & 0.709314 & 0.035249\ 2458022.469453 & 31948.65 & 1.39 & 44.09 & 6.12319 & 0.674695 & 0.008879\ 2458022.578398 & 31953.05 & 1.49 & 41.74 & 6.12904 & 0.692022 & 0.009983\ 2458025.479998 & 31946.36 & 0.99 & 49.06 & 6.13471 & 0.700214 & 0.004776\ 2458025.580458 & 31946.17 & 1.03 & 48.23 & 6.13419 & 0.709150 & 0.005181\ 2458026.500839 & 31948.19 & 1.91 & 52.53 & 6.14859 & 0.722292 & 0.014968\ 2458026.617340 & 31946.30 & 1.62 & 57.31 & 6.13966 & 0.715850 & 0.011669\ 2458027.436449 & 31948.62 & 3.75 & 49.25 & 6.13452 & 0.761666 & 0.039726\ 2458047.362466 & 
31942.68 & 1.61 & 51.38 & 6.13719 & 0.734700 & 0.012005\ 2458047.488963 & 31941.88 & 1.63 & 48.83 & 6.13660 & 0.718712 & 0.011596\ 2458047.572523 & 31942.76 & 2.15 & 55.64 & 6.14504 & 0.719076 & 0.018844\ 2458049.334881 & 31951.87 & 3.86 & 55.60 & 6.13803 & 0.680903 & 0.045143\ 2458049.484976 & 31951.80 & 3.14 & 54.38 & 6.12018 & 0.682102 & 0.031959\ 2458050.373485 & 31948.37 & 2.48 & 51.30 & 6.13536 & 0.642163 & 0.022632\ 2458050.447982 & 31950.23 & 1.81 & 45.18 & 6.12398 & 0.686098 & 0.013886\ 2458050.564769 & 31950.75 & 2.24 & 47.89 & 6.12488 & 0.636900 & 0.019858\ 2458051.552113 & 31949.82 & 2.07 & 42.45 & 6.13584 & 0.675663 & 0.018119\ 2458052.333890 & 31940.43 & 1.21 & 47.34 & 6.13214 & 0.665658 & 0.007353\ 2458052.475119 & 31939.95 & 1.26 & 51.87 & 6.12258 & 0.650656 & 0.007588\ 2458052.551769 & 31941.67 & 1.61 & 50.37 & 6.11954 & 0.680874 & 0.012084\ 2458053.372143 & 31940.36 & 2.33 & 41.75 & 6.11748 & 0.669119 & 0.021128\ 2458088.371319 & 31948.49 & 1.25 & 52.18 & 6.12414 & 0.693171 & 0.006900\ 2458098.320396 & 31948.53 & 1.10 & 49.90 & 6.12255 & 0.702622 & 0.005724\ 2458098.426520 & 31950.12 & 1.40 & 45.10 & 6.11918 & 0.712788 & 0.009317\ 2458102.331265 & 31951.26 & 1.33 & 45.02 & 6.11895 & 0.704454 & 0.008007\ 2458102.409463 & 31950.42 & 1.56 & 50.57 & 6.12284 & 0.696478 & 0.010865\ 2458103.342309 & 31949.71 & 1.39 & 54.92 & 6.10729 & 0.749587 & 0.008941\ 2458103.418424 & 31953.68 & 1.35 & 49.18 & 6.11461 & 0.737197 & 0.008863\ Acknowledgements {#acknowledgements .unnumbered} ================ L.M. and D.M. acknowledge support from INAF/Frontiera through the “Progetti Premiali” funding scheme of the Italian Ministry of Education, University, and Research. Some of this work has been carried out within the framework of the NCCR PlanetS, supported by the Swiss National Science Foundation. A.V. is supported by the NSF Graduate Research Fellowship, grant No. DGE 1144152. 
This work was performed in part under contract with the California Institute of Technology (Caltech)/Jet Propulsion Laboratory (JPL) funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute (A.V. and R.D.H.). A.C.C. acknowledges support from STFC consolidated grant number ST/M001296/1. D.W.L. acknowledges partial support from the Kepler mission under NASA Cooperative Agreement NNX13AB58A with the Smithsonian Astrophysical Observatory. C.A.W. acknowledges support by STFC grant ST/P000312/1. X.D. is grateful to the Society in Science-Branco Weiss Fellowship for its financial support. This material is based upon work supported by the National Aeronautics and Space Administration under grants No. NNX15AC90G and NNX17AB59G issued through the Exoplanets Research Program. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.The HARPS-N project has been funded by the Prodex Program of the Swiss Space Office (SSO), the Harvard University Origins of Life Initiative (HUOLI), the Scottish Universities Physics Alliance (SUPA), the University of Geneva, the Smithsonian Astrophysical Observatory (SAO), and the Italian National Astrophysical Institute (INAF), the University of St Andrews, Queen’s University Belfast, and the University of Edinburgh. This paper includes data collected by the *Kepler* mission. Funding for the *Kepler* mission is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5–26555. Support for MAST for non–HST data is provided by the NASA Office of Space Science via grant NNX13AC07G and by other grants and contracts. 
This research has made use of NASA’s Astrophysics Data System and the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. \[lastpage\] [^1]: Email: wkmr@roe.ac.uk [^2]: We verified that the two HARPS-N datasets were obtained using the same instrumental setup, and analyzed with the same version of the pipeline and using the same RV mask [^3]: In the present work, $\vec{U}$ is defined to be positive in the direction of the Galactic center. [^4]: As implemented in <https://github.com/bmorris3/interp-acf>. [^5]: Version 5, available at <https://github.com/LucaMalavolta/PyORBIT>. [^6]: Available at <https://github.com/hpparvi/PyDE> [^7]: Available at <http://www.exoplanet.eu>
--- address: - | Universidad EAFIT, Department of Finance\ Medellín, Colombia - | Universidad EAFIT, Department of Mathematical Sciences\ Medellín, Colombia author: - - bibliography: - 'library.bib' title: '****' --- Introduction ============ South America has a great variety of geographical conditions and characteristics, represented by its different landscapes, which, together with its latitudinal extension, provide high climatic diversity on the continent [@Jiminez2005]. Within this climatic diversity, the tropical zone stands out: there, the great amount of solar energy prevents the existence of strong winters [@Latrubesse2005], and its continental properties prevent the climatic events present in other parts of the continent. Colombia is a country where much of the territory has a tropical climate, specifically tropical humid, where rainfall usually exceeds 2000 mm per year. Its climate and weather are affected by factors such as the southern oscillation of the inter-tropical convergence zone (ITCZ), the Pacific and Atlantic oceans, the Amazon basin and the Andes mountain range, among others [@Marin2006]. All of this climatic, geographical and biological configuration makes Colombia one of the regions with the greatest water resources in South America. In order to take advantage of this surface water supply, the Colombian electrical system relies on hydroelectric generation for about 80$\%$ of its capacity, with most of the generation stations located in the Andean, Caribbean and Pacific regions. Because the Colombian electrical system is focused on hydraulic generation, changes that affect the water resource cause variations in the water level of the reservoirs, which translate into fluctuations in the price of electric energy. Therefore, the modeling and forecasting of climatic and hydrological variables occupies an important place in the decision making of the agents of the electrical system.
The investigations that address the forecasting of climatic or hydrological variables relevant to the Colombian electric sector are limited. [@Salazar1998] use singular spectral analysis, together with the maximum entropy method, to fit models of monthly precipitation for some basins in Antioquia; [@Poveda2002] evaluates different nonlinear methods for predicting the monthly average flows of 6 rivers important for the generation of electric energy in Colombia; [@Rojo-Hernandez2010], on the other hand, describes the nonlinear dynamics of the flows of Colombian rivers using a periodic prediction model based on singular spectral analysis. Such investigations have focused on specific hydrological regions and have used data without high frequency, which prevents the capture of relevant information in the hydrological series, such as periodicity. In particular, the climatic and hydrological variability of the subtropical-tropical South American region shows processes represented by low-frequency quasi-periodic fluctuations [@Vargas2002]. This situation also holds for the water discharge of the rivers that feed the reservoirs of the Colombian electrical system: at daily frequency the data show a fundamental period that repeats every 3 years, within which there are sub-periods that repeat every year. The objective of this research is to model and forecast the water discharge of the rivers linked to the electric power generation system in Colombia, which exhibits periodic dynamics in its first logarithmic transformation. To this end, we use the methodology proposed by [@Monsalve2017], estimating the parameters of a one-factor mean-reversion stochastic process whose functional trend follows a periodic behavior. For this, maximum likelihood estimation is used together with Fourier analysis.
Subsequently, one period of the process is chosen, namely 2007-2010; with the information of this period we simulate paths for the period 2010-2013 and contrast them with the real data. This paper is organized as follows. In Section \[climate\] we explain the main climatic and hydrological conditions of Colombia and give a detailed description of the data used. In Section \[textural\] we present a stochastic model of the hydrological contributions in the Colombian electric system, analyzing their periodicity and their statistical and distributional properties. In Section \[estimation\] we find the parameters using a Gaussian estimation technique that allows us to perform a medium-term forecast. Finally, Section \[conclusions\] presents the conclusions, comments and future work. Climate and Hydrology in Colombia {#climate} ================================= Climate in Colombia ------------------- Climate refers to the atmospheric conditions prevailing in a place during a certain period [@Pabon-Caicedo2001], whose characteristics are defined mainly by the astronomical location of that place, besides aspects such as general circulation, surface characteristics, altitude and exposure, among others [@Koppen1930]. Astronomically, South America extends through approximately $65^{\circ}$ of latitude (12$^{\circ}$N-55$^{\circ}$S), with a width extending along $45^{\circ}$ of longitude (35$^{\circ}$W-80$^{\circ}$W). The tropical condition dominates most of the land surface and, due to the small temperature difference between the hottest and coldest months in the tropical parts and the attenuated range in the nontropical parts, there is no significant occurrence of continental-type climate [@Eidt1969]; therefore the oceans are of great importance in most of the region. In this sense, the wind coming from the semi-permanent anticyclones linked to the Atlantic and Pacific oceans determines the air circulation.
Thus, during the months of January and July these semi-permanent anticyclones change position; these seasonal changes are fundamental determinants of the synoptic climatology of the region, which in turn arise from the Intertropical Convergence Zone (ITCZ), where the opposite effects of the trade winds of the northern and southern hemispheres cancel. The climate in the tropical region of South America is also affected by the geographic conditions, or surface characteristics, of the region, which change the dynamics of wind circulation; the most important of these is the Andean mountain chain. The Andes prevent the winds of the South Pacific anticyclone from entering the region; additionally, because of this mountain chain, humid winds from the east can transport heavy rain into the Amazon basin [@Eidt1969]. It is noteworthy that, due to the geographic conditions of the region, two factors, the mountain-valley breeze and the land-sea breeze, contribute to local climatic variations, caused by faster warming during the day and faster cooling of the air at night in the highest places. The pattern of the different tropical climates occupies an important place in the formation of the hydrological cycle of such regions [@Balek1983]. The most variable element of the tropical climate is rainfall, of which three types, convectional, cyclonic and orographic, are identified [@McGregor1998]; from these, the seasons are determined, in such a way that the quantity and the temporal distribution of rainfall are important criteria for distinguishing wet and dry sub-climatic zones [@Latrubesse2005].
According to [@Ideam2005], most of Colombia is classified as having a per-humid or very humid climate under the Thornthwaite climatic classification; the main areas with this climate correspond to the Pacific, Orinoquia and Amazonia regions, and sectors of the Andean region in most of Antioquia, Caldas, Risaralda and western Santander. In turn, in the foothills of the three mountain ranges and to the south of the Caribbean region the climates are slightly humid, moderately humid and humid, while in the Cundiboyacense plateau, sectors of the valleys of the upper Magdalena and upper Cauca, the basins of the Chicamocha and Zulia rivers and sectors of the center of the Caribbean region, dry subhumid and humid subhumid climates are the rule. Humid tropical climates are characterized by temperatures ranging from 24 to 30$^\circ$C with an annual oscillation of around 3$^\circ$C [@Latrubesse2005]. This also holds in the Colombian case, since the humid regions of Orinoquia, the Pacific and the Caribbean have average temperatures near 28$^\circ$C. In the Andean region temperatures vary according to height above sea level, slope exposure and precipitation regime; thus, at an elevation of 500 meters, temperatures are between 25.2$^\circ$C and 27.2$^\circ$C on the eastern slope of the eastern mountain range and the eastern slope of the western mountain range, respectively [@Ideam2_2005]. With respect to wind circulation, Colombia is dominated by the trade winds, characterized by their stability and by being generally weak [@Ideam3_2005]. The strongest winds are found in the region of Alta Guajira, with average speeds of 6 meters per second, followed by regions such as the Caribbean coast and the north and south of the department of Cesar, among others, while in most of the country the average annual wind speed is close to 2 meters per second.
The factors described above make Colombia one of the countries with the greatest rainfall in the world, where, in addition, inter-annual processes such as ENSO (El Niño-Southern Oscillation) events influence the country's rainfall regime. Thus, the *El Niño* phenomenon causes a decrease in rainfall and an increase in temperatures, while the *La Niña* phenomenon has the opposite effect. The northern and central areas of the Pacific region have the highest rainfall in Colombia, with an annual average between 8000 and 10000 mm, followed by the Amazon region, with uniform rainfall throughout the year and average precipitation between 3000 and 4500 mm [@Ideam4_2005]. The rains in the Andean region are influenced by aspects such as geography and elevation; thus, the slopes located in the middle Magdalena and middle Cauca, and areas of the Eje Cafetero, Antioquia and Santander, present the highest rainfall, with levels between 2000 and 4000 mm. It is precisely this geographic condition of the central and western mountain ranges of the Andes that allows the presence of high volumes of rainfall in the extreme south of the Caribbean region (1800 to 2000 mm). Hydrology in Colombia --------------------- The climatic and geographical configuration of Colombia makes it one of the countries with the greatest water wealth in the world. This richness is represented by the extensive surface water network, favorable groundwater storage conditions and the existence of large areas of wetlands over most of the territory [@Cabrera2010]. The Colombian hydrological regime is characterized by dry and humid periods during the year, where the water supply varies among the five hydrographic areas of the country, the Caribbean, Magdalena-Cauca, Orinoquia, Amazon and Pacific, as shown in Figure \[fig:1\]. Each of these areas is made up of hydrographic zones according to the composition of each region.
![**Hydrographic Areas in Colombia**](fig1){width="120mm"} \[fig:1\] In this sense, the hydrology of the Caribbean region, characterized by the heterogeneity of its relief, is influenced by the Sierra Nevada de Santa Marta, the basins of the Catatumbo, Ranchería and León rivers, and the upper and lower basins of the Atrato river. In this area, the highest hydrological contributions come from the Sinú river in the department of Córdoba and the Atrato river in the departments of Chocó and Antioquia. The water supply of this area represents 9.1$\%$ of the national supply during a year, while its water discharge is 5,799 $m^{3}/s$ [@Santos2014]. In the Magdalena-Cauca hydrographic area there is an important supply of surface water in the upper, middle and lower basins of the Magdalena river and in the Cauca river basin. Rivers such as the Suaza, Páez, Cabrera, Saldaña, Coello and Bogotá contribute important volumes of water to the upper basin of the Magdalena river, while the middle basin is fed by rivers such as the Gualí, Cimitarra, Lebrija, Chicamocha, Sogamoso and Suárez, among others. In the lower Magdalena, the Cauca and San Jorge rivers converge into the Magdalena river. The Cauca river basin is characterized by a variety of hydro-climatic systems, where the Tarazá, Nechí and Porce rivers provide high volumes of water. The water supply of this region equals 13.5$\%$ of the national total, with a water discharge of 8,595 $m^{3}/s$ [@Santos2014]. The Orinoquia region is represented by the upper basins of the Arauca and Casanare rivers, with most of its rivers originating in the foothills of the eastern cordillera. Large rivers flow through this region, such as the Arauca, Meta and Guaviare, which by their length and volume are navigable during most of the year [@Garcia2010]. The water supply is 26.3$\%$ of the national total, while the water discharge is 529,469 $m^{3}/s$ [@Santos2014].
The hydrographic area of the Amazon is composed of the basins of the Amazonas, Caquetá, Putumayo, Vaupés and Guainía rivers, all of them carrying large volumes of water. This region exhibits an extensive tropical forest and a variety of ecosystems that together create a high biodiversity. Its water supply is the highest in Colombia, representing 37$\%$ of the total, as is its water discharge, 23,626 $m^{3}/s$ [@Santos2014]. For its part, the Pacific region exhibits the highest rainfall and water yields in the country and is mainly made up of the basins of the Patía, San Juan, Micay, Baudó and Atrato rivers. These conditions make the water supply of this region equivalent to 14.1$\%$ of the national total, with a water discharge of 8,980 $m^{3}/s$ [@Santos2014]. Data ---- The objective of this research is to model and forecast the hydrological contributions in the Colombian electrical system. For this purpose, we take as reference variable the water discharge of the rivers linked to the Sistema Interconectado Nacional (SIN), the integrated system through which the electric sector in Colombia operates, from generation to transmission and distribution of electricity. The variable water discharge, measured in $m^{3}/s$, is obtained from the BI portal of XM, the company that operates and manages the Colombian electricity market. XM receives, collects and manages the information on the rivers that contribute water to the reservoirs of the Colombian electrical system, information provided by the companies that own or operate these reservoirs and that take part in the generation of hydroelectric energy. In Colombia there are about 25 reservoirs that are part of the electrical system, located in various places of the national geography and fed by different rivers.
According to this location, five hydrological regions are defined, namely Antioquia, Caribbean, Center, East and Valle, as shown in Figure \[fig:2\]. Antioquia, shown in Figure \[fig:2-2\], is one of the regions with the highest hydrographic representation in the Colombian electrical system, with important reservoirs such as Peñol, Playas and Miraflores, among others. The Caribbean region, represented in Figure \[fig:2-3\], given its geographic, climatic and environmental characteristics, has only the Urrá 1 reservoir, whose water resource comes mainly from the Sinú river. In the Central region (Figure \[fig:2-4\]), rivers such as Betania, Prado and Quimbo flow, among others, allowing the formation of reservoirs bearing the same names. The East hydrological region, shown in Figure \[fig:2-5\], is formed by three reservoirs, Esmeralda, Chuza and Guavio, while the Valle region (Figure \[fig:2-6\]) has the influx of the Calima, Cauca Salvajina, Alto Anchicayá and Digua rivers and the Calima, Salvajina and Alto Anchicayá reservoirs. Table \[tab:hidros\] specifies in greater detail the water resources in the Colombian SIN for each hydrological region, as well as the methodology and frequency of measurement of the water discharge of the rivers considered. Most operators or owners of the reservoirs use the water balance, or operating balance, to measure the water discharge of the rivers that supply these reservoirs, while some others use direct measurement, depending on the reservoir. The water balance is based on the principle of mass conservation: what enters the reservoir (water discharge, rainfall, water imports, etc.) minus what leaves the reservoir (power generation, spillage, water exports, etc.) equals the difference in storage in the reservoir; this difference in volume (expressed in cubic meters) is adjusted for time and other variables to obtain the water discharge.
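The water-balance bookkeeping just described can be sketched as follows. This is an illustrative fragment only, with our own function and variable names (the operators' actual adjustments are not specified in the text):

```python
# Hypothetical illustration of the water-balance method described above:
# the inflow discharge is recovered from the change in stored volume plus
# the net outflows over the measurement interval.
def water_balance_discharge(v_start_m3, v_end_m3, outflow_m3, rainfall_m3,
                            imports_m3=0.0, exports_m3=0.0, dt_s=86400.0):
    """Average inflow discharge (m^3/s) over an interval of dt_s seconds:
    storage change = inflows - outflows, solved for the river inflow."""
    storage_change = v_end_m3 - v_start_m3
    inflow_volume = (storage_change + outflow_m3 + exports_m3
                     - rainfall_m3 - imports_m3)
    return inflow_volume / dt_s

# a reservoir that gained 1.0 Mm^3 in a day while releasing 4.0 Mm^3
# for generation and receiving 0.2 Mm^3 of direct rainfall
q = water_balance_discharge(50e6, 51e6, 4.0e6, 0.2e6)
```

For this toy example the average inflow is $4.8\times 10^{6}\,m^3$ over $86400$ s, i.e. about 55.6 $m^{3}/s$.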
The direct method consists of measurement at the dam site: with a station measuring the level of the river at the tail of the reservoir, the inflow level and the level transited to the dam point are calculated, using regression techniques or an area factor. \[tab:hidros\] In this context, we take the water discharge information ($m^{3}/s$) that operators or owners of the reservoirs report daily to the company XM. We then aggregate all the data, obtaining the total water discharge for the SIN, and finally apply a first logarithmic transformation to the data; the resulting series will be the reference variable in the subsequent sections and will be called hydrological contributions. The chosen time window runs from February 5, 2004 to February 5, 2016, giving a total of 6192 observations with daily frequency. The choice of this period is due to the possible existence of periodicity in the data. Stochastic Modeling {#textural} =================== Periodicity ----------- Various natural, physical and financial phenomena are described by processes where periodicity appears implicitly. However, with the exception of experiments in controlled environments, the fluctuations of such phenomena are hardly ever exactly periodic; hence almost periodicity becomes important to describe the dynamics of these phenomena more precisely [@Bezandry2011; @Diagana2007]. In climatology, most phenomena have dynamics that present a regular pattern repeated over a fixed period, such as temperature across the seasons or levels of precipitation, and the Colombian case is no exception. In particular, the hydrological contributions appear to exhibit periodic dynamics, as shown in Figure \[fig:3\]. The graphical analysis indicates a fundamental period occurring every three years at a date close to February 4, as the continuous red lines denote.
In addition, each fundamental period presents sub-periods that are repeated approximately every year, as the dotted blue lines denote. This hypothesis is observed in greater detail by overlapping each of the fundamental periods, as shown in Figure \[fig:4\]. ![**Periodic Decomposition of Hydrological Contributions**](fig3){width="\textwidth"} \[fig:3\] ![**Overlapping Periods for Hydrological Contributions**](fig4){width="\textwidth"} \[fig:4\] The periodogram is a standard tool of spectral analysis for identifying periodicities: it estimates the spectral density function over a set of frequencies, an estimation based on the spectral representation theorem, by which the autocovariance function and the spectral density function are Fourier transforms of each other [@Madisetti1998]. In the case of the hydrological contributions there are no pronounced peaks that indicate possible periods. However, it may be misleading to look for peaks in a periodogram: since the ordinates at the Fourier frequencies are relatively independent, they are bound to fluctuate and show many small peaks and troughs [@Bloomfield2004]. For this reason, [@Fisher1929] proposed a test of significance of the largest peak in the periodogram through the g-statistic, expressed by [@Wichert2004] as $$g=\frac{\max_{k}I(\omega_{k})}{\sum_{k=1}^{[N/2]}I(\omega_{k})}$$ where $I(\omega_{k})$ denotes the periodogram and $N$ the sample size. Large values of $g$ reject the null hypothesis that the process is purely random. For the Colombian hydrological contributions we adopt the methodology proposed by [@Wichert2004], with 4 time series corresponding to the fundamental periods; for each one, Fisher's g-statistic and its respective p-value are calculated.
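The g-statistic above is simply the share of the largest periodogram ordinate in the total. A minimal sketch of its computation (not the authors' code; the p-value step of Fisher's exact null distribution is omitted):

```python
import numpy as np

def fisher_g(x):
    """Fisher's g-statistic: largest periodogram ordinate divided by the
    sum of the ordinates at the Fourier frequencies 0 < w_k <= pi."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # periodogram via the FFT, dropping the zero frequency
    spec = np.fft.rfft(x - x.mean())
    I = (np.abs(spec) ** 2 / n)[1:]     # I(w_1), ..., I(w_[N/2])
    return I.max() / I.sum()

rng = np.random.default_rng(0)
t = np.arange(365)
g_noise = fisher_g(rng.standard_normal(365))                 # small for white noise
g_cycle = fisher_g(np.cos(2 * np.pi * t / 365)
                   + 0.1 * rng.standard_normal(365))          # close to 1 for a strong cycle
```

A large `g_cycle` relative to `g_noise` is what drives the rejection of the purely-random null in Table \[tab:g-estadistico\].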
The results shown in Table \[tab:g-estadistico\] indicate that all four selected periods reject the null hypothesis of purely random dynamics; therefore such periods have a statistically significant periodic component.

| **Statistic** | **2004-2007** | **2007-2010** | **2010-2013** | **2013-2016** |
|:---|:---:|:---:|:---:|:---:|
| g-statistic | 0.806814 | 0.832377 | 0.771508 | 0.786318 |
| p-value | 0.000000 | 0.000000 | 0.000000 | 0.000000 |

\[tab:g-estadistico\] In this context, the hydrological contributions in Colombia exhibit periodic dynamics, with a fundamental period every 3 years: the first lies approximately between 5-Feb-2004 and 4-Feb-2007, the second between 5-Feb-2007 and 4-Feb-2010, the third between 5-Feb-2010 and 4-Feb-2013, and the fourth between 5-Feb-2013 and 4-Feb-2016. In addition, within each fundamental period there are sub-periods that repeat each year at a date close to February 25. Statistical and Distributional Properties ----------------------------------------- Hydrological contributions in Colombia have exhibited an average level close to 7.4; the period with the lowest average level was 2013-2016, while the period 2010-2013 had the highest, as shown in Table \[tab:descrip\]. In terms of dispersion, from the first period (2004-2007) to the third (2010-2013) there is an increase in variation, starting at 0.36 and ending at 0.44. For its part, the Augmented Dickey-Fuller test indicates that the data considered are not stationary and that it is therefore necessary to take a first difference. The first difference of the hydrological contributions is stationary in all periods, with mean around 0 and standard deviation at levels close to 0.18.
\[tab:descrip\] On the other hand, the analysis of the distributional properties of the difference indicates that for each study period the normality hypothesis is satisfied. To test this hypothesis, we selected intervals within each fundamental period in which increases or reductions of the water level in the reservoirs occurred. The choice of these intervals thus considers scenarios of drought and of abundant rains, which resulted in low and high peaks in the data. The results on the selected intervals indicate that the normality hypothesis is accepted by the Jarque-Bera test, as shown in Table \[tab:descrip\]. Figure \[fig:5\] shows the histogram for the interval of each period together with its theoretical normal distribution, confirming the above. To verify whether the hydrological contributions revert to the mean, we use the Variance Ratio (VR) test. In this test the null hypothesis is that the data follow a random walk, so that the variance of the $k$-period return is $k$ times the variance of the one-period return and, therefore, the expected value of VR($k$) must equal unity for all horizons $k$ [@Malliaropulos1999]. Data revert to the mean if VR($k$) is significantly lower than unity at long horizons $k$ [@Malliaropulos1999]. The Variance Ratio results for all periods, shown in Table \[tab:meanrevertion\], indicate that the hydrological contributions revert to the mean, since VR($k$) is significantly below unity at all the horizons considered.
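The VR($k$) ratio itself is straightforward to compute. A minimal sketch (not the authors' code; the heteroskedasticity-robust z-statistics reported in Table \[tab:meanrevertion\] require an additional standard-error formula that is omitted here):

```python
import numpy as np

def variance_ratio(x, k):
    """VR(k): variance of k-period differences over k times the variance
    of 1-period differences. Values well below 1 at long horizons
    suggest mean reversion; a random walk gives VR(k) near 1."""
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)            # 1-period differences
    dk = x[k:] - x[:-k]        # k-period differences
    return dk.var(ddof=1) / (k * d1.var(ddof=1))

rng = np.random.default_rng(1)
rw = np.cumsum(rng.standard_normal(3000))   # random walk: VR(k) near 1
ou = np.zeros(3000)                          # mean-reverting AR(1), phi = 0.5
for t in range(1, 3000):
    ou[t] = 0.5 * ou[t - 1] + rng.standard_normal()

vr_rw = [variance_ratio(rw, k) for k in (2, 4, 8, 16)]
vr_ou = [variance_ratio(ou, k) for k in (2, 4, 8, 16)]
```

The declining pattern of `vr_ou` across horizons mirrors the behaviour of the VR columns in Table \[tab:meanrevertion\].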
[ccccccccccc]{} \[4\][\*]{}[**Period**]{} & & \[4\][\*]{}[**Statistic**]{} & &\ & & & & **2** & & **4** & & **8** & & **16**\ & & & & & & & & & &\ \[0\][\*]{}[**2004-2007**]{} & & **VR** & & 0.8961\*\*\* & & 0.6541\*\*\* & & 0.4136\*\*\* & & 0.2720\*\*\*\ & & **z-Statistic** & & -3.4387 & & -6.1184 & & -6.5597 & & -5.4731\ & & & & & & & & & &\ \[0\][\*]{}[**2007-2010**]{} & & **VR** & & 0.8439\*\*\* & & 0.6476\*\*\* & & 0.4514\*\*\* & & 0.2991\*\*\*\ & & **z-Statistic** & & -5.1663 & & -6.2331 & & -6.1376 & & -5.2693\ & & & & & & & & & &\ \[0\][\*]{}[**2010-2013**]{} & & **VR** & & 0.8829\*\*\* & & 0.6711\*\*\* & & 0.4746\*\*\* & & 0.2966\*\*\*\ & & **z-Statistic** & & -3.8869 & & -5.7010 & & -5.6760 & & -5.0877\ & & & & & & & & & &\ \[1\][\*]{}[**2013-2016**]{} & & **VR** & & 0.9359\*\*\* & & 0.7396\*\*\* & & 0.5120\*\*\* & & 0.3308\*\*\*\ & & **z-Statistic** & & -1.9284 & & -4.2490 & & -5.1156 & & -4.7732\ \[tab:meanrevertion\] Estimation and Forecast {#estimation} ======================= Model and Method of Estimation ------------------------------ In order to ensure that daily data on the hydrological contributions of the Colombian electrical system can be modeled as a mean reversion process with periodic functional tendency, as defined in [@Monsalve2017], it is necessary to ensure periodicity, normality in the first difference, and reversion to the mean. As shown above, such contributions follow a periodic dynamic with a fundamental period of three years duration and sub-periods of annual frequency. With regard to normality, we find that in each period the normality hypothesis for the difference is satisfied through the Jarque-Bera test. On the other hand, the results of the Variance Ratio test indicate that the variable considered presents reversion to the mean for all the fundamental periods. 
Thus, the linear stochastic differential equation which characterizes the hydrological contributions is given by $$\label{ede} dH_{t}=\alpha (\mu(t)-H_{t})\,dt+\sigma H_{t}^{\gamma}dB_{t}$$ with initial condition $H_{0}=h$, where $\alpha>0$ is the rate of reversion, $\sigma>0$ is the parameter associated with volatility, $\gamma\geq 0$ is a constant that determines the sensitivity of the variance to the level of $H_{t}$, $\left \{B_{t}\right \}_{t\geq 0}$ is a one-dimensional standard Brownian motion defined on a probability space $(\Omega,\mathcal{F},\mathbb{P})$, and $\mu(t)$, the mean reversion level, is defined as a Fourier series of the form $$\label{mu} \mu(t)=\sum_{k=0}^{n}a_{k}\cos(2\pi tk+\phi_k) \hspace{0.4cm} \text{with} \hspace{0.15cm} n=0,1,2,\cdot\cdot\cdot$$ where $a_{k}$ is the amplitude parameter and $\phi_{k}$ the phase parameter. In order to estimate the parameters of equations (\[ede\]) and (\[mu\]), the methodology proposed by [@Monsalve2017] is used. Thus, $\mu(t)$ can be expressed in terms of the expected value of $H_{t}$, so that (\[ede\]) becomes $$\label{eseuler} dH_{t}=\alpha \left(m(t)+\frac{\stackrel{.}{m}(t)}{\alpha}-H_{t}\right)dt+\sigma H_{t}^{\gamma}dB_{t}$$ with $m(t)=E[H_{t}]$ and $\stackrel{.}{m}(t)=\frac{dm(t)}{dt}$.
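The model (\[ede\]) can be simulated directly with an Euler-Maruyama discretisation. A minimal sketch with purely illustrative parameter values (a single cosine for $\mu(t)$ and round numbers in the ranges reported later; not the estimated model):

```python
import numpy as np

def simulate_path(h0, alpha, sigma, gamma, mu, T=3.0, dt=1 / 365, rng=None):
    """Euler-Maruyama discretisation of
        dH_t = alpha * (mu(t) - H_t) dt + sigma * H_t**gamma dB_t,
    with mu(t) a (periodic) callable and time measured in years."""
    rng = rng or np.random.default_rng()
    n = round(T / dt)
    h = np.empty(n + 1)
    h[0] = h0
    for i in range(n):
        dB = np.sqrt(dt) * rng.standard_normal()
        drift = alpha * (mu(i * dt) - h[i]) * dt
        h[i + 1] = h[i] + drift + sigma * abs(h[i]) ** gamma * dB
    return h

# toy periodic mean level oscillating around 7.4 with annual frequency
mu = lambda t: 7.4 + 0.4 * np.cos(2 * np.pi * t)
path = simulate_path(h0=7.4, alpha=100.0, sigma=3.0, gamma=0.0, mu=mu,
                     rng=np.random.default_rng(2))
```

With a strong reversion rate the simulated path stays close to the periodic level $\mu(t)$, which is the qualitative behaviour exploited in the forecasting section.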
Using the Euler-Maruyama scheme in (\[eseuler\]) and defining a new variable $Y_{t}$, $$Y_{t}=\frac{H_{t}-H_{t-1}-[\alpha(m_{t-1}-H_{t-1})+\stackrel{.}{m}(t-1)]\Delta}{H_{t-1}^{\gamma}}=\epsilon_{t} \hspace{0.15cm} ; \hspace{0.4cm} \epsilon_{t} \sim N(0,\sigma^{2}\Delta)$$ the maximum likelihood function is given by $$L(\theta|\left\{Y_{t}\right\})=\left(\frac{1}{2\pi\sigma^{2}\Delta}\right)^{T/2}\cdot\exp\left[\frac{-1}{2\sigma^{2}\Delta}\sum_{i=1}^{T}\left(\frac{H_{i}-H_{i-1}-[\alpha(m_{i-1}-H_{i-1})+\stackrel{.}{m}(i-1)]\Delta}{H_{i-1}^{\gamma}}\right)^{2}\right]$$ Solving the maximization problem for this function, we obtain the estimates of $\alpha$ and $\sigma$: $$\begin{split} \hat{\alpha}&=\frac{\sum_{i=1}^{T}\left((H_{i}-H_{i-1}-\stackrel{.}{m}(i-1)\Delta)(m_{i-1}-H_{i-1})/H_{i-1}^{2\gamma}\right)}{\sum_{i=1}^{T}\left[(m_{i-1}-H_{i-1})/H_{i-1}^{\gamma}\right]^{2}\Delta}\\ \\ \hat{\sigma}&=\sqrt{\frac{1}{T\Delta}\sum_{i=1}^{T}\left(\frac{H_{i}-H_{i-1}-[\hat{\alpha}(m_{i-1}-H_{i-1})+\stackrel{.}{m}(i-1)]\Delta}{H_{i-1}^{\gamma}}\right)^{2}}\end{split}$$ Once we have $\hat{\alpha}$, $\hat{\sigma}$ and $\hat{\mu}(t)$, we obtain a better estimate of $\mu(t)$ through Fourier analysis: $$\label{representation} \hat{\hat{\mu}}_{n}=\sum_{k=0}^{L}a_{k}\cos\left[\frac{2\pi kn}{N}+\phi_{k}\right]$$ where $L=\frac{N}{2}$ for $N$ even, $L=\frac{N-1}{2}$ for $N$ odd, and $$\begin{split} a_{k}&=\left|\hat{M}_{k}\right|=\sqrt{R(\hat{M}_{k})^{2}+I(\hat{M}_{k})^{2}}\\ \phi_{k}&=\arg(\hat{M}_{k})=\tan^{-1}\left(\frac{I(\hat{M}_{k})}{R(\hat{M}_{k})}\right) \end{split}$$ with $\hat{M}$ a vector of complex numbers obtained through the Discrete Fourier Transform (DFT) of the signal $\hat{\mu}(t)$, $R(\hat{M})$ the real part of the rectangular representation of $\hat{M}$ and $I(\hat{M})$ the corresponding imaginary part.
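The closed-form estimators $\hat{\alpha}$ and $\hat{\sigma}$ above translate line by line into code. A minimal sketch (our variable names), with a sanity check on synthetic data generated with a constant mean level, so that $\stackrel{.}{m}=0$:

```python
import numpy as np

def estimate_alpha_sigma(H, m, mdot, dt, gamma=0.0):
    """Closed-form ML estimates of alpha and sigma from the
    Euler-discretised model; m and mdot are the estimated mean
    function and its time derivative evaluated on the sample grid."""
    H, m, mdot = (np.asarray(a, dtype=float) for a in (H, m, mdot))
    dH = H[1:] - H[:-1]
    dev = m[:-1] - H[:-1]            # m_{i-1} - H_{i-1}
    w = H[:-1] ** gamma
    alpha = (np.sum((dH - mdot[:-1] * dt) * dev / w**2)
             / (np.sum((dev / w) ** 2) * dt))
    resid = (dH - (alpha * dev + mdot[:-1]) * dt) / w
    sigma = np.sqrt(np.sum(resid**2) / (len(dH) * dt))
    return alpha, sigma

# recover illustrative parameters from a simulated constant-mean path
rng = np.random.default_rng(5)
dt, n, alpha_true, sigma_true = 1 / 365, 20000, 50.0, 2.0
H = np.empty(n + 1)
H[0] = 7.0
for i in range(n):
    H[i + 1] = (H[i] + alpha_true * (7.0 - H[i]) * dt
                + sigma_true * np.sqrt(dt) * rng.standard_normal())
alpha_hat, sigma_hat = estimate_alpha_sigma(H, np.full(n + 1, 7.0),
                                            np.zeros(n + 1), dt)
```

On such synthetic data the estimates land near the true values, with $\hat{\sigma}$ considerably more precise than $\hat{\alpha}$, as is typical for mean-reversion rates.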
We then re-estimate $\alpha$ and $\sigma$ with a mechanism similar to that used to find $\hat{\alpha}$ and $\hat{\sigma}$: $$\begin{split} \hat{\hat{\alpha}}&=\frac{\sum_{i=1}^{T}\left((H_{i}-H_{i-1})(\hat{\hat{\mu}}_{i-1}-H_{i-1})/H_{i-1}^{2\gamma}\right)}{\sum_{i=1}^{T}\left[(\hat{\hat{\mu}}_{i-1}-H_{i-1})/H_{i-1}^{\gamma}\right]^{2}\Delta}\\ \\ \hat{\hat{\sigma}}&=\sqrt{\frac{1}{T\Delta}\sum_{i=1}^{T}\left(\frac{H_{i}-H_{i-1}-\hat{\hat{\alpha}}(\hat{\hat{\mu}}_{i-1}-H_{i-1})\Delta}{H_{i-1}^{\gamma}}\right)^{2}} \end{split}$$ Estimation and Forecast Results ------------------------------- For each of the fundamental periods of the hydrological contributions in Colombia, the estimation process defined in the previous section is carried out, yielding estimates of both the external parameters $\Theta=[\alpha,\sigma]$ and the internal parameters $\Phi=[a_{k},\phi_{k}]$ that characterize the dynamics of this variable in the selected periods. Because of the periodic dynamics of the sample, one period will be selected in order to execute a forecast one period ahead and contrast it with real data. The first step in the estimation process is to find the estimates of $m(t)$ and $\stackrel{.}{m}(t)$ from the discrete data of the hydrological contributions. Thus, $\hat{m}(t)$, which can be obtained through different filters and smoothing techniques, is approximated here by the Hodrick-Prescott filter with smoothing parameter $\lambda=40000$, whereas $\hat{\stackrel{.}{m}}(t)$ is obtained through a numerical three-point derivative rule. The smoothing parameter is chosen so that the short-term information is correctly captured while maintaining the trend adjustment.
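The two preprocessing steps, a Hodrick-Prescott trend and a three-point derivative, can be sketched as follows. The HP trend solves the penalized least-squares problem $(I+\lambda D^{\top}D)m=x$ with $D$ the second-difference operator; the central three-point rule is what `np.gradient` implements. This is a generic illustration, not the authors' implementation:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def hp_trend(x, lamb=40000):
    """Hodrick-Prescott trend: minimise sum (x - m)^2 + lamb * sum (D^2 m)^2
    by solving the sparse linear system (I + lamb * D'D) m = x."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # (n-2) x n second-difference operator: rows (1, -2, 1)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = sparse.eye(n) + lamb * (D.T @ D)
    return spsolve(A.tocsc(), x)

def three_point_derivative(m, dt):
    """Central three-point rule (one-sided at the ends) for dm/dt."""
    return np.gradient(m, dt)

# the HP filter reproduces a linear trend exactly (zero penalty)
x = 2.0 * np.arange(300)
m = hp_trend(x)
md = three_point_derivative(m, 1.0)
```

For a linear input the trend coincides with the data and the derivative is the slope, a convenient check that both steps are wired correctly.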
Once we have $\hat{m}(t)$ and $\hat{\stackrel{.}{m}}(t)$, we construct a realization of $Y_{t}$ that yields the first estimates of $\alpha$, $\sigma$ and $\mu(t)$, and then, through Fourier analysis, $\hat{\hat{\alpha}}$, $\hat{\hat{\sigma}}$ and $\hat{\hat{\mu}}(t)$. In the second estimation phase it is necessary to consider an adequate sinusoidal sum; therefore, following the methodology proposed by [@Monsalve2017], we choose a number of cosines in the Fourier series that captures sufficient information. Table \[tab:sumcos\] shows the RMS difference between $\hat{\hat{\mu}}(t)$ calculated with L sinusoidal terms in the DFT and $\hat{\hat{\mu}}(t)$ calculated with L-1 terms; the results show that as the number of cosines increases, the RMS tends to decrease. A total of 26 cosines is sufficient for the period between 2004 and 2007, 24 for the period 2007-2010, 21 for 2010-2013 and 20 for 2013-2016.

| **L-sum** | **2004-2007** | **2007-2010** | **2010-2013** | **2013-2016** |
|:---:|:---:|:---:|:---:|:---:|
| 1 | 0.038972111 | 0.041191016 | 0.039792972 | 0.041321361 |
| 2 | 0.022651407 | 0.029692319 | 0.035853134 | 0.010057239 |
| 3 | 0.005210850 | 0.024119046 | 0.018588931 | 0.007456015 |
| 4 | 0.004791647 | 0.018258372 | 0.014054704 | 0.003561170 |
| 5 | 0.004053573 | 0.007316677 | 0.008052817 | 0.003360246 |
| $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ |
| 15 | 0.000045839 | 0.000090209 | 0.000092941 | 0.000262686 |
| 16 | 0.000026358 | 0.000056475 | 0.000072419 | 0.000256432 |
| 17 | 0.000025099 | 0.000052266 | 0.000039861 | 0.000191860 |
| 18 | 0.000022480 | 0.000041556 | 0.000024837 | 0.000160290 |
| 19 | 0.000018664 | 0.000024320 | 0.000018994 | 0.000144984 |
| **20** | 0.000017204 | 0.000023895 | 0.000018010 | **0.000106069** |
| **21** | 0.000016727 | 0.000021549 | **0.000017071** | 0.000071689 |
| 22 | 0.000015768 | 0.000012234 | 0.000006973 | 0.000066206 |
| 23 | 0.000014560 | 0.000010402 | 0.000006144 | 0.000058468 |
| **24** | 0.000013902 | **0.000010054** | 0.000005167 | 0.000057682 |
| 25 | 0.000012391 | 0.000009897 | 0.000004493 | 0.000052665 |
| **26** | **0.000011072** | 0.000009710 | 0.000003686 | 0.000047684 |
| 27 | 0.000009292 | 0.000009521 | 0.000003607 | 0.000047100 |
| 28 | 0.000008372 | 0.000009348 | 0.000003416 | 0.000046930 |
| 29 | 0.000007855 | 0.000008850 | 0.000003323 | 0.000045595 |
| 30 | 0.000007559 | 0.000008713 | 0.000002763 | 0.000044605 |

\[tab:sumcos\] Table \[tab:paramex\] shows the results of the estimation of the external parameters $\alpha$ and $\sigma$ in its two phases, estimation and re-estimation; these calculations were done with the proposed methodology for $\Delta t=\frac{1}{365}$, since the frequency of the data is daily. The reversion rate $\alpha$ takes values between 90 and 125 for the four periods, being highest in the period between 2004 and 2007 and lowest between 2013 and 2016; in this case the re-estimation phase gives lower values. For its part, the volatility parameter $\sigma$ takes values near 3 for the four periods, with the fourth and third periods having the highest values, respectively. \[tab:paramex\] Fourier analysis is used for the estimation of $\Phi$ for the four fundamental periods. Table \[tab:paramint\] shows the results for the index $k$, the amplitude parameter $a_{k}$ and the phase angle $\phi_{k}$, where the number of cosines in the sinusoidal sum is chosen according to the RMS criterion defined above in Table \[tab:sumcos\]. \[tab:paramint\] For the forecasting process, the period from 2007 to 2010 is chosen, given the similarity between the re-estimated reversion rate ($\hat{\hat{\alpha}}$) for this period and the next.
Thus, the parameters estimated for this period are used to simulate paths in the posterior period, which amounts to a three-year forecast of hydrological contributions in Colombia. As a first step we simulate 10000 paths from the internal and external parameters found; these paths were computed with a variance reduction technique, such that 5000 trajectories were simulated with positive random values while the remaining 5000 trajectories were simulated with the corresponding negative random values. Then, for each point in time, we take the minimum and maximum of the simulated paths to construct lower and upper limits, and overlay the real dynamics of hydrological contributions for the period 2010-2013, as shown in Figure \[fig:6\]. The results indicate that the proposed forecast band fits the real data efficiently, since most observations are contained in the bands, while the dispersion of the bands with respect to the actual data is not excessive. ![**Forecast Area with Minimums and Maximums**](fig6){width="\textwidth"} \[fig:6\] To guarantee efficiency in the forecasting process, we construct bands from the best estimate of $\mu$, ($\hat{\hat{\mu}}(t)$), by adding or subtracting different levels of the historical standard deviation of the process, beginning at 0.5 and ending at 2.6. Thus, the upper bands were calculated as ($\hat{\hat{\mu}}(t)+i\sigma_{H_{t}}$) with $i=0.5,\cdots,2.6$ while the lower bands were calculated as ($\hat{\hat{\mu}}(t)-i\sigma_{H_{t}}$) with $i=0.5,\cdots,2.6$. Once we have the bands at different levels of standard deviation, we take the observations of the 10000 trajectories defined with the information of the period 2007-2010 and check, point by point, how many observations are contained within the bands and how many fall outside.
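The path-simulation step described above — pairing each random draw with its negative, commonly known as antithetic variates — can be sketched for a one-factor mean-reverting model $dX_t=\alpha(\mu(t)-X_t)\,dt+\sigma\,dW_t$. The Euler discretization and all parameter values below are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def simulate_paths(x0, alpha, sigma, mu, n_steps, n_pairs, dt=1 / 365, seed=1):
    """Euler paths of dX = alpha*(mu(t) - X) dt + sigma dW in antithetic pairs:
    each draw Z drives one path and -Z drives its mirror, as in the text."""
    rng = np.random.default_rng(seed)
    paths = np.empty((2 * n_pairs, n_steps + 1))
    paths[:, 0] = x0
    for k in range(n_steps):
        z = rng.standard_normal(n_pairs)
        shock = sigma * np.sqrt(dt) * np.concatenate([z, -z])  # antithetic pairing
        drift = alpha * (mu(k * dt) - paths[:, k]) * dt
        paths[:, k + 1] = paths[:, k] + drift + shock
    return paths

mu = lambda t: 1.0 + 0.1 * np.cos(2 * np.pi * t)       # toy periodic trend
paths = simulate_paths(x0=1.0, alpha=100.0, sigma=3.0, mu=mu,
                       n_steps=365, n_pairs=500)
lower, upper = paths.min(axis=0), paths.max(axis=0)    # pointwise min/max band
```

Averaging a path with its mirror cancels the noise exactly at each step, which is what reduces the variance of the band estimates.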
We then calculate the probability of each trajectory being within the bands and obtain the average of such probabilities at different levels of standard deviation, as reported in column 2 of Table \[tab:probab\]. Column 3 corresponds to the probability that the real data for the period 2010-2013 are contained in the previously defined bands, while column 4 is the difference between column 3 and column 2.
[ccccccc]{} **Standard D.** & & **Forecast** & & **2010-2013** & & **Difference**\
0.5 & & 67.80% & & 47.63% & & -20.17%\
0.6 & & 76.54% & & 55.20% & & -21.34%\
0.7 & & 83.44% & & 62.86% & & -20.57%\
0.8 & & 88.69% & & 70.26% & & -18.43%\
0.9 & & 92.53% & & 76.19% & & -16.34%\
1.0 & & 95.24% & & 81.93% & & -13.30%\
1.1 & & 97.07% & & 85.22% & & -11.85%\
1.2 & & 98.25% & & 88.87% & & -9.38%\
1.3 & & 98.99% & & 90.97% & & -8.03%\
1.4 & & 99.44% & & 93.25% & & -6.19%\
1.5 & & 99.70% & & 94.89% & & -4.81%\
1.6 & & 99.85% & & 95.62% & & -4.23%\
1.7 & & 99.93% & & 96.72% & & -3.21%\
1.8 & & 99.96% & & 97.72% & & -2.25%\
1.9 & & 99.98% & & 97.99% & & -1.99%\
2.0 & & 99.99% & & 98.63% & & -1.36%\
2.1 & & 100.00% & & 98.72% & & -1.27%\
2.2 & & 100.00% & & 99.09% & & -0.91%\
2.3 & & 100.00% & & 99.54% & & -0.46%\
2.4 & & 100.00% & & 99.64% & & -0.36%\
2.5 & & 100.00% & & 99.91% & & -0.09%\
2.6 & & 100.00% & & 100.00% & & 0.00%\
\[tab:probab\]
The results indicate, as expected, that with greater standard deviations the probability that the real data path and the simulated paths fall within the forecast bands is greater. For the 10000 paths simulated with information from the 2007-2010 period, all data are contained in the confidence band built with 2.1 standard deviations. The real data for the period 2010-2013 are contained entirely in the confidence band built with 2.6 standard deviations.
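The coverage computation behind columns 2-3 of Table \[tab:probab\] — a point-by-point check of how many observations fall inside each $\hat{\hat{\mu}}(t)\pm i\sigma_{H_t}$ band — can be sketched as below. The function names and the toy data (standard normal paths around a flat trend) are assumptions for illustration.

```python
import numpy as np

def band_coverage(paths, mu_hat, sigma_hist, levels):
    """For each band mu_hat +/- i*sigma_hist, the average per-path probability
    of lying inside the band (the logic of column 2 of the table)."""
    out = {}
    for i in levels:
        lo, hi = mu_hat - i * sigma_hist, mu_hat + i * sigma_hist
        inside = (paths >= lo) & (paths <= hi)   # shape (n_paths, n_times)
        out[i] = inside.mean(axis=1).mean()      # per-path prob, then average
    return out

rng = np.random.default_rng(2)
mu_hat = np.zeros(300)                           # toy flat trend
sigma_hist = 1.0
paths = rng.standard_normal((1000, 300))         # toy simulated trajectories
cov = band_coverage(paths, mu_hat, sigma_hist, [0.5, 1.0, 2.0, 2.6])
```

For Gaussian paths the coverage approaches the usual normal-band probabilities (about 68% at one standard deviation, 95% at two), which is the qualitative pattern of column 2.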
Note that from 2.1 standard deviations onward the difference between the probability that the real data and the simulated data are contained in the confidence band is small (-0.91$\%$); therefore the forecast method can be considered efficient even when standard deviation levels below 2.1 are used. Graphically, these results are shown in Figure \[fig:7\]. In this figure, forecast bands are plotted at different levels of the historical standard deviation of the process, calculated as $\hat{\hat{\mu}}(t)\pm i\sigma_{H_{t}}$ with $i=0.5,\cdots,3$, and contrasted with the real data. At higher standard deviations the forecast bands capture most of the observations of the real data, particularly from 2 standard deviations onward. ![**Bands of Forecast to Different Levels of Standard Deviation**](fig7){width="\textwidth"} \[fig:7\] Conclusions and Comments {#conclusions} ======================== The generation of electric energy in Colombia is mainly based on hydraulic generation due to the hydrological potential of the country. Most of the hydroelectric plants are located in the Andean, Caribbean and Pacific regions and make use of surface water flows from those regions. However, this hydrological potential is influenced by atmospheric conditions, inter-annual events such as ENSO, and environmental change by anthropogenic action, among others, causing the water level in the reservoirs, and therefore the prices of electric energy, to vary substantially. Under this scenario, modeling and forecasting the variables that affect the water resource is of great importance for agents involved in the Colombian electricity sector, particularly for generators, since it allows them to efficiently manage the electric energy generation process. One of the most relevant hydrological variables corresponds to the water discharge of the rivers that contribute water to the reservoirs of the SIN.
This variable, measured in $m^{3}/s$, is a proxy for the water supply of each reservoir and of the system in aggregate terms. These water discharges, or hydrological contributions, in their logarithmic transformation for the period between 2004 and 2016, exhibit periodic dynamics at daily frequency with a fundamental period that repeats every 3 years and sub-periods that repeat each year. Exploiting this periodicity, we employ maximum likelihood estimation and Fourier analysis through the Discrete Fourier Transform (DFT) to estimate the parameters of a one-factor mean-reverting stochastic process whose functional trend follows a periodic behavior, and subsequently forecast a specific period (2010-2013) with the information of the period immediately preceding it. In this sense, the proposed estimation method is efficient and useful for characterizing the dynamics of hydrological contributions in the Colombian SIN. The forecast results are close to the real data; in particular, the forecast bands constructed with the periodic functional trend and approximately 2 standard deviations contain the majority of the observations of the real data. The proposed method allows the agents of the Mercado de Energía Mayorista (MEM) to analyze the dynamics of the water resource at daily frequency and to make better decisions in the management of their assets. In addition, the forecast provides a time window of approximately 3 years, an advantage in contrast to traditional forecasting methods. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the company XM for giving us all the available information regarding the water discharge of the rivers belonging to the SIN.
Empresas Públicas de Medellín (EPM), EMGESA, Empresa de Energía del Pacífico (EPSA), Empresa URRÁ and ISAGEN provided information on the water discharge for the rivers that supply water to the reservoirs for which they are owners or operators.
--- author: - Chenchao Xu - Ninghua Wu - 'Guo-Xiang Zhi' - 'Bing-Hua Lei' - Xu Duan - Fanlong Ning - Chao Cao - Qijin Chen bibliography: - '233.bib' title: 'Supplementary Material : Coexistence of nontrivial topological properties and strong ferromagnetic fluctuations in $A_2$Cr$_3$As$_3$ ($A$=Na, K, Rb and Cs)' --- Triply degenerate points in $A_2$Cr$_3$As$_{3}$ ----------------------------------------------- The electronic structures of $A_{2}$Cr$_{3}$As$_{3}$ along $k_{z}$ are illustrated in Fig. \[fig:bs\_all\_kz\] (a-d). All compounds of this family host TPs along this high-symmetry line. For comparison, Fig. \[fig:bs\_all\_kz\] (e-f) shows the mBJ results for Na$_{2}$Cr$_{3}$As$_{3}$ and K$_{2}$Cr$_{3}$As$_{3}$. As mentioned in the main text, the mBJ band structure of Na$_{2}$Cr$_{3}$As$_{3}$ is similar to the PBE result; the only difference is that TP$_{1}$ (TP$_{2}$) lies 8 meV above $\epsilon_F$ in Fig. \[fig:bs\_all\_kz\] (a) but 47 meV above $\epsilon_F$ in Fig. \[fig:bs\_all\_kz\] (e). In K$_{2}$Cr$_{3}$As$_{3}$, the $\gamma$ band (PBE) is elevated within mBJ calculations. Consequently, two new TPs (TP$_{1}$ and TP$_{2}$) are created, while the two original TPs of the PBE calculations still exist but lie below $\epsilon_F$ (i.e., TP$_{1}$ in Fig. \[fig:bs\_all\_kz\] (b) corresponds to TP$_{3}$ in Fig. \[fig:bs\_all\_kz\] (f)). In PBE calculations, overwhelming bulk states surround the TPs in K$_{2}$Cr$_{3}$As$_{3}$, Rb$_{2}$Cr$_{3}$As$_{3}$ and Cs$_{2}$Cr$_{3}$As$_{3}$, leaving the corresponding surface states submerged in the bulk continuum (Fig. \[fig:Ss\_PBE\] (a-c)). In addition, the TPs in Rb$_{2}$Cr$_{3}$As$_{3}$ lie too close to each other, and in Cs$_{2}$Cr$_{3}$As$_{3}$ the TPs lie deep below $\epsilon_F$ in PBE calculations; thus we do not perform mBJ calculations for these compounds. In Fig.
\[fig:Ss\_PBE\] (d), we show the (010) iso-energy surface state plot at $\epsilon_F+0.15$ eV, where the 3D FS disappears within mBJ calculations; this matches well with previous ARPES measurements[@ARPES_K233]. Besides, the iso-energy surface state at $\epsilon_F-0.07$ eV (not shown) is similar, containing only two 1D FSs. K$_{2}$Cr$_{3}$As$_{3}$ is known to be very air sensitive, and thus a slight off-stoichiometry may lead to the missing 3D FS. ![(a-d) The electronic structure of $A_2$Cr$_3$As$_3$ ($A$=Na,K,Rb and Cs) along $k_{z}$ within PBE calculations. (e-f) The electronic structures of Na$_2$Cr$_3$As$_3$ and K$_2$Cr$_3$As$_3$ within mBJ calculations.\[fig:bs\_all\_kz\]](bs_all_kz){width="12"} ![(a-c) The (010) surface state of $A_2$Cr$_3$As$_3$ ($A$=K,Rb and Cs) along $k_{z}$ with triplet points marked as blue dots within PBE calculations. (d) The iso-energy surface state at $\epsilon_F+0.15$ eV of K$_2$Cr$_3$As$_3$ within mBJ calculations \[fig:Ss\_PBE\]](APPENDIX_Ss){width="12"} Multiorbital RPA Calculation ---------------------------- We first obtain the bare electronic susceptibility $\chi_{0}$ with the Lindhard formula: $$\nonumber \chi_{0}=-\frac{1}{N_{\mathbf{k}}}\sum_{st}\sum_{\mu\nu\mathbf{k}} \frac{\langle s|\mu\mathbf{k}\rangle \langle \mu\mathbf{k}|t\rangle \langle t|\nu\mathbf{k+q}\rangle \langle \nu\mathbf{k+q}|s\rangle }{\omega+\varepsilon_{\nu\mathbf{k+q}}-\varepsilon_{\mu\mathbf{k}}+i0^{+}}(f(\varepsilon_{\nu\mathbf{k+q}})-f(\varepsilon_{\mu\mathbf{k}})),$$ where $s$, $t$ are orbital indexes and $\mu$, $\nu$ are band indexes. Since the calculation is performed in the paramagnetic state, the spin index of the above formula is omitted.
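A minimal numerical illustration of the Lindhard formula above, reduced to a single 1D tight-binding band with unit matrix elements (the orbital/band matrix structure of the multiorbital case is dropped for clarity; everything here is a toy assumption, not the paper's calculation):

```python
import numpy as np

def lindhard_chi0(q, beta=100.0, nk=2000, eta=1e-3):
    """Static (omega = 0) Lindhard susceptibility for a 1D tight-binding band
    eps(k) = -2 cos(k) at half filling (mu = 0), with the i0+ regularized by eta."""
    k = 2.0 * np.pi * np.arange(nk) / nk
    ek, ekq = -2.0 * np.cos(k), -2.0 * np.cos(k + q)
    fermi = lambda e: 0.5 * (1.0 - np.tanh(0.5 * beta * e))  # overflow-safe form
    num = fermi(ekq) - fermi(ek)
    den = ekq - ek + 1j * eta
    return float(-(num / den).sum().real / nk)

chi_nested = lindhard_chi0(np.pi)   # q = 2k_F: nesting-enhanced response
chi_generic = lindhard_chi0(0.3)    # generic q: ordinary metallic response
```

The half-filled 1D band is perfectly nested at $q=2k_F=\pi$, so the static susceptibility is strongly enhanced there, the same mechanism by which peaks in $\chi_0$ seed the RPA spin fluctuations discussed in the text.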
We then consider the Hubbard-type Hamiltonian [@PhysRevB.75.224509; @PhysRevB.69.104504]: $$H=\sum\limits_{\mathbf{k}m\sigma}\varepsilon_{km\sigma}c^{\dagger}_{km\sigma}c_{km\sigma} +H_{int}, \nonumber$$ where $H_{int}$ is the interaction part, $$\begin{aligned} H_{int}=U\sum_{is}n_{is\sigma}n_{is\overline{\sigma}} +U'\sum_{i,s,t \neq s}\sum_{\sigma,\sigma'} c^{\dagger}_{is\sigma}c^{\dagger}_{it\sigma'}c_{is\sigma'}c_{it\sigma} \nonumber \\ +J\sum_{i,s,t \neq s}\sum_{\sigma,\sigma'} c^{\dagger}_{is\sigma}c^{\dagger}_{it\sigma'}c_{is\sigma'}c_{it\sigma} +J'\sum_{i,s,t \neq s}c^{\dagger}_{is\sigma}c^{\dagger}_{is\overline{\sigma}} c_{it \overline{\sigma}}c_{it\sigma'} \nonumber\end{aligned}$$ As mentioned in the main text, $U$, $U'$, $J$ and $J'$ denote the intra-orbital Coulomb, inter-orbital Coulomb, Hund’s coupling, and pair hopping interaction, respectively. With the above multiorbital Hamiltonian, the charge and spin susceptibilities are obtained after the summation of bubble diagrams, $$\nonumber [\chi_{c}]_{pq;st}=\frac{[\chi_{0}]_{pq;st}}{I_{wz;pq}+[\chi_{0}]_{wz;uv}[U^{c}]_{uv;pq}}$$ and $$\nonumber [\chi_{s}]_{pq;st}=\frac{[\chi_{0}]_{pq;st}}{I_{wz;pq}-[\chi_{0}]_{wz;uv}[U^{s}]_{uv;pq}},$$ where $\chi_{0}$ is the bare electronic susceptibility and $U^{c}$ ($U^{s}$) is the interaction matrix of the charge (spin) channel. The nonzero elements of $U^{c}$ ($U^{s}$)[@Graser_2009; @Graser_2010] are: $$\begin{aligned} [U^{c}]_{ss;ss}=U, [U^{c}]_{ss;tt}=2U'-J, [U^{c}]_{st;st}=2J'-U',[U^{c}]_{st;ts}=J', \nonumber \end{aligned}$$ $$\begin{aligned} [U^{s}]_{ss;ss}=U, \quad [U^{s}]_{ss;tt}=J,\qquad [U^{s}]_{st;st}=U',\qquad [U^{s}]_{st;ts}=J' \nonumber\end{aligned}$$ Note that, different from Wu $et$ $al.$[@Hu_K233], whose Hamiltonian is based on delocalized molecule orbitals with much weaker on-site interactions, our Hamiltonian is constructed with maximally projected atomic Wannier functions, with $U$, $U'$, $J$, and $J'$ placed directly on the Cr-$3d$ orbitals.
The final charge and spin susceptibilities are obtained through: $$\nonumber \chi_{c(s)}(\mathbf{q},\omega)=\frac{1}{2}\sum_{st}[\chi_{c(s)}]_{ss,tt}(\mathbf{q},\omega)$$
--- abstract: 'We propose a method using solid state detectors with directional sensitivity to dark matter interactions to detect low-mass Weakly Interacting Massive Particles (WIMPs) originating from galactic sources.  In spite of a large body of literature for high-mass WIMP detectors with directional sensitivity, no available technique exists to cover WIMPs in the mass range $<$1 . We argue that single-electron resolution semiconductor detectors allow for directional sensitivity once properly calibrated. We examine commonly used semiconductor material response to these low-mass WIMP interactions.' author: - 'Fedja Kadribasic, Nader Mirabolfathi' - 'Kai Nordlund, Andrea E. Sand, E. Holmstr[ö]{}m, Flyura Djurabekova' bibliography: - 'Diurnal\_DM\_v02\_FK\_No\_Red\_2018\_03\_19.bib' title: 'Directional Sensitivity In Light-Mass Dark Matter Searches With Single-Electron Resolution Ionization Detectors' --- Many astrophysical observations indicate that standard model particles compose only 15% of the matter in the universe [@Ade:2013zuv]. Understanding the nature of dark matter, the remaining 85%, is of fundamental importance to cosmology, astrophysics, and high energy particle physics. Although Weakly Interacting Massive Particles (WIMPs) of mass 10-100  have been the main interest of the majority of direct dark matter detection experiments, recent signal claims, compelling theoretical models, and the lack of a convincing signal at those masses have shifted the old paradigm to include broader regions in the dark matter parameter space well below 10 [@CF1]. Direct detection experiments attempt to detect WIMPs via their elastic interaction with detector nuclei [@Gaitskell:2004gd]. Since very low energy nuclear recoils and small interaction rates from these low-mass WIMPs are expected, large-mass detectors with very low threshold are desirable.  
Solid state detectors, especially those utilizing phonon-mediated readout technology, have already reached the sensitivities required to detect these very-low-mass WIMPs or are poised to do so [@Agnese:2013lua]. The sensitivity of WIMP direct search experiments is affected both by reducible (environmental) and by irreducible (solar neutrino) backgrounds that may mimic WIMPs. A potential tool to circumvent these backgrounds is the directionality of the WIMP signal due to Earth’s motion through the isothermal halo distribution of WIMPs in our galaxy. The WIMP velocity distribution in the lab frame, and hence the expected direction of the WIMP-induced recoils, varies daily depending on the angular orientation of the detectors with respect to the galactic WIMP flux. Although many experiments propose to track WIMP-induced recoils using low-pressure gas or even liquid scintillators, they do not offer low enough energy thresholds to detect recoils from low-mass WIMP interactions ($<$1 ) [@DirectionalReview]. Furthermore, low-pressure-gas detectors require prohibitively large volumes to detect any WIMP signal. We argue that single-electron resolution phonon-mediated semiconductor detectors, such as those in development for SuperCDMS and future generation-3 dark matter experiments, are sensitive to the nuclear recoil direction and can be used for a directional dark matter search. Our method relies on the fundamental processes involved in nuclear-recoil-induced ionization excitation, whose threshold exhibits a strong recoil direction dependence. Recent progress on phonon-mediated detectors, especially Neganov-Luke phonon amplification detectors [@Luke_CDMSlite], promises future large-mass semiconductor detectors with single-electron resolution [@Contactfree]. Neither experimental data nor an established computational framework exists to estimate the minimum energy required to create single electron-hole pair excitations via nuclear recoil interactions.
Based on two recent observations, we assume that this so-called ionization threshold correlates with crystallographic orientation in the direction of the nuclear recoil. Firstly, strong experimental and theoretical evidence indicates that the ionization threshold, often referred to as electronic stopping, displays a nonlinear dependence on projectile velocity at low projectile energies due to electronic band structure effects [@Val03; @Mar09; @Pri12]. Secondly, recent time-dependent density functional theory (TDDFT) calculations demonstrate the appearance of an intermediate band gap state for self-recoils in silicon (Si) that arises when the projectile occupies an interstitial position, which serves to modulate the sharp ionization threshold in insulators [@Lim16]. This intermediate “electron elevator” [@Lim16] state enables excitations across the band gap even when the energy transfer in ion-electron collisions remains below the level needed for a direct transition from the valence to conduction band [@Hor16]. Electronic excitation is thus observed for projectiles with velocities as low as 0.1 Å/fs [@Lim16] corresponding to an ionization threshold below 15 eV for an Si projectile. Because this defect state exists due to the interstitial atom configuration, and the energy level oscillates as a function of the position of the interstitial, the effective ionization threshold should depend on the recoil angle. This energy is comparable to the directionally sensitive threshold displacement energy (TDE), *i.e.* the minimum energy required to eject the recoiling nucleus permanently to a crystal defect position. Hence the recoil trajectory, and the probability of an atom reaching an interstitial position to facilitate electron-hole pair excitation, should depend on the recoil angle. We model the variations in the energy landscape experienced by low-energy recoils via the TDE. We consider the threshold variation for two common detector materials, Ge and Si.  
For both, density-functional theory (DFT) molecular dynamics (MD) simulations have previously obtained the average threshold displacement energy and the direction-specific values in the $\langle100\rangle$ and $\langle111\rangle$ crystal directions [@Hol08a; @Hol10a].  To determine the full TDE surface to high statistical accuracy, we follow the procedure described in Ref. [@Nor05c] with tens of thousands of different recoil directions. Put succinctly, a 4096 atom Ge or Si simulation cell was equilibrated at 0.04 K (an upper limit for the experimental detector temperature), giving all atoms random thermal displacements. After this, an atom was randomly chosen within the central eight unit cells of the simulation cell and given a recoil of energy $E$ in a randomly selected direction $(\theta,\phi)$ in three dimensions, where $\theta$ is defined as the polar angle off the \[001\] crystal direction and $\phi$ as the azimuthal angle from the \[100\] direction towards \[010\]. The evolution of the collision sequence thus generated was simulated for 10 ps, and we analyze possible defect creation automatically using Wigner-Seitz and potential energy criteria [@Nor05c]. For each atom and direction, the energy $E$ was increased from 2 eV in steps of 1 eV until a stable defect was created. The outcome of MD simulations depends crucially on the interatomic potential used [@Allen-Tildesley; @Nor05c].  Hence, for the purpose of this study, we compared several different Ge and Si interatomic potentials with the DFT results.  Among the three tested interatomic potentials for Ge [@Din86; @Nor97f; @Pos09], the modified Stillinger-Weber (SW) potential from Ref. [@Nor97f] reproduced all of the reported DFT threshold displacement energies [@Hol10a] within the error bars, giving us high confidence of a reliable description of the entire data range. Hence, this potential was used for all Ge simulations. 
We have previously shown that, out of three commonly used Si potentials, SW [@Sti85] reproduces the DFT and experimental results the best. Consequently, we use this potential to calculate the rates in Si. In total, we simulate about 85,000 directions for Ge and about 24,000 for Si, a total of eight times. Fig. \[thresh\] illustrates the average over the resulting threshold displacement energy surfaces for Ge and Si. The symmetry of the diamond crystal structure causes the periodicity with respect to $\phi = 45^\circ$, and the zero-point quantum motion of atoms in the lattice causes the graininess in the plots. Fig. \[thresh\] shows that the energy threshold to create a defect strongly depends on the nuclear recoil direction. The Ge threshold ranges from 12.5 eV to 63.5 eV whereas that for Si ranges from 17.5 eV to 63.5 eV. The expected total WIMP signal rate above the detection threshold can be calculated by integrating the differential rate over the recoil angle and recoil energy. In the case of a charge detector, assuming that defect and electronic excitation thresholds are equal, the energy thresholds, henceforth referred to as $E_{th} (\theta, \phi)$ and shown in Fig. \[thresh\], simply provide the lower limit to the integral $$\label {integral} R(t) = \oint_{4 \pi} \int_{E_{th} (\theta, \phi)}^{E_r^{max}} \dfrac {\partial^2 R} {\partial E_r \partial \Omega_r} \text{d} E_r \text{d} \Omega_r.$$ This rate, measured by a fixed detector on the surface of Earth, which is moving and rotating relative to the WIMP halo, should therefore exhibit a diurnal modulation, since $E_{th}$ is a function of $\theta$ and $\phi$. Below, we describe our procedure to calculate this integral. ![\[thresh\] Threshold displacement energy surface in different crystal directions in Ge (top) and Si (bottom) determined from classical MD simulations, illustrated with a Mollweide projection. These plots represent the averages over the eight threshold surface datasets.
Darker regions correspond to a lower energy threshold and, hence, a higher differential rate (see Fig. \[ang\]). ](Thresh_Ge_Si.png){width="0.98\columnwidth"} Ref. [@rate] gives the integrand in Eq. \[integral\], the differential interaction rate between halo WIMPs and detectors for spin-independent interactions, as $$\begin{gathered} \label {dmrate} \dfrac {\partial^2 R} {\partial E_r \partial \Omega_r} = \dfrac {\rho_0 \sigma_{\chi-n} A^2} {4 \pi m_\chi \mu_{\chi n}^2} \times F^2 (E_r) \hat {f}_{\text {lab}} (v_{\text {min}}, \bm{\hat {q}_r}; t) \end{gathered}$$ where $m_\chi$ is the WIMP mass, $\mu_{\chi n}$ is the WIMP-nucleon reduced mass, $\rho_0 = 0.3\ \text {GeV cm}^{-3}$ is the local dark matter density, $A$ is the mass number of the nucleus, $\sigma_{\chi-n}$ is the WIMP-nucleon cross section, $v_{\text {min}} = \sqrt {2 m_N E_r} / 2 \mu_{\chi n}$ is the minimum WIMP speed required to produce a nuclear recoil of energy $E_r$ for a given nuclear mass $m_N$, and $F^2 (E_r)$ is the Helm nuclear form factor [@fsq]. Ref. [@rate] gives the Radon transform of the WIMP velocity distribution as $$\begin{gathered} \hat {f}_{\text {lab}} (v_{\text {min}}, \bm{\hat {q}}; t) = \dfrac {1} {N_{\text {esc}} \sqrt {2 \pi \sigma_v^2}} \times \\ \left[ \text {exp} \left(-\dfrac {|v_{\text {min}} + \bm {\hat {q}} \cdot \bm{v}_{\text {lab}}|^2} {2 \sigma_v^2}\right) - \text {exp} \left(-\dfrac {v_{\text{esc}}^2} {2 \sigma_v^2} \right) \right] \end{gathered}$$ where $\bm {\hat {q}}$ is the recoil direction in detector coordinates, $\bm {v}_{\text {lab}}$ is the velocity of the laboratory relative to a stationary observer, $v_{\text {esc}}$ is the circular escape velocity at the Solar System’s distance from the Milky Way’s center, $\sigma_v = v_0 / \sqrt{2}$ is the dark matter velocity dispersion, and $N_{\text {esc}}$ is a normalization factor. We use $v_0 = 220\ \text {km s}^{-1}$ for the circular speed and $v_{\text {esc}} = 544\ \text {km s}^{-1}$ [@rate].
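The Radon transform of the truncated Maxwellian written above translates directly into code. A sketch follows: the normalization $N_{\text{esc}}$ below uses one common convention (not stated explicitly in the text), the lab velocity is an illustrative magnitude, and all names are ours.

```python
import numpy as np
from math import erf, exp, pi, sqrt

V0, VESC = 220.0, 544.0            # km/s, values quoted in the text
SIGMA_V = V0 / sqrt(2.0)           # dark matter velocity dispersion

def radon_f_lab(v_min, q_hat, v_lab):
    """Radon transform of the truncated Maxwellian halo in the lab frame,
    following the equation in the text; N_esc is one common convention."""
    z = VESC / (sqrt(2.0) * SIGMA_V)
    n_esc = erf(z) - 2.0 * z * exp(-z * z) / sqrt(pi)   # assumed normalization
    pref = 1.0 / (n_esc * sqrt(2.0 * pi * SIGMA_V ** 2))
    arg = v_min + float(np.dot(q_hat, v_lab))           # v_min + q_hat . v_lab
    val = exp(-arg * arg / (2.0 * SIGMA_V ** 2)) \
        - exp(-VESC ** 2 / (2.0 * SIGMA_V ** 2))
    return pref * max(val, 0.0)     # clip: no WIMPs above the escape speed

# recoils against the lab motion (into the "WIMP wind") beat recoils along it
v_lab = np.array([0.0, 0.0, 230.0])                     # illustrative km/s
head_on = radon_f_lab(100.0, np.array([0.0, 0.0, -1.0]), v_lab)
tail = radon_f_lab(100.0, np.array([0.0, 0.0, 1.0]), v_lab)
```

The clipped second exponential is what truncates the distribution: recoils that would require WIMP speeds above $v_{\text{esc}}$ contribute zero rate.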
Following Appendix B of Ref. [@vel], we find the total lab velocity using the contributions due to galactic rotation, solar motion, Earth’s revolution, and Earth’s rotation. The calculations assume a detector at SNOLAB coordinates $(46.4719 \degree, 81.1868 \degree)$. The variation in the lab-frame speed of the dark matter gives a $\sim$6% annual and a nearly negligible diurnal modulation [^1]. We calculate signal rates assuming a detector with 1 eV resolution, 100% detection efficiency, and no backgrounds. We perform the integral in Eq. \[integral\] over the recoil energy $E_r$ and recoil angle $\Omega_r$ using 48 time steps on September 6, 2015. The date was chosen to cross-check our differential rate calculations with those in Ref. [@rate]. An equidistant coordinate partition interpolation of the data shown in Fig. \[thresh\] is performed on a grid with 2400 elements in the $\theta$ direction and 4800 in the $\phi$ direction. For faster computation, the grid is resampled to a size of 196,608 pixels using the HEALPix algorithm [@hp]. We compute a multidimensional Riemann sum over each dimension with 200 sample points for $E_r$ and 196,608 for $\Omega_r$. Fig. \[ang\] shows the integrated event rate for a WIMP of mass 300  and cross section $\sigma_\text{WIMP-nucleon}$=$10^{-39}\ \text{cm}^{2}$ over the course of one day (Sept 6, 2015). The mass and cross section were arbitrarily chosen within the unexplored region of the halo WIMP parameter space. Also shown in this figure are the angular distributions of the rates at four different times, illustrating how the recoil orientation changes with respect to the crystal over the course of the day. As Earth rotates, more events are detected at the energy minima than at the maxima, which leads to an integrated rate modulation (in this case $\sim$60$\%$) with a phase imposed by the threshold data in Fig. \[thresh\]. We repeated this study for WIMPs covering a mass range between 230  and 10  in Ge and between 165  and 10  in Si.
Lighter-mass WIMPs do not produce stable defects or electron-hole pair excitation even when traveling at the escape velocity $v_{\text {esc}} = 544\ \text {km s}^{-1}$. Fig. \[angmass\] shows the recoil angular distribution in Ge at a given time (4:00 on September 6, 2015) for a sample of WIMP masses in this range. As shown in this figure, larger mass WIMPs produce a broader recoil angle distribution. Hence, the integrated signal rate associated with larger mass WIMPs is less sensitive to the crystallographic orientation of the detector. We expect smaller event rate modulation for larger mass WIMPs due to this effect. To assess the strength of the signal rate modulation with respect to the signal mean rate, we perform a normalized root-mean squared (RMS) modulation integral over one day $$\label {rmseq} R_{\text {RMS, norm}} = \sqrt {\dfrac {1} {\langle R \rangle^2 \Delta t} \oint_{\Delta t} (R(t) - \langle R \rangle)^2 dt}$$ where $\langle R \rangle$ is the average value over $\Delta t$, which is one solar day (24 hours). The results of these studies are shown in Fig. \[rms\]. We find a clear rate modulation for WIMPs of mass below 1 . As expected, while the signal mean rate (thicker graph) decreases at lower WIMP masses, the modulation gains strength, which enables the experiments to maintain their signal to background ratio by only looking at the time intervals when the signal rate is maximized.  Furthermore, since the Si nucleus is less massive than that of Ge, the energy transfer from a WIMP is more efficient; hence, a lower WIMP mass is required to transfer recoil energy sufficient to overcome the threshold displacement energy. Consequently, the peak of the modulation appears at lower WIMP masses for Si than for Ge. The stochastic threshold displacement caused by the zero-point quantum motion of atoms was included based on the Debye model, which allows calculating the one-dimensional RMS displacement amplitude [@Gemmell:1974ub; @Debyedisplacements]. 
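In discrete form, the normalized RMS modulation of Eq. \[rmseq\] is just the coefficient of variation of the rate sampled over one day. A sketch with an artificial 30% sinusoidal modulation (the sampling and amplitude are illustrative assumptions):

```python
import numpy as np

def normalized_rms_modulation(rates):
    """Discrete version of Eq. [rmseq]: RMS deviation of R(t) from its daily
    mean, normalized by that mean (rates sampled uniformly over one day)."""
    mean = rates.mean()
    return np.sqrt(np.mean((rates - mean) ** 2)) / mean

# toy diurnal rate with a 30% sinusoidal modulation over 48 time steps,
# matching the 48-step time grid used in the text
t = np.linspace(0.0, 1.0, 48, endpoint=False)
rates = 1.0 + 0.3 * np.cos(2 * np.pi * t)
amp = normalized_rms_modulation(rates)
```

For a pure sinusoid of fractional amplitude $a$ this returns $a/\sqrt{2}$, so a 30% peak modulation corresponds to a normalized RMS modulation of about 0.21.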
We calculate eight separate threshold datasets for Ge and Si using MD simulations. In Fig. \[rms\], the RMS curves and shaded regions show the mean and standard deviation of the normalized RMS modulation values over all eight datasets. The kinks in the normalized RMS modulation curves correspond to the various length-scale transitions in the energy threshold shown in Fig. \[thresh\], which reveal themselves due to the larger solid angle coverage at higher dark matter masses. We reproduce the normalized RMS modulation and mean rate using energy thresholds 50% of those in Fig. \[thresh\], shown as dashed curves. As expected, there is a clear diurnal modulation, albeit at lower masses. This work provides strong motivation for experimental validation of the energy thresholds for ionization excitations via nuclear elastic scattering in Ge or Si. Based on the substantiated evidence for the dependence of the threshold on the nuclear recoil direction, we project a strong diurnal modulation in the expected detection rate of galactic halo WIMPs. This modulation strongly depends on the target detector material and the WIMP mass, and, together with the overall mean rate, it provides an extra handle to determine the WIMP mass and cross section independently. This effect can be used to discriminate WIMPs from the solar neutrino backgrounds that will become the irreducible background for all dark matter search experiments. Even if future experiments find different ionization thresholds, the anisotropy predicted for electron-hole pair creation could still cause a modulation in the dark matter signal, albeit over a different mass range. The significance of these results motivates thorough semiconductor detector calibration at low recoil energies. N. M. acknowledges financial support from the Mitchell Institute for Fundamental Physics. E. H. acknowledges financial support from the Emil Aaltonen foundation and the Academy of Finland through the Centres of Excellence Program (Project No. 251748).
[^1]: Earth’s revolution around the sun causes the annual modulation, whereas Earth’s rotation causes the daily one.
--- abstract: 'The paper introduces the notion of *off-line justification* for Answer Set Programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom w.r.t. a given answer set. The paper also extends this notion to provide justifications of atoms *during* the computation of an answer set (*on-line justification*), and presents an integration of on-line justifications within the computation model of [Smodels]{}. Off-line and on-line justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for *debugging* answer set programs. A preliminary implementation has been developed in [$\mathbb{ASP-PROLOG}$]{}.' author: - | Enrico Pontelli, Tran Cao Son, and Omar Elkhatib\ Department of Computer Science\ New Mexico State University\ [epontell|tson|okhatib@cs.nmsu.edu]{}\ title: '*Justifications* for Logic Programs under Answer Set Semantics' --- answer set programming, justification, offline justification, online justification Introduction ============ *Answer set programming (ASP)* is a programming paradigm [@smodels-constraint; @mar99; @lif02a] based on logic programming under the answer set semantics [@gel88]. ASP is a [*highly declarative*]{} paradigm. In order to solve a problem $P$, we specify it as a logic program $\pi(P)$ whose answer sets correspond one-to-one to solutions of $P$ and can be computed using an answer set solver. ASP is also attractive because of its numerous building-block results (see, e.g., [@Baral03]). This can be seen in the following example. Consider the problem of computing the Hamiltonian cycles of a graph.
The graph can be encoded as a collection of facts, e.g., $${\tt \begin{array}{lclclcl} \texttt{vertex(a).} & \hspace{.4cm} & \texttt{vertex(b).} & \hspace{.4cm} & \texttt{vertex(c).} & \hspace{.4cm} & \texttt{vertex(d).}\\ \texttt{edge(a,b).} & & \texttt{edge(a,c).} & & \texttt{edge(b,d).} & & \texttt{edge(b,c).}\\ \texttt{edge(c,d).} && \texttt{edge(d,a).} \end{array}}$$ A program contains rules, in the form of Horn clauses; in our case: $${\tt \begin{array}{lcl} \multicolumn{3}{l}{\%\% \:\:\textit{Select an edge}}\\ \texttt{in(U,V)} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{edge(U,V)}, not\:\texttt{nin(U,V)}.\\ \texttt{nin(U,V)} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{edge(U,V)}, not \: \texttt{in(U,V)}.\\ \multicolumn{3}{l}{\%\% \:\:\textit{Traverse each node only once}}\\ \texttt{false} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{vertex(U), vertex(V), vertex(W)}, \\ & & \texttt{V} \neq \texttt{W, in(U,V), in(U,W)}.\\ \texttt{false} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{vertex(U), vertex(V), vertex(W)}, \\ & & \texttt{U} \neq \texttt{V, in(U,W), in(V,W)}.\\ \multicolumn{3}{l}{\%\% \:\:\textit{Reachability of nodes}}\\ \texttt{reachable(U)} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{vertex(U), in(a,U).}\\ \texttt{reachable(V)} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{vertex(V), vertex(U), reachable(U), in(U,V).}\\ \multicolumn{3}{l}{\%\%\:\:\textit{Each vertex reachable from a}}\\ \texttt{false} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{vertex(U), U} \neq \texttt{a}, not\:\texttt{reachable(U).} \end{array}}$$ It can be shown that every answer set of the program consisting of the rules representing the graph and the above rules corresponds to an Hamiltonian cycle of the graph and vice versa. Furthermore, the program has no answer set if and only if the graph does not have an Hamiltonian cycle. 
$\Box$ The popularity of ASP has grown significantly over the years, finding innovative and highly declarative applications in a variety of domains, such as intelligent agents [@Baral03; @BalducciniGN06], planning [@lif99d], software modeling and verification [@HeljankoN03], complex systems diagnosis [@BalducciniG03], and phylogenetic inference [@ErdemLR06]. The growing popularity of ASP, especially in domains like non-monotonic and commonsense reasoning, has been supported by the development of excellent inference engines [@AngerGLNS05; @eiter98a; @GebserKNS07; @GiunchigliaLM04; @lin02a; @sim02]. On the other hand, a source of difficulties in learning to use ASP lies in the lack of *methodologies* and *tools* which can assist users in understanding a program’s behavior and debugging it. The highly declarative nature of the ASP framework and the “hand-off” execution style of ASP leave a programmer with little information that helps in explaining the behavior of the programs, except for the program itself. For example, the additional information that can be gained by exploring the intermediate state of an execution (e.g., value of variables) of an imperative program using a debugger does not have any equivalent in the context of ASP. This situation is especially difficult when the program execution produces unexpected outcomes, e.g., incorrect or missing answer sets. In this sense, this paper shares the spirit of other attempts in developing tools and methodologies for understanding and debugging of ASP programs,[^1] as in [@BrainGP+a07; @BrainGP+b07; @ElKhatibPS05; @dlvdeb]. Although the traditional language of logic programming under answer set semantics, e.g., referred to as AnsProlog in [@Baral03] or A-Prolog [@GelfondL02], is syntactically close to Prolog, the execution model and the semantics are sufficiently different to make debugging techniques developed for Prolog impractical. 
For example, the traditional *trace-based* debuggers [@RoychoudhuryRR00] (e.g., Prolog four-port debuggers), used to trace the entire proof search tree (paired with execution control mechanisms, like spy points and step execution), are cumbersome in ASP, since: - Trace-based debuggers provide the entire search sequence, including the failed paths, which might be irrelevant in understanding how specific elements are introduced in an answer set. - The process of computing answer sets is bottom-up, and the determination of the truth value of one atom is intermixed with the computation of other atoms; a direct tracing makes it hard to focus on what is relevant to one particular atom. This is illustrated in the following example. \[exbelow\] Consider the following simple program. $${\tt \begin{array}{lclclcl} \texttt{s} &{\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{r}. & \hspace{1cm} & \texttt{s} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{t}.\\ \texttt{r} &{\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}. & & \texttt{t.} \end{array}}$$ The program $P$ has a unique answer set, $M=\{\texttt{s}, \texttt{t}\}$. In particular, $\texttt{t} \in M$, since $\texttt{t}$ appears as a fact in the program, and $\texttt{s} \in M$ because of the second rule and $\texttt{t} \in M$. In this process, there is no need to expose the processing of the rule defining $\texttt{r}$ to the user, since $\texttt{r} \not\in M$.$\Box$ - Tracing repeats previously performed executions, degrading debugging performance and confusing the programmer. We address these issues by elaborating the concept of [*off-line justification*]{} for ASP. This notion is an evolution of the concept of *justification*, proposed to justify truth values in tabled Prolog [@RoychoudhuryRR00; @PemmasaniGDRR04]. Intuitively, an off-line justification of an atom w.r.t. an answer set is a graph encoding the reasons for the atom’s truth value. 
This notion can be used to explain the presence or absence of an atom in an answer set, and provides the basis for building a [*justifier*]{} for answer set solvers. In this paper, we develop this concept and investigate its properties. The notion of off-line justification is helpful when investigating the content of one (or more) answer sets. When the program does not have answer sets, a different type of justification is needed. This leads us to the notion of [*on-line justification*]{}, which provides justifications with respect to a *partial* and/or (sometimes) *inconsistent* interpretation. An on-line justification is *dynamic*, in that it can be obtained at any step of the answer set computation, provided that the computation process follows certain strategies. The intuition is to allow the programmer to interrupt the computation (e.g., at the occurrence of certain events, such as assignment of a truth value to a given atom) and to use the on-line justification to explore the motivations behind the content of the partial interpretation (e.g., why a given atom is receiving conflicting truth values). We describe a *generic* model of on-line justification and a version specialized to the execution model of [Smodels]{} [@sim02]. The latter has been implemented in [$\mathbb{ASP-PROLOG}$]{} [@asp-prolog]. Justifications are offered as first-class citizens of a Prolog system, enabling the programmer to use Prolog programs to reason about ASP computations. Debugging is one of the possible uses of the notion of justification developed in this paper. Background ========== In this paper, we focus on a logic programming language with negation as failure—e.g., the language of [Smodels]{} without weight constraints and choice rules [@sim02]. 
Logic Programming Language -------------------------- Each program $P$ is associated with a signature $\Sigma_P=\langle {\cal F}, \Pi, {\cal V} \rangle$, where - $\cal F$ is a finite set of *constants*, - $\cal V$ is a set of *variables*, and - $\Pi$ is a finite set of *predicate* symbols. In particular, we assume that $\top$ (stands for $true$) and $\bot$ (stands for $false$) are zero-ary predicates in $\Pi$. A *term* is a constant of $\cal F$ or a variable of $\cal V$. An *atom* is of the form $p(t_1,\dots,t_n)$, where $p\in \Pi$, and $t_1,\dots,t_n$ are terms. In particular, a term (atom) is said to be *ground* if there are no occurrences of elements of $\cal V$ in it. A *literal* is either an atom (*Positive Literal*) or a formula of the form $not\:a$, where $a$ is an atom (*NAF Literal*). In what follows, we will identify with $\cal L$ the set of all ground literals. A *rule*, $r$, is of the form $$\label{rule} h \:{\mbox{$\: {\tt : \!\! - }\:$}}\: b_1, \dots, b_n.$$ ($n\geq 0$) where $h$ is an atom and $\{b_1, \dots, b_n\}\subseteq {\cal L}$. The atom $h$ is referred to as the *head* of the rule, while the set of literals $\{b_1,\dots, b_n\}$ represents the *body* of the rule. Given a rule $r$, we denote $h$ with $head(r)$ and we use $body(r)$ to denote the set $\{b_1,\dots,b_n\}$. We also denote with $pos(r) = body(r) \cap {\cal A}$—i.e., all the elements of the body that are not negated—and with $neg(r) = \{ a \:|\: (not\: a) \in body(r)\}$—i.e., the atoms that appear negated in the body of the rule. Given a rule $r$, we denote with $ground(r)$ the set of all rules obtained by consistently replacing the variables in $r$ with constants from $\cal F$ (i.e., the *ground instances* of $r$). We identify special types of rules: - A rule $r$ is *definite* if $neg(r) = \emptyset$; - A rule $r$ is a *fact* if $neg(r) \cup pos(r) = \emptyset$; for the sake of readability, the fact $$h {\mbox{$\: {\tt : \!\! 
- }\:$}}.$$ will be simply written as $$h .$$ A program $P$ is a set of rules. A program with variables is understood as a shorthand for the set of all ground instances of the rules in $P$; we will use the notation: $$ground(P) = \bigcup_{r\in P} ground(r)$$ A program is *definite* if it contains only definite rules. The answer set semantics of a program (Subsection \[semantics\]) is highly dependent on the truth value of atoms occurring in the negative literals of the program. For later use, we denote with $NANT(P)$ the atoms which appear in NAF literals in $P$—i.e., $$NANT(P) = \{a \mid a \:\textnormal{is a ground atom },\: \exists r \in ground(P):\: a\in neg(r)\}.$$ We will also use ${\cal A}_P$ to denote the Herbrand base of a program $P$. For brevity, we will often write $\cal A$ instead of ${\cal A}_P$. \[exa\] Let us consider the program $P_1$ containing the rules: $${\tt \begin{array}{clclcclcl} (r_1) & \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}, not\:\texttt{p}. & \hspace{1cm} & (r_2) & \texttt{p} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}, not\:\texttt{q}.\\ (r_3) & \texttt{a} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{b}. & & (r_4) & \texttt{b}. \end{array}}$$ The rule $r_3$ is definite, while the rule $r_4$ is a fact. For the rule $r_1$ we have: - $head(r_1)=\texttt{q}$ - $body(r_1)=\{\texttt{a}, not\:\texttt{p}\}$ - $pos(r_1) = \{\texttt{a}\}$ - $neg(r_1) = \{\texttt{p}\}$ For $P_1$, we have $NANT(P_1) = \{\texttt{p}, \texttt{q}\}$. $\Box$ Answer Set Semantics and Well-Founded Semantics {#semantics} ----------------------------------------------- We will now review two important semantics of logic programs, the answer set semantics and the well-founded semantics. The former is foundational to ASP and the latter is important for the development of our notion of a justification. We will also briefly discuss the basic components of ASP systems. 
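The rule notation above ($head(r)$, $pos(r)$, $neg(r)$, and $NANT(P)$) can be sketched concretely. The following Python fragment is our own illustrative encoding, not part of the paper: a ground rule is represented as a triple $(head, pos, neg)$, and the program $P_1$ of Example \[exa\] is written in that form.

```python
# Sketch (assumed layout): a ground rule is (head, pos, neg), mirroring
# head(r), pos(r) and neg(r) as defined above.

def NANT(P):
    """Atoms occurring in NAF literals of P: the union of neg(r) over r in P."""
    return set().union(*(neg for _, _, neg in P)) if P else set()

# Program P_1 of Example [exa]:
P1 = [("q", {"a"}, {"p"}),   # r1: q :- a, not p.
      ("p", {"a"}, {"q"}),   # r2: p :- a, not q.
      ("a", {"b"}, set()),   # r3: a :- b.  (definite)
      ("b", set(), set())]   # r4: b.       (fact)
```

With this encoding, `NANT(P1)` yields `{"p", "q"}`, matching the example.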
### Interpretations and Models A *(three-valued) interpretation* $I$ is a pair $\langle I^+,I^- \rangle$, where $I^+ \cup I^- \subseteq {\cal A}$ and $I^+ \cap I^- = \emptyset$. Intuitively, $I^+$ collects the knowledge of the atoms that are known to be true, while $I^-$ collects the knowledge of the atoms that are known to be false. $I$ is a *complete interpretation* if $I^+ \cup I^- = {\cal A}$. If $I$ is not complete, then it means that there are atoms whose truth value is *undefined* with respect to $I$. For convenience, we will often say that an atom $a$ is undefined in $I$ and mean that the truth value of $a$ is undefined in $I$. Let $P$ be a program and $I$ be an interpretation. A positive literal $a$ is satisfied by $I$, denoted by $I \models a$, if $a \in I^+$. A NAF literal $not \; a$ is satisfied by $I$—denoted by $I \models not\:a$—if $a \in I^-$. A set of literals $S$ is satisfied by $I$ ($I\models S$) if $I$ satisfies each literal in $S$. The notion of satisfaction is easily extended to rules and programs as follows. A rule $r$ is satisfied by $I$ if $I\not\models body(r)$ or $I\models head(r)$. $I$ is a *model* of a program if it satisfies all its rules. An atom $a$ is *supported* by $I$ in $P$ if there exists $r \in P$ such that $head(r) = a$ and $I \models body(r)$. We introduce two partial orders on the set of interpretations: - For two interpretations $I$ and $J$, we say that $I \sqsubseteq J$ iff $I^+ \subseteq J^+$ and $I^- \subseteq J^-$ - For two interpretations $I$ and $J$, we say that $I \preceq J$ iff $I^+ \subseteq J^+$ We will denote with $\cal I$ the set of all possible interpretations and with $\cal C$ the set of complete interpretations. An important property [@llo87] of definite programs is that for each program $P$ there exists a unique model $M_P$ which is $\preceq$-minimal over $\cal C$. $M_P$ is called the *least Herbrand model* of $P$. 
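The satisfaction relation just defined can be sketched in a few lines of Python. This is a minimal illustration under the assumed $(head, pos, neg)$ encoding of ground rules and $(I^+, I^-)$ encoding of interpretations; the function names are ours, not the paper's.

```python
# Sketch of three-valued satisfaction: a rule is (head, pos, neg);
# an interpretation I is the pair (Iplus, Iminus).

def sat_body(I, pos, neg):
    """I |= body iff pos(r) ⊆ I+ (positive literals) and neg(r) ⊆ I- (NAF literals)."""
    Ip, Im = I
    return pos <= Ip and neg <= Im

def sat_rule(I, rule):
    """I satisfies r iff I does not satisfy body(r) or I satisfies head(r)."""
    h, pos, neg = rule
    return (not sat_body(I, pos, neg)) or h in I[0]

def is_model(I, P):
    """I is a model of P iff it satisfies every rule."""
    return all(sat_rule(I, r) for r in P)

def supported(I, P, a):
    """a is supported by I in P iff some rule with head a has a satisfied body."""
    return any(h == a and sat_body(I, pos, neg) for h, pos, neg in P)
```

For instance, on $P_1$ of Example \[exa\], the complete interpretation $\langle \{\texttt{b,a,q}\},\{\texttt{p}\}\rangle$ is a model in which $\texttt{q}$ is supported but $\texttt{p}$ is not.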
### Answer Set Semantics For an interpretation $I$ and a program $P$, the *reduct* of $P$ w.r.t. $I$ (denoted by $P^I$) is the program obtained from $P$ by deleting [*(i)*]{} each rule $r$ such that $neg(r)\cap I^+ \neq \emptyset$, and [*(ii)*]{} all NAF literals in the bodies of the remaining rules. Formally, $$P^I = \left\{ head(r) {\mbox{$\: {\tt : \!\! - }\:$}}pos(r) \:\:|\:\: r \in P, \:\: neg(r) \cap I^+ = \emptyset \right\}$$ Given a complete interpretation $I$, observe that the program $P^I$ is a definite program. A complete interpretation $I$ is an *answer set* [@gel88] of $P$ if $I^+$ is the least Herbrand model of $P^I$ [@apt94a]. Let us reconsider the program $P_1$ in Example \[exa\]. If we consider the interpretation $I = \langle \{\texttt{b,a,q}\},\{\texttt{p}\}\rangle$, then the reduct $P_1^I$ will contain the rules: $${\tt\begin{array}{lclclcl} \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}. & \hspace{1cm} & \texttt{a} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{b}.\\ \texttt{b}. \end{array} }$$ It is easy to see that $\{\texttt{a},\texttt{b},\texttt{q}\}$ is the least Herbrand model of this program; thus, $I$ is an answer set of $P_1$. [$\Box$]{} For a definite program $P$ and an interpretation $I$, the immediate consequence operator $T_P$ is defined by $$T_P(I) = \{a \mid \exists r \in P, head(r) = a, I \models body(r)\}.$$ $T_P$ is monotone and has a least fixpoint [@VanEmdenK76]. The least fixpoint of $T_P$ will be denoted by $lfp(T_P)$. ### Well-Founded Semantics Let us describe the *well-founded semantics*, following the definition proposed in [@apt94a]. We note that this definition is slightly different from the original definition of the well-founded semantics in [@VanGelderRS91]. Let us start by recalling some auxiliary definitions. \[tpj\] Let $P$ be a program, $S$ and $V$ be sets of atoms from $\cal A$. 
The set $T_{P,V}(S)$ ([*immediate consequence of $S$ w.r.t. $P$ and $V$*]{}) is defined as follows: $$T_{P,V}(S) = \{ a \mid \exists r \in P, head(r) = a, pos(r) \subseteq S, neg(r) \cap V = \emptyset \}$$ It is easy to see that, if $V$ is fixed, the operator is monotone with respect to $S$. Again, we use $lfp(.)$ to denote the least fixpoint of this operator when $V$ is fixed. \[kui\] Let $P$ be a program and $P^+$ be the set of definite rules in $P$. The sequence $(K_i,U_i)_{i\ge 0}$ is defined as follows: $$\begin{array}{lclclcl} K_0 & = & lfp(T_{P^+}) & \hspace{1cm} & U_0 & = & lfp(T_{P,K_0}) \\ K_i & = & lfp(T_{P,U_{i-1}}) && U_i & = & lfp(T_{P,K_{i}}) \\ \end{array}$$ Let $j$ be the first index of the computation such that $\langle K_j,U_j \rangle = \langle K_{j+1}, U_{j+1} \rangle$. We will denote with $WF_P = \langle W^+,W^- \rangle$ the (unique) *well-founded* model of $P$, where $W^+ = K_j$ and $W^- = {\cal A} \setminus U_j$. Let us reconsider the program $P_1$ of Example \[exa\]. The computation of the well-founded model proceeds as follows: $$\begin{array}{lclcl} K_0 & = & \{\texttt{b},\texttt{a}\}\\ U_0 & = & \{\texttt{a},\texttt{b},\texttt{p},\texttt{q}\}\\ K_1 & = & \{\texttt{a},\texttt{b}\}& =& K_0\\ U_1 & = & \{\texttt{a},\texttt{b},\texttt{p},\texttt{q}\}& =& U_0 \end{array}$$ Thus, the well-founded model will be $\langle \{\texttt{a},\texttt{b}\}, \emptyset\rangle$. Observe that both $\texttt{p}$ and $\texttt{q}$ are undefined in the well-founded model.[$\Box$]{} Answer Set Programming Systems ------------------------------ ### A Novel Paradigm As recognized by a number of authors [@mar99; @smodels-constraint], the adoption of answer set semantics requires a *paradigm shift* to reconcile the peculiar features of the semantics—i.e., the existence of multiple admissible models—with the traditional program view of logic programming. 
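The two computations defined above (the reduct-based answer-set test and the alternating $K_i/U_i$ sequence) can both be sketched in Python. This is our own illustrative encoding, assuming ground rules as $(head, pos, neg)$ triples; it is not the algorithm used by any actual solver.

```python
# Sketch of the answer-set test and the well-founded computation above.
# A ground rule is (head, pos, neg); interpretations are given by their
# positive part (a set of atoms).

def reduct(P, Iplus):
    """Gelfond-Lifschitz reduct P^I: delete rules with neg(r) ∩ I+ != {},
    then delete the NAF literals from the remaining rules."""
    return [(h, pos) for h, pos, neg in P if not (neg & Iplus)]

def lfp_T(definite):
    """Least fixpoint of the immediate consequence operator T_P (naive iteration)."""
    S, changed = set(), True
    while changed:
        changed = False
        for h, pos in definite:
            if pos <= S and h not in S:
                S.add(h); changed = True
    return S

def is_answer_set(P, Iplus):
    """I is an answer set of P iff I+ is the least Herbrand model of P^I."""
    return lfp_T(reduct(P, Iplus)) == Iplus

def lfp_TPV(P, V):
    """lfp of T_{P,V}: add head(r) when pos(r) ⊆ S and neg(r) ∩ V = {}."""
    S, changed = set(), True
    while changed:
        changed = False
        for h, pos, neg in P:
            if pos <= S and not (neg & V) and h not in S:
                S.add(h); changed = True
    return S

def well_founded(P, atoms):
    """Alternate K_i = lfp(T_{P,U_{i-1}}), U_i = lfp(T_{P,K_i}) up to the
    fixpoint j; return (W+, W-) = (K_j, atoms \\ U_j)."""
    Pplus = [r for r in P if not r[2]]          # definite rules P^+
    K = lfp_TPV(Pplus, set())                   # K_0
    U = lfp_TPV(P, K)                           # U_0
    while True:
        K2 = lfp_TPV(P, U)
        U2 = lfp_TPV(P, K2)
        if (K2, U2) == (K, U):
            return K, atoms - U
        K, U = K2, U2
```

On $P_1$ of Example \[exa\], this reproduces both worked examples: $\{\texttt{b,a,q}\}$ passes the answer-set test, and the well-founded model comes out as $\langle \{\texttt{a},\texttt{b}\}, \emptyset\rangle$.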
In the first place, each program potentially admits more than one intended model, leading to an additional level of non-determinism. The presence of multiple answer sets complicates the framework in two ways. First of all, we need to provide programmers with a way of handling the multiple answer sets. The additional level of non-determinism is a real need for a number of applications, and it bears some similarities with the proposals put forward in other communities—such as the *choice* and *witness* constructs used in the database community [@AbiteboulV89; @db-choice; @zaniolo]. The presence of multiple answer sets, in turn, leads to a new set of requirements on the *computational mechanisms* used. Given a program, now the main goal of the computation is not to provide a goal-directed tuple-at-a-time answer (as in Prolog), but the goal is to return *complete answer sets*. The traditional resolution-based control used in Prolog is largely inadequate, and it should give way to a different form of control and different execution mechanisms. In simple terms, the goal of an ASP program is to identify a *collection of answer sets*—i.e., each program is interpreted as a specification of a collection of *sets of atoms*. Each rule in the program plays the role of a *constraint* [@smodels-constraint] on the collection of sets specified by the program: a generic rule\ $$head {\mbox{$\: {\tt : \!\! - }\:$}}b_1, \dots, b_n, \mathit{not }\: g_1, \dots, \mathit{not }\: g_m$$ requires that whenever $b_1,\dots,b_n$ are part of the answer set and $g_1,\dots,g_m$ are not, then $\mathit{head}$ has to be in the answer set as well. Thus, the collection of rules in a program constrains what sets of literals can be considered admissible answer sets. The shift of perspective from traditional logic programming to ASP is very important. 
The programmer is led to think about writing programs as manipulating sets of elements, and the outcome of the computation is going to be a collection of sets—instead of an answer substitution, as in Prolog. This perspective comes very natural in a large number of application domains (e.g., graph applications, planning problems). Several efficient ASP solvers have been developed, such as [Smodels]{} [@NiemelaS97], [DLV]{} [@eiter98a], [Cmodels]{} [@GiunchigliaLM04], [ASSAT]{} [@lin02a], and [CLASP]{} [@GebserKNS07]. One of the most popular ASP solvers is [Smodels]{} [@NiemelaS97; @sim02] which comes with [Lparse]{}, a grounder. [Lparse]{} takes as input a logic program $P$ and produces as output a simplified version of $ground(P)$. The output of [Lparse]{} is in turn accepted by [Smodels]{}, and used to produce the answer sets of $P$ (see Figure \[lparse\_smodel\]). The [Lparse]{}/[Smodels]{} system supports several extended types of literals, such as the *cardinality literals*, which are of the form: $L\ \{l_1, \ldots, l_n\}\ U$, where $L$ and $U$ are integers, $L \le U$, and $l_1, \ldots, l_n$ are literals. The cardinality literal is satisfied by an answer set $M$ if the number $x$ of literals in $\{l_1,\dots,l_n\}$ that are true in $M$ is such that $L \leq x \leq U$. The back-end engine, [Smodels]{} in Figure \[lparse\_smodel\], produces the collection of answer sets of the input program. Various control options can be provided to guide the computation—e.g., establish a limit on the number of answer sets provided or request the answer set to contains specific atoms. We note that all of the available ASP solvers [@AngerGLNS05; @eiter98a; @GebserKNS07; @GiunchigliaLM04; @lin02a] operate in a similar fashion as [Smodels]{}. [DLV]{} uses its own grounder while others use [Lparse]{}. New grounder programs have also been recently proposed, e.g., Gringo in [@GebserTT07]. 
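The satisfaction condition for cardinality literals described above is simple enough to state in a few lines. The following Python sketch (our own encoding, not [Lparse]{} internals) represents an answer set $M$ by its set of true atoms and a NAF literal as a `("not", a)` pair.

```python
# Sketch of the cardinality-literal test: L {l1,...,ln} U is satisfied by
# an answer set M when the number x of the li true in M obeys L <= x <= U.

def lit_true(l, M):
    """An atom is true in M when it belongs to M; not a is true when a ∉ M."""
    if isinstance(l, tuple) and l[0] == "not":
        return l[1] not in M
    return l in M

def card_holds(L, lits, U, M):
    x = sum(1 for l in lits if lit_true(l, M))
    return L <= x <= U
```

For instance, `1 {a, b, not c} 2` holds in $M=\{\texttt{a},\texttt{c}\}$, since exactly one of the three literals is true there.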
SAT-based answer set solvers rely on SAT solvers to compute answer sets [@GiunchigliaLM04; @lin02a]. [$\mathbb{ASP-PROLOG}$]{} [@asp-prolog] is a system which provides a tight and semantically well-founded integration between Prolog (in the form of CIAO Prolog [@GrasH00]) and answer set programming (in the form of [Smodels]{}). The [$\mathbb{ASP-PROLOG}$]{} system has been developed using the module and class capabilities of CIAO Prolog. [$\mathbb{ASP-PROLOG}$]{} allows programmers to develop programs as collections of *modules*. Along with the traditional types of modules supported by CIAO Prolog (e.g., Prolog modules, Constraint Logic Programming modules), it allows the presence of *ASP modules*, each being a complete ASP program. Each CIAO Prolog module can access the content of any ASP module (using the traditional module qualification of Prolog), read its content, access its models, and modify it (using the traditional [assert]{} and [retract]{} predicates of Prolog). [$\mathbb{ASP-PROLOG}$]{} allows us to create Prolog modules that access (and possibly modify) other modules containing ASP code. For example, the following Prolog module

    :- use_asp(aspmod, 'asp_module.lp').

    count_p(X) :- findall(Q, (aspmod:model(Q), Q:p), List),
                  length(List, X).

accesses an ASP module (called [aspmod]{}) and defines a predicate ([count\_p]{}) which counts how many answer sets of [aspmod]{} contain the atom [p]{}. [$\Box$]{} Explanations ============ The traditional methodology employed in ASP relies on encoding each problem $Q$ as a logic program $\pi(Q)$, whose answer sets are in one-to-one correspondence with the solutions of $Q$. 
From the software development perspective, it would be important to address the question *“why is $M$ an answer set of the program $P$?”* This question gives rise to the question “why does an atom $a$ belong to $M^+$ (or $M^-$)?” Answering this question can be very important, in that it provides us with explanations regarding the presence (or absence) of different atoms in $M$. Intuitively, we view answering these questions as the “declarative” parallel of answering questions of the type “why is $3.1415$ the value of the variable $x$?” in the context of imperative languages—a question that can be typically answered by producing and analyzing an *execution trace* (or *event trace* [@Auguston00]). The objective of this section is to develop the notion of *explanation*, as a graph structure used to describe the “reason” for the truth value of an atom w.r.t. a given answer set. In particular, each explanation graph will describe the derivation of the truth value (i.e., true or false) of an atom using the rules in the program. The explanation will also need to be flexible enough to explain those contradictory situations, arising during the construction of answer sets, where an atom is made true *and* false at the same time—for reference, these are the situations that trigger a backtracking in systems like [Smodels]{} [@sim02]. In the rest of this section, we will introduce this graph-based representation of the support for the truth values of atoms in an interpretation. In particular, we will incrementally develop this representation. We will start with a generic graph structure (*Explanation Graph*), which describes truth values without accounting for program rules. We will then identify specific graph patterns that can be derived from program rules (*Local Consistent Explanations*), and impose them on the explanation graph, to obtain the *$(J,A)$-based Explanation Graphs*. These graphs are used to explain the truth values of an atom w.r.t. 
an interpretation $J$ and a set of assumptions $A$—where an assumption is an atom for which we will not seek any explanations. The assumptions derive from the inherent “guessing” process involved in the definition of answer sets (and in their algorithmic construction), and they will be used to justify atoms that have been “guessed” in the construction of the answer set and for which a meaningful explanation cannot be constructed. Before we proceed, let us introduce notation that will be used in the following discussion. For an atom $a$, we write $a^+$ to denote the fact that the atom $a$ is true, and $a^-$ to denote the fact that $a$ is false. We will call $a^+$ and $a^-$ the *annotated* versions of $a$. Furthermore, we will define $atom(a^+) = a$ and $atom(a^-)=a$. For a set of atoms $S$, we define the following sets of annotated atoms: - $S^p = \{a^+ \mid a \in S\}$, - $S^n = \{a^- \mid a \in S\}$. Furthermore, we denote with $\naf S$ the set $\naf S = \{\naf a \mid a \in S\}$. Explanation Graphs ------------------ In building the notion of justification, we will start from a very general (labeled, directed) graph structure, called *explanation graph*. We will incrementally construct the notion of justification, by progressively adding the necessary restrictions to it. \[Explanation Graph\] \[egraph\] For a program $P$, an *explanation graph* (or *e-graph*) is a labeled, directed graph $(N,E)$, where $N \subseteq {\cal A}^p \cup {\cal A}^n \cup \{\textit{assume},\top,\bot\}$ and $E \subseteq N \times N \times \{+,-\}$, which satisfies the following properties: 1. \[one1\] the only sinks in the graph are: $assume$, $\top$, and $\bot$; 2. \[two1\] for every $b \in N \cap {\cal A}^p$, we have that $(b,assume,-) \not\in E$ and $(b,\bot,-) \not\in E$; 3. \[three1\] for every $b \in N \cap {\cal A}^n$, we have that $(b,assume,+) \not\in E$ and $(b,\top,+) \not\in E$; 4. 
\[four1\] for every $b \in N$, if $(b,l,s) \in E$ for some $l \in \{assume, \top, \bot\}$ and $s \in \{+,-\}$ then $(b,l,s)$ is the only outgoing edge originating from $b$. Property (\[one1\]) indicates that each atom appearing in an e-graph should have outgoing edges (which will explain the truth value of the atom). Properties (\[two1\]) and (\[three1\]) ensure that true (false) atoms are not explained using explanations that are proper for false (true) atoms. Finally, property (\[four1\]) ensures that atoms explained using the special explanations $assume$, $\top$, $\bot$ have only one explanation in the graph. Intuitively, - $\top$ will be employed to explain program facts—i.e., their truth does not depend on other atoms; - $\bot$ will be used to explain atoms that do not have defining rules—i.e., the falsity is not dependent on other atoms; and - $assume$ is used for atoms we are not seeking any explanations for. Each edge of the graph connects two annotated atoms or an annotated atom with one of the nodes in $\{\top, \:\bot, \: assume\}$, and it is marked by a label from $\{+,-\}$. Edges labeled $'+'$ are called *positive* edges, while those labeled $'-'$ are called *negative* edges. A path in an e-graph is *positive* if it contains only positive edges, while a path is negative if it contains at least one negative edge. We will denote with $(n_1,n_2) \in E^{*,+}$ the fact that there is a positive path in the e-graph from $n_1$ to $n_2$. \[ex1\] Figure \[img1\] illustrates several simple e-graphs. Intuitively, - The graph [(i)]{} describes the true state of $\texttt{p}$ by making it positively dependent on the true state of $\texttt{q}$ and $\texttt{r}$; in turn, $\texttt{q}$ is simply assumed to be true while $\texttt{r}$ is a fact in the program. 
- The graph [(ii)]{} describes more complex dependencies; in particular, observe that $\texttt{t}$ and $\texttt{u}$ are both false and they are mutually dependent—as in the case of a program containing the rules $$\begin{array}{lclclcl} \texttt{t} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{u}. & \hspace{1cm} & \texttt{u} &{\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{t}. \end{array}$$ Observe also that $\texttt{s}$ is explained being false because there are no rules defining it. - The graph [(iii)]{} states that $\texttt{p}$ has been simply assumed to be false. $\Box$ Given an explanation graph and an atom, we can extract from the graph the elements that directly contribute to the truth value of the atom. We will call this set of elements the support of the atom. This is formally defined as follows. Let $G = (N,E)$ be an e-graph and $b \in N \cap ({\cal A}^p\cup{\cal A}^n)$ a node in $G$. The direct support of $b$ in $G$, denoted by $support(b,G)$, is defined as follows. - $support(b,G) = \{atom(c) \mid (b,c,+) \in E\} \cup \{\naf atom(c) \mid (b,c,-) \in E\}$, if for every $\ell \in \{assume, \top, \bot\}$ and $s \in \{+,-\}$, $(b,\ell,s) \not\in E$; - $support(b,G) = \{\ell\}$ if $(b,\ell,s) \in E$, $\ell \in \{assume, \top, \bot\}$ and $s \in \{+,-\}$. If we consider the e-graph [(ii)]{} in Figure \[img1\], then we have that $support(\texttt{p}^+, G_2) = \{\texttt{q}, \naf\ \texttt{s}, \naf\ \texttt{t}\}$ while $support(\texttt{t}^-,G_2) = \{\texttt{u}\}$. We also have $support(\texttt{p}^+, G_1) = \{\texttt{q, r}\}$. $\Box$ It is worth mentioning that an explanation graph is a general concept aimed at providing arguments for answering the question ‘[*why is an atom true or false?*]{}’ In this sense, it is similar to the concept of a support graph used in program analysis [@SahaR05]. The main difference between these two concepts lies in that support graphs are defined only for definite programs while explanation graphs are defined for general logic programs. 
Furthermore, a support graph contains information about the support for [*all*]{} atoms, while an explanation graph stores only the support for [*one*]{} atom. An explanation graph can be used to answer the question of why an atom is false, which is not the case for support graphs. Local Explanations and $(J,A)$-based Explanations ------------------------------------------------- The next step towards the definition of the concept of justification requires enriching the general concept of e-graph with explanations of truth values of atoms that are derived from the rules of the program. A *Local Consistent Explanation (LCE)* describes one step of justification for a literal. Note that our notion of local consistent explanation is similar in spirit, but different in practice, from the analogous definition used in [@PemmasaniGDRR04; @RoychoudhuryRR00]. It describes the possible local reasons for the truth/falsity of a literal. If $a$ is true, the explanation contains those bodies of the rules for $a$ that are satisfied by $I$. If $a$ is false, the explanation contains sets of literals that are false in $I$ and falsify all rules for $a$. The construction of an LCE is performed w.r.t. a possible interpretation and a set of atoms $U$—the latter contains atoms that are automatically assumed to be false, without the need of justifying them. The need for this last component (to be further elaborated later in the paper) derives from the practice of computing answer sets, where the truth value of certain atoms is first guessed and then later verified. \[lcedef\] Let $P$ be a program, $b$ be an atom, $J$ a possible interpretation, $U$ a set of atoms (*assumptions*), and $S \subseteq {\cal A} \cup \naf{\cal A} \cup \{assume,\top,\bot\}$ a set of literals. We say that 1. $S$ is *a* local consistent explanation of $b^+$ w.r.t. 
$(J,U)$, if $b \in J^+$ and - $S = \{assume\}$, or - $S \cap {\cal A} \subseteq J^+$, $\{c \mid \naf c \in S\} \subseteq J^- \cup U$, and there is a rule $r$ in $P$ such that $head(r) = b$ and $S = body(r)$; for convenience, we write $S = \{\top\}$ to denote the case where $body(r) = \emptyset$. 2. $S$ is a local consistent explanation of $b^-$ w.r.t. $(J,U)$ if $b \in J^- \cup U$ and - $S = \{assume\}$; or - $S \cap {\cal A} \subseteq J^- \cup U$, $\{c \mid \naf c \in S\} \subseteq J^+ $, and $S$ is a minimal set of literals such that for every rule $r \in P$, if $head(r) = b$, then $pos(r) \cap S \ne \emptyset$ or $neg(r) \cap \{c \mid \naf c \in S\} \ne \emptyset$; for convenience, we write $S = \{\bot\}$ to denote the case $S = \emptyset$. We will denote with $LCE^p_P(b,J,U)$ the set of all the LCEs of $b^+$ w.r.t. $(J,U)$, and with $LCE^n_P(b,J,U)$ the set of all the LCEs of $b^-$ w.r.t. $(J,U)$. Observe that $U$ is the set of atoms that are assumed to be false. For this reason, negative LCEs are defined for elements $J^- \cup U$ but positive LCEs are defined only for elements in $J^+$. We illustrate this definition in a series of examples. \[ex3\] Let $P$ be the program: $$\begin{array}{lclclcl} \texttt{p} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{q},\:\: \texttt{r}. & \hspace{2cm} & \texttt{q}. \\ \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{r}. & & \texttt{r}. \end{array}$$ The program admits only one answer set $M=\langle \{\texttt{p},\texttt{q},\texttt{r}\},\emptyset\rangle$. The LCEs for the atoms of this program w.r.t. $(M,\emptyset)$ are: $$\begin{array}{lcl} LCE_P^p(\texttt{p},M,\emptyset) = \{ \{\texttt{q},\texttt{r}\},\{assume\} \} \\ LCE_P^p(\texttt{q},M,\emptyset)= \{\{\top\}, \{\texttt{r}\},\{assume\}\}\\ LCE_P^p(\texttt{r},M,\emptyset) = \{\{\top\},\{assume\}\} \end{array}$$$\Box$ The above example shows a program with a unique answer set. 
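The positive case of Definition \[lcedef\] lends itself to a direct enumeration. The following Python sketch (our own encoding, with rules as $(head, pos, neg)$ triples and $\{\top\}$ rendered as `{"top"}`) collects the positive LCEs of an atom; it covers only case 1 of the definition, not the minimization involved in the negative case.

```python
# Sketch of positive-LCE enumeration: for b in J+, besides {assume}, each
# rule body for b whose positive part lies in J+ and whose negated atoms
# lie in J- ∪ U is an LCE; an empty satisfied body is reported as {"top"}.

def positive_LCEs(P, b, J, U):
    Jp, Jm = J
    if b not in Jp:
        return set()                      # positive LCEs exist only for b in J+
    lces = {frozenset({"assume"})}
    for h, pos, neg in P:
        if h == b and pos <= Jp and neg <= (Jm | U):
            body = pos | {("not", a) for a in neg}
            lces.add(frozenset(body) if body else frozenset({"top"}))
    return lces
```

On the program of Example \[ex3\] with $M=\langle \{\texttt{p},\texttt{q},\texttt{r}\},\emptyset\rangle$, this reproduces the sets listed above, e.g. $LCE_P^p(\texttt{q},M,\emptyset)= \{\{\top\}, \{\texttt{r}\},\{assume\}\}$.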
The next example discusses the definition in a program with more than one answer set and an empty well-founded model. It also highlights the difference between the positive and negative LCEs for atoms given a partial interpretation and a set of assumptions. \[ex6\] Let $P$ be the program: $$\begin{array}{lclclcl} \texttt{p} & {\mbox{$\: {\tt : \!\! - }\:$}}& \naf \texttt{q}. & \hspace{2cm} & \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \naf \texttt{p}. \end{array}$$ Let us consider the partial interpretation $M = \langle\{\texttt{p}\},\emptyset\rangle$. The following are LCEs w.r.t. $(M,\emptyset)$: $$\begin{array}{l} LCE_P^p(\texttt{p},M,\emptyset) = \{\{assume\}\} \\ LCE_P^n(\texttt{q},M,\emptyset)= LCE_P^p(\texttt{q},M,\emptyset) = \emptyset \end{array}$$ The above sets are the explanations for the truth values of $\texttt{p}$ and $\texttt{q}$ with respect to $M$ and the empty set of assumptions. Thus, the only explanation for $\texttt{p}$ being true is that it is assumed to be true, since the only way to derive $\texttt{p}$ to be true is to use the first rule and nothing is assumed to be false, i.e., $not \:\texttt{q}$ is not true. On the other hand, since $\texttt{q} \not\in M^- \cup \emptyset$, there is no explanation for $\texttt{q}$ being false. Likewise, because $\texttt{q} \not\in M^+$, there is no positive LCE for $\texttt{q}$ w.r.t. $(M,\emptyset)$. The LCEs w.r.t. $(M,\{\texttt{q}\})$ are: $$\begin{array}{l} LCE_P^p(\texttt{p},M,\{\texttt{q}\}) = \{ \{assume\}, \{\naf \texttt{q}\}\}\\ LCE_P^n(\texttt{q},M,\{\texttt{q}\}) = \{\{assume\}, \{\naf \texttt{p}\}\} \end{array}$$ Assuming that $\texttt{q}$ is false leads to one additional explanation for $\texttt{p}$ being true. Furthermore, there are now two explanations for $\texttt{q}$ being false. The first one is that it is assumed to be false and the second one satisfies the second condition in Definition \[lcedef\].
Consider the complete interpretation $M'=\langle \{\texttt{p}\},\{\texttt{q}\} \rangle$. The LCEs w.r.t. $(M',\emptyset)$ are: $$\begin{array}{l} LCE_P^p(\texttt{p},M',\emptyset) = \{ \{assume\}, \{\naf \texttt{q}\}\}\\ LCE_P^n(\texttt{q},M',\emptyset) = \{\{assume\}, \{\naf \texttt{p}\}\} \end{array}$$ $\Box$ The next example uses a program with a non-empty well-founded model. \[ex5\] Let $P$ be the program: $$\begin{array}{lclclclclcl} \texttt{a} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{f},\: not\:\texttt{b}.& \hspace{1cm} & \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e},\: not\:\texttt{a}. & \hspace{1cm} & \texttt{e} . \\ \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}. && \texttt{d} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{c}, \:\texttt{e}. && \texttt{c} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{d},\:\texttt{f}. \end{array}$$ This program has the answer sets: $$\begin{array}{lcr} M_1 = \langle\{\texttt{f},\texttt{e},\texttt{b}\},\{\texttt{a},\texttt{c},\texttt{d}\}\rangle & \hspace{1cm}& M_2 = \langle \{\texttt{f},\texttt{e},\texttt{a}\},\{\texttt{c},\texttt{b},\texttt{d}\}\rangle\end{array}$$ Observe that the well-founded model of this program is $\langle W^+, W^- \rangle = \langle \{\texttt{e},\texttt{f}\}, \{\texttt{c},\texttt{d}\}\rangle$. The following are LCEs w.r.t. the answer set $M_1$ and the empty set of assumptions (those for $(M_2,\emptyset)$ have a similar structure): $$\begin{array}{l} LCE_P^n(\texttt{a},M_1,\emptyset)=\{\{\naf \texttt{b}\},\{assume\}\} \\ LCE_P^p(\texttt{b},M_1,\emptyset)=\{\{\texttt{e},\naf \texttt{a}\},\{assume\}\}\\ LCE_P^p(\texttt{e},M_1,\emptyset) = \{\{\top\},\{assume\}\} \\ LCE_P^p(\texttt{f},M_1,\emptyset)=\{\{\texttt{e}\},\{assume\}\} \\ LCE_P^n(\texttt{d},M_1,\emptyset) = \{\{\texttt{c}\},\{assume\}\} \\ LCE_P^n(\texttt{c},M_1,\emptyset)=\{\{\texttt{d}\},\{assume\}\} \end{array}$$ $\Box$ Let us digress briefly to discuss some complexity issues related to the existence of LCEs.
First, checking whether or not there is an LCE of $b^+$ w.r.t. $(J,U)$ is equivalent to checking whether or not the program contains a rule $r$ whose head is $b$ and whose body is satisfied by the interpretation $\langle J^+,J^- \cup U\rangle$. This leads to the following observation. Given a program $P$, a possible interpretation $J$, a set of assumptions $U$, and an atom $b$, determining whether or not there is an LCE $S$ of $b^+$ w.r.t. $(J,U)$ such that $S \ne \{assume\}$ can be done in time polynomial in the size of $P$. In order to determine whether or not there exists an LCE of $b^-$ w.r.t. $(J,U)$, we need to find a minimal set of literals $S$ that satisfies the second condition of Definition \[lcedef\]. This can also be accomplished in time polynomial in the size of $P$. In fact, let $P_b$ be the set of rules in $P$ whose head is $b$. Furthermore, for a rule $r$, let $$S_r(J,U) = \{a \mid a \in pos(r) \cap (J^- \cup U)\} \cup \{not \: a \mid a \in J^+ \cap neg(r)\}.$$ Intuitively, $S_r(J,U)$ is the maximal set of literals that falsifies the rule $r$ w.r.t. $(J,U)$. To find an LCE for $b^-$, it is necessary to have $S_r(J,U) \ne \emptyset$ for every $r \in P_b$. Clearly, computing $S_r(J,U)$ for $r \in P_b$ can be done in polynomial time in the size of $P$. Finding a minimal set $S$ such that $S \cap S_r \ne \emptyset$ for every $r \in P_b$ can be done by scanning through the set $P_b$ and adding to $S$ (initially set to $\emptyset$) an arbitrary element of $S_r(J,U)$ if $S \cap S_r(J,U) = \emptyset$. This leads to the following observation. Given a program $P$, a possible interpretation $J$, a set of assumptions $U$, and an atom $b$, determining whether there exists an LCE $S$ of $b^-$ w.r.t. $(J,U)$ such that $S \ne \{assume\}$ can be done in time polynomial in the size of $P$. We are now ready to instantiate the notion of e-graph by forcing the edges of the e-graph to represent encodings of local consistent explanations of the corresponding atoms.
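The greedy scan just described can be sketched in Python as follows (our own illustrative rendering, with rules again encoded as $(head, pos, neg)$ triples; the names `s_r` and `negative_lce` are ours):

```python
# Illustrative sketch of the greedy construction of a negative LCE.

def s_r(rule, j_plus, j_minus, u):
    """S_r(J,U): the maximal set of literals falsifying rule r w.r.t. (J,U)."""
    _head, pos, neg = rule
    return ({a for a in pos if a in j_minus or a in u} |
            {"not " + a for a in neg if a in j_plus})

def negative_lce(program, b, j_plus, j_minus, u):
    """Scan the rules for b, adding one falsifier whenever the rule is not
    yet covered; return None if some rule for b cannot be falsified."""
    s = set()
    for rule in (r for r in program if r[0] == b):
        falsifiers = s_r(rule, j_plus, j_minus, u)
        if not falsifiers:
            return None
        if not (s & falsifiers):
            s.add(min(falsifiers))    # pick one element deterministically
    return s

# Program of Example [ex5] with answer set M_1 = <{f,e,b}, {a,c,d}>:
prog5 = [("a", {"f"}, {"b"}), ("b", {"e"}, {"a"}), ("e", set(), set()),
         ("f", {"e"}, set()), ("d", {"c", "e"}, set()), ("c", {"d", "f"}, set())]
m1_plus, m1_minus = {"f", "e", "b"}, {"a", "c", "d"}
print(negative_lce(prog5, "d", m1_plus, m1_minus, set()))   # -> {'c'}
print(negative_lce(prog5, "a", m1_plus, m1_minus, set()))   # -> {'not b'}
```

The two calls reproduce the non-assume members of $LCE_P^n(\texttt{d},M_1,\emptyset)$ and $LCE_P^n(\texttt{a},M_1,\emptyset)$ from Example \[ex5\].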
To select an e-graph as an acceptable explanation, we need two additional components: the current interpretation ($J$) and the collection ($U$) of elements that have been introduced in the interpretation without any “supporting evidence”. An e-graph based on $(J,U)$ is defined next. \[ja-based\] Let $P$ be a program, $J$ a possible interpretation, $U$ a set of atoms, and $b$ an element in ${\cal A}^p \cup {\cal A}^n$. A $(J,U)$-*based explanation graph* $G=(N,E)$ of $b$ is an e-graph such that - [**(Relevance)**]{} every node $c \in N$ is reachable from $b$ - [**(Correctness)**]{} for every $c \in N \setminus \{assume,\top,\bot\}$, $support(c,G)$ is an LCE of $c$ w.r.t. $(J,U)$ The two additional conditions we impose on the e-graph force the graph to be connected w.r.t. the element $b$ we are justifying, and force the selected nodes and edges to reflect local consistent explanations for the various elements. The next condition we impose on the explanation graph is aimed at ensuring that no positive cycles are present. The intuition is that atoms that are true in an answer set should have a non-cyclic support for their truth values. Observe that the same does not happen for elements that are false—as in the case of elements belonging to unfounded sets [@apt94a]. A $(J,U)$-based e-graph $(N,E)$ is [*safe*]{} if $\forall b^+ \in N$, $(b^+,b^+)\not\in E^{*,+}$. Consider the e-graphs in Figure \[fig1\], for the program of Example \[ex5\]. Neither the e-graph [(i)]{} nor the e-graph [(ii)]{} of $\texttt{a}^+$ is a $(M_1,\{\texttt{c},\texttt{d}\})$-based e-graph of $\texttt{a}^+$, since $support(\texttt{b},G)=\{assume\}$ in both cases, and this does not represent a valid LCE for $\texttt{b}^-$ (since $\texttt{b} \notin M_1^-\cup \{\texttt{c},\texttt{d}\}$). Observe, on the other hand, that they are both acceptable $(M_2,\{\texttt{b},\texttt{c},\texttt{d}\})$-based e-graphs of $\texttt{a}^+$.
The e-graph of $\texttt{c}^+$ (the graph [(iii)]{}) is neither a $(M_1,\{\texttt{c},\texttt{d}\})$-based nor a $(M_2,\{\texttt{b},\texttt{c},\texttt{d}\})$-based e-graph of $\texttt{c}^+$, while the e-graph of $\texttt{c}^-$ (graph [(iv)]{}) is a $(M_1,\{\texttt{c},\texttt{d}\})$-based and a $(M_2,\{\texttt{b},\texttt{c},\texttt{d}\})$-based e-graph of $\texttt{c}^-$. Observe also that all the graphs are safe.$\Box$

Off-Line Justifications {#off}
=======================

*Off-line* justifications are employed to characterize the “reason” for the truth value of an atom w.r.t. a given answer set $M$. The definition will represent a refinement of the $(M,A)$-based explanation graph, where $A$ will be selected according to the properties of the answer set $M$. Off-line justifications will rely on the assumption that $M$ is a *complete* interpretation. Let us start with a simple observation. If $M$ is an answer set of a program $P$, and $WF_P$ is the well-founded model of $P$, then it is known that $WF_P^+ \subseteq M^+$ and $WF_P^- \subseteq M^-$ [@apt94a]. Furthermore, we observe that the content of $M$ is uniquely determined by the truth values assigned to the atoms in $V=NANT(P) \setminus (WF_P^+ \cup WF_P^-)$, i.e., the atoms that - appear in negative literals in the program, and - whose truth value is not determined by the well-founded model. We are interested in the subsets of $V$ with the following property: if all the elements in the subset are assumed to be false, then the truth value of all other atoms in $\cal A$ is uniquely determined and leads to the desired answer set. We call these subsets the *assumptions* of the answer set. Let us characterize this concept more formally. Let $P$ be a program and $M$ be an answer set of $P$. The *tentative assumptions* of $P$ w.r.t.
$M$ (denoted by ${\cal TA}_P(M)$) are defined as: $${\cal TA}_P(M) = \{ a \:|\: a \in NANT(P)\:\wedge\: a \in M^- \:\wedge\: a \not\in (WF_P^+\cup WF_P^-)\}$$ The negative reduct of a program $P$ w.r.t. a set of atoms $U$ is a program obtained from $P$ by forcing all the atoms in $U$ to be false. \[nred\] Let $P$ be a program, $M$ an answer set of $P$, and $U \subseteq {\cal TA}_P(M)$ a set of tentative assumption atoms. The *negative reduct* of $P$ w.r.t. $U$, denoted by $NR(P,U)$, is the set of rules: $$NR(P,U) = P \setminus \{\: r \:|\: head(r) \in U\}.$$ Let us consider the program p :- not q. q :- not p. r :- p, s. t :- q, u. s. The well-founded model for this program is $\langle \{\texttt{s}\},\{\texttt{u},\texttt{t}\}\rangle$. The program has two answer sets, $M_1=\langle \{\texttt{p,s,r}\},\{\texttt{t,u,q}\}\rangle$ and $M_2 = \langle \{\texttt{q,s}\},\{\texttt{p,r,t,u}\}\rangle$. The set of tentative assumptions for this program w.r.t. $M_1$ is $\{\texttt{q}\}$. If we consider the set $\{\texttt{q}\}$, the negative reduct of the program is the set of rules p :- not q. r :- p, s. t :- q, u. s. $\Box$ We are now ready to introduce the proper concept of assumptions—these are those tentative assumptions that are sufficient to allow the reconstruction of the answer set. \[assumption\] Let $P$ be a program and $M$ be an answer set of $P$. An *assumption* w.r.t. $M$ is a set of atoms $U$ satisfying the following properties: - $U \subseteq {\cal TA}_P(M)$, and - the well-founded model of $NR(P,U)$ is equal to $M$—i.e., $$WF_{NR(P,U)} = M.$$ We will denote with $Assumptions(P,M)$ the set of all assumptions of $P$ w.r.t. $M$. A *minimal assumption* is an assumption that is minimal w.r.t. the set inclusion operator. We will denote with $\mu Assumptions(P,M)$ the set of all the minimal assumptions of $P$ w.r.t. $M$. An important observation we can make is that $Assumptions(P,M)$ is not an empty set, since the complete set ${\cal TA}_P(M)$ is an assumption.
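Both constructions are easy to mechanize. The Python sketch below is our own illustration: rules are $(head, pos, neg)$ triples, and $NANT(P)$ together with the well-founded model are supplied precomputed rather than derived:

```python
# Illustrative sketch: tentative assumptions and the negative reduct.

def tentative_assumptions(nant, m_minus, wf_plus, wf_minus):
    """TA_P(M): negated atoms false in M but undecided in WF_P."""
    return {a for a in nant
            if a in m_minus and a not in wf_plus and a not in wf_minus}

def negative_reduct(program, u):
    """NR(P,U): drop every rule whose head belongs to U."""
    return [r for r in program if r[0] not in u]

# Program above:  p :- not q.   q :- not p.   r :- p, s.   t :- q, u.   s.
prog = [("p", set(), {"q"}), ("q", set(), {"p"}), ("r", {"p", "s"}, set()),
        ("t", {"q", "u"}, set()), ("s", set(), set())]
ta = tentative_assumptions({"p", "q"}, {"t", "u", "q"}, {"s"}, {"u"})
print(ta)                                            # -> {'q'}
print([r[0] for r in negative_reduct(prog, ta)])     # -> ['p', 'r', 't', 's']
```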
\[prop1\] Given a program $P$ and an answer set $M$ of $P$, the well-founded model of the program $NR(P,{\cal TA}_P(M))$ is equal to $M$. The proof is given in Appendix A. Let us consider the program of Example \[ex6\]. The interpretation $M= \langle \{\texttt{p}\}, \{\texttt{q}\}\rangle$ is an answer set. For this program we have: $$\begin{array}{lcl} WF_P & = & \langle \emptyset, \emptyset \rangle\\ {\cal TA}_P(\langle \{\texttt{p}\}, \{\texttt{q}\}\rangle ) & = & \{\texttt{q}\} \end{array}$$ Observe that $NR(P,\{\texttt{q}\}) = \{ \texttt{p} {\mbox{$\: {\tt : \!\! - }\:$}}not\: \texttt{q}\}$. The well-founded model of this program is $\langle \{\texttt{p}\}, \{\texttt{q}\}\rangle$, which is equal to $M$. Thus, $\{\texttt{q}\}$ is an assumption of $P$ w.r.t. $M$. In particular, one can see that this is the only assumption we can have.$\Box$ \[ex-wf\] Let us consider the following program $P$: $$\begin{array}{lclclclclcl} \texttt{a} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{f}, \:\:not\:\texttt{b}. & \hspace{.5cm} & \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}, \:\:not\:\texttt{a}. & \hspace{.5cm} & \texttt{e} .\\ \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}. & & \texttt{d} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{c},\:\: \texttt{e}. & & \texttt{c} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{d}, \:\texttt{f}, \:not\: \texttt{k}.\\ \texttt{k} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}. \end{array}$$ The interpretation $M_1=\langle \{\texttt{f},\texttt{e},\texttt{b}\}, \{\texttt{a},\texttt{c},\texttt{d},\texttt{k}\}\rangle$ is an answer set of the program. In particular: $$\begin{array}{lcl} WF_P & = & \langle \{\texttt{e},\texttt{f}\}, \{\texttt{d},\texttt{c}\}\rangle\\ {\cal TA}_P(\langle \{\texttt{f},\texttt{e},\texttt{b}\},\{\texttt{a},\texttt{c},\texttt{d},\texttt{k}\}\rangle) & = & \{\texttt{a},\texttt{k}\} \end{array}$$ The program $NR(P,\{\texttt{a}\})$ is: $$\begin{array}{lclclcl} \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e},\: not\: \texttt{a}.
& \hspace{1cm} & \texttt{e} .\\ \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}. & & \texttt{d} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{c}, \:\texttt{e}.\\ \texttt{c} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{d}, \:\texttt{f},\: not\:\texttt{k}. & & \texttt{k} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}. \end{array}$$ The well-founded model of this program is $\langle \{\texttt{e},\texttt{f},\texttt{b}\},\{\texttt{a},\texttt{c},\texttt{d},\texttt{k}\}\rangle$. Thus, $\{\texttt{a}\}$ is an assumption w.r.t. $M_1$. Consider also the program $NR(P,\{\texttt{a},\texttt{k}\})$: $$\begin{array}{lclclcl} \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e},\: not\: \texttt{a}. & \hspace{1cm} & \texttt{e} .\\ \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}. & & \texttt{d} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{c},\:\texttt{e}.\\ \texttt{c} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{d},\texttt{f},\: not\:\texttt{k}. \end{array}$$ The well-founded model of this program is also $\langle \{\texttt{e},\texttt{f},\texttt{b}\},\{\texttt{a},\texttt{c},\texttt{d},\texttt{k}\}\rangle$, thus making $\{\texttt{a},\texttt{k}\}$ another assumption. Note that this second assumption is not minimal.$\Box$ We will now specialize e-graphs to the case of answer sets, where only false elements can be used as assumptions. Let $P$ be a program, $J$ a partial interpretation, $U$ a set of atoms, and $b$ an element in ${\cal A}^p \cup {\cal A}^n$. An *off-line explanation graph* $G=(N,E)$ of $b$ w.r.t. $J$ and $U$ is a $(J,U)$-based e-graph of $b$ satisfying the following additional conditions: - there exists no $p^+ \in N$ such that $(p^+,\textit{assume},+) \in E$; and - $(p^-,\textit{assume},-) \in E$ iff $p \in U$. We will denote with ${\cal E}(b,J,U)$ the set of all off-line explanation graphs of $b$ w.r.t. $J$ and $U$.
The first condition ensures that true elements cannot be treated as assumptions, while the second condition ensures that only assumptions are justified as such in the graph. Let $P$ be a program, $M$ an answer set, $U \in Assumptions(P,M)$, and $a \in {\cal A}^p\cup {\cal A}^n$. An *off-line justification* of $a$ w.r.t. $M$ and $U$ is an element $(N,E)$ of ${\cal E}(a,M,U)$ which is safe. If $M$ is an answer set and $x\in M^+$ (resp. $x \in M^-$), then $G$ is an off-line justification of $x$ w.r.t. $M$ and the assumption $U$ iff $G$ is an off-line justification of $x^+$ (resp. $x^-$) w.r.t. $M$ and $U$. \[ex2\] Let us consider the program in Example \[ex5\]. We have that $NANT(P) = \{b, a\}$. The assumptions for this program are: $$Assumptions(P,M_1) = \{ \{a\} \}\:\:\:\textit{ and }\:\:\: Assumptions(P,M_2) = \{ \{b\} \}.$$ The off-line justifications for atoms in $M_1$ w.r.t. $M_1$ and $\{a\}$ are shown in Figure \[fig2\]. Justifications are built by assembling items from the LCEs of the various atoms and avoiding the creation of positive cycles in the justification of true atoms. Also, the justification is built w.r.t. a chosen set of assumptions ($A$), whose elements are all assumed false. In general, an atom may admit multiple justifications, even w.r.t. the same assumptions. The following lemma shows that elements in $WF_P$ can be justified without negative cycles and assumptions. \[good\] Let $P$ be a program, $M$ an answer set, and $WF_P$ the well-founded model of $P$. Each atom $a\in WF_P$ has a justification w.r.t. $M$ and $\emptyset$ which does not contain any negative cycle. From the definition of assumption and from the previous lemma we can infer that a justification free of negative cycles can be built for every atom. \[propimp\] Let $P$ be a program and $M$ an answer set. For each atom $a$, there is an off-line justification w.r.t. $M$ and $M^-\setminus WF_P^-$ which does not contain negative cycles. 
Proposition \[propimp\] underlines an important property—the fact that all true elements can be justified in a non-cyclic fashion. This makes the justification more natural, reflecting the non-cyclic process employed in constructing the minimal answer set (e.g., using the iterations of $T_P$) and the well-founded model (e.g., using the characterization in [@BrassDFZ01]). This also gracefully extends a similar property satisfied by the justifications under well-founded semantics used in [@RoychoudhuryRR00]. Note that the only cycles possibly present in the justifications are positive cycles associated to (mutually dependent) false elements—this is an unavoidable situation due to the semantic characterization in well-founded and answer set semantics (e.g., unfounded sets). A similar design choice has been made in [@PemmasaniGDRR04; @RoychoudhuryRR00]. Let us reconsider the following program $P$ from Example \[ex-wf\]: $$\begin{array}{lclclclclcl} \texttt{a} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{f}, \:\:not\:\texttt{b}. & \hspace{.5cm} & \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}, \:\:not\:\texttt{a}. & \hspace{.5cm} & \texttt{e} .\\ \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}. & & \texttt{d} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{c},\:\: \texttt{e}. & & \texttt{c} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{d}, \:\texttt{f}, \:not\: \texttt{k}.\\ \texttt{k} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a}. \end{array}$$ together with the answer set $M=\langle \{\texttt{f},\texttt{e},\texttt{b}\}, \{\texttt{a},\texttt{c},\texttt{d},\texttt{k}\}\rangle$. The well-founded model of this program is $$WF_P = \langle \{\texttt{e},\texttt{f}\}, \{\texttt{d},\texttt{c}\}\rangle$$ and the atoms $\texttt{a}$ and $\texttt{k}$ are assumed to be false.
Off-line justifications for $\texttt{b}^+, \texttt{f}^+, \texttt{e}^+$ and for $\texttt{c}^-, \texttt{d}^-, \texttt{a}^-$ with respect to $M$ and $M^- \setminus WF_P^- = \{\texttt{a}, \texttt{k}\}$, which do not contain negative cycles, are the same as those depicted in Figure \[fig2\]. $\texttt{k}^-$ has an off-line justification in which it is connected to $assume$ by a negative edge, as it is assumed to be false. $\Box$ On-Line Justifications for ASP ============================== Off-line justifications provide a “declarative trace” for the truth values of the atoms present in an answer set. The majority of the inference engines for ASP construct answer sets in an incremental fashion, making choices (and possibly undoing them) and declaratively applying the rules in the program. Unexpected results (e.g., failure to produce any answer sets) require a more refined view of computation. One way to address this problem is to refine the notion of justification to make possible the “declarative tracing” of atoms w.r.t. a partially constructed interpretation. This is similar to debugging of imperative languages, where breakpoints can be set and the state of the execution explored at any point during the computation. In this section, we introduce the concept of *on-line justification*, which is generated *during* the computation of an answer set and allows us to justify atoms w.r.t. an incomplete interpretation—that represents an intermediate step in the construction of the answer set. Computation {#subsec-comp} ----------- The concept of on-line justification is applicable to computation models that construct answer sets in an incremental fashion, e.g., [Smodels]{} and [DLV]{} [@sim02; @eiter98a; @GebserKNS07; @AngerGLNS05]. We can view the computation as a sequence of steps, each associated to a partial interpretation. We will focus, in particular, on computation models where the progress towards the answer set is monotonic. \[genc\] Let $P$ be a program. 
A *general computation* is a sequence $M_0, M_1, \dots, M_k$, such that *(i)* : $M_0 = \langle \emptyset, \emptyset\rangle$, *(ii)* : $M_0, \dots, M_{k-1}$ are partial interpretations, and *(iii)* : $M_{i} \sqsubseteq M_{i+1}$ for $i=0,\dots,k-1$. A *general complete computation* is a computation $M_0, \dots, M_k$ such that $M_k$ is an answer set of $P$. In general, we do not require $M_k$—the ending point of the computation—to be a partial interpretation, since we wish to model computations that can also “fail”—i.e., $M_k^+ \cap M_k^- \neq \emptyset$. This is, for example, what might happen during a [Smodels]{} computation—whenever the [Conflict]{} function succeeds [@sim02]. We will refer to a pair of sets of atoms as a [*possible interpretation*]{} (or [*p-interpretation*]{} for short). Clearly, each partial interpretation is a p-interpretation, but not vice versa. Abusing the notation, we use $J^+$ and $J^-$ to indicate the first and second component of a p-interpretation $J$; moreover, $I \sqsubseteq J$ denotes that $I^+ \subseteq J^+$ and $I^- \subseteq J^-$. Our objective is to associate a form of justification to each intermediate step $M_i$ of a general computation. Ideally, we would like the justifications associated to each $M_i$ to explain truth values in the “same way” as in the final off-line justification. Since the computation model might rely on guessing some truth values, $M_i$ might not contain sufficient information to develop a valid justification for each element in $M_i$. We will identify those atoms for which a justification can be constructed given $M_i$. These atoms describe a p-interpretation $D_i \sqsubseteq M_i$. The computation of $D_i$ is defined based on the two operators, $\Gamma$ and $\Delta$, which will respectively compute $D_i^+$ and $D_i^-$. Let us start with some preliminary definitions. Let $P$ be a program and $I$ be a p-interpretation. A set of atoms $S$ is called a [*cycle w.r.t. 
I*]{} if, for every $a \in S$ and for each $r \in P$ such that $head(r) = a$, we have that one of the following holds: - $pos(r) \cap I^- \neq \emptyset$ (rule is falsified by $I$), or - $neg(r) \cap I^+ \neq \emptyset$ (rule is falsified by $I$), or - $pos(r) \cap S \ne \emptyset$ (rule is in a cycle with elements of $S$). We can observe that, if $I$ is an interpretation, $S$ is a cycle w.r.t. $I$, and $M$ is an answer set with $I \sqsubseteq M$, then $S \subseteq M^-$—since the elements of $S$ are either falsified by the interpretation (and, thus, by $M$) or they are part of an unfounded set. The set of cycles w.r.t. $I$ is denoted by $cycles(I)$. Furthermore, for every element $a \in {\cal A}^p \cup {\cal A}^n$, let $PE(a,I)$ be the set of local consistent explanations of $a$ w.r.t. $I$ and $\emptyset$—i.e., LCEs that do not require any assumptions and that build on the interpretation $I$. We are now ready to define the operators that will compute the $D_i$ subset of the p-interpretation $M_i$. Let $P$ be a program and $I \sqsubseteq J$ be two p-interpretations. We define $$\begin{array}{lll} \Gamma_I(J) & = & I^+ \cup \{head(r) \in J^+ \mid r\in P, I \models body(r)\} \\ \Delta_I(J) & = & I^- \:\cup\: \{a \in J^-\mid PE(a^-,I) \ne \emptyset\} \:\cup\: \bigcup \{S \mid S \in cycles(I), S \subseteq J^- \} \\ \end{array}$$ Intuitively, for $I \sqsubseteq J$, $\Gamma_I(J)$ is a set of atoms that are true in $J$ and will remain true in every answer set extending $J$, if $J$ is a partial interpretation. The set $\Delta_I(J)$ contains atoms that are false in $J$ and in each answer set that extends $J$.
In particular, if $I$ is the set of *“justifiable”* atoms—i.e., atoms for which we can construct a justification—and $J$ is the result of the current computation step, then we have that $\langle \Gamma_I(J), \Delta_I(J) \rangle$ is a new interpretation satisfying the following two properties: - $I \sqsubseteq \langle \Gamma_I(J), \Delta_I(J) \rangle \sqsubseteq J$, and - it is possible to create a justification for all elements in $\langle \Gamma_I(J), \Delta_I(J) \rangle$. Observe that it is not necessarily true that $\Gamma_I(J)=J^+$ and $\Delta_I(J)=J^-$. This means that there may be elements in the current step of computation for which it is not possible (yet) to construct a justification. This reflects the practice of guessing literals and propagating these guesses in the computation of answer sets, implemented by several solvers (based on variations of the Davis-Putnam-Logemann-Loveland procedure [@DavisLL62]). We are now ready to specify how the set $D_i$ is computed. Let $WF_P = \langle W^+,W^- \rangle$ be the well-founded model of $P$ and let $J$ be a p-interpretation.[^2] $$\begin{array}{lllllllll} \Gamma^0(J) & = & \Gamma_{\langle \emptyset,\emptyset\rangle}(J) & \hspace{.65cm}& & \Delta^0(J) & = & {\cal TA}_P(J) \cup \Delta_{\langle \emptyset, \emptyset \rangle} (J) \\ \Gamma^{i+1}(J) & = &\Gamma_{I_i}(J) & & & \Delta^{i+1}(J) & = & \Delta_{I_i}(J)\\ \multicolumn{3}{l}{\small (\textnormal{where } I_i = \langle \Gamma^{i}(J), \Delta^{i}(J) \rangle)}\\ \end{array}$$ Intuitively, 1. The iteration process starts by collecting the facts of $P$ ($\Gamma^0$) and all those elements that are false either because there are no defining rules for them or because they have been chosen to be false in the construction of $J$. All these elements can be easily provided with justifications. 2. The successive iterations expand the set of known justifiable elements from $J$ using $\Gamma$ and $\Delta$. 
Finally, we repeat the iteration process until a fixpoint is reached: $$\Gamma(J) = \bigcup_{i=0}^\infty \Gamma^i(J) \:\:\:\:\textnormal{ and } \:\:\:\: \Delta(J) = \bigcup_{i=0}^\infty \Delta^i(J)$$ Because $\Gamma^i(J) \subseteq \Gamma^{i+1}(J) \subseteq J^+ $ and $\Delta^i(J) \subseteq \Delta^{i+1}(J) \subseteq J^-$ (recall that $I \sqsubseteq J$), we know that both $\Gamma(J)$ and $\Delta(J)$ are well-defined. We can prove the following: \[gammadelta\] For a program $P$, we have that: - $\Gamma$ and $\Delta$ maintain the consistency of $J$, i.e., if $J$ is an interpretation, then $\langle \Gamma(J), \Delta(J) \rangle$ is also an interpretation; - $\Gamma$ and $\Delta$ are monotone w.r.t. the argument $J$, i.e., if $J \sqsubseteq J'$ then $\Gamma(J) \subseteq \Gamma(J')$ and $\Delta(J) \subseteq \Delta(J')$; - $\Gamma(WF_P) = WF_P^+ $ and $\Delta(WF_P) = WF_P^-$; and - If $M$ is an answer set of $P$, then $\Gamma(M) = M^+ $ and $\Delta(M) = M^-$. We next introduce the notion of on-line explanation graph. Let $P$ be a program, $A$ a set of atoms, $J$ a p-interpretation, and $a \in {\cal A}^p \cup {\cal A}^n$. An *on-line explanation graph* $G=(N,E)$ of $a$ w.r.t. $J$ and $A$ is a $(J,A)$-based e-graph of $a$. In particular, if $J$ is an answer set of $P$, then any off-line e-graph of $a$ w.r.t. $J$ and $A$ is also an on-line e-graph of $a$ w.r.t. $J$ and $A$. Observe that $\Gamma^0(J)$ contains the set of facts of $P$ that belong to $J^+$, while $\Delta^0(J)$ contains the set of atoms without defining rules and atoms belonging to positive cycles of $P$. As such, it is easy to see that, for each atom $a$ in $\langle \Gamma^0(J), \Delta^0(J)\rangle$, we can construct an e-graph for $a^+$ or $a^-$ whose nodes belong to $\Gamma^0(J) \cup \Delta^0(J)$.
Moreover: - if $a \in \Gamma^{i+1}(J) \setminus \Gamma^{i}(J)$, then an e-graph with nodes (except $a^+$) belonging to $\Gamma^i(J) \cup \Delta^i(J)$ can be constructed; - if $a \in \Delta^{i+1}(J) \setminus \Delta^{i}(J)$, an e-graph with nodes (except $a^-$) belonging to $\Gamma^{i+1}(J) \cup \Delta^{i+1}(J)$ can be constructed. This leads to the following lemma. \[just-free\] Let $P$ be a program, $J$ a p-interpretation, and $A = {\cal TA}_P(J)$. The following properties hold: - For each atom $a \in \Gamma(J)$ (resp. $a \in \Delta(J)$), there exists a *safe* off-line e-graph of $a^+$ (resp. $a^-$) w.r.t. $J$ and $A$; - for each atom $a \in J^+ \setminus \Gamma(J)$ (resp. $a \in J^- \setminus \Delta(J)$) there exists an on-line e-graph of $a^+$ (resp. $a^-$) w.r.t. $J$ and $A$. We will now discuss how the above lemma can be utilized in defining a notion called [*on-line justification*]{}. To this end, we associate to each partial interpretation $J$ a snapshot $S(J)$: \[snapdef\] A *snapshot* of a p-interpretation $J$ is a tuple $S(J) = \langle \textnormal{\em Off}(J), On(J), D \rangle$, where

- $D = \langle \Gamma(J), \Delta(J) \rangle$;

- for each $a$ in $\Gamma(J)$, $\textnormal{\em Off}(J)$ contains exactly one safe off-line e-graph of $a^+$ w.r.t. $J$ and ${\cal TA}_P(J)$;

- for each $a$ in $\Delta(J)$, $\textnormal{\em Off}(J)$ contains exactly one safe off-line e-graph of $a^-$ w.r.t. $J$ and ${\cal TA}_P(J)$;

- for each $a \in J^+\setminus\Gamma(J)$, $On(J)$ contains exactly one on-line e-graph of $a^+$ w.r.t. $J$ and ${\cal TA}_P(J)$;

- for each $a \in J^-\setminus \Delta(J)$, $On(J)$ contains exactly one on-line e-graph of $a^-$ w.r.t. $J$ and ${\cal TA}_P(J)$.

Given a computation $M_0, M_1, \dots, M_k$, an *on-line justification* of the computation is a sequence of snapshots $S(M_0), S(M_1), \dots, S(M_k)$.
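To make the construction of the component $D = \langle \Gamma(J), \Delta(J)\rangle$ concrete, here is a Python rendering of the fixpoint. It is our own simplification: the $cycles(I)$ part of $\Delta$ is omitted and the tentative assumptions are passed in explicitly, so some unfounded atoms of $J^-$ may be missed:

```python
# Simplified illustrative sketch of the Gamma/Delta fixpoint.
# Rules are (head, pos, neg) triples; interpretations are pairs of sets.

def gamma_step(program, i_plus, i_minus, j_plus):
    """Gamma_I(J): I+ plus the heads in J+ whose body is satisfied by I."""
    out = set(i_plus)
    for head, pos, neg in program:
        if head in j_plus and pos <= i_plus and neg <= i_minus:
            out.add(head)
    return out

def delta_step(program, i_plus, i_minus, j_minus):
    """Delta_I(J) without the cycles(I) component: I- plus the atoms of J-
    all of whose rules are falsified by I (atoms with no rules included)."""
    out = set(i_minus)
    for a in j_minus:
        rules = [r for r in program if r[0] == a]
        if all(pos & i_minus or neg & i_plus for _h, pos, neg in rules):
            out.add(a)
    return out

def gamma_delta(program, j_plus, j_minus, assumptions=frozenset()):
    """Iterate to the fixpoint <Gamma(J), Delta(J)>, seeding Delta^0 with
    the given assumptions (standing in for TA_P(J))."""
    ip, im = set(), set(assumptions)
    while True:
        nxt = (gamma_step(program, ip, im, j_plus),
               delta_step(program, ip, im, j_minus))
        if nxt == (ip, im):
            return nxt
        ip, im = nxt

# p :- not q.   q :- not p.   With answer set M = <{p},{q}>:
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(gamma_delta(prog, {"p"}, {"q"}, {"q"}))   # with assumption {q}: ({'p'}, {'q'})
print(gamma_delta(prog, {"p"}, {"q"}))          # no assumptions: (set(), set())
```

The second call illustrates the earlier observation: without assuming $\texttt{q}$ false, no element of $M$ is justifiable yet.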
It is worth pointing out that an on-line justification can be obtained in answer set solvers employing the computation model described in Definition \[genc\]. This will be demonstrated in the next section where we discuss the computation of on-line justifications in the [Smodels]{} system. We next illustrate the concept of an on-line justification. \[rem1\] Observe that the monotonicity of the computation allows us to avoid recomputing $\Gamma$ and $\Delta$ from scratch at every step. In particular, when computing the fixpoint, we can start the iterations from $\Gamma_{\langle \Gamma(M_i),\Delta(M_i)\rangle}$ and $\Delta_{\langle \Gamma(M_i),\Delta(M_i)\rangle}$ and look only at the elements of $\langle M_{i+1}^+\setminus \Gamma(M_i), M_{i+1}^-\setminus \Delta(M_i)\rangle$. Similarly, the computation of $\textnormal{\em Off}(M_{i+1})$ can be made incremental, by simply adding to $\textnormal{\em Off}(M_{i+1})$ the off-line e-graphs for the elements in $\Gamma(M_{i+1})\setminus \Gamma(M_i)$ and $\Delta(M_{i+1})\setminus \Delta(M_i)$. Note also that these new off-line graphs can be constructed reusing the off-line graphs already present in $\textnormal{\em Off}(M_i)$. Let us consider the program $P$ containing $$\begin{array}{lclclclclcl} \texttt{s} &{\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{a},\: not\: \texttt{t}. & \hspace{0.5cm} & \texttt{a} &{\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{f},\: not\: \texttt{b}. & \hspace{0.5cm} & \texttt{b} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e},\: not\: \texttt{a}. \\ \texttt{e}. & & & & \texttt{f} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{e}.
\end{array}$$ Two possible general computations of $P$ are $$\begin{array}{llllllll} M^1_0 = \langle \{\texttt{e},\texttt{s}\}, \emptyset \rangle & \hspace{0.2cm}\mapsto \hspace{0.2cm}& M^1_1 = \langle \{\texttt{e},\texttt{s},\texttt{a}\}, \{\texttt{t}\} \rangle & \hspace{0.2cm}\mapsto \hspace{0.2cm} & M^1_2 = \langle \{\texttt{e},\texttt{s},\texttt{a},\texttt{f}\}, \{\texttt{t},\texttt{b}\} \rangle \\ M^2_0 = \langle \{\texttt{e},\texttt{f}\}, \emptyset \rangle & \multicolumn{1}{c}{\mapsto} & M^2_1 = \langle \{\texttt{e},\texttt{f}\}, \{\texttt{t}\} \rangle & \multicolumn{1}{c}{\mapsto} & M^2_2 = \langle \{\texttt{e},\texttt{f},\texttt{b},\texttt{a}\}, \{\texttt{t},\texttt{a},\texttt{b},\texttt{s}\} \rangle \end{array}$$ The first computation is a complete computation leading to an answer set of $P$ while the second one is not. An on-line justification for the first computation is given next: $$\begin{array}{llllllll} S(M^1_0) & = & \langle X_0, Y_0, \langle \{\texttt{e}\},\emptyset \rangle \rangle \\ S(M^1_1) & = & \langle X_0 \cup X_1, Y_0 \cup Y_1, \langle \{\texttt{e}\},\{\texttt{t}\} \rangle \rangle \\ S(M^1_2) & = & \langle X_0 \cup X_1 \cup X_2, \emptyset, M_2^1 \rangle \\ \end{array}$$ where (for the sake of simplicity we report only the edges of the graphs): $$\begin{array}{lcl} X_0 &= & \{ (\texttt{e}^+,\top,+)\} \\ Y_0 &=& \{ (\texttt{s}^+,assume,+)\}\\ X_1 & = & \{(\texttt{t}^-,\bot,-)\} \\ Y_1 &=& \{(\texttt{a}^+,assume,+)\}\\ X_2 &=& \{ (\texttt{f}^+,\texttt{e}^+,+), (\texttt{s}^+,\texttt{a}^+,+), (\texttt{s}^+,\texttt{t}^-,-), (\texttt{a}^+,\texttt{f}^+,+), (\texttt{a}^+,\texttt{b}^-,-), (\texttt{b}^-,assume,-)\}\\ \end{array}$$ An on-line justification for the second computation is: $$\begin{array}{lcl} S(M^2_0) & = & \langle X_0, Y_0, \langle \{\texttt{e},\texttt{f}\},\emptyset\rangle \rangle \\ S(M^2_1) & = & \langle X_0 \cup X_1, Y_0, \langle \{\texttt{e},\texttt{f}\},\{\texttt{t}\}\rangle \rangle \\ S(M^2_2) & = & \langle X_0 \cup X_1 \cup X_2,
Y_0\cup Y_2, M_2^2 \rangle \end{array}$$ where: $$\begin{array}{lcl} X_0 & = & \{ (\texttt{e}^+, \top,+), (\texttt{f}^+,\texttt{e}^+,+) \}\\ Y_0 & = & \emptyset\\ X_1 & = & \{ (\texttt{t}^-,\bot,-)\}\\ Y_1 & = & \emptyset\\ X_2 & = & \{ (\texttt{a}^+,\texttt{f}^+,+), (\texttt{a}^+,\texttt{b}^-,-), (\texttt{b}^+,\texttt{e}^+,+), (\texttt{b}^+,\texttt{a}^-,-)\}\\ Y_2 & = & \{ (\texttt{a}^-, assume,-), (\texttt{b}^-, assume,-)\}\\ \end{array}$$ $\Box$ We can relate the on-line justifications and off-line justifications as follows. \[conserve\] Let $P$ be a program, $J$ an interpretation, and $M$ an answer set such that $J \sqsubseteq M$. For every atom $a$, if $(N,E)$ is a safe off-line e-graph of $a$ w.r.t. $J$ and $A$ where $A = J^- \cap {\cal TA}_P(M)$ then it is an off-line justification of $a$ w.r.t. $M$ and ${\cal TA}_P(M)$. This leads to the following proposition. \[on-off\] Let $M_0, \ldots, M_k$ be a general complete computation and $S(M_0), \ldots, S(M_k)$ be an on-line justification of the computation. Then, for each atom $a$ in $M_k$, the e-graph of $a$ in $S(M_k)$ is an off-line justification of $a$ w.r.t. $M_k$ and ${\cal TA}_P(M_k)$. [Smodels]{} On-line Justifications {#smo} ================================== The notion of on-line justification presented in the previous section is very general, so as to fit the needs of different answer set solver implementations that follow the computation model presented in Subsection \[subsec-comp\]. In this section, we illustrate how the notion of on-line justification has been specialized to (and implemented in) a specific computation model—the one used in [Smodels]{} [@sim02]. This allows us to define an incremental version of on-line justification—where the specific steps performed by [Smodels]{} are used to guide the incremental construction of the justification. The choice of [Smodels]{} was dictated by the availability of its source code and its elegant design.
We begin with an overview of the algorithms employed by [Smodels]{}. The following description has been adapted from [@GiunchigliaM05; @sim02]. Although more abstract than the concrete implementation, and without various implemented features (e.g., heuristics, lookahead), it is sufficiently faithful to capture the spirit of our approach, and to guide the implementation (see Section \[imple\]). An Overview of [Smodels]{}’ Computation --------------------------------------- We propose a description of the [Smodels]{} algorithms based on a composition of state-transformation operators. In the following, we say that an interpretation $I$ does not satisfy the body of a rule $r$ (i.e., $body(r)$ is false in $I$) if $(pos(r) \cap I^- ) \cup (neg(r) \cap I^+) \ne \emptyset$. ### Operator $AtLeast$: {#operator .unnumbered} The $AtLeast$ operator is used to expand a partial interpretation $I$ in such a way that each answer set $M$ of $P$ that “agrees” with $I$—i.e., the elements in $I$ have the same truth value in $M$ (or $I \sqsubseteq M$)—also agrees with the expanded interpretation. Given a program $P$ and a partial interpretation $I$, we define the intermediate operators $AL_P^1, \dots, AL_P^4$ as follows: - [**Case 1.**]{} if $r\in P$, $head(r)\notin I^+$, $pos(r) \subseteq I^+$ and $neg(r) \subseteq I^-$ then $$\begin{array}{lcr} AL_P^1(I)^+=I^+\cup\{head(r)\} & \:\:\:\textit{and}\:\:\:& AL_P^1(I)^-= I^-\end{array}$$ - [**Case 2.**]{} if $a \notin I^+\cup I^-$ and $\forall r\in P.
(head(r)= a \Rightarrow \textit{$body(r)$ is false in $I$})$, then $$AL_P^2(I)^+ = I^+\:\:\:\textit{ and }\:\:\: AL_P^2(I)^- = I^- \cup \{a\}$$ - [**Case 3.**]{} if $a\in I^+$ and $r$ is the only rule in $P$ with $head(r)=a$ and whose body is not false in $I$ then $$AL_P^3(I)^+=I^+\cup pos(r) \:\:\:\textit{and}\:\:\: AL_P^3(I)^- = I^- \cup neg(r)$$ - [**Case 4.**]{} if $a\in I^-$, $head(r) = a$, and $(pos(r) \setminus I^+)\cup (neg(r) \setminus I^-) = \{b\}$ then $$\begin{array}{lcl} AL_P^4(I)^+& =& \left\{\begin{array}{lcl} I^+ \cup \{b\} & \hspace{.3cm} & \textit{if $b\in neg(r)$}\\ I^+ & & \textit{if $b \in pos(r)$} \end{array}\right.\\ &&\\ AL_P^4(I)^- & = & \left\{\begin{array}{lcl} I^- & \hspace{.3cm} & \textit{if $b\in neg(r)$}\\ I^- \cup \{b\} && \textit{if $b \in pos(r)$} \end{array}\right. \end{array}$$ Given a program $P$ and an interpretation $I$, $AL_P(I) = AL_P^i(I)$ if $AL_P^i(I) \neq I$ and $\forall j < i. \: AL_P^j(I)=I$ ($1 \leq i \leq 4$); otherwise, $AL_P(I) = I$. ### Operator $AtMost$: {#operator-1 .unnumbered} The $AtMost_P$ operator recognizes atoms whose only derivations rely on mutual positive dependencies (i.e., “positive loops”), and falsifies them. Given a set of atoms $S$, the operator $AM_P$ is defined as $AM_P(S) = S \cup \{head(r) \:|\: r\in P\wedge pos(r)\subseteq S\}$. Given an interpretation $I$, the $AtMost_P(I)$ operator is defined as $$AtMost_P(I)^+ = I^+ \:\:\:\textit{and}\:\:\: AtMost_P(I)^- = I^- \cup \{p \in {\cal A} \:|\: p \not\in \bigcup_{i\geq 0}S_i\}$$ where $S_0 = I^+$ and $S_{i+1} = AM_P(S_i)$. ### Operator $choose$: {#operator-2 .unnumbered} This operator is used to randomly select an atom that is unknown in a given interpretation.
Given a partial interpretation $I$, $choose_P$ returns an atom of $\cal A$ such that $$choose_P(I) \not\in I^+ \cup I^- \:\:\:\textnormal{ and }\:\:\: choose_P(I) \in NANT(P) \setminus (WF_P^+\cup WF_P^-).$$ ### Computation: {#computation .unnumbered} Given an interpretation $I$, we define the transitions: $$\begin{array}{lclcl} I & \mapsto_{AL^c}& I' & \hspace{.5cm} &\left[\begin{array}{l} \textit{If $I' = AL^c_P(I)$, $c \in \{1, 2, 3, 4\}$} \end{array}\right.\\ &&\\ I & \mapsto_{atmost}& I' & \hspace{.5cm} &\left[\begin{array}{l} \textit{If $I'=AtMost_P(I)$} \end{array}\right.\\ &&\\ I & \mapsto_{choice}& I' & \hspace{.5cm} &\left[\begin{array}{l} \textit{\small If $I'=\langle I^+\cup\{choose_P(I)\}, I^-\rangle$ or }\\ \textit{\small $I'=\langle I^+, I^-\cup\{choose_P(I)\}\rangle$} \end{array}\right. \end{array}$$ If there is an $\alpha$ in $\{AL^1, AL^2, AL^3, AL^4, atmost, choice\}$ such that $I \mapsto_{\alpha} I'$, then we will simply denote this fact with $I \mapsto I'$. The [Smodels]{} system imposes constraints on the order of application of the transitions. Intuitively, the [Smodels]{} computation is depicted in the algorithms of Figs. \[main\] and \[exp\]. We will need the following notations. A computation $I_0 \mapsto I_1\mapsto I_2 \mapsto \dots \mapsto I_n$ is said to be [*$AL$-pure*]{} if every transition in the computation is an $AL^c$ transition and for every $c \in \{1,2,3,4\}$, $AL^c_P(I_n) = I_n$. A choice point of a computation $I_0 \mapsto I_1\mapsto I_2 \mapsto \dots \mapsto I_n$ is an index $1 \le j < n$ such that $I_j \mapsto_{choice} I_{j+1}$. Let $P$ be a program. Let $$C = I_0 \mapsto I_1\mapsto I_2 \mapsto \dots \mapsto I_n$$ be a computation and $$0 \le \nu_1 < \nu_2 < \dots < \nu_r< n$$ ($r \geq 0$) be the sequence of all choice points in $C$.
We say that $C$ is a *[Smodels]{} computation* if: - for every $j$, $1 \le j \le r$, $I_{\nu_j} \mapsto_{choice} I_{\nu_j+1}$, - for every $0 \le j \le r$, there exists a sequence of indices $\nu_j+1 = a_1 < a_2 < \ldots < a_t \le \nu_{j+1}- 1$ ($\nu_{r+1}=n$ and $\nu_0=-1$) such that - the transition $I_{a_{i+1}-1} \mapsto I_{a_{i+1}}$ is an $\mapsto_{atmost}$ transition ($1\leq i \leq t-1$) - all transitions from $I_{a_i}$ to $I_{a_{i+1}-1}$ are $\mapsto_{AL^c}$ transitions ($1\leq i \leq t-1$ and $1 \leq c \leq 4$); in particular, for each $a_i \leq j \leq a_{i+1}-1$ we have that if $I_{j} \mapsto_{AL^c} I_{j+1}$ then $AL_P^{c'}(I_j) = I_j$ for $1 \leq c' < c$ - $AL_P(I_{a_{i+1}-1} )= I_{a_{i+1}-1}$ - there is no $j \not\in \{\nu_1,\ldots,\nu_r\}$ such that $I_{j} \mapsto_{choice} I_{j+1}$. We illustrate this definition in the next example. \[exnew\] Consider the program of Example \[ex5\].
A possible computation of $M_1$ is:[^3] $$\begin{array}{lclclc} \langle \emptyset,\emptyset \rangle &\mapsto_{AL^1}& \langle \{ \texttt{e} \}, \emptyset \rangle & \mapsto_{AL^1} & \langle \{\texttt{e}, \texttt{f}\},\emptyset \rangle &\mapsto_{atmost}\\ \langle \{\texttt{e},\texttt{f}\}, \{\texttt{c},\texttt{d}\}\rangle& \mapsto_{choice} &\langle \{\texttt{e},\texttt{f},\texttt{b}\}, \{\texttt{c},\texttt{d}\}\rangle& \mapsto_{AL^2}& \langle \{\texttt{e},\texttt{f},\texttt{b}\}, \{\texttt{c},\texttt{d},\texttt{a}\}\rangle \end{array}$$ $\Box$ [Smodels]{} On-line Justifications {#smodels-on-line-justifications} ---------------------------------- We can use knowledge of the specific steps performed by [Smodels]{} to guide the construction of an on-line justification. Assume that $$C = M_0 \mapsto M_1\mapsto M_2 \mapsto \dots \mapsto M_n$$ is a computation of [Smodels]{}. Let $S(M_i) = \langle E_1, E_2, D\rangle$ and $S(M_{i+1}) = \langle E'_1, E'_2, D'\rangle$ be the snapshots corresponding to $M_i$ and $M_{i+1}$ respectively. $S(M_{i+1})$ can be computed by the following steps: - computing $D'= \langle \Gamma(M_{i+1}), \Delta(M_{i+1})\rangle$; - updating $E_1$ and $E_2$ to obtain $E_1'$ and $E_2'$. We observe that $\langle \Gamma(M_{i+1}), \Delta(M_{i+1}) \rangle$ can be obtained by computing the fixpoint of the $\Gamma$- and $\Delta$-functions with the starting values $\Gamma_{\langle \Gamma(M_i),\Delta(M_i)\rangle}$ and $\Delta_{\langle \Gamma(M_i),\Delta(M_i)\rangle}$. This is possible due to the monotonicity of the computation. Regarding $E_1'$ and $E_2'$, observe that the e-graphs for elements in $\langle \Gamma^k(M_{i+1}), \Delta^k(M_{i+1}) \rangle$ can be constructed using the e-graphs constructed for elements in $\langle \Gamma^{k-1}(M_{i+1}), \Delta^{k-1}(M_{i+1}) \rangle$ and the rules involved in the computation of $\langle \Gamma^k(M_{i+1}), \Delta^k(M_{i+1}) \rangle$.
Thus, we only need to update $E_1'$ with e-graphs of elements of $\langle \Gamma^k(M_{i+1}), \Delta^k(M_{i+1}) \rangle$ which do not belong to $\langle \Gamma^{k-1}(M_{i+1}), \Delta^{k-1}(M_{i+1}) \rangle$. Also, $E_2'$ is obtained from $E_2$ by removing the e-graphs of atoms that “move” into $D'$ and adding the e-graph $(a^+,assume,+)$ (resp. $(a^-,assume,-)$) for $a \in M_{i+1}^+$ (resp. $a \in M_{i+1}^-$) not belonging to $D'$. Clearly, this computation depends on the transition from $M_i$ to $M_{i+1}$. Assume that $M_i \mapsto_\alpha M_{i+1}$; the update of $S(M_i)$ to create $S(M_{i+1})$ depends on $\alpha$ and is done as follows. - [**Case $\alpha = choice$.**]{} Let $p$ be the atom chosen in this step. If $p$ is chosen to be true, then we can use the graph $$G_p = (\{p^+,\textit{assume}\}, \{(p^+,\textit{assume},+)\})$$ and the resulting snapshot is [$ S(M_{i+1}) = \langle E_1, E_2 \cup \{G_p\}, D \rangle$]{}. Observe that $D$ is unchanged, since the structure of the computation (in particular the fact that an *expand* has been done before the choice) ensures that $p$ will not appear in the computation of $D$. If $p$ is chosen to be false, then we will need to add $p$ to $D^-$, compute $\Gamma(M_{i+1})$ and $\Delta(M_{i+1})$, and update $E_1$ and $E_2$ correspondingly; in particular, $p$ belongs to $\Delta(M_{i+1})$ and $G_p = (\{p^-,\textit{assume}\}, \{(p^-,\textit{assume},-)\})$ is added to $E_1$. - [**Case $\alpha = atmost$.**]{} In this case, $M_{i+1} = \langle M_i^+, M_i^-\cup AtMost(P,M_i)\rangle$. The computation of $S(M_{i+1})$ is performed as per the definition of on-line justification. In particular, observe that if $\forall c\in AtMost(P,M_i)$ we have that $LCE_P^n(c,D)\neq \emptyset$ then the computation can be started from $\Gamma(M_i)$ and $\Delta(M_i)\cup AtMost(P,M_i)$. - [**Case $\alpha = AL^1$.**]{} Let $p$ be the atom dealt with in this step and let $r$ be the rule employed. We have that $M_{i+1} = \langle M_i^+\cup \{p\}, M_i^-\rangle$. If $D\models body(r)$ then $S(M_{i+1})$ will be computed starting from $\Gamma(M_i)\cup\{p\}$ and $\Delta(M_i)$.
In particular, an off-line graph for $p$, say $G_p$, will be added to $E_1$, and such a graph will be constructed using the LCE based on the rule $r$ and the e-graphs in $E_1$. Otherwise, $S(M_{i+1}) = \langle E_1, E_2 \cup \{G^+(p,r,\Sigma)\}, D\rangle$, where $G^+(p,r,\Sigma)$ is an e-graph of $p^+$ constructed using the LCE of rule $r$ and the e-graphs in $\Sigma = E_1 \cup E_2$ (note that all elements in $body(r)$ have an e-graph in $E_1\cup E_2$). - [**Case $\alpha = AL^2$.**]{} Let $p$ be the atom dealt with in this step. In this case $M_{i+1} = \langle M_i^+, M_i^-\cup \{p\}\rangle$. If there exists $\gamma \in LCE_P^n(p,D,\emptyset)$, then $S(M_{i+1})$ can be computed according to the definition of on-line justification, starting from $\Gamma(M_i)$ and $\Delta(M_i)\cup \{p\}$. Observe that the graph of $p$ can be constructed starting with $\{(p,a,+)\mid a\in \gamma\}\cup \{(p,b,-)\mid not\:b\in \gamma\}$. Otherwise, given an arbitrary $\psi \in LCE_P^n(p,M_i, \emptyset)$, we can construct an e-graph $G_p$ for $p^-$, such that $\psi = support(p^-,G_p)$, the graphs $E_1\cup E_2$ are used to describe the elements of $\psi$, and $S(M_{i+1}) = \langle E_1, E_2 \cup \{G_p\}, D\rangle$. - [**Case $\alpha = AL^3$.**]{} Let $r$ be the rule used in this step and let $p = head(r)$. Then $M_{i+1} = \langle M_i^+ \cup pos(r), M_i^-\cup neg(r)\rangle$ and $S(M_{i+1})$ is computed according to the definition of on-line justification. Observe that the e-graph $G_p$ for $p^+$ (added to $E_1$ or $E_2$) for $S(M_{i+1})$ will be constructed using $body(r)$ as $support(p^+,G_p)$, and using the e-graphs in $E_1\cup E_2 \cup \Sigma$ for some $$\Sigma \subseteq \{ (a^+,assume,+)\:|\:a\in pos(r)\} \cup \{(a^-,\textit{assume},-)\mid a\in neg(r)\}.$$ - [**Case $\alpha = AL^4$.**]{} Let $r$ be the rule processed and let $b$ be the atom detected in the body. If $b \in pos(r)$, then $M_{i+1} = \langle M_i^+, M_i^- \cup \{b\}\rangle$, while if $b \in neg(r)$ then $M_{i+1} = \langle M_i^+\cup\{b\}, M_i^-\rangle$.
In either case, the snapshot $S(M_{i+1})$ will be computed using the definition of on-line justification. \[ex44\] Let us consider the computation of Example \[exnew\]. A sequence of snapshots is (we provide only the edges of the graphs and we combine together the e-graphs of different atoms): $$\begin{array}{|r||c|c|c|} \hline & E_1 & E_2 & D\\ \hline S(M_0) & \emptyset & \emptyset & \emptyset\\ S(M_1) & \{(e^+,\top,+)\} & \emptyset & \langle \{e\},\emptyset\rangle\\ S(M_2) & \{(e^+,\top,+), (f^+,e^+,+)\} & \emptyset & \langle \{e,f\},\emptyset \rangle\\ S(M_3) & \left\{\begin{array}{c} (e^+,\top,+), (f^+,e^+,+),\\ (d^-,c^-,+), (c^-,d^-,+)\end{array}\right\} & \emptyset & \langle \{e,f\}, \{c,d\}\rangle\\ S(M_4) & \left\{\begin{array}{c} (e^+,\top,+), (f^+,e^+,+),\\ (d^-,c^-,+), (c^-,d^-,+) \end{array}\right\} &
\{(b^+,\textit{assume},+)\} & \langle \{e,f\}, \{c,d\}\rangle\\ S(M_5) & \left\{\begin{array}{c} (e^+,\top,+), (f^+,e^+,+),\\ (d^-,c^-,+), (c^-,d^-,+), \\ (a^-,\textit{assume},-), \\ (b^+,e^+,+), (b^+,a^-,-) \end{array}\right\} & \emptyset & \langle \{e,f,b\}, \{c,d,a\}\rangle\\ \hline \end{array}$$ $\Box$ Let $P$ be the program: $$\begin{array}{lclclcl} \texttt{p} & {\mbox{$\: {\tt : \!\! - }\:$}}& \naf \texttt{q} &\hspace{.5cm} & \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \naf \texttt{p} \\ \texttt{r} & {\mbox{$\: {\tt : \!\! - }\:$}}& \naf \texttt{p} &\hspace{.5cm} & \texttt{p} & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{r} \\ \end{array}$$ This program does not admit any answer sets where $p$ is false. One possible computation (we highlight only steps that change the trace): $$\begin{array}{lllcclllcclllc} 1. &\langle \emptyset, \emptyset \rangle & \mapsto_{choice} & \hspace{.5cm} & 2. & \langle \emptyset,\{p\} \rangle & \mapsto_{AL^1}\\ 3. & \langle \{q\}, \{p\} \rangle & \mapsto_{AL^1} && 4. & \langle \{q,r\},\{p\}\rangle & \mapsto_{AL^1} & \hspace{.5cm} & 5. & \langle \{q,r,p\}, \{p\} \rangle \end{array}$$ From this computation we can obtain a sequence of snapshots: $$\begin{array}{|r||c|c|c|} \hline & E_1 & E_2 & D\\ \hline S(M_0) & \emptyset & \emptyset & \emptyset \\ S(M_1) & \{(p^-,\textit{assume},-)\} & \emptyset & \langle \emptyset, \{p\}\rangle\\ S(M_2) & \{(p^-,\textit{assume},-), (q^+,p^-,-)\} & \emptyset & \langle\{q\}, \{p\} \rangle\\ S(M_3) & \{(p^-,\textit{assume},-), (q^+,p^-,-), (r^+,p^-,-)\} & \emptyset & \langle \{q,r\},\{p\} \rangle\\ S(M_4) & \left \{ \begin{array}{c} (p^-,\textit{assume},-), (q^+,p^-,-), \\ (r^+,p^-,-), (p^+,r^+,+) \end{array} \right\} & \emptyset & \langle \{p,q,r\},\{p\} \rangle\\ \hline \end{array}$$ Observe that a conflict is detected by the computation and the sources of conflict are highlighted by the presence of two justifications for $p$, one for $p^+$ and another one for $p^-$.
$\Box$ Discussion {#imple} ---------- In this subsection, we discuss possible ways to extend the notion of justification to various language extensions of ASP. We also describe a system capable of computing off-line and on-line justifications for ASP programs. ### Language Extensions In the discussion presented above, we relied on a standard logic programming language. Various systems, such as [Smodels]{}, have introduced language extensions, such as choice atoms, to facilitate program development. The extension of the notion of justification to address these extensions is relatively straightforward. Let us consider, for example, the choice atom construct of [Smodels]{}. A choice atom has the form $L \leq \{a_1,\dots,a_n,not\:b_1,\dots,not\:b_m\}\leq U$ where $L, U$ are integers (with $L\leq U$) and the various $a_i, b_j$ are atoms. In the following, we denote the set $\{a_1,\dots,a_n,not\:b_1,\dots,not\:b_m\}$ by $T$ and write the choice atom as $L\leq T \leq U$. Choice atoms are allowed to appear both in the head as well as the body of rules. Given an interpretation $I$ and a choice atom, we say that $I$ satisfies the atom if $$L \leq |\{a_i\:|\: a_i \in I^+\}| + |\{b_j \:|\: b_j \in I^-\}| \leq U$$ The local consistent explanation of a choice atom can be developed in a natural way: - if the choice atom $L\leq T \leq U$ is true, then a set of literals $S$ is an LCE if - ${\cal A}\cap S \subseteq T$ and $not\:{\cal A}\cap S \subseteq T$ - for each $S'$ such that $S\subseteq S'$ and $\{atom(\ell)\:|\: \ell \in S'\}=\{atom(\ell)\:|\: \ell \in T\}$ we have that $$L\leq |\{a\:|\: a\in T\cap {\cal A}\cap S'\}|+|\{b\:|\:not\:b \in T\cap S'\}|\leq U$$ - if the choice atom $L\leq T \leq U$ is false, then a set of literals $S$ is an LCE if - ${\cal A}\cap S \subseteq T$ and $not\:{\cal A}\cap S \subseteq T$ - for each $S'$ such that $S\subseteq S'$ and $\{atom(\ell)\:|\: \ell \in S'\}=\{atom(\ell)\:|\: \ell \in T\}$ we have that $$\begin{array}{c} L > |\{a\:|\: a\in T\cap {\cal A}\cap S'\}|+|\{b\:|\:not\:b \in T\cap S'\}|\\ \textit{or}\\ |\{a\:|\: a\in T\cap {\cal A}\cap S'\}|+|\{b\:|\:not\:b \in T\cap
S'\}| > U \end{array}$$ The notion of e-graph can be extended to include choice atoms. Choice atoms in the body are treated as such and justified according to the new notion of LCE. On the other hand, if we have a rule of the type $$L\leq T \leq U \:{\mbox{$\: {\tt : \!\! - }\:$}}\: Body$$ and $M$ is an answer set, then we will - treat the head as a new (non-choice) atom ($new_{L\leq T \leq U}$), and allow its justification in the usual manner, using the body of the rule; - for each atom $p\in T\cap M^+$, the element $p^+$ has a new LCE $\{new_{L\leq T \leq U}\}$. Consider the program containing the rules: $$\begin{array}{lclclcl} \multicolumn{2}{l}{\texttt{p}\:\:\:\: {\mbox{$\: {\tt : \!\! - }\:$}}} & &\hspace{.5cm} & \texttt{q} & {\mbox{$\: {\tt : \!\! - }\:$}}& \\ 2\leq \{\texttt{r,t,s}\}\leq 2 & {\mbox{$\: {\tt : \!\! - }\:$}}& \texttt{p, q} & & \end{array}$$ The interpretation $\langle \{t,s,p,q\},\{r\}\rangle$ is an answer set of this program. The off-line justifications for $s^+$ and $t^+$ are illustrated in Figure \[choiceexp\]. $\Box$ The concept can be easily extended to deal with weight atoms. ### Concrete Implementation The notions of off-line and on-line justifications proposed in the previous sections have been implemented and integrated in a debugging system for Answer Set Programming, developed within the [$\mathbb{ASP-PROLOG}$]{} framework [@asp-prolog]. The notion of justification proposed here is meant to represent the basic data structure on which debugging strategies for ASP can be developed. [$\mathbb{ASP-PROLOG}$]{} allows the construction of Prolog programs—using CIAO Prolog [@GrasH00]—which include modules written in ASP (the [Smodels]{} flavor of ASP). In this sense, the embedding of ASP within a Prolog framework (as possible in [$\mathbb{ASP-PROLOG}$]{}) allows the programmer to use Prolog itself to query the justifications and develop debugging strategies.
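The satisfaction condition for choice atoms given above admits a direct executable reading. The following Python sketch is purely illustrative (it is not part of [$\mathbb{ASP-PROLOG}$]{}) and checks the condition on the example program:

```python
# A choice atom L <= {a1,...,an, not b1,...,not bm} <= U is satisfied by a
# partial interpretation I = (I_plus, I_minus) when the number of true
# positive members plus the number of false negated members lies in [L, U].

def satisfies_choice(i_plus, i_minus, lower, pos, neg, upper):
    count = len(pos & i_plus) + len(neg & i_minus)
    return lower <= count <= upper

# the head 2 <= {r,t,s} <= 2 under the answer set <{t,s,p,q}, {r}>:
print(satisfies_choice({"t", "s", "p", "q"}, {"r"},
                       2, {"r", "t", "s"}, set(), 2))  # True
```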
We will begin this section with a short description of the system [$\mathbb{ASP-PROLOG}$]{}. The [$\mathbb{ASP-PROLOG}$]{} system has been developed using the module and class capabilities of CIAO Prolog. [$\mathbb{ASP-PROLOG}$]{} allows programmers to develop programs as collections of *modules*. Along with the traditional types of modules supported by CIAO Prolog (e.g., Prolog modules, Constraint Logic Programming modules), it allows the presence of *ASP modules*, each being a complete ASP program. Each CIAO Prolog module can access the content of any ASP module (using the traditional module qualification of Prolog), read its content, access its models, and modify it (using the traditional [assert]{} and [retract]{} predicates of Prolog). [$\mathbb{ASP-PROLOG}$]{} allows us to create Prolog modules that access (and possibly modify) other modules containing ASP code. For example, the following Prolog module

    :- use_asp(aspmod, 'asp_module.lp').

    count_p(X) :-
        findall(Q, (aspmod:model(Q), Q:p), List),
        length(List, X).

accesses an ASP module (called [aspmod]{}) and defines a predicate ([count\_p]{}) which counts how many answer sets of [aspmod]{} contain the atom [p]{}. [$\Box$]{} #### Off-Line Justifications: {#off-line-justifications} The [Smodels]{} engine has been modified to extract, during the computation, a compact footprint of the execution, i.e., a trace of the key events (corresponding to the transitions described in Sect. \[smo\]) with links to the atoms and rules involved. The modifications of the trace are trailed to support backtracking. Parts of the justification (as described in the previous section) are built on the fly, while others (e.g., certain cases of $AL^3$ and $AL^4$) are delayed until the justification is requested.
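The trailing of the execution footprint mentioned above can be sketched as follows; this is an illustrative Python model of a trail, not the data structure actually used inside [Smodels]{}.

```python
# A trail logs the key solver events; marks are placed at choice points so
# that the events logged after the most recent choice point can be undone
# on backtracking, while earlier deterministic propagation survives.

class Trail:
    def __init__(self):
        self.events = []   # (kind, payload) pairs, e.g. ("AL1", rule_id)
        self.marks = []    # event-count recorded at each choice point

    def log(self, kind, payload):
        self.events.append((kind, payload))

    def mark(self):
        self.marks.append(len(self.events))

    def backtrack(self):
        self.events = self.events[: self.marks.pop()]

t = Trail()
t.log("AL1", "r1")   # deterministic propagation step
t.mark()             # choice point
t.log("choice", "b")
t.backtrack()        # the choice is undone, the propagation survives
print(t.events)      # [('AL1', 'r1')]
```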
To avoid imposing the overhead of justification construction on every computation, the programmer has to specify which ASP modules require justifications, using an additional argument ([justify]{}) in the module import declaration: $$\texttt{:- use\_asp($\langle$ module\_name $\rangle$, $\langle$ file\_name $\rangle$, $\langle$ parameters $\rangle$ [,justify]).}$$ Figure \[ovju\] shows a general overview of the implementation of ASP justifications in [$\mathbb{ASP-PROLOG}$]{}. Each program is composed of CIAO Prolog modules and ASP modules (each containing rules of the form (\[rule\]), possibly depending on the content of other ASP/Prolog modules). The implementation of [$\mathbb{ASP-PROLOG}$]{}, as described in [@asp-prolog], automatically generates, for each ASP module, an *interface module*—which supplies the predicates to access/modify the ASP module and its answer sets—and a *model class*—which allows the encoding of each answer set as a CIAO Prolog object [@Pineda99]. The novelty is the extension of the model class, to provide access to the justification of the elements in the corresponding answer set. [$\mathbb{ASP-PROLOG}$]{} provides the predicate [model/1]{} to retrieve answer sets of an ASP module—it retrieves them in the order they are computed by [Smodels]{}, and it returns the current one if the computation is still in progress. The main predicate to access the justification is [justify/1]{} which retrieves a CIAO Prolog object containing the justification; i.e., $$\texttt{?- my\_asp:model(Q), Q:justify(J).}$$ will assign to [J]{} the object containing the justification relative to the answer set [Q]{} of the ASP module [my\_asp]{}.
Each justification object provides the following predicates: - [just\_node/1]{} which succeeds if the argument is one of the nodes in the justification graph, - [just\_edge/3]{} which succeeds if the arguments correspond to the components of one of the edges in the graph, and - [justify\_draw/1]{} which will generate a graphical drawing of the justification for the given atom (using the *uDrawGraph* application). An example display produced by [$\mathbb{ASP-PROLOG}$]{} is shown in Figure \[gf1\]; observe that rule names are also displayed to clarify the connection between edges of a justification and the generating program rules. For example, $$\texttt{ ?- my\_asp:model(Q),Q:justify(J),findall(e(X,Y),J:just\_edge(p,X,Y),L).}$$ will collect in [L]{} all the edges supporting [p]{} in the justification graph (for answer set [Q]{}). #### On-Line Justifications: The description of [Smodels]{} on-line justifications we proposed earlier is clearly more abstract than the concrete implementation—e.g., we did not address the use of lookahead, the use of heuristics, and other optimizations introduced in [Smodels]{}. All these elements have been handled in the current implementation, in the same spirit as what is described here. On-line justifications have been integrated in the [$\mathbb{ASP-PROLOG}$]{} system as part of its ASP debugging facilities. The system provides predicates to set breakpoints on the execution of an ASP module, triggered by events such as the assignment of a truth value to a certain atom or the creation of a conflicting assignment. Once a breakpoint is encountered, it is possible to visualize the current on-line justification or step through the rest of the execution. Off-line justifications are always available. The [Smodels]{} solver is in charge of handling the activities of interrupting and resuming execution, during the computation of an answer set of an ASP program.
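The kind of query performed through [just\_edge/3]{} above can be modeled in a few lines. The Python sketch below uses hypothetical atom names and abbreviates e-graph edges to (from, to, sign) triples; it is an illustration of the query pattern, not the [$\mathbb{ASP-PROLOG}$]{} API.

```python
# Collect the edges supporting a node of a justification graph, in the
# spirit of the findall/4 query over just_edge/3 shown above.

edges = {
    ("p+", "r+", "+"),   # p is true because r is true ...
    ("p+", "t-", "-"),   # ... and t is false
    ("r+", "top", "+"),  # r is a fact
}

def just_edge(graph, node):
    return [(dst, sign) for (src, dst, sign) in graph if src == node]

print(sorted(just_edge(edges, "p+")))  # [('r+', '+'), ('t-', '-')]
```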
A synchronous communication is maintained between a Prolog module and an ASP module—where the Prolog module requests and controls the ASP execution. When the ASP solver breaks, e.g., because a breakpoint is encountered, it sends a compact encoding of its internal data structures to the Prolog module, which stores it in an ASP-solver-state table. If the Prolog module requests resumption of the ASP execution, it will send back to the solver the desired internal state, which will allow continuation of the execution. This allows the execution to be restarted from any of a number of desired points (e.g., allowing a “replay”-style of debugging) and to control different ASP modules at the same time. [$\mathbb{ASP-PROLOG}$]{} provides the ability to establish a number of different types of breakpoints on the execution of an ASP module. In particular, - [break(atom,value)]{} interrupts the execution when the [atom]{} is assigned the given value; [value]{} could be [true]{}, [false]{} or [any]{}. - [break(conflict)]{} interrupts the execution whenever a conflict is encountered during answer set computation.[^4] - [break(conflict(atom))]{} interrupts the execution if a conflict involving the [atom]{} is encountered. - [break(answer(N))]{} interrupts the execution at the end of the computation of the answer set referred to by the object [N]{}. Execution can be restarted using the built-in predicate [run]{}; the partial results of an interrupted computation (e.g., the partial answer set, the on-line justification) can be accessed using the predicates [model]{} and [justify]{}. Consider the following fragment of a Prolog program:

    :- module ( p, [m/0] ).
    :- use_asp ( asp, 'myasp.lp', justify ).

    m :- asp:break(atom(a,true)),
         asp:run,
         asp:model(Q),
         Q:justify(J),
         J:justify_draw(a).
This will stop the execution of the answer set program [myasp.lp]{} whenever the atom [a]{} is made true; at that point, the Prolog program shows a graphical representation of the corresponding on-line justification of [a]{}. $\Box$ Justifications and Possible Applications ---------------------------------------- The previous subsection discusses a possible application of the notion of justification developed in this paper, namely the construction of an interactive debugging system for logic programs under the answer set semantics. It is worth mentioning that the notion of justification is general and can be employed in other applications as well. We will now briefly discuss other potential uses of this concept. Thanks to their ability to explain the presence and absence of atoms in an answer set, off-line justifications provide a natural solution to problems in the domain of ASP-based diagnosis. As in systems like [@BalducciniG03], off-line justifications can help in discriminating diagnoses. Let us consider, for example, a system composed of two components, $c_1$ and $c_2$. Let us assume that there is a dependence between these components, stating that if $c_1$ is defective then $c_2$ will be defective as well. This information can be expressed by the following rule: $$h(ab(c_2),T) \:\: {\mbox{$\: {\tt : \!\! - }\:$}}\:\: h(ab(c_1),T)$$ where $h(ab(c_1), t)$ (resp. $h(ab(c_2), t)$) being true indicates that the component $c_1$ (resp. $c_2$) is defective at time $t$. Given this rule, $h(ab(c_2), t)$ ($c_2$ is defective) belongs to any answer set which contains $h(ab(c_1), t)$ ($c_1$ is defective). Thus, any off-line justification for $h(ab(c_1), t)^+$ can be extended to an off-line justification for $h(ab(c_2), t)^+$ by adding a positive edge from $h(ab(c_2), t)^+$ to $h(ab(c_1), t)^+$. This is another argument, besides the minimality criterion, for preferring the diagnosis $\{c_1\}$ over $\{c_1,c_2\}$.
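The extension argument above can be made concrete with a short sketch (Python, with the annotated atoms written as plain strings; this is an illustration, not output of the implemented system):

```python
# An off-line justification is abbreviated here to a set of
# (node, reason, sign) edges. A justification for h(ab(c1),t)+ becomes one
# for h(ab(c2),t)+ by adding a single positive edge, mirroring the rule
# h(ab(c2),T) :- h(ab(c1),T).

def extend_justification(just_c1, root_c1, root_c2):
    return just_c1 | {(root_c2, root_c1, "+")}

just_c1 = {("h(ab(c1),t)+", "assume", "+")}
just_c2 = extend_justification(just_c1, "h(ab(c1),t)+", "h(ab(c2),t)+")
print(("h(ab(c2),t)+", "h(ab(c1),t)+", "+") in just_c2)  # True
```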
The implemented system for on-line justification in this paper can be adapted to create a direct implementation of CR-Prolog [@Balduccini07]. Currently, a generate-and-test front-end to [Smodels]{} is provided for computing answer sets of CR-Prolog programs. More precisely, the algorithm for computing the answer sets of a CR-Prolog program $P$, whose set of normal rules is $Q$, iterates through two steps until an answer set is found. In each iteration, a minimal set of CR-rules is selected randomly (or according to some preferences), activated (i.e., converted to normal rules) and added to $Q$ to create a new program $Q'$. The answer sets of $Q'$ are computed using [Smodels]{}. If any answer set is found, then the computation stops. This implementation does not make use of any information about possible conflicts or inconsistencies that can be recognized during the computation. A more effective implementation can be achieved by collecting on-line justifications during each cycle of execution of [Smodels]{}. The on-line justifications can be traversed to identify inconsistencies and to single out rules outside of $Q$ that unavoidably conflict with rules in $Q$. Such knowledge can then be employed to suggest more effective selections of CR-rules to be activated. Consider the following simple CR-Prolog program: $$\begin{array}{cclcl} r_1 & \:\:& a & {\mbox{$\: {\tt : \!\! - }\:$}}& not\:b.\\ r_2&&\neg a. \\ r_3&&b & \stackrel{+}{\leftarrow}\\ r_4&&c & \stackrel{+}{\leftarrow} \end{array}$$ In this case, the set of normal rules $Q$ contains the two rules $r_1$ and $r_2$, and $Q$ does not admit a (consistent) answer set. The point of conflict is characterized by the on-line justification shown in Figure \[cr1\]. The conflict is clearly represented by the presence of justifications for $a^+$ and $(\neg a)^+$; the justification also highlights that the only way of defeating the conflict is to remove the positive edge between $not\:b$ and $a^+$.
This suggests the need for introducing a CR-rule that has $b$ as head, i.e., rule $r_3$. Simple [$\mathbb{ASP-PROLOG}$]{} meta-interpreters can be introduced to detect this type of situation and suggest consistency-restoring rules to be used; e.g., given the partial answer set $M$ present at the time of conflict, we can use the following clause to resolve conflicts due to atoms of the type $p$ and $\neg p$ both being true: $$\begin{array}{l} \texttt{candidate\_rule}(Y\stackrel{+}{\leftarrow}Body,M) {\mbox{$\: {\tt : \!\! - }\:$}}\\ \hspace{1cm} M:\texttt{justify}(J),\\ \hspace{1cm} M:Atom, M:(-Atom),\\ \hspace{1cm} ( \texttt{reachable}(Atom,Y,J) ; \texttt{reachable}(-Atom,Y,J) ),\\ \hspace{1cm} M:not\:\: Y,\\ \hspace{1cm} \stackrel{+}{\leftarrow}(Y,Body). \end{array}$$ where [reachable]{} performs a simple transitive closure over the edges of the justification $J$. $\Box$

Related Work
============

Various approaches to logic program understanding and debugging have been investigated (a thorough comparison is beyond the limited space of this paper). Early work in this direction was geared towards the understanding of Prolog programs rather than logic programs under the answer set semantics. Only recently has work appeared on debugging inconsistent programs and on explaining the presence (or absence) of an atom in an answer set. While our notion of justification is related to the research aimed at debugging Prolog and XSB programs, its initial implementation is related to the recent attempts at debugging logic programs under the answer set semantics. We discuss each of these aspects in a separate subsection.
Justifications and Debugging of Prolog Programs
-----------------------------------------------

As discussed in [@PemmasaniGDRR04], three main phases can be considered in understanding/debugging a logic program:

- [*Program instrumentation and execution:*]{} assertion-based debugging (e.g., [@PueblaBH98]) and algorithmic debugging [@Shapiro82] are examples of approaches focused on this first phase.

- [*Data Collection:*]{} focuses on *extracting* from the execution the data necessary to understand it, as in event-based debugging [@Auguston00], tracing, and explanation-based debugging [@Ducasse99; @MalletD99].

- [*Data Analysis:*]{} focuses on reasoning over the data collected during the execution. The proposals dealing with automated debugging (e.g., [@Auguston00]) and execution visualization (e.g., [@VaupelGP97]) are approaches focusing on this phase of program understanding.

The notion of *justification* has been introduced in [@PemmasaniGDRR04; @RoychoudhuryRR00; @Specht93] to support understanding and debugging of logic programs. Justification is the process of generating evidence, in terms of high-level proofs, based on the answers (or models) produced during the computation. Justifications are *focused*, i.e., they provide only the information that is relevant to the item being explained—and this separates them from other debugging schemes (e.g., tracing). Justification plays an important role in manual and automatic verification: it provides a *proof description* if a given property holds; otherwise, it generates a *counter-example*, showing where the violation/conflict occurs in the system. The justification-based approach focuses on the last two phases of debugging—collecting data from the execution and presenting them in a meaningful manner. Differently from generic tracing and algorithmic debugging, justifications are focused only on the parts of the computation relevant to the justified item.
Justifications are fully automated and do not require user interaction (as in declarative debugging). Justifications rely on describing the evidence for an answer in terms of a graph structure. The term *justification* was introduced in [@RoychoudhuryRR00] as a data structure to explain answers to Prolog queries within a Prolog system with tabling. The notion of justification and its implementation in the XSB Prolog system were subsequently refined in [@PemmasaniGDRR04; @Guo01]. Similar structures have been suggested to address the needs of other flavors of logic programming—e.g., various approaches to tree-based explanation for deductive databases (e.g., the *Explain* system [@explain], the explanation system for LOLA [@Specht93], and the DDB trees method [@MalletD99]). Similar methods have also been developed for the analysis of CLP programs (e.g., [@clpdeb]). In this work, we rely on graph structures as a means to describe the *justifications* that are generated during the computation of an answer set of a program (or from a completed answer set). Graphs have been used in the context of logic programming for a variety of other applications. *Call graphs* and *dependence graphs* have been extensively used to profile and discover program properties (e.g., [@call1; @call2]). *Support graphs* are used for program analysis in [@SahaR05]. The use of graphs proposed in this paper is complementary to the view proposed by other authors, who use graph structures as a means to describe answer set programs, to make structural properties explicit, and to support the execution of the program. In [@AngerGLNS05; @color1], *rule dependency graphs* (a.k.a. *block graphs*) of answer set programs are employed to model the computation of answer sets as special forms of graph coloring.
A comprehensive survey of alternative graph representations of answer set programs, and their properties with respect to the problem of answer set characterization, has been presented in [@constantini1; @CostantiniDP02]. In particular, the authors provide characterizations of desirable graph representations, relating the existence of answer sets to the presence of cycles and the use of coloring to characterize properties of programs (e.g., consistency). We conjecture that the outcome of a successful coloring of an EDG [@constantini1] to represent one answer set can be projected, modulo non-obvious transformations, to an off-line graph and vice versa. On the other hand, the notion of on-line justification does not seem to have a direct relation to the graph representations presented in the cited works.

Debugging Logic Programs under Answer Set Semantics
---------------------------------------------------

This paper continues the work initiated in [@ElKhatibPS05], by proposing a more advanced and sound notion of off-line justification, by developing the concept of on-line justification, and by introducing these concepts into [Smodels]{}. The approach differs significantly from the recently introduced approach to debugging ASP programs in [@BrainGP+b07]. While our approach relies on the notion of justification, the approach in [@BrainGP+b07] uses the tagging technique [@DelgrandeST03] to compile a program into a new program whose answer sets can be used to debug the original program. Inspecting an answer set of the new program can reveal the rules which have been applied in its generation. It does not, however, provide an explanation of why an atom does (or does not) belong to the answer set. In this sense, we can say that the approach of [@BrainGP+b07] and ours are complementary to each other. An advantage of the approach in [@BrainGP+b07] is that it enables the development of a debugger as a front-end of an answer set solver.
However, their approach does not consider on-line justifications. At this point, it is worth mentioning that the [$\mathbb{ASP-PROLOG}$]{} debugger, described in Section \[smo\], differs from the system [spock]{} [@BrainGP+a07]—which was developed based on the technical foundation in [@BrainGP+b07]—in several aspects. In our system, the justification for the truth value of an atom consists of facts, assumptions, and rules which are applicable given these facts and assumptions, i.e., we not only justify why an atom is [*true*]{} but also why an atom is [*false*]{}. Moreover, justifications can be queried during the process of answer set computation. [spock]{} only provides the justification, or the applicable rules, for the presence of an atom in a given answer set. In this sense, justifications in [spock]{} are similar to our off-line LCEs. In [@dlvdeb], a tool for developing and testing DLV programs was described. The commands provided by this tool allow a user to inspect why an atom is true in the current model and why there is no answer set. This is similar to the on-line justifications developed for [Smodels]{}. The tool in [@dlvdeb], however, does not answer the question why an atom is not in the current model. The notion of justifications is not developed in [@dlvdeb]. The proposed debugger is similar to the system described in [@BrainDv05] in that it provides users with information on why some atoms occur in an answer set and some others do not. An explanation given by the tool described in [@BrainDv05] is similar to an off-line justification in our work. Our implementation also provides users with on-line justifications, but the system described in [@BrainDv05] does not. The paper [@Syrjanen06] presents a theory for debugging of inconsistent programs and an implementation of this theory. The focus of that work is on inconsistent programs; our focus, on the other hand, is not restricted to inconsistent programs.
Our notion of on-line justification can be used in identifying the reasons that lead to the inconsistency of the program, but it is significantly different from the theory of diagnosis developed in [@Syrjanen06].

Conclusion
==========

In this paper we provided a generalization of the notion of *justification* (originally designed for Prolog with SLG-resolution [@RoychoudhuryRR00]), to suit the needs of ASP. The notion, named *off-line justification*, offers a way to understand the motivations for the truth value of an atom within a specific answer set, thus making it easy to analyze answer sets for program understanding and debugging. We also introduced *on-line justifications*, which are meant to justify atoms *during* the computation of an answer set. The structure of an on-line justification is tied to the specific steps performed by a computational model for ASP (specifically, the computation model adopted by [Smodels]{}). An on-line justification allows a programmer to inspect the reasons for the truth value of an atom at the moment such value is determined while constructing an answer set. These data structures provide a foundation for the construction of tools to understand and debug ASP programs. The process of computing and presenting justifications has been embedded in the ASP-Prolog system [@asp-prolog], thus making justifications first-class citizens of the language. This allows the programmer to use Prolog to manipulate justifications as standard Prolog terms. A prototype implementation has been completed and is currently under testing. As future work, we propose to complete the implementation, refine the definition of on-line justification to better take advantage of the [Smodels]{} mechanisms, and develop a complete debugging and visualization environment for ASP based on these data structures.

[**Acknowledgement**]{}: We would like to thank the anonymous reviewers for their comments and suggestions, which helped improve the paper in many ways.
The authors are partially supported by NSF grants CNS-0220590, HRD-0420407, and IIS-0812267. , [Gebser, M.]{}, [Linke, T.]{}, [Neumann, A.]{}, [ and]{} [Schaub, T.]{} 2005. The nomore++ approach to answer set solving. In [*Proceedings of the 12th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning*]{}. 95–109. 1994\. Logic programming and negation: a survey.  [*19,20*]{}, 9–71. , [Ramakrishnan, R.]{}, [Roth, W.]{}, [Seshadri, P.]{}, [and]{} [Drivastava, D.]{} 1993. . In [*Proceedings of the DOOD Conference*]{}. Springer Verlag. 2000\. Assertion checker for the [C]{} programming language based on computations over event traces. In [*AADEBUG*]{}. 2007\. cr-models: An inference engine for cr-prolog. In [*LPNMR*]{}, [C. Baral]{}, [G. Brewka]{}, [and]{} [J. S. Schlipf]{}, Eds. Lecture Notes in Computer Science, vol. 4483. Springer, 18–30. 2003\. .  [*3,*]{} 4,5, 425–461. , [Gelfond, M.]{}, [and]{} [Nogueira, M.]{} 2006. . . . . 2005\. . In [*Answer Set Programming: Advances in Theory and Implementation*]{}, [M. D. Vos]{} [and]{} [A. Provetti]{}, Eds. 142–152. , [Gebser, M.]{}, [P[ü]{}hrer, J.]{}, [Schaub, T.]{}, [ Tompits, H.]{}, [and]{} [Woltran, S.]{} 2007a. Debugging [ASP]{} programs by means of [ASP]{}. In [*Proceedings of the Ninth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’07)*]{}, [C. Baral]{}, [G. Brewka]{}, [and]{} [J. Schlipf]{}, Eds. Lecture Notes in Artificial Intelligence, vol. 4483. Springer-Verlag, 31–43. , [Gebser, M.]{}, [P[ü]{}hrer, J.]{}, [Schaub, T.]{}, [ Tompits, H.]{}, [and]{} [Woltran, S.]{} 2007b. hat is illogical captain!["]{} – [T]{}he debugging support tool spock for answer-set programs: System description. In [*Proceedings of the Workshop on Software Engineering for Answer Set Programming (SEA’07)*]{}, [M. [De Vos]{}]{} [and]{} [T. Schaub]{}, Eds. 71–85. , [Dix, J.]{}, [Freitag, B.]{}, [and]{} [Zukowski, U.]{} 2001. 
Transformation-based bottom-up computation of the well-founded model.  [*1,*]{} 5, 497–538. 2001\. . In [*Answer Set Programming Workshop*]{}. , [D’Antona, O. M.]{}, [and]{} [Provetti, A.]{} 2002. On the equivalence and range of applicability of graph-based representations of logic programs.  [*84,*]{} 5, 241–249. , [Logemann, G.]{}, [and]{} [Loveland, D. W.]{} 1962. A machine program for theorem-proving.  [*5,*]{} 7, 394–397. , [Lopez-Garcia, P.]{}, [Hermenegildo, M.]{}, [and]{} [Lin, N.]{} 1997. . In [*International Logic Programming Symposium*]{}. MIT Press, 291–305. , [Schaub, T.]{}, [and]{} [Tompits, H.]{} 2003. A framework for compiling preferences in logic programs.  [*3,*]{} 2 (Mar.), 129–187. , [Hermenegildo, M. V.]{}, [and]{} [Maluszynski, J.]{}, Eds. 2000. . Lecture Notes in Computer Science, vol. 1870. Springer. 1999\. Opium: An extendable trace analyzer for prolog.  [*39,*]{} 1-3, 177–223. , [Leone, N.]{}, [Mateis, C.]{}, [Pfeifer, G.]{}, [ and]{} [Scarcello, F.]{} 1998. . In [*International Conference on Principles of Knowledge Representation and Reasoning*]{}. 406–417. , [Pontelli, E.]{}, [and]{} [Son, T. C.]{} 2005. Justification and debugging of answer set programs in [ASP]{}. In [*Proceedings of the Sixth International Workshop on Automated Debugging, AADEBUG 2005, Monterey, California, USA, September 19-21, 2005*]{}, [C. Jeffery]{}, [J.-D. Choi]{}, [and]{} [R. Lencevicius]{}, Eds. ACM, 49–58. , [Pontelli, E.]{}, [and]{} [Son, T.]{} 2004. . In [*[Proceedings of the Sixth International Symposium on Practical Aspects of Declarative Languages (PADL-2004)]{}*]{}. Springer, 148–162. , [Lifschitz, V.]{}, [and]{} [Ringe, D.]{} 2006. Temporal phylogenetic networks and logic programming.  [*6,*]{} 5, 539–558. 1994\. Consistency of [C]{}lark’s completion and existence of stable models.  1, 51–60. , [Kaufmann, B.]{}, [Neumann, A.]{}, [and]{} [Schaub, T.]{} 2007. clasp: A conflict-driven answer set solver. 
In [*Proceedings of the Ninth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’07)*]{}, [C. Baral]{}, [G. Brewka]{}, [and]{} [J. Schlipf]{}, Eds. Lecture Notes in Artificial Intelligence, vol. 4483. Springer-Verlag, 260–265. , [Schaub, T.]{}, [and]{} [Thiele, S.]{} 2007. Gringo: A new grounder for answer set programming. In *Proceedings of the Ninth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR’07)*, [C. Baral]{}, [G. Brewka]{}, [and]{} [J. Schlipf]{}, Eds. Lecture Notes in Artificial Intelligence, vol. 4483. Springer-Verlag, 266–271. 2002\. Logic programming and knowledge representation – the [A-Prolog]{} perspective.  [*138,*]{} 1-2, 3–38. 1988\. The stable model semantics for logic programming. In [*Logic Programming: Proceedings of the Fifth International Conf. and Symp.*]{}, [R. Kowalski]{} [and]{} [K. Bowen]{}, Eds. 1070–1080. , [Lierler, Y.]{}, [and]{} [Maratea, M.]{} 2004. Sat-based answer set programming. In [*Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, July 25-29, 2004, San Jose, California, USA*]{}. AAAI Press / The MIT Press, 61–66. 2005\. In [*Logic Programming, 21st International Conference, ICLP 2005, Sitges, Spain, October 2-5, 2005, Proceedings*]{}, [M. Gabbrielli]{} [and]{} [G. Gupta]{}, Eds. Lecture Notes in Computer Science, vol. 3668. Springer, 37–51. 2000\. A new module system for prolog. In [*Computational Logic - CL 2000, First International Conference, London, UK, 24-28 July, 2000, Proceedings*]{}, [J. W. Lloyd]{}, [V. Dahl]{}, [U. Furbach]{}, [M. Kerber]{}, [K.-K. Lau]{}, [C. Palamidessi]{}, [L. M. Pereira]{}, [Y. Sagiv]{}, [and]{} [P. J. Stuckey]{}, Eds. Lecture Notes in Computer Science, vol. 1861. Springer, 131–148. , [Ramakrishnan, C.]{}, [and]{} [Ramakrishnan, I.]{} 2001. . In [*International Conference on Logic Programming*]{}. Springer Verlag, 150–165. 2003\. 
Bounded [LTL]{} model checking with stable models.  [*3,*]{} 4,5, 519–550. , [Linke, T.]{}, [and]{} [Schaub, T.]{} 2006. .  [*6,*]{} 1–2, 61–106. 1999\. Answer set planning. In [*International Conference on Logic Programming*]{}. 23–37. 2002\. .  [*138,*]{} 1–2, 39–54. 2002\. . In [*AAAI*]{}. 112–117. 1987\. . Springer Verlag. Second, extended edition. 1999\. Generating deductive database explanations. In [*International Conference on Logic Programming*]{}. 154–168. 1999\. Stable models and an alternative logic programming paradigm. In [*The Logic Programming Paradigm: a 25-year Perspective*]{}. 375–398. , [Lopez-Garcia, P.]{}, [Puebla, G.]{}, [Carro, M.]{}, [ and]{} [Hermenegildo, M.]{} 2006. . In [*International Conference on Logic Programming*]{}. Springer Verlag, 431–432. 1999\. Logic programming with stable model semantics as a constraint programming paradigm.  [ *25,*]{} 3,4, 241–273. 1997\. - an implementation of the stable model and well-founded semantics for normal logic programs. In [*[Proceedings of ICLP & LPNMR]{}*]{}. 420–429. , [Guo, H.-F.]{}, [Dong, Y.]{}, [Ramakrishnan, C. R.]{}, [and]{} [Ramakrishnan, I. V.]{} 2004. Online justification for tabled logic programs. In [*Functional and Logic Programming, 7th International Symposium, FLOPS 2004, Nara, Japan, April 7-9, 2004, Proceedings*]{}, [Y. Kameyama]{} [and]{} [P. J. Stuckey]{}, Eds. Lecture Notes in Computer Science, vol. 2998. Springer, 24–38. , [Ricca, F.]{}, [Terracina, G.]{}, [Cianni, D.]{}, [ and]{} [Veltri, P.]{} 2007. . In [*Proceedings of the 1st SEA Workshop, LPNMR’07*]{}. 86–100. 1999\. . Tech. Rep. CLIP 6/99.0, UPM Madrid. , [Bueno, F.]{}, [and]{} [Hermenegildo, M. V.]{} 1998. A framework for assertion-based debugging in constraint logic programming. In [*Principles and Practice of Constraint Programming - CP98, 4th International Conference, Pisa, Italy, October 26-30, 1998, Proceedings*]{}, [M. J. Maher]{} [and]{} [J.-F. Puget]{}, Eds. Lecture Notes in Computer Science, vol. 1520. 
Springer, 472. , [Ramakrishnan, C. R.]{}, [and]{} [Ramakrishnan, I. V.]{} 2000. Justifying proofs using memo tables. In [*PPDP*]{}. 178–189. 2005\. Symbolic support graph: A space efficient data structure for incremental tabled evaluation. In [*Logic Programming, 21st International Conference, ICLP 2005, Sitges, Spain, October 2-5, 2005, Proceedings*]{}, [M. Gabbrielli]{} [and]{} [G. Gupta]{}, Eds. Lecture Notes in Computer Science, vol. 3668. Springer, 235–249. 1982\. Algorithmic program diagnosis. In [*POPL*]{}. 299–308. , [Niemel[ä]{}, N.]{}, [and]{} [Soininen, T.]{} 2002. .  [*138,*]{} 1–2, 181–234. 1993\. Generating explanation trees even for negations in deductive database systems. In [*LPE*]{}. 8–13. 2006\. Debugging inconsistent answer set programs. In [*Proceedings of the 11th International Workshop on Non-Monotonic Reasoning*]{}. Lake District, UK, 77–84. 1976\. The semantics of predicate logic as a programming language.  [*23,*]{} 4, 733–742. , [Ross, K.]{}, [and]{} [Schlipf, J.]{} 1991. The well-founded semantics for general logic programs.  [*38,*]{} 3, 620–650. , [Pontelli, E.]{}, [and]{} [Gupta, G.]{} 1997. Visualization of and/or-parallel execution of logic programs. In [*ICLP*]{}. 271–285. Appendix: Proofs {#appendix-proofs .unnumbered} ================ Proof of Proposition \[prop1\]. {#proof-of-proposition-prop1. .unnumbered} ------------------------------- [*Proposition \[prop1\].*]{} [*Given a program $P$ and an answer set $M$ of $P$, the well-founded model of $NR(P,{\cal TA}_P(M))$ is equal to $M$.* ]{} The following result has been proved [@apt94a]. Let $P$ be a program and $j$ be the first index such that $(K_j,U_j) = (K_{j+1},U_{j+1})$. The well-founded model of $P$, $WF_P=\langle W^+,W^- \rangle$, satisfies $W^+ = K_j$ and $W^- = {\cal A} \setminus U_j$. Let $T_R$ denote the traditional immediate consequence operator for a definite program $R$ [@llo87]. 
We will also make use of the usual notations such as $T_R \uparrow 0 = \emptyset$, $T_R \uparrow i = T_R(T_R \uparrow (i-1))$. Given a program $P$ and one of its answer sets $M$, to simplify the presentation, let us denote with $Q(M)$ the negative reduct $NR(P, {\cal TA}_P(M))$. We will denote with $(K^P_i,U^P_i)$ the pair $(K_i,U_i)$ (Definition \[kui\]) for the original program $P$ and with $(K^Q_i,U^Q_i)$ the pair $(K_i,U_i)$ for program $Q(M)$ respectively. \[le-a1\] For a program $P$, $lfp(T_{P^+}) = lfp(T_{Q(M)^+})$. [**Proof.**]{} Clearly, $lfp(T_{P^+}) \supseteq lfp(T_{Q(M)^+})$ since $P^+ \supseteq Q(M)^+$. Let us prove, by induction on $i$, that $T_{P^+} \uparrow i \subseteq T_{Q(M)^+} \uparrow i$. The result is trivial for the base case. Let us assume that the result holds for $i$ and let us prove it for $i+1$. Consider $a \in T_{P^+} \uparrow i+1$. This means that there is a rule $r \in P^+$ such that $head(r) = a$ and $pos(r) \subseteq T_{P^+} \uparrow i$. Since $a \in lfp(T_{P^+})\subseteq M^+$, we know that $a \in M^+$ and therefore $r \in Q(M)^+$. Thus, thanks to the inductive hypothesis, we can conclude that $a \in T_{Q(M)^+} \uparrow i+1$. $\Box$ \[co1\] For a program $P$, $K^P_0 = K^Q_0$. \[l2\] For a program $P$, $U^Q_0 \subseteq U^P_0$. [**Proof.**]{} We prove, by induction on $i$, that $T_{Q(M),K^Q_0} \uparrow i \subseteq T_{P,K^P_0} \uparrow i$. [**Base:**]{} The result is obvious for $i=0$, since $$T_{Q(M),K^Q_0} \uparrow 0 = \emptyset = T_{P,K^P_0} \uparrow 0$$ Let $a \in T_{Q(M),K^Q_0} \uparrow 1$. This implies that there is $r \in Q(M)$ such that $head(r) = a$, $pos(r) = \emptyset$, and $neg(r) \cap K^Q_0 =\emptyset$. Since $Q(M) \subseteq P$, we also have that $r \in P$. Furthermore, since $K^P_0 = K^Q_0$ (from Corollary \[co1\]), we have that $a \in T_{P,K^P_0} \uparrow 1$. [**Step:**]{} Let us assume the result to be true for $i$ and let us consider the iteration $i+1$. Let $a \in T_{Q(M),K^Q_0} \uparrow i+1$. 
This implies that there is a rule $r \in Q(M)$ such that

- $head(r) = a$,

- $pos(r) \subseteq T_{Q(M),K^Q_0} \uparrow i$, and

- $neg(r) \cap K^Q_0 =\emptyset$.

Since $Q(M) \subseteq P$, we have that $r \in P$. Furthermore, since $K^P_0 = K^Q_0$, we have that $a \in T_{P,K^P_0} \uparrow i+1$. $\Box$

For a program $P$, $K^P_1 \subseteq K^Q_1$.

[**Proof.**]{} We prove by induction on $i$ that $T_{P,U^P_0} \uparrow i \subseteq T_{Q(M),U^Q_0} \uparrow i$.

[**Base:**]{} The result is obvious for $i=0$, since $$T_{P,U^P_0} \uparrow 0 = \emptyset = T_{Q(M),U^Q_0} \uparrow 0$$ Let $a \in T_{P,U^P_0} \uparrow 1$. This implies that there is a rule $r \in P$ such that

- $head(r) = a$,

- $pos(r) = \emptyset$, and

- $neg(r) \cap U^P_0 =\emptyset$.

Since $K^P_1 \subseteq W^+$, we have that $a \not\in {\cal TA}(M)$, thus $r \in Q(M)$. Furthermore, since $U^Q_0 \subseteq U^P_0$ (from Lemma \[l2\]), we have that $neg(r) \cap U^Q_0 = \emptyset$. Hence, $a \in T_{Q(M),U^Q_0} \uparrow 1$.

[**Step:**]{} Let us assume the result to hold for $i$ and let us consider the iteration $i+1$. Let $a \in T_{P,U^P_0} \uparrow i+1$. This implies that there is a rule $r \in P$ such that $head(r) = a$, $pos(r) \subseteq T_{P,U^P_0} \uparrow i$, and $neg(r) \cap U^P_0 =\emptyset$. Since $K^P_1 \subseteq W^+$, we have that $a \not\in {\cal TA}(M)$, thus $r \in Q(M)$. Furthermore, since $U^Q_0 \subseteq U^P_0$, we have that $neg(r) \cap U^Q_0 = \emptyset$, and from the inductive hypothesis we have that $pos(r) \subseteq T_{Q(M),U^Q_0} \uparrow i$. Thus $a \in T_{Q(M),U^Q_0} \uparrow i+1$. $\Box$

\[sequence\] For every $i$, $U^Q_i \subseteq U^P_i$ and $K^P_i \subseteq K^Q_i$.

[**Proof.**]{} We will prove this result by induction on $i$. The base case follows directly from Lemmas \[le-a1\]-\[l2\]. Let us proceed with the inductive step. First, let us prove by induction on $j$ that $T_{P,U^P_{i-1}} \uparrow j \subseteq T_{Q(M),U^Q_{i-1}} \uparrow j$.

- [**Base:**]{} Let $a \in T_{P,U^P_{i-1}} \uparrow 1$.
This implies that there is a rule $r \in P$ such that $head(r) = a$, $pos(r) = \emptyset$, and $neg(r) \cap U^P_{i-1} =\emptyset$. Since $K^P_{i} \subseteq W^+$, we have that $a \not\in {\cal TA}(M)$, and thus $r \in Q(M)$. Furthermore, since $U^Q_{i-1} \subseteq U^P_{i-1}$, we have that $neg(r) \cap U^Q_{i-1} = \emptyset$. Hence, $a \in T_{Q(M),U^Q_{i-1}} \uparrow 1$. - [**Step:**]{} let us assume the result to hold for $j$ and let us prove it for $j+1$. Let $a \in T_{P,U^P_{i-1}} \uparrow j+1$. This implies that there is a rule $r \in P$ such that - $head(r) = a$, - $pos(r) \subseteq T_{P,U^P_{i-1}} \uparrow j$, and - $neg(r) \cap U^P_{i-1} =\emptyset$. Since $K^P_{i} \subseteq W^+$, we have that $a \not\in {\cal TA}(M)$, and thus $r \in Q(M)$. Furthermore, since $U^Q_{i-1} \subseteq U^P_{i-1}$, we have that $neg(r) \cap U^Q_{i-1} = \emptyset$. By inductive hypothesis, we have that $pos(r) \subseteq T_{Q(M),U^Q_{i-1}} \uparrow j$. Hence, $a \in T_{Q(M),U^Q_{i-1}} \uparrow j+1$. Let us now prove, by induction on $j$ that $T_{Q(M),K^Q_{i}} \uparrow j \subseteq T_{P,K^P_{i}} \uparrow j$. - [**Base:**]{} Let $a \in T_{Q(M),K^Q_{i}} \uparrow 1$. This implies that there is a rule $r \in Q(M)$ such that $head(r) = a$, $pos(r) = \emptyset$, and $neg(r) \cap K^Q_{i} =\emptyset$. Since $Q(M) \subseteq P$, we have that $r \in P$. Furthermore, since $K^P_{i} \subseteq K^Q_{i}$, we have that $neg(r) \cap K^P_{i} = \emptyset$. Hence, $a \in T_{P,K^P_{i}} \uparrow 1$. - [**Step:**]{} let us assume the result to hold for $j$ and let us consider the case $j+1$. Let $a \in T_{Q(M),K^Q_{i}} \uparrow j+1$. This implies that there is a rule $r \in Q(M)$ such that $head(r) = a$, $pos(r)\subseteq T_{Q(M),K^Q_{i}} \uparrow j$, and $neg(r) \cap K^Q_{i} =\emptyset$. Since $Q(M) \subseteq P$, we have that $r \in P$. Furthermore, since $K^P_{i} \subseteq K^Q_{i}$, we have that $neg(r) \cap K^P_{i} = \emptyset$. 
By inductive hypothesis, we also have that $pos(r) \subseteq T_{P,K^P_{i}} \uparrow j$. Hence, $a \in T_{P,K^P_{i}} \uparrow j+1$. $\Box$

If $M$ is an answer set of $P$, then $M$ is an answer set of $Q(M)$.

[**Proof.**]{} Obviously, $lfp(T_{Q(M)^{M^+}}) \subseteq lfp(T_{P^{M^+}})$ because $Q(M)^{M^+} \subseteq P^{M^+}$. Thus, it is sufficient to show that $lfp(T_{P^{M^+}}) \subseteq lfp(T_{Q(M)^{M^+}})$. We prove by induction on $i$ that $T_{P^{M^+}} \uparrow i \subseteq T_{Q(M)^{M^+}} \uparrow i$.

[**Base:**]{} Let $a \in T_{P^{M^+}} \uparrow 1$. This implies that there is a rule $r \in P$ such that $head(r) = a$, $pos(r) = \emptyset$, and $neg(r) \subseteq M^-$. Because $a \in M^+$, we have that $r \in Q(M)$. Thus, $a \in T_{Q(M)^{M^+}} \uparrow 1$.

[**Step:**]{} Let $a \in T_{P^{M^+}} \uparrow i+1$. This implies that there exists a rule $r \in P$ such that $head(r) = a$, $pos(r) \subseteq T_{P^{M^+}} \uparrow i$, and $neg(r) \subseteq M^-$. Since $a \in M^+$, we have that $r \in Q(M)$. Thus, $a \in T_{Q(M)^{M^+}} \uparrow i+1$. $\Box$

In the rest of this discussion, let us indicate with $WF_Q$ the well-founded model of $Q(M)$ and with $WF_P$ the well-founded model of $P$.

${\cal TA}_P(M) \subseteq WF_Q^-$.

[**Proof.**]{} Consider $a \in {\cal TA}_P(M)$. We have that $a \not\in U^Q_i$ for every $i$, since there are no rules with $a$ as head in $Q(M)$. This means that $a \in WF_Q^-$. Thus, ${\cal TA}_P(M) \subseteq WF_Q^-$. $\Box$

The well-founded model $WF_Q$ of $Q(M)$ is equal to $M$, i.e., $WF_Q = M$.

[**Proof.**]{} From Proposition \[sequence\], we have that $WF_P^+ \subseteq WF_Q^+$ and $WF_P^- \subseteq WF_Q^-$. Furthermore, since ${\cal TA}_P(M) \subseteq WF_Q^-$, we can conclude that $M^- \subseteq WF_Q^-$. Since $M$ is an answer set of $Q(M)$, we also have that $WF_Q^- \subseteq M^-$. Thus, $M^- = WF_Q^-$. This conclusion implies that there is a value $k$ such that $U^Q_k = {\cal A} \setminus M^-$. Let us now show that $K^Q_{k+1} = M^+$.
Since $M$ is an answer set of $Q(M)$, we immediately have that $K^Q_{k+1} \subseteq M^+$. Let us prove, by induction on $i$, that $T_{P^{M^+}} \uparrow i \subseteq T_{Q(M),U^Q_k} \uparrow i$. [**Base:**]{} Let $a \in T_{P^{M^+}} \uparrow 1$. This implies that there is a rule $r \in P$ such that $head(r) = a$, $pos(r) = \emptyset$, and $neg(r) \subseteq M^-$. Since $a \in M^+$, we have that $r \in Q(M)$. Furthermore, since $U^Q_k = {\cal A} \setminus M^-$ and $neg(r) \subseteq M^-$, we have that $neg(r) \cap U^Q_k = \emptyset$. Thus, $a \in T_{Q(M),U^Q_k} \uparrow 1$. [**Step:**]{} Let $a \in T_{P^{M^+}} \uparrow i+1$. This implies that there is a rule $r \in P$ such that $head(r) = a$, $pos(r) \subseteq T_{P^{M^+}} \uparrow i$, and $neg(r) \subseteq M^-$. Since $a \in M^+$, we have that $r \in Q(M)$. Furthermore, since $U^Q_k = {\cal A} \setminus M^-$ and $neg(r) \subseteq M^-$, we have that $neg(r) \cap U^Q_k = \emptyset$. By inductive hypothesis, we also have that $pos(r) \subseteq T_{Q(M),U^Q_k} \uparrow i$. Thus, $a \in T_{Q(M),U^Q_k} \uparrow i+1$. $\Box$ Proof of Lemma \[good\]. {#proof-of-lemma-good. .unnumbered} ------------------------ The proof of this lemma makes use of several results and definitions in [@BrassDFZ01]. For this reason, let us recall the necessary definitions from [@BrassDFZ01]. Given a program $P$, let us denote with $heads(P) = \{a\:|\: \exists r\in P.\: head(r)=a\}$ and with $facts(P) = \{a \:|\: (a{\mbox{$\: {\tt : \!\! - }\:$}}) \in P\}$. We can consider the following program transformations [@BrassDFZ01]: - $P_1\mapsto_P P_2$ iff $a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P_1$, $not\:b \in body$, $b \not\in heads(P_1)$, and\ $P_2 = (P_1 \setminus \{a {\mbox{$\: {\tt : \!\! - }\:$}}body\})\cup\{a {\mbox{$\: {\tt : \!\! - }\:$}}body\setminus\{not\:b\}\}$ - $P_1 \mapsto_N P_2$ iff $a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P_1$, $not\:b \in body$, $b\in facts(P_1)$, and\ $P_2 = P_1 \setminus \{a{\mbox{$\: {\tt : \!\! 
- }\:$}}body\}$ - $P_1 \mapsto_S P_2$ iff $a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P_1$, $b\in body$, $b \in facts(P_1)$, and\ $P_2 = (P_1 \setminus \{a{\mbox{$\: {\tt : \!\! - }\:$}}body\})\cup \{a {\mbox{$\: {\tt : \!\! - }\:$}}(body\setminus\{b\})\}$ - $P_1 \mapsto_F P_2$ iff $a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P_1$, $b\in body$, $b\not\in heads(P_1)$, and\ $P_2 = P_1 \setminus \{a{\mbox{$\: {\tt : \!\! - }\:$}}body\}$ - $P_1 \mapsto_L P_2$ iff there is a non-empty set of atoms $S$ such that - for each rule $a{\mbox{$\: {\tt : \!\! - }\:$}}body$ in $P_1$ where $a\in S$ we have that $S\cap body \neq \emptyset$ - $P_2 = \{r\in P_1\:|\: body(r) \cap S = \emptyset\}$ - $P_1 \neq P_2$ We write $P_1 \mapsto P_2$ to indicate that there exists a transformation $t \in \{P, N, S, F, L\}$ such that $P_1 \mapsto_t P_2$. A program $P$ is [*irreducible*]{} if no transformation $t\in \{P, N, S, F, L\}$ can change it, i.e., there is no $P' \neq P$ with $P \mapsto_t P'$. The results in [@BrassDFZ01] show that the above transformation system is terminating and confluent, i.e., given a program $P$, (a) there exists a sequence of programs $P=P_0,P_1,\ldots,P_n=P^*$ such that $P_i \mapsto P_{i+1}$ for $0 \le i \le n-1$ and $P^*$ is irreducible; and (b) for every sequence of programs $P=Q_0,Q_1,\ldots,Q_m=Q^*$ such that $Q_i \mapsto Q_{i+1}$ for $0 \le i \le m-1$ and $Q^*$ is irreducible, we have $Q^* = P^*$. We call the irreducible program $P^*$ obtained from $P$ through this transformation system the normal form of $P$. The result in [@BrassDFZ01] shows that the well-founded model $WF_P=\langle W^+,W^-\rangle$ of $P$ can be obtained by $$\begin{array}{lcr} W^+=facts(P^*) & \hspace{1cm} & W^- = \{a\:|\: a\not\in heads(P^*)\}\end{array}$$ where $P^*$ is the normal form of $P$. [*Lemma \[good\].*]{} Let $P$ be a program, $M$ an answer set, and $WF_P$ the well-founded model of $P$. Each atom $a\in WF_P$ has an off-line justification w.r.t. $M$ and $\emptyset$ which does not contain any negative cycle.
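The non-loop part of this transformation system is straightforward to operationalize. The following Python sketch is ours, not the paper's: a rule $a \,{\tt :\!-}\, b_1,\ldots,b_k, not\:c_1,\ldots,not\:c_h$ is encoded as the triple $(a, pos, neg)$, loop detection $\mapsto_L$ is omitted, and the well-founded model is read off the resulting fixpoint as described above (so the sketch is complete only for programs on which $\mapsto_L$ is never needed):

```python
# Our own encoding (an assumption, not the paper's notation): a rule
# "a :- b1,...,bk, not c1,...,not ch" is the triple (a, pos, neg).

def facts(prog):
    # facts(P) = {a | (a :-) in P}
    return {h for (h, pos, neg) in prog if not pos and not neg}

def heads(prog):
    # heads(P) = {a | some rule of P has head a}
    return {h for (h, _, _) in prog}

def normal_form(prog):
    """Apply ->P, ->N, ->S, ->F until a fixpoint is reached.
    Loop detection ->L is omitted, so this yields the true normal
    form only when ->L is never applicable."""
    prog = {(h, frozenset(p), frozenset(n)) for (h, p, n) in prog}
    while True:
        f, hd = facts(prog), heads(prog)
        new = set()
        for (h, pos, neg) in prog:
            if neg & f:        # ->N: body contains "not b" with b a fact
                continue
            if pos - hd:       # ->F: body contains b with no rule for b
                continue
            # ->S drops established positive atoms; ->P drops negative
            # literals "not b" whose b occurs in no head
            new.add((h, pos - f, neg & hd))
        if new == prog:
            return prog
        prog = new

def well_founded(prog):
    """W+ = facts(P*), W- = atoms with no rule in P*."""
    star = normal_form(prog)
    atoms = {a for (h, p, n) in prog for a in {h} | set(p) | set(n)}
    return facts(star), atoms - heads(star)
```

For instance, on the program $\{a \,{\tt :\!-}\, not\:b,\;\; b \,{\tt :\!-}\, c\}$ the sketch returns $W^+=\{a\}$ and $W^-=\{b,c\}$: the rule for $b$ is removed by $\mapsto_F$, after which $not\:b$ is removed by $\mapsto_P$.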
[**Proof:**]{} Let us consider the sequence of transformations of the program $$P=P_0 \mapsto P_1 \mapsto \dots \mapsto P^*$$ such that the transformation $\mapsto_L$ is used only when no other transformation can be applied. Furthermore, let $$WP_i = \langle W_i^+,W_i^-\rangle = \langle facts(P_i), \{a\:|\:a\not\in heads(P_i)\}\rangle$$ We wish to prove, by induction on $i$, that if $a\in W_i^+\cup W_i^-$ then it has a justification which is free of negative cycles and contains only elements of $W_i^+\cup W_i^-$. For the sake of simplicity, we will describe justification graphs simply as sets of edges. Also, we will denote with ${\cal J}(a)$ the graph created for the element $a$. [**Base:**]{} Let us consider $i = 0$. We have two cases: - $a\in W_0^+$. This means that $a \in facts(P_0) = facts(P)$. This implies that ${\cal J}(a)=\{(a^+,\top,+)\}$ is a cycle-free justification for $a$ w.r.t. $WP_0$ and $\emptyset$. - $a\in W_0^-$. This means that $a \not\in heads(P_0)=heads(P)$. From the definition of off-line justification, this means that we can build the justification ${\cal J}(a)=\{(a^-,\bot,+)\}$, which is also cycle-free. In addition, the only atoms in the justification belong to $W_0^+\cup W_0^-$. [**Step:**]{} Let us assume that the inductive hypothesis holds for $j \le i$. Let us consider $a \in W_{i+1}^+ \cup W_{i+1}^-$. We have two cases: - $a \in W_{i+1}^+$. Without loss of generality, we can assume that $a \not\in W_i^+$. This means that the reduction step taken to construct $P_{i+1}$ from $P_i$ produced a fact of the form $a{\mbox{$\: {\tt : \!\! - }\:$}}$. This implies that there exists a rule $$a {\mbox{$\: {\tt : \!\! - }\:$}}b_1, \dots, b_k, not\:c_1, \dots, not\:c_h$$ in $P$ such that each $b_j$ has been removed in the previous steps by $\mapsto_S$ transformations, and each $not\:c_r$ has been removed by $\mapsto_P$ transformations.
This means that each $b_j \in W_i^+$, each $c_r \in W_i^-$, and, by inductive hypothesis, they admit justifications free of negative cycles. We can construct a justification ${\cal J}(a)$ for $a$, which is free of negative cycles and is the union of all the justifications free of negative cycles of $b_1,\dots,b_k,c_1,\dots,c_h$ and the edges $(a^+,b_1^+,+), \dots, (a^+,b_k^+,+), (a^+,c_1^-,-), \dots, (a^+,c_h^-,-)$. Note that, with the exception of $a$, the atoms involved in the justification ${\cal J}(a)$ are only atoms of $W_i^+\cup W^-_i$. - Let us now consider $a\in W_{i+1}^-$. Again, we assume that $a \not\in W_i^-$. This means that in $P_{i+1}$ there are no rules left for $a$. Let us consider each individual rule for $a$ in $P$, of the generic form $$\label{generic} a {\mbox{$\: {\tt : \!\! - }\:$}}b_1, \dots, b_k, not\:c_1, \dots, not\:c_h$$ We consider two cases: - $P_i \mapsto_N P_{i+1}$ or $P_i \mapsto_F P_{i+1}$. By our assumption about the sequence of transformations, we can conclude that the transformation $\mapsto_L$ has not been applied in removing rules whose head is $a$. In other words, each rule (\[generic\]) has been removed by either a $\mapsto_N$ or a $\mapsto_F$ transformation. This implies that for each rule (\[generic\]), there exists either a $c_j \in W_i^+$ or a $b_l \in W_i^-$, i.e., there exist $C^+ \subseteq W_i^+$ and $C^- \subseteq W_i^-$ such that for each rule $r$ with $head(r)=a$, $C^+ \cap neg(r) \ne \emptyset$ or $C^- \cap pos(r) \ne \emptyset$. Without loss of generality, we can assume that $C^+$ and $C^-$ are minimal (w.r.t. $\subseteq$). By inductive hypothesis, we know that each element in $C^+$ and $C^-$ possesses a justification free of negative cycles which contains only atoms in $WP_i$.
As in the first item, we have that ${\cal J}(a) = \bigcup_{c \in C^+ \cup C^-} {\cal J}(c) \cup \{(a^-,c^+,-) \mid c \in C^+\} \cup \{(a^-,c^-,+) \mid c \in C^-\}$ is a justification free of negative cycles for $a$ which, with the exception of $a^-$, contains only atoms in $WP_i$. - $P_i \mapsto_L P_{i+1}$. The fact that $a \in W_{i+1}^- \setminus W_i^-$ indicates that all rules with $a$ as head have been removed. In this case, there might be some rules with $a$ as its head that have been removed by other transformations. Let $R_1(a)$ (resp. $R_2(a)$) be the set of rules, whose head is $a$, which are removed by a transformation $\mapsto_F$ or $\mapsto_N$ (resp. $\mapsto_L$). Let $S$ be the set of atoms employed for the $\mapsto_L$ step (i.e., the $i$-th step). Let $a_1,\ldots,a_s$ be an enumeration of $S$. For a subset $X$ of $S$, let $\min(X)$ denote the element in $X$ with the smallest index according to the above enumeration. Let $$\begin{array}{lcr} G_0 & = & \{ (a^-, b^-, +) \:|\: a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P_i, b = \min(body\cap S) \}\\ G_{j+1} & = & \{ (b^-,c^-,+) \:|\: \exists (d^-,b^-,+) \in G_j, (b{\mbox{$\: {\tt : \!\! - }\:$}}body)\in P_i, \\ & & c = \min(body \cap S) \} \end{array}$$ Because of the finiteness of $S$, there exists some $j$ such that $G_j \subseteq \bigcup_{0 \le i \le j-1} G_i$. Let $G = \bigcup_{j\geq 0} G_j$ be the graph[^5]. Because of the property of $S$, it is easy to see that for each atom $c$ in the graph $G$, $support(c,G)$ is a LCE of $c$ w.r.t. $WP_i$ and $\emptyset$ (w.r.t. the program $P_i$). Thus, we have that $G$ is an off-line justification for $a$ in $P_i$. Furthermore, it contains only positive cycles and it is composed of atoms from $S\cup \{a\}$. The construction of $G$ takes care of rules of the form (\[generic\]), which belong to $R_2(a)$.
As in the previous case, we know that for each atom $b$ such that $b^-$ is a node in $G$, there exist $C_b^+ \subseteq W_{i-1}^+$ and $C_b^- \subseteq W_{i-1}^-$ such that for each rule $r$ with $head(r) = b$ in $R_1(b)$, $C_b^+ \cap neg(r) \ne \emptyset$ or $C_b^- \cap pos(r) \ne \emptyset$. $G$ can be extended to an off-line justification of $a$ by adding to it the justifications of other atoms that falsify the rules in $R_1(b)$ for every $b \in S$. More precisely, for each atom $b$ such that $b^-$ is a node in $G$, let $$G_b = \bigcup_{c \in C_b^+ \cup C_b^-} {\cal J}(c) \cup \{(b^-,c^+,-) \mid c \in C_b^+\} \cup \{(b^-,c^-,+) \mid c \in C_b^-\}.$$ Note that ${\cal J}(c)$ in the above equation exists due to the inductive hypothesis. Furthermore, each $G_b$ contains only atoms in $WP_i$ with the exception of $b$ and therefore cannot contain negative cycles. Thus, $G' = G \cup \bigcup_{b^- \textnormal{ is a node in } G} G_b$ does not contain negative cycles. It is easy to check that $support(c,G')$ is a LCE of $c$ in $P$ w.r.t. $WP_{i+1}$ and $\emptyset$. Thus, $G'$ is an off-line justification for $a$ in $P$ w.r.t. $WP_{i+1}$ and $\emptyset$. Let us now extend this into a justification for $a$ in $P$. Let us define: $$\begin{array}{lcl} G'_0 & = & \bigcup\left\{ \{(a^-,b^-,+)\}\cup {\cal J}(b)\:\begin{array}{|l} a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P \wedge b \in body\cap W_i^- \end{array}\right\}\\ && \cup \bigcup \left\{ \{(a^-,b^+,-)\}\cup {\cal J}(b)\:\begin{array}{|l} a{\mbox{$\: {\tt : \!\! - }\:$}}body \in P \wedge not\:b \in body \wedge b \in W_i^+ \end{array}\right\}\\ G'_{j+1} & = & \bigcup\left\{ \{(b^-,c^-,+)\}\cup {\cal J}(c)\:\begin{array}{|l} \exists (d^-,b^-,+) \in G_j \wedge b{\mbox{$\: {\tt : \!\! - }\:$}}body \in P \wedge c \in body\cap W_i^- \end{array}\right\}\\ && \cup \bigcup \left\{ \{(b^-,c^+,-)\}\cup {\cal J}(c)\:\begin{array}{|l} \exists (d^-,b^-,+)\in G_j \wedge\\ b{\mbox{$\: {\tt : \!\!
- }\:$}}body \in P \wedge\\ not\:c \in body \wedge\\ c \in W_i^+ \end{array}\right\}\\ {\cal J}(a) & = & \bigcup_{j\geq 0} G_j \cup \bigcup_{j\geq 0} G'_j \end{array}$$ The elements added in $G'_j$ correspond to the justifications for the rules for the atoms of $G$ that have been removed before $P_{i}$; by the inductive hypothesis, all these have justifications that are free of negative cycles, and they can be safely added to $G$. $\Box$ Proof of Proposition \[propimp\]. {#proof-of-proposition-propimp. .unnumbered} --------------------------------- [*Proposition \[propimp\].*]{} Let $P$ be a program and $M$ an answer set. For each atom $a$, there is an off-line justification w.r.t. $M$ and ${\cal TA}_P(M)$ which does not contain negative cycles. [**Proof:**]{} The result is trivial, since all the elements in ${\cal TA}_P(M)$ are immediately set to false, and $NR(P,{\cal TA}_P(M))$ has a well-founded model equal to $M$ (and thus all elements have justifications free of negative cycles, from Lemma \[good\]). $\Box$ Proof of Proposition \[gammadelta\]. {#proof-of-proposition-gammadelta. .unnumbered} ------------------------------------ The proof of this proposition will develop through a number of intermediate steps. Let us start by introducing some notation. Given a program $P$ and the Herbrand universe $\cal A$, let $nohead(P)=\{a \in {\cal A}\: :\: \forall r\in P.\: a \neq head(r)\}$. Furthermore, for two sets of atoms $\Gamma, \Delta$ such that $\Gamma \cap \Delta = \emptyset$, we define a program transformation $\rightarrow_{\langle \Gamma, \Delta \rangle}$ as follows. The program $P'$, obtained from $P$ by - removing $r$ from $P$ if $pos(r) \cap \Delta \ne \emptyset$ or $neg(r) \cap \Gamma \ne \emptyset$ (remove rules that are inapplicable w.r.t. $\langle \Gamma, \Delta \rangle$).
- replacing each remaining rule $r$ with $r'$ where $head(r') = head(r)$, $pos(r') = pos(r) \setminus \Gamma$, and $neg(r') = neg(r) \setminus \Delta$ (normalize the body of the rules w.r.t. $\langle \Gamma, \Delta \rangle$) is said to be the result of the transformation $\rightarrow_{\langle \Gamma, \Delta \rangle}$. We write $P \rightarrow_{\langle \Gamma, \Delta \rangle} P'$ to denote this fact. The following can be proven. \[l1\] Let $P$ be a program, and let $\Gamma$ and $\Delta$ be two sets of atoms such that $\Gamma \subseteq facts(P)$, $\Delta = \bigcup_{i=1}^k S_i \cup X$ where $X \subseteq nohead(P)$ and $S_1,\ldots,S_k$ is a sequence of sets of atoms such that $S_i \in cycles(\langle \emptyset,\emptyset\rangle)$ for $1 \le i \le k$. It holds that if $P \rightarrow_{\langle \Gamma , \Delta \rangle }P'$ then there exists a sequence of basic transformations $P \mapsto_{t_1} P_1 \mapsto_{t_2} \ldots \mapsto_{t_m} P'$ where $t_i \in \{P,N,S,F,L\}$ (see the proof of Lemma \[good\] for the definition of these transformations). [**Proof.**]{} We prove this lemma by describing the sequence of transformations $\mapsto$. Let $\Omega = \bigcup_{i=1}^k S_i$. The proof is based on the following observations: 1. Since $\Gamma$ is a set of facts, we can repeatedly apply the $\mapsto_N$ and $\mapsto_S$ transformations to $P$. The result is a program $P_1$ with the following properties: for every $r \in P_1$, there exists some $r' \in P$ with $neg(r') \cap \Gamma = \emptyset$ and 1. $neg(r) = neg(r')$ 2. $head(r) = head(r')$ and 3. $pos(r) = pos(r') \setminus \Gamma$. 2. Since $X$ is a set of atoms with no rules in $P_1$, we can repeatedly apply the $\mapsto_P$ and $\mapsto_F$ transformations to $P_1$ for the atoms belonging to $X$. The result is a program $P_2$ with the following properties: for every $r \in P_2$, there exists some $r' \in P_1$ with $pos(r') \cap X = \emptyset$ and 1. $pos(r) = pos(r')$ 2. $head(r) = head(r')$ and 3. $neg(r) = neg(r') \setminus X$. 3.
Since $\Omega$ is a set of atoms with cycles, we can apply the loop detection transformation $\mapsto_L$ for each of the loops in $\Omega$ to $P_2$; thus, we obtain $P_3 = P_2 \setminus \{r \in P_2 \mid head(r) \in \Omega\}$. 4. Since atoms in $\Omega$ will no longer have defining rules in $P_3$, the transformations for atoms in $\Omega$ (similar to those for atoms in $X$) can be applied to $P_3$; the result is the program $P_4$ with the property: for every $r \in P_4$, there exists some $r' \in P_3$ with $pos(r') \cap \Omega = \emptyset$ and 1. $pos(r) = pos(r')$ 2. $head(r) = head(r')$ and 3. $neg(r) = neg(r') \setminus \Omega$. Finally, let us consider $P_4$; for each rule $r \in P_4$, there is a rule $r' \in P$ such that $pos(r') \cap \Delta = \emptyset$, $neg(r') \cap \Gamma = \emptyset$, and 1. $pos(r) = pos(r') \setminus \Gamma$ 2. $head(r) = head(r')$ and 3. $neg(r) = neg(r') \setminus \Delta$. This shows that $P \rightarrow_{\langle \Gamma, \Delta \rangle} P_4$. $\Box$ For a program $P$, let $WF_P$ be its well-founded model. Let us define a sequence of programs $P_0, P_1,\ldots,P_k,...$ as follows: $$\begin{array}{lcl} P_0 & = & P \\ P_0 & \rightarrow_{\langle \Gamma^1(WF_P), \Delta^1(WF_P) \rangle } & P_1 \\ P_i & \rightarrow_{\langle \Gamma^{i+1}(WF_P), \Delta^{i+1}(WF_P) \rangle } & P_{i+1} \\ \end{array}$$ \[l2new\] Given the previously defined sequence of programs, the following properties hold: 1. For $i \ge 0$, $\Gamma^{i}(WF_P) \subseteq facts(P_i)$ and $\Delta^{i}(WF_P)\subseteq nohead(P_i)$. 2. If $\Gamma^i(WF_P) = \Gamma^{i+1}(WF_P)$ then $\Gamma^{i+1}(WF_P) = facts(P_{i+1})$. 3. If $\Delta^i(WF_P) = \Delta^{i+1}(WF_P)$ then $\Delta^{i+1}(WF_P) = nohead(P_{i+1})$. [**Proof.**]{} 1. The first property holds because of the construction of $P_i$ and the definitions of $\Gamma^{i}(WF_P)$ and $\Delta^{i}(WF_P)$. 2. Consider some $a \in facts(P_{i+1})$. 
By the definition of $P_{i+1}$, there exists some rule $r \in P_i$ such that - $head(r) = a$, - $pos(r) \cap \Delta^i(WF_P) = \emptyset$, - $neg(r) \cap \Gamma^i(WF_P) = \emptyset$, - $pos(r) \setminus \Gamma^i(WF_P) = \emptyset$, and - $neg(r) \setminus \Delta^i(WF_P) = \emptyset$. This implies that $pos(r) \subseteq \Gamma^i(WF_P)$ and $neg(r) \subseteq \Delta^i(WF_P)$, i.e., $a \in \Gamma^{i+1}(WF_P)$. This proves the equality of the second item. 3. Consider some $a \in nohead(P_{i+1})$. This means that every rule of $P_i$ having $a$ in the head has been removed; i.e., for every $r \in P_i$ with $head(r) = a$, we have that - $pos(r) \cap \Delta^i(WF_P) \ne \emptyset$ or - $neg(r) \cap \Gamma^i(WF_P) \ne \emptyset$. This implies that $a \in \Delta^{i+1}(WF_P)$, which allows us to conclude the third property. $\Box$ \[l3\] Let $k$ be the first index such that $\Gamma^k(WF_P) = \Gamma^{k+1}(WF_P)$ and $\Delta^k(WF_P) = \Delta^{k+1}(WF_P)$. Then, $P_{k+1}$ is irreducible w.r.t. the transformations $\mapsto_{NPSFL}$. [**Proof.**]{} This result follows from Lemma \[l2new\], since $\Gamma^{k+1}(WF_P) = facts(P_{k+1})$ and $\Delta^{k+1}(WF_P) = nohead(P_{k+1})$. This means that $P_{k+1} \mapsto^*_{NPSF} P_{k+1}$. Furthermore, $cycles(\langle \Gamma^{k+1}(WF_P), \Delta^{k+1}(WF_P) \rangle) = \emptyset$. Hence, $P_{k+1}$ is irreducible. $\Box$ \[prop-wfs\] For a program $P$, $WF_P = \langle \Gamma(WF_P), \Delta(WF_P) \rangle$. [**Proof.**]{} This result follows from Lemmas \[l2new\] and \[l3\]. $\Box$ \[l4\] Given two p-interpretations $I \sqsubseteq J$, we have that $\Gamma(I) \subseteq \Gamma(J)$ and $\Delta(I) \subseteq \Delta(J)$. [**Proof.**]{} We prove that $\Gamma^i(I) \subseteq \Gamma^i(J)$ and $\Delta^i(I) \subseteq \Delta^i(J)$ by induction on $i$. 1. [**Base:**]{} $i=0$. This step is obvious, since $I \sqsubseteq J$. 2. [**Step:**]{} Let $I_i = \langle \Gamma^i(I), \Delta^i(I) \rangle$ and $J_i = \langle \Gamma^i(J), \Delta^i(J) \rangle$.
From the inductive hypothesis, we can conclude that $I_i \sqsubseteq J_i$. This result, together with the fact that, for any rule $r$, $I_i \models body(r)$ implies $J_i \models body(r)$, allows us to conclude that $\Gamma^{i+1}(I) \subseteq \Gamma^{i+1}(J)$. Similarly, from the fact that $cycles(I_i) \subseteq cycles(J_i)$ and the inductive hypothesis, we can show that $\Delta^{i+1}(I) \subseteq \Delta^{i+1}(J)$. $\Box$ \[prop-ans\] Given a program $P$ and an answer set $M$ of $P$, $M = \langle \Gamma(M), \Delta(M)\rangle$. [**Proof.**]{} Let us prove this lemma by contradiction. Let $J = \langle \Gamma(M), \Delta(M)\rangle$. First, Lemmas \[l4\] and \[prop-wfs\] imply that $WF_P \sqsubseteq J$. Since $M$ is an answer set of $P$, there exists some level mapping $\ell$ such that $M$ is a *well-supported model* w.r.t. $\ell$ [@Fages94], i.e., for each $a \in M^+$ there exists a rule $r_a$ satisfying the following conditions: - $head(r_a) = a$, - $r_a$ is supported by $M$ (i.e., $pos(r_a) \subseteq M^+$ and $neg(r_a) \subseteq M^-$), and - $\ell(a) > \ell(b)$ for each $b \in pos(r_a)$. We have to consider the following cases: - [**Case 1:**]{} $M^+ \setminus J^+ \ne \emptyset$. Consider $a \in M^+ \setminus J^+$ such that $\ell(a) = \min \{\ell(b)\: \mid\: b \in M^+ \setminus J^+\}$. There exists a rule $r$ such that $head(r) = a$, $r$ is supported by $M$, and $\ell(a) > \ell(b)$ for each $b \in pos(r)$. The minimality of $\ell(a)$ implies that $pos(r) \subseteq J^+$. The fact that $a \not\in J^+$ implies that $neg(r) \setminus J^- \ne \emptyset$. Consider some $c \in neg(r) \setminus J^-$. Clearly, $c \not\in (NANT(P) \setminus WF_P^-)$; otherwise, it would belong to $J^-$. This implies that $c \in WF_P^-$ because $c \in NANT(P)$. Hence, $c \in J^-$. This represents a contradiction. - [**Case 2:**]{} $M^- \setminus J^- \ne \emptyset$. Consider $a \in M^- \setminus J^-$.
This is possible only if there exists some rule $r$ such that - $head(r) = a$, - $pos(r) \cap \Delta(M) = \emptyset$, - $neg(r) \cap \Gamma(M) = \emptyset$, and - either (i) $neg(r) \setminus \Delta(M) \ne \emptyset$, or (ii) $pos(r) \setminus \Gamma(M) \ne \emptyset$. In what follows, by $R_a$ we denote the set of rules in $P$ whose head is $a$ and whose bodies are neither true nor false in $J$. If (i) is true, then there exists some $b \in neg(r) \setminus \Delta(M)$. Since $b \in neg(r)$, we have that $b \in NANT(P)$. This implies that $b \not\in M^-$ or $b \in WF_P^-$. The second case cannot happen since $WF_P \sqsubseteq J$ (Lemma \[l4\]). So, we must have that $b \not\in M^-$. This means that $b \in M^+$ (since $M$ is an answer set, and thus a complete interpretation), and hence, $b \in J^+$ (by Case 1). This contradicts the fact that $neg(r) \cap \Gamma(M) = \emptyset$. Therefore, we conclude that (i) cannot happen. Since (i) is not true, we can conclude that $R_a \ne \emptyset$ and for every $r\in R_a$ and $b \in pos(r) \setminus \Gamma(M)$, $b \in M^-\setminus J^-$ and $R_b \ne \emptyset$. Let us consider the following sequence: $$\begin{array}{l} C_0 = \{a\} \;\;\;\;\; \\ C_1 = \bigcup_{r \in R_a} (pos(r) \setminus \Gamma(M)) \\ \ldots \\ C_i = \bigcup_{b \in C_{i-1}} (\bigcup_{r \in R_b} (pos(r) \setminus \Gamma(M))) \\ \end{array}$$ Let $C = \bigcup_{i=0}^\infty C_i$. It is easy to see that for each $c \in C$, it holds that $c \in M^- \setminus J^-$, $R_c \ne \emptyset$, and for each $r \in R_c$, $pos(r) \cap C \ne \emptyset$. This means that $C \in cycles(J)$, and thus $C \subseteq \Delta(M) = J^-$; this contradicts $C \subseteq M^- \setminus J^-$.
$\Box$ [*Proposition \[gammadelta\].*]{} For a program $P$, we have that: - $\Gamma$ and $\Delta$ maintain the consistency of $J$, i.e., if $J$ is an interpretation, then $\langle \Gamma(J), \Delta(J) \rangle$ is also an interpretation; - $\Gamma$ and $\Delta$ are monotone w.r.t. the argument $J$, i.e., if $J \sqsubseteq J'$ then $\Gamma(J) \subseteq \Gamma(J')$ and $\Delta(J) \subseteq \Delta(J')$; - $\Gamma(WF_P) = WF_P^+ $ and $\Delta(WF_P) = WF_P^-$; and - if $M$ is an answer set of $P$, then $\Gamma(M) = M^+ $ and $\Delta(M) = M^-$. [**Proof:**]{} 1. Follows immediately from the definition of $\Gamma$ and $\Delta$. 2. Since $J \models body(r)$ implies $J' \models body(r)$ and $S \in cycles(J)$ implies $S \in cycles(J')$ if $J \sqsubseteq J'$, the conclusion of the item is trivial. 3. This derives from Lemma \[prop-wfs\]. 4. This derives from Lemma \[prop-ans\]. $\Box$ Proof of Proposition \[on-off\]. {#proof-of-proposition-on-off. .unnumbered} -------------------------------- To prove this proposition, we first prove Lemmas \[just-free\] and \[conserve\]. We need the following definition. Let $G$ be an arbitrary graph whose nodes are in ${\cal A}^p \cup {\cal A}^n \cup \{assume,\top,\bot\}$ and whose edges are labeled with $+$ and $-$. Given $e \in {\cal A}^p \cup {\cal A}^n$, the *subgraph* of $G$ with root $e$, denoted by $Sub(e, G)$, is the graph obtained from $G$ by 1. removing all the edges of $G$ which do not lie on any path starting from $e$, and 2. removing all nodes unreachable from $e$ in the resulting graph. Throughout this section, let $I_i$ denote $\langle \Gamma^i(J), \Delta^i(J)\rangle$. For a set of atoms $C$ and an element $b \in C$, let $$\label{kbc} K(b,C) = \{c \mid c \in C, \exists r\in P. \: (head(r)=b, c\in pos(r))\}.$$ Let $J$ be a p-interpretation, $A = {\cal TA}_P(J)$, and let $a \in \Delta^0(J)$ be such that there exists at least one local consistent explanation of $a^-$ w.r.t. $\langle \emptyset,\emptyset\rangle$.
Thus, there are no rules with $a$ as head in $P$, which implies that $a \in WF_P^-$. \[split-d0\] For a p-interpretation $J$ and $A = {\cal TA}_P(J)$, let $\Delta^0(J) = \Delta_1^0 \cup \Delta_2^0 \cup \Delta_3^0$ where $$\Delta_1^0 = \{a \in \Delta^0(J) \mid PE(a^-,\langle \emptyset,\emptyset\rangle) \ne \emptyset\},$$ $$\Delta_2^0 = \{a \in \Delta^0(J) \mid PE(a^-,\langle \emptyset,\emptyset\rangle) = \emptyset \textnormal{ and } a \in {\cal TA}_P(J)\},$$ and $$\Delta_3^0 = \{a \in \Delta^0(J) \mid PE(a^-,\langle \emptyset,\emptyset\rangle) = \emptyset \textnormal{ and } a \not\in {\cal TA}_P(J)\}.$$ The following properties hold: - $\Delta^0_1$, $\Delta^0_2$, and $\Delta^0_3$ are pairwise disjoint. - for each $a \in \Delta_3^0$ there exists a LCE $K_a$ of $a^-$ w.r.t. $\langle \Gamma^0(J),\Delta^0(J)\rangle$ and $A$ such that for each rule $r \in P$ with $head(r) = a$, $pos(r) \cap K_a \ne \emptyset$. [**Proof.**]{} The first item is trivial thanks to the definition of $\Delta^0_1$, $\Delta^0_2$, and $\Delta^0_3$. For the second item, for $a \in \Delta_3^0$, there exists some $C \in cycles(\langle \emptyset,\emptyset \rangle)$ such that $a \in C$ and $C \subseteq \Delta^0(J)$. From the definition of a cycle, there exists some $K_a \subseteq C \subseteq \Delta^0(J)$ which satisfies the condition of the second item. $\Box$ We will now proceed to prove Lemma \[just-free\]. For each $i$, we construct a dependency graph $\Sigma_i$ for elements in $(\Gamma_i(J))^p$ and $(\Delta_i(J))^n$ as follows. Again, we describe a graph by its set of edges. First, the construction of $\Sigma_0$ is as follows. 1. for each $a \in \Gamma^0(J)$, $\Sigma_0$ contains the edge $(a^+,\top,+)$. 2. let $\Delta^0_1$, $\Delta^0_2$, and $\Delta^0_3$ be defined as in Lemma \[split-d0\]. 1. For $a \in \Delta^0_1$, $\Sigma_0$ contains the edge $(a^-,\bot,-)$; 2. For $a \in \Delta^0_2$, $\Sigma_0$ contains the edge $(a^-,assume,-)$. 3. Let $a \in \Delta^0_3$. 
This implies that there exists some $C \subseteq J^-$ such that $a \in C$ and $C \in cycles(\langle \emptyset,\emptyset\rangle)$. For each $b \in C$, let $K_b$ be an explanation of $b^-$ w.r.t. $\langle \Gamma^0(J),\Delta^0(J)\rangle$ and ${\cal TA}_P(J)$ which satisfies the conditions of the second item in Lemma \[split-d0\]. Then, $\Sigma_0$ contains the set of edges $\bigcup_{b \in C} \{(b^-,c^-,+)\: \mid\: c \in K_b\}$. 3. no other edges are added to $\Sigma_0$. \[lbase\] Let $J$ be a p-interpretation and $A = {\cal TA}_P(J)$. The following holds for $\Sigma_0$: 1. for each $a \in \Gamma^0(J)$, $Sub(a^+, \Sigma_0)$ is a safe off-line e-graph of $a^+$ w.r.t. $I_0$ and $A$. 2. for each $a \in \Delta^0(J)$, $Sub(a^-, \Sigma_0)$ is a safe off-line e-graph of $a^-$ w.r.t. $I_0$ and $A$. [**Proof.**]{} - Consider $a \in \Gamma^0(J)$. Since $\Sigma_0$ contains $(a^+,\top,+)$ for every $a \in \Gamma^0(J)$ and $\top$ is a sink in $\Sigma_0$, we can conclude that $Sub(a^+,\Sigma_0) = (\{a^+,\top\},\{(a^+,\top,+)\})$ and $Sub(a^+,\Sigma_0)$ is a safe off-line e-graph of $a^+$ w.r.t. $I_0$ and $A$. - Consider $a \in \Delta^0(J)$. Let $\Delta^0_1$, $\Delta^0_2$, and $\Delta^0_3$ be defined as in Lemma \[split-d0\]. There are three cases: 1. $a \in \Delta^0_1$. Since $\Sigma_0$ contains $(a^-,\bot,-)$ and $\bot$ is a sink in $\Sigma_0$, we can conclude that $Sub(a^-,\Sigma_0) = (\{a^-,\bot\},\{(a^-,\bot,-)\})$ and $Sub(a^-,\Sigma_0)$ is a safe off-line e-graph of $a^-$ w.r.t. $I_0$ and $A$. 2. $a \in \Delta^0_2$. Since $\Sigma_0$ contains $(a^-,assume,-)$ and $assume$ is a sink in $\Sigma_0$, we can conclude that $Sub(a^-,\Sigma_0) = (\{a^-,assume\},\{(a^-,assume,-)\})$ and $Sub(a^-,\Sigma_0)$ is a safe off-line e-graph of $a^-$ w.r.t. $I_0$ and $A$. 3. for $a \in \Delta^0_3$, let $G = Sub(a^-,\Sigma_0) = (N,E)$. It is easy to see that $G$ is indeed a $(J,A)$-based e-graph of $a^-$ because, from the construction of $G$, we have that 1.
every node in $N$ is reachable from $a^-$, and 2. if $b^- \in N$ then $support(b^-,G) = K_b \subseteq N$ is a local consistent explanation of $b^-$ w.r.t. $I_0$ and $A$. The safety of the e-graph derives from the fact that it does not contain any nodes of the form $p^+$. $\Box$ To continue our construction, we will need the following lemma. \[split-di\] Let $J$ be a p-interpretation, $A = {\cal TA}_P(J)$, and $i > 0$. Let $$\Delta^i_1 = \{a \in \Delta^i(J) \setminus \Delta^{i-1}(J) \mid PE(a^-,I_{i-1}) \ne \emptyset\}$$ and $$\Delta^i_2 = \{a \in \Delta^i(J) \setminus \Delta^{i-1}(J) \mid PE(a^-,I_{i-1}) = \emptyset\}.$$ Then, - $\Delta^i_1 \cap \Delta^i_2 = \emptyset$; - for each $a \in \Delta^i_1$ there exists some LCE $K_a$ of $a^-$ w.r.t. $I_i$ and $A$ such that $\{p \in {\cal A} \mid p \in K_a\} \subseteq \Delta^{i-1}(J)$ and $\{a \mid \naf a \in K_a\} \subseteq \Gamma^{i-1}(J)$; and - for each $a \in \Delta^i_2$ there exists some LCE $K_a$ of $a^-$ w.r.t. $I_i$ and $A$ such that $\{p \in {\cal A} \mid p \in K_a\} \subseteq \Delta^{i}(J)$ and $\{a \mid \naf a \in K_a\} \subseteq \Gamma^{i-1}(J)$. [**Proof.**]{} These properties follow immediately from the definition of $\Delta^i(J)$. $\Box$ Given $\Sigma_{i-1}$, we can construct $\Sigma_i$ by reusing all nodes and edges of $\Sigma_{i-1}$ along with the following nodes and edges. 1. for each $a \in \Gamma^i(J) \setminus \Gamma^{i-1}(J)$, from the definition of $\Gamma^i(J)$ we know that there exists a rule $r$ such that $head(r) = a$, $pos(r) \subseteq \Gamma^{i-1}(J)$, and $neg(r) \subseteq \Delta^{i-1}(J)$. $\Sigma_i$ contains the node $a^+$ and the set of edges $\{(a^+,b^+,+) \mid b \in pos(r)\} \cup \{(a^+,b^-,+) \mid b \in neg(r)\}$. 2. let $\Delta^i_1$ and $\Delta^i_2$ be defined as in Lemma \[split-di\]. 1. For $a \in \Delta^i_1$, let $K_a$ be a LCE of $a^-$ satisfying the second condition of Lemma \[split-di\].
Then, $\Sigma_i$ contains the following set of edges: $\{(a^-,b^-,+) \mid b \in K_a\} \cup \{(a^-,b^+,-) \mid \naf b \in K_a\}$; 2. For $a \in \Delta^i_2$, let $K_a$ be a LCE of $a^-$ satisfying the third condition of Lemma \[split-di\]. Then, $\Sigma_i$ contains the set of edges $$\{(a^-,c^-,+) \mid c \in K_a\} \cup \{(a^-,c^+,-) \mid \naf c \in K_a\}.$$ 3. no other edges are added to $\Sigma_i$. \[lprop-sigma\] Let $J$ be a p-interpretation and $A = {\cal TA}_P(J)$. For every integer $i$, the following properties hold: 1. for each $a \in \Gamma^i(J) \setminus \Gamma^{i-1}(J)$, $Sub(a^+, \Sigma_i)$ is a safe off-line e-graph of $a^+$ w.r.t. $I_i$ and $A$. 2. for each $a \in \Delta^i(J) \setminus \Delta^{i-1}(J)$, $Sub(a^-, \Sigma_i)$ is a safe off-line e-graph of $a^-$ w.r.t. $I_i$ and $A$. [**Proof.**]{} The proof is done by induction on $i$. The base case is proved in Lemma \[lbase\]. Assume that we have proved the lemma for $j < i$. We now prove the lemma for $i$. We consider two cases: 1. $a \in \Gamma^i(J) \setminus \Gamma^{i-1}(J)$. Let $r$ be the rule with $head(r) = a$ used in Item 1 of the construction of $\Sigma_i$. For each $b \in pos(r)$, let $P_b = (NP_b,EP_b) = Sub(b^+,\Sigma_{i-1})$. For each $b \in neg(r)$, let $Q_b = (NQ_b,EQ_b) = Sub(b^-,\Sigma_{i-1})$. We have that $Sub(a^+,\Sigma_i) = G = (N,E)$ where $$\begin{array}{lll} N & = & \{a^+\}\ \cup \{b^+ \mid b \in pos(r)\} \cup \{b^- \mid b \in neg(r)\} \cup \\ & & \bigcup_{b \in pos(r)} NP_b \cup \bigcup_{b \in neg(r)} NQ_b \end{array}$$ and $$\begin{array}{lll} E & = & \{(a^+,b^+,+) \mid b \in pos(r)\} \cup \{(a^+,b^-,+) \mid b \in neg(r)\} \cup \\ & & \bigcup_{b \in pos(r)} EP_b \cup \bigcup_{b \in neg(r)} EQ_b \end{array}$$ From the inductive hypothesis, we have that $P_b$’s (resp. $Q_b$’s) are safe off-line e-graphs of $b^+$ (resp. $b^-$) w.r.t. $I_{i-1}$ and $A$. This implies that $G$ is an $(I_i,A)$-based e-graph of $a^+$.
Furthermore, for every $(a^+,e,+) \in E$, $e \in (\Gamma^{i-1}(J))^p$ or $e \in (\Delta^{i-1}(J))^n$. Thus, $(a^+,a^+) \notin E^{*,+}$. 2. for each $a \in \Delta^i(J) \setminus \Delta^{i-1}(J)$, let $G = Sub(a^-,\Sigma_i) = (N,E)$. From the definition of $G$, every node in $N$ is reachable from $a^-$ and $support(e,G)$ is a local consistent explanation of $e$ w.r.t. $I_i$ and $A$ for every $e \in N$. Thus, $G$ is an $(I_i,A)$-based e-graph of $a^-$. Furthermore, it follows from the definition of $\Sigma_i$ that there exists no node $e \in N$ such that $e \in (\Gamma^i(J) \setminus \Gamma^{i-1}(J))^+$. Thus, if $c^+ \in N$ and $(c^+,c^+) \in E^{*,+}$, then $Sub(c^+,\Sigma_{i-1})$ is not safe, contradicting the safety guaranteed by the inductive hypothesis. $\Box$ [*Lemma \[just-free\].*]{} Let $P$ be a program, $J$ a p-interpretation, and $A = {\cal TA}_P(J)$. The following properties hold: - For each atom $a \in \Gamma(J)$ (resp. $a \in \Delta(J)$), there exists a *safe* off-line e-graph of $a^+$ (resp. $a^-$) w.r.t. $J$ and $A$; - for each atom $a \in J^+ \setminus \Gamma(J)$ (resp. $a \in J^- \setminus \Delta(J)$) there exists an on-line e-graph of $a^+$ (resp. $a^-$) w.r.t. $J$ and $A$. [**Proof.**]{} The first item follows from Lemma \[lprop-sigma\]. The second item of the lemma is trivial due to the fact that $(\{a^+,assume\}, \{(a^+,assume,+)\})$\ (resp. $(\{a^-,assume\}, \{(a^-,assume,-)\})$) is a $(J,A)$-based e-graph of $a^+$ (resp. $a^-$), and hence, is an on-line e-graph of $a^+$ (resp. $a^-$) w.r.t. $J$ and $A$. $\Box$ [*Lemma \[conserve\].*]{} Let $P$ be a program, $J$ be an interpretation, and $M$ be an answer set such that $J \sqsubseteq M$. For every atom $a$, if $(N,E)$ is a safe off-line e-graph of $a$ w.r.t. $J$ and $A$ where $A = J^- \cap {\cal TA}_P(M)$ then it is an off-line justification of $a$ w.r.t. $M$ and ${\cal TA}_P(M)$.
[**Proof.**]{} The result is obvious from the definition of off-line e-graph and from the fact that $J^- \cap {\cal TA}_P(M) \subseteq {\cal TA}_P(M)$. $\Box$ [*Proposition \[on-off\].*]{} Let $M_0, \ldots, M_k$ be a general complete computation and\ $S(M_0), \ldots, S(M_k)$ be an on-line justification of the computation. Then, for each atom $a$ in $M_k$, the e-graph of $a$ in $S(M_k)$ is an off-line justification of $a$ w.r.t. $M_k$ and ${\cal TA}_P(M_k)$. [**Proof.**]{} This follows immediately from Lemma \[conserve\], the construction of the snapshots $S(M_i)$, the fact that $M_i \sqsubseteq M_k$ for every $i$, and the fact that $M_k$ is an answer set of $P$. $\Box$ [^1]: Abusing the notation, we often refer to a logic program under the answer set semantics as an “ASP program” whenever it is clear from the context what it refers to. [^2]: Remember that $ {\cal TA}_P(J) = \{ a \:|\: a \in NANT(P)\:\wedge\: a \in J^- \:\wedge\: a \not\in (WF_P^+\cup WF_P^-)\}$. [^3]: We omit the steps that do not change the interpretation. [^4]: Here, we refer to conflict in the same terms as [Smodels]{}. [^5]: Again, we define the graph by its set of edges.
--- abstract: 'The gluon propagator is investigated at finite temperature via lattice simulations. In particular, we discuss its interpretation as a massive-type bosonic propagator. Moreover, we compute the corresponding spectral density and study the violation of spectral positivity. Finally, we explore the dependence of the gluon propagator on the phase of the Polyakov loop.' author: - 'P. J. Silva' - 'O. Oliveira' - 'D. Dudal' - 'P. Bicudo' - 'N. Cardoso' date: 'Received: date / Accepted: date' title: 'Gluons at finite temperature [^1] ' --- Introduction {#intro} ============ The dynamics of QCD at finite temperature and density has been the subject of intensive study, motivated by the heavy-ion experiments running e.g. at CERN [@cernexp] and RHIC [@rhicexp]. From the theoretical point of view, the lattice formulation has been one of the most promising frameworks for investigating the properties of the non-perturbative regime of QCD at non-zero temperature. For pure SU(3) Yang-Mills theory at finite temperature, a first-order transition is found at a critical temperature $T_c\sim 270$ MeV — see, for example, [@Tc] and references therein. For temperatures below $T_c$, the gluons are confined within color-singlets, whereas for $T>T_c$ the gluons become deconfined and behave as massive quasiparticles. The order parameter for the deconfinement phase transition is the Polyakov loop, defined on the lattice as $$L( \vec{x} ) = \mathrm{Tr} \prod^{N_t-1}_{t=0} \, \mathcal{U}_4(\vec{x},t) \quad , \quad L = \langle L( \vec{x} ) \rangle \, \propto \, e^{-F_q/T}$$ where $\mathcal{U}_4$ is the time-oriented link. Its space-averaged value $L$ is a measure of the free energy $F_q$ of a static quark. The behaviour of the Polyakov loop as a function of the temperature is connected with a spontaneous breaking of the center symmetry.
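The definition above can be illustrated numerically. The following toy sketch (our own illustration, not the paper's code) builds a Polyakov loop from random unitary stand-ins for SU(3) temporal links and checks that multiplying the links on a single time slice by a center element $z$ multiplies $L(\vec{x})$ by exactly $z$, which follows from the linearity of the trace:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_link(n=3):
    # Random unitary stand-in for an SU(3) temporal link U_4(x, t).
    # The determinant is not fixed to 1, but the center property tested
    # below only uses linearity of the trace, so this suffices for a demo.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

N_t = 8
links = [random_link() for _ in range(N_t)]  # U_4(x, t) at one spatial site

def polyakov_loop(links):
    """L(x) = Tr prod_{t=0}^{N_t-1} U_4(x, t)."""
    prod = np.eye(3, dtype=complex)
    for U in links:
        prod = prod @ U
    return np.trace(prod)

L0 = polyakov_loop(links)

# A center transformation multiplies the temporal links on one time slice
# (here t = 3) by z in Z_3; the Polyakov loop picks up exactly the factor z.
z = np.exp(2j * np.pi / 3)
twisted = [z * U if t == 3 else U for t, U in enumerate(links)]
L1 = polyakov_loop(twisted)

print(abs(L1 - z * L0) < 1e-12)  # -> True
```

On a full lattice the same transformation leaves the Wilson action invariant, which is the center symmetry discussed next.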
The Wilson gauge action and the integration measure are invariant under a center transformation, where the temporal links on a hyperplane $x_4=const$ are multiplied by an element of the SU(3) center group $z \in Z_3 = \{e^{- i 2 \pi/3}, 1, e^{ i 2 \pi/3} \}$. Under such a center transformation, the Polyakov loop transforms as $L(\vec{x}) \rightarrow z L(\vec{x})$. For temperatures below $T_c$, the local phase of the Polyakov loop is equally distributed among the three sectors, and therefore $L = \langle L( \vec{x} ) \rangle \approx 0$. For $T>T_c$, the $Z_3$ sectors are not equally populated (a manifestation of a spontaneous breaking of the center symmetry) and thus $L \neq 0$. In these proceedings, we focus on several aspects of the Landau gauge gluon propagator at finite temperature, computed via lattice QCD simulations. Like other propagators of fundamental fields (e.g. quark and ghost propagators), gluon two-point functions encode information about non-perturbative phenomena, such as confinement and deconfinement. Lattice setup and propagators ============================= At finite temperature, the gluon propagator has the following tensor structure $$D^{ab}_{\mu\nu}(\hat{q})=\delta^{ab}\left(P^{T}_{\mu\nu} D_{T}(q_4,\vec{q})+P^{L}_{\mu\nu} D_{L}(q_4,\vec{q}) \right) \label{tens-struct}$$ where $P^{T}$ and $P^{L}$ are the transverse and longitudinal projectors respectively: $$P^{T}_{\mu\nu} = (1-\delta_{\mu 4})(1-\delta_{\nu 4})\left(\delta_{\mu \nu}-\frac{q_\mu q_\nu}{\vec{q}^2}\right) \quad , \quad P^{L}_{\mu\nu} = \left(\delta_{\mu \nu}-\frac{q_\mu q_\nu}{{q}^2}\right) - P^{T}_{\mu\nu} \, . \label{gluon-proj}$$ \[lattsetup\]

| Temp. (MeV) | $\beta$ | $L_s$ | $L_t$ | $a$ \[fm\] | $1/a$ (GeV) |
|------------:|--------:|------:|------:|-----------:|------------:|
| 121 | 6.0000 | 64 | 16 | 0.1016 | 1.943 |
| 162 | 6.0000 | 64 | 12 | 0.1016 | 1.943 |
| 194 | 6.0000 | 64 | 10 | 0.1016 | 1.943 |
| 243 | 6.0000 | 64 | 8 | 0.1016 | 1.943 |
| 260 | 6.0347 | 68 | 8 | 0.09502 | 2.0767 |
| 265 | 5.8876 | 52 | 6 | 0.1243 | 1.5881 |
| 275 | 6.0684 | 72 | 8 | 0.08974 | 2.1989 |
| 285 | 5.9266 | 56 | 6 | 0.1154 | 1.7103 |
| 290 | 6.1009 | 76 | 8 | 0.08502 | 2.3211 |
| 305 | 5.9640 | 60 | 6 | 0.1077 | 1.8324 |
| 305 | 6.1326 | 80 | 8 | 0.08077 | 2.4432 |
| 324 | 6.0000 | 64 | 6 | 0.1016 | 1.943 |
| 366 | 6.0684 | 72 | 6 | 0.08974 | 2.1989 |
| 397 | 5.8876 | 52 | 4 | 0.1243 | 1.5881 |
| 428 | 5.9266 | 56 | 4 | 0.1154 | 1.7103 |
| 458 | 5.9640 | 60 | 4 | 0.1077 | 1.8324 |
| 486 | 6.0000 | 64 | 4 | 0.1016 | 1.943 |

In Table \[lattsetup\] we describe the lattice setup used in the next two sections. The temperature is defined by adjusting the lattice temporal size, $T=1/(N_t a)$. Following our first reports [@rep2012], where we saw a measurable dependence of the gluon propagator at finite $T$ on the lattice volume, we have carefully chosen the lattice parameters of the various Monte Carlo simulations in order to keep the physical (spatial) volume at a constant value $\sim (6.5\,\mathrm{fm})^3$. Simulations have been performed in Coimbra [@lca] with the help of the Chroma [@chroma] and PFFT [@pfft] libraries. The results shown here are for renormalized longitudinal and transverse propagators, at the renormalization scale $\mu=4\,$GeV. For details about the renormalization procedure see [@gluonmass]. In Fig. \[plot3d\] we show how the propagators behave as functions of momentum and temperature. Note the sharp transition for the longitudinal component at $T\sim T_c$, and the turnover of the transverse component in the infrared region, for $T \gg T_c$.
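The temperatures quoted in the table above follow from $T=1/(N_t a)$ once the lattice spacing is converted to physical units via $\hbar c \approx 197.327$ MeV fm. A quick sketch (our own illustration) reproduces a few table entries:

```python
HBARC = 197.327  # MeV * fm

def temperature_mev(a_fm, n_t):
    """T = 1/(N_t a), converted to MeV using hbar*c = 197.327 MeV fm."""
    return HBARC / (n_t * a_fm)

# (a [fm], L_t) pairs taken from the lattice-setup table
rows = [(0.1016, 16), (0.1243, 6), (0.1016, 4)]
print([round(temperature_mev(a, nt)) for a, nt in rows])  # -> [121, 265, 486]
```

The rounded values match the quoted temperatures of 121, 265 and 486 MeV.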
![image](plot3d_long.eps){width="45.00000%"} ![image](plot3d_trans.eps){width="45.00000%"} Positivity violation and spectral densities =========================================== It is well known that a Euclidean momentum-space propagator of a (scalar) physical degree of freedom $$\mathcal{G}(p^2)\equiv\braket{\mathcal{O}(p)\mathcal{O}(-p)}$$ ought to have a Källén-Lehmann spectral representation $$\mathcal{G}(p^2)=\int_{0}^{\infty}{\ensuremath{\mathrm{d}}}\mu\frac{\rho(\mu)}{p^2+\mu} \,,\qquad \textrm{with }\rho(\mu)\geq0 \textrm{ for } \mu\geq 0\,,$$ where the spectral density $\rho(\mu)$ contains information on the masses of physical states described by the operator $\mathcal{O}$. In [@spectral] a method is presented which allows one to compute the spectral density of gluons and other (un)physical degrees of freedom, for which the spectral density is not strictly positive. The method relies on Tikhonov regularization combined with the Morozov discrepancy principle. Here we discuss some preliminary results [@latt2013] for the spectral density associated to the gluon propagator at finite temperature, together with the temporal correlator $$C(t) = \int_{-\infty}^{\infty} \frac{dp}{2\pi} D(p^2) \exp(-ipt)= \int_{0}^{\infty} d\omega \rho(\omega^2) e^{-\omega t}.$$ Note that $C(t) < 0$ in some range of $t$ implies a negative spectral density, hence positivity violation and gluon confinement. On the other hand, a positive $C(t)$ says nothing about the sign of $\rho(\mu)$. In Fig. \[postrans\] we plot $C(t)$ for the transverse component. We conclude that positivity is violated for all temperatures. Furthermore, a careful inspection reveals that the time scale for positivity violation decreases with the temperature. In Fig. \[speclong\] we plot, for a number of selected temperatures, the spectral density of the longitudinal component. The plots show that the momentum scale at which the spectral density becomes negative seems to increase with the temperature.
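The diagnostic role of $C(t)$ can be made concrete with two toy propagators (our own sketch, not the lattice data). A Yukawa propagator $D(p^2)=1/(p^2+M^2)$ has a positive (delta-function) spectral density and $C(t)=e^{-M|t|}/2M>0$, whereas a Gribov-type form $D(p^2)=p^2/(p^4+M^4)$, which has complex-conjugate poles and no positive Källén-Lehmann representation, yields a $C(t)$ that changes sign:

```python
import numpy as np

def temporal_correlator(D, t, p_max=400.0, dp=0.002):
    """C(t) = int dp/(2pi) D(p^2) e^{-ipt}, by brute-force quadrature.
    D is even in p, so only the cosine part survives."""
    p = np.arange(-p_max, p_max, dp)
    return np.sum(D(p**2) * np.cos(p * t)) * dp / (2 * np.pi)

yukawa = lambda p2: 1.0 / (p2 + 1.0)    # positive spectral density (M = 1)
gribov = lambda p2: p2 / (p2**2 + 1.0)  # complex-conjugate poles

# Yukawa: C(t) = e^{-t}/2 stays positive (here ~0.184 at t = 1)
print(temporal_correlator(yukawa, 1.0))
# Gribov: C(t) = (1/2) e^{-t/sqrt(2)} cos(t/sqrt(2) + pi/4) turns negative
print(temporal_correlator(gribov, 3.0) < 0)  # -> True
```

The quadrature here is a crude stand-in for what, on the lattice, is a sum over discrete momenta.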
All these results suggest that, for sufficiently high temperatures, the spectral density may be strictly positive and, therefore, gluons would behave as quasi-particles. ![image](positividade_trans_upTc){width="35.00000%"} ![image](positividade_trans_aboveTc){width="35.00000%"} ![image](spectral_densities_variousT.eps){width="75.00000%"} Gluon mass ========== In this section we investigate whether the gluon propagator at finite temperature behaves as a massive-type bosonic propagator [@gluonmass]. We consider a Yukawa-type ansatz $$D(p) = \frac{Z }{ p^2 + M^2} \label{yukawa}$$ where $M$ is the gluon mass and $Z^{\frac{1}{2}}$ the overlap between the gluon state and the quasi-particle massive state. The simplest definition of a gluon mass scale is given by $M = 1 / \sqrt{ D(0)} $. Our results for a mass scale based on this definition can be seen in the left plot of Fig. \[gmass\]. Of course, a more realistic estimate of the gluon mass can be obtained by fitting our lattice data in the infrared region to the ansatz described in Eq. (\[yukawa\]). It turned out that the lattice data for the transverse propagator is not compatible with such an ansatz. The results for the longitudinal propagator can be seen in the right plot of Fig. \[gmass\]. ![image](naivegmass.eps){width="45.00000%"} ![image](yukawagmass.eps){width="45.00000%"} $Z_3$ dependence ================ As seen in Fig. \[plot3d\], $D_L$ and $D_T$ show quite different behaviours with $T$. The gluon propagator is usually computed such that $-\pi/3<\arg(L)\leq\pi/3$, i.e. in the so-called $Z_3$ sector 0. In this section we investigate the behaviour of the gluon propagator in the other ($\pm1$) $Z_3$ sectors[^2] and compare with the results for the zero sector. To achieve this goal, for each configuration in a given ensemble, we applied a center transformation considering all $z\in Z_3$, thus obtaining 3 different configurations related by $Z_3$ transformations, with the very same value of the Wilson action.
Each configuration is then rotated to the Landau gauge. Finally, each of the three gauge configurations is classified according to the phase of $L=|L|e^{i\theta}$. In Fig. \[z3-324\], we can see the typical behaviour of the gluon propagator in the different $Z_3$ sectors for a temperature well above $T_c$. For the longitudinal component, the propagator in the $\pm1$ sectors is strongly enhanced relative to the $0$ sector. On the other hand, the transverse propagator in the $\pm1$ sectors is suppressed if compared with the $0$ sector. However, for temperatures below $T_c$, the picture changes — see Fig. \[z3-269\]. In this case, the three propagators for the different $Z_3$ sectors are indistinguishable. A comparison of the Markov chain history for temperatures below and above $T_c$ — see Fig. \[z3-mc\] — allows one to conclude that the difference of the longitudinal propagator between the different $Z_3$ sectors can be used as a criterion to identify whether a given configuration is in the confined or deconfined phase. More details about this work may be found in [@z3]. O. Oliveira and P. J. Silva acknowledge financial support from FCT Portugal under contract with reference UID/FIS/04564/2016. P. J. Silva acknowledges support by FCT under contracts SFRH/BPD/40998/2007 and SFRH/BPD/109971/2015. P. Bicudo and N. Cardoso thank CFTP with FCT grant UID/FIS/00777/2013. N. Cardoso is supported by FCT under contract SFRH/BPD/109443/2015. The computing time was provided by the Laboratory for Advanced Computing at the University of Coimbra [@lca]. [3]{} C. Roland (2015), “Jets in heavy ion collisions at the LHC”, Int. J. Mod. Phys. **A** 30, 1546010. T. A. Trainor (2014), “A critical review of RHIC experimental results”, Int. J. Mod. Phys. **E** 23, 1430011. B. Lucini, M. Teper, U. Wenger (2004), “The high temperature phase transition in SU(N) gauge theories”, JHEP **01**, 061. O. Oliveira, P. J.
Silva (2012), “The Lattice Landau Gauge Gluon Propagator at Zero and Finite Temperature”, Acta Phys. Polon. Supp. **5**, 1039. http://www.uc.pt/lca/ R. G. Edwards, B. Joó (2005), “The Chroma Software System for Lattice QCD”, Nucl. Phys. Proc. Suppl. **140**, 832. M. Pippig (2013), “PFFT: An Extension of FFTW to Massively Parallel Architectures”, SIAM J. Sci. Comput. **35**, C213. P. J. Silva, O. Oliveira, P. Bicudo, N. Cardoso (2014), “Gluon screening mass at finite temperature from the Landau gauge gluon propagator in lattice QCD”, Phys. Rev. **D** 89, 074503. D. Dudal, O. Oliveira, P. J. Silva (2014), “Källén-Lehmann spectroscopy for (un)physical degrees of freedom”, Phys. Rev. **D** 89, 014010. P. J. Silva, D. Dudal, O. Oliveira (2014), “Spectral densities from the lattice”, PoS(LATTICE 2013)366. P. J. Silva, O. Oliveira (2016), “Gluon dynamics, center symmetry, and the deconfinement phase transition in SU(3) pure Yang-Mills theory”, Phys. Rev. **D** 93, 114509. [^1]: Talk presented by P. J. Silva at the Light Cone 2016 Conference, Lisbon, Portugal, 5-8 September 2016. [^2]: The $Z_3$ sector $-1$ corresponds to $-\pi<\arg(L)\leq-\pi/3$, and the $Z_3$ sector $1$ is defined by $\pi/3<\arg(L)\leq\pi$.
--- abstract: 'Arrays of subwavelength resonators can mimic the biomechanical properties of the cochlea, at the same scale. We derive, from first principles, a modal time-domain expansion for the scattered pressure field due to such a structure and propose that these modes should form the basis of a signal processing architecture. We investigate the properties of such an approach and show that higher-order gammatone filters appear by cascading. Further, we propose an approach for extracting meaningful global properties from the coefficients, tailored to the statistical properties of so-called natural sounds.' author: - 'Habib Ammari[^1]' - 'Bryn Davies' bibliography: - '/u/bdavies/Documents/myacousticbib.bib' title: A biomimetic basis for the perception of natural sounds --- ![A graded array of subwavelength resonators.[]{data-label="fig:geom"}](array_diagram.pdf){width="\linewidth"} Introduction ============ Biomimetic signal processing ---------------------------- Humans are exceptionally good at recognising different sound sources in their environment and there have been many attempts at designing artificial approaches that can replicate this feat. The human auditory system first amplifies and filters sounds biomechanically in the peripheral auditory system before processing the transduced neural signals in the central auditory system. With a view to mimicking this world-beating system, we consider using an artificial routine with a similar two-step architecture: physical filtering followed by additional processing stages. There has been much attention paid to designing biomimetic structures that replicate the biomechanical properties of the cochlea [@davies2020hopf; @rupin2019mimicking; @davies2019fully; @duke2008critical; @joyce2015developing; @joyce2014mimicking; @babbs2011quantitative]. At the heart of any such structure are graded material parameters, so as to replicate the spatial frequency separation of the cochlea.
In particular, a size-graded array of subwavelength resonators can be designed to have similar dimensions to the cochlea and respond to an appropriate range of audible frequencies [@davies2020hopf]. An acoustic subwavelength resonator is a cavity with material parameters that are greatly different from the background medium [@davies2019fully]. Bubbly structures of this kind can be constructed, for example, by injecting air bubbles into silicone-based polymers [@leroy2009design; @leroy2009transmission]. A graded array of resonators effectively behaves as a distributed system of band-pass filters [@lyon2017human]. The choice of kernel filter for auditory processing has been widely explored. Popular options include windowed Fourier modes [@alm2002time; @cohen1995time], wavelets [@mallat1999wavelet; @daubechies1992ten; @yang1992auditory; @benedetto1993wavelet; @anden2014deep] and learned basis functions [@smith2006efficient]. In particular, gammatone filters (Fourier modes windowed by gamma distributions) have been shown to approximate auditory filters well and, thanks also to their relative simplicity, are widely used in modelling auditory function [@lyon2017human; @hewitt1994computer; @patterson1988efficient; @bell2018cochlear]. We will prove that, at leading order, an array of $N$ subwavelength resonators behaves as an array of $N$ gammatone filters. The human auditory system is known to be adapted to the structure of the most important inputs and exhibits greatly enhanced neural responses to natural and behaviourally significant sounds such as animal and human vocalisations and environmental sounds [@theunissen2014neural]. It has been observed that such sounds, often known collectively as *natural sounds*, display certain statistical properties [@theunissen2014neural; @voss19751; @attias1997temporal; @attias1998coding]. By design, most music also falls into this class; music satisfying these properties sounds “much more pleasing” [@voss19751].
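One of the most frequently cited of these statistical properties is the approximately $1/f$ power spectrum of amplitude fluctuations reported in [@voss19751]. A toy sketch (our own, using standard spectral shaping rather than anything from the paper) synthesises such a signal and recovers its spectral exponent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
freqs = np.fft.rfftfreq(n, d=1.0)

# Shape white Gaussian noise in the frequency domain so power ~ 1/f.
spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
spec[1:] /= np.sqrt(freqs[1:])
spec[0] = 0.0
x = np.fft.irfft(spec, n)  # a real "natural-sound-like" time series

# Recover the exponent: the slope of log-PSD against log-f should be ~ -1.
psd = np.abs(np.fft.rfft(x))**2
slope, _ = np.polyfit(np.log(freqs[1:]), np.log(psd[1:]), 1)
print(round(slope, 2))  # close to -1
```

Estimating such exponents from the filter-bank coefficients is one example of the kind of tractable global property discussed below.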
Thus, it is clear that the human auditory system is able to account for global properties of a sound and that a biomimetic processing architecture needs to replicate this. Many attempts have been made to extract such properties. We propose using the parameters of the observed statistical distributions as meaningful and tractable examples of global properties to be used in artificial representations of auditory signals. Main contributions ------------------ We first derive results that describe the resonant properties of a system of $N$ resonators in three dimensions and prove a modal decomposition in the time domain. This formula takes the form of $N$ spatial eigenmodes with first-order gammatone coefficients. Further to this, we show that a cascade of these filters equates to filtering with higher-order gammatones and that extracting information from temporal averages is stable to deformations. Finally, we focus our attention on the class of natural sounds, which we define as sounds satisfying certain (widely observed) statistical and spectral properties. Using these properties, we propose a parametric coding approach that extracts the global properties of a sound. Boundary integral operators =========================== Problem setting --------------- We are interested in studying wave propagation in a homogeneous background medium with $N\in\mathbb{N}$ disjoint bounded inclusions, which we will label as $D_1,D_2,\dots,D_N\subset\mathbb{R}^3$. We will assume that the boundaries are all Lipschitz continuous and will write $D=D_1\cup\dots\cup D_N$. In order to replicate the spatial frequency separation of the cochlea, we are interested in the case where the array has a size gradient, meaning each resonator is slightly larger than the previous, as depicted above. We will denote the density and bulk modulus of the material within the bounded regions $D$ by $\rho_b$ and $\kappa_b$, respectively. The corresponding parameters for the background medium are $\rho$ and $\kappa$.
The wave speeds in $D$ and ${\mathbb{R}^3\setminus \overline{D}}$ are given by $$v_b=\sqrt{\frac{\kappa_b}{\rho_b}}, \qquad v=\sqrt{\frac{\kappa}{\rho}}.$$ We also define the dimensionless contrast parameter $$\delta=\frac{\rho_b}{\rho}.$$ We will assume that $\delta\ll1$, while $v_b=O(1)$, $v=O(1)$ and $v_b/v=O(1)$. Boundary integral operators --------------------------- The Helmholtz single layer potential associated to the domain $D$ and wavenumber $k\in\mathbb{C}$ is defined, for some density function $\varphi\in L^2({\partial D})$, as $${\mathcal{S}}_{D}^k[\varphi](x):=\int_{{\partial D}} G(x-y,k)\varphi(y){\: \mathrm{d}}\sigma(y), \qquad x\in\mathbb{R}^3,$$ where $G$ is the Helmholtz Green’s function, given by $$G(x,k):=-\frac{1}{4\pi|x|}e^{{\mathrm{i}}k|x|}.$$ The Neumann-Poincaré operator associated to $D$ and $k\in\mathbb{C}$ is defined as $${\mathcal{K}}_{D}^{k,*}[\varphi](x):=\int_{{\partial D}} \frac{\partial G(x-y,k)}{\partial \nu(x)}\varphi(y){\: \mathrm{d}}\sigma(y), \qquad x\in{\partial D},$$ where $\partial/\partial\nu$ denotes the outward normal derivative on ${\partial D}$. These two integral operators are related by the conditions $$\label{eq:jump} {\frac{\partial}{\partial\nu}}{\mathcal{S}}_{D}^k[\varphi]\big|_\pm(x)= \left(\pm\frac{1}{2}I+{\mathcal{K}}_{D}^{k,*}\right)[\varphi](x),\qquad x\in{\partial D},$$ where the subscripts $+$ and $-$ denote evaluation from outside and inside the boundary ${\partial D}$, respectively, and $I$ is the identity operator on $L^2({\partial D})$. Asymptotic properties --------------------- The single layer potential and the Neumann–Poincar[é]{} operator both have helpful asymptotic expansions as $k\to0$ (see *e.g.* [@ammari2018mathematical]).
In particular, we have that $$\label{eq:S_expansion} {\mathcal{S}}_{D}^k[\varphi]={\mathcal{S}}_{D}[\varphi]+k{\mathcal{S}}_{D,1}[\varphi]+O(k^2),$$ where ${\mathcal{S}}_{D}:={\mathcal{S}}_D^0$ (*i.e.* the Laplace single layer potential) and $${\mathcal{S}}_{D,1}[\varphi](x):=\frac{1}{4\pi{\mathrm{i}}}\int_{{\partial D}} \varphi(y) {\: \mathrm{d}}\sigma(y).$$ One crucial property to note is that ${\mathcal{S}}_D$ is invertible. Similarly, $$\label{eq:K_expansion} {\mathcal{K}}_{D}^{k,*}[\varphi]={\mathcal{K}}_{D}^{*}[\varphi]+k{\mathcal{K}}_{D,1}[\varphi] + k^2{\mathcal{K}}_{D,2}[\varphi] +k^3{\mathcal{K}}_{D,3}[\varphi]+O(k^4),$$ where ${\mathcal{K}}_{D}^{*}:={\mathcal{K}}_{D}^{0,*}$, ${\mathcal{K}}_{D,1}=0$, $$\begin{aligned} {\mathcal{K}}_{D,2}[\varphi](x):=\frac{1}{8\pi}\int_{{\partial D}}\frac{(x-y)\cdot \nu(x)}{|x-y|}\varphi(y){\: \mathrm{d}}\sigma(y)\quad\text{and}\quad {\mathcal{K}}_{D,3}[\varphi](x):=\frac{{\mathrm{i}}}{12\pi}\int_{{\partial D}} (x-y)\cdot \nu(x)\varphi(y){\: \mathrm{d}}\sigma(y).\end{aligned}$$ Several of the operators in the expansion can be simplified when integrated over all or part of the boundary ${\partial D}$. As proved in *e.g.* [@ammari2017double Lemma 2.1], it holds that for any $\varphi\in L^2({\partial D})$ and $i=1,\dots,N$, $$\label{K_properties} \begin{split} \int_{{\partial D}_i}\left(-\frac{1}{2}I+{\mathcal{K}}_D^{*}\right)[\varphi]{\: \mathrm{d}}\sigma=0, \qquad&\int_{{\partial D}_i}\left(\frac{1}{2}I+{\mathcal{K}}_D^{*}\right)[\varphi]{\: \mathrm{d}}\sigma=\int_{{\partial D}_i}\varphi{\: \mathrm{d}}\sigma,\\ \int_{{\partial D}_i} {\mathcal{K}}_{D,2}[\varphi]{\: \mathrm{d}}\sigma=-\int_{D_i}{\mathcal{S}}_D[\varphi]{\: \mathrm{d}}x \quad&\text{and}\quad\int_{{\partial D}_i} {\mathcal{K}}_{D,3}[\varphi]{\: \mathrm{d}}\sigma=\frac{{\mathrm{i}}|D_i|}{4\pi}\int_{{\partial D}}\varphi{\: \mathrm{d}}\sigma. 
\end{split}$$ Subwavelength scattering decompositions {#sec:subw_scattering} ======================================= Scattering of pure tones {#sec:pure_tones} ------------------------ Suppose, first, that the incoming signal is a plane wave parallel to the $x_1$-axis with angular frequency $\omega$, given by $A\cos(kx_1-\omega t)$ where $k=\omega/v$. Then, the scattered pressure field is given by ${\operatorname{Re}}(u(x,\omega)e^{-{\mathrm{i}}\omega t})$ where $u$ satisfies the Helmholtz equation $$\label{eq:helmholtz_equation} \begin{cases} \left( \Delta + k^2 \right)u = 0 & \text{in} \ {\mathbb{R}^3\setminus \overline{D}}, \\ \left( \Delta + k_b^2 \right)u = 0 & \text{in} \ D, \end{cases}$$ where $k=\omega/v$ and $k_b=\omega/v_b$, along with the transmission conditions $$\label{eq:transmission} \begin{cases} u_+ - u_- = 0 & \text{on} \ {\partial D},\\ \frac{1}{\rho} {\frac{\partial u}{\partial\nu_x}}\big|_+ - \frac{1}{\rho_b} {\frac{\partial u}{\partial\nu_x}}\big|_- = 0 & \text{on} \ {\partial D}, \end{cases}$$ and the Sommerfeld radiation condition in the far field, which ensures that energy radiates outwards [@ammari2018mathematical], given by $$\label{eq:radiation} \left({\frac{\partial}{\partial|x|}}-ik\right)(u-u^{in})=o(|x|^{-1}) \quad \text{as} \ |x|\to\infty,$$ where, in this case, $u^{in}(x,\omega)=Ae^{{\mathrm{i}}kx_1}$. We define a *resonant frequency* to be $\omega=\omega(\delta)$ such that there exists a non-trivial solution of the Helmholtz equation which satisfies the transmission and radiation conditions when $u^{in}=0$. We define a *subwavelength resonant frequency* to be a resonant frequency $\omega$ that depends continuously on $\delta$ and is such that $\omega(\delta)\to0$ as $\delta\to0$. A system of $N$ subwavelength resonators exhibits $N$ subwavelength resonant frequencies with positive real parts, up to multiplicity. This was proved in [@davies2019fully] and follows from the theory of Gohberg and Sigal [@gohberg2009holomorphic; @ammari2018mathematical].
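For intuition about the scale of these frequencies, the single-resonator case of the asymptotics derived below reduces, for a sphere, to the classical Minnaert resonance $\omega \approx (v_b/R)\sqrt{3\delta}$ (using the classical values $\operatorname{cap}(B_R)=4\pi R$ and $|B_R|=4\pi R^3/3$). A back-of-the-envelope sketch for an air bubble in water (our own illustration; the material values are standard textbook numbers, not from this paper):

```python
import math

# Standard values for air (bubble interior) and water (background medium).
rho_b, rho = 1.2, 1000.0   # densities in kg/m^3
v_b = 343.0                # speed of sound in air, m/s
delta = rho_b / rho        # contrast parameter, << 1 as assumed

def minnaert_frequency(R):
    """f = omega/(2*pi) with omega = (v_b/R)*sqrt(3*delta): the single-sphere
    case of omega ~ sqrt(v_b^2 * lambda * delta), lambda = cap(B_R)/|B_R| = 3/R^2."""
    return v_b / R * math.sqrt(3 * delta) / (2 * math.pi)

print(round(minnaert_frequency(1e-3)))  # 1 mm bubble -> 3275 (Hz), audible
```

Millimetre-scale air bubbles therefore resonate at audible kilohertz frequencies, which is what makes the cochlea-scale designs discussed in the introduction feasible.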
### Capacitance matrix analysis Our approach to solving the resonance problem is to study the *(weighted) capacitance matrix*, which offers a rigorous discrete approximation to the differential problem. We will see that the eigenstates of this $N\times N$-matrix characterise, at leading order in $\delta$, the resonant modes of the system. In order to introduce the notion of capacitance, we define the functions $\psi_j$, for $j=1,\dots,N$, as $$\psi_j:={\mathcal{S}}_D^{-1}[\chi_{{\partial D}_j}],$$ where $\chi_A:\mathbb{R}^3\to\{0,1\}$ is used to denote the characteristic function of a set $A\subset\mathbb{R}^3$. The capacitance coefficients $C_{ij}$, for $i,j=1,\dots,N$, are then defined as $$C_{ij}:=-\int_{{\partial D}_i} \psi_j{\: \mathrm{d}}\sigma.$$ We will need two objects involving the capacitance coefficients. Firstly, the weighted capacitance matrix ${C^\mathrm{vol}}=({C^\mathrm{vol}}_{ij})$, given by $${C^\mathrm{vol}}_{ij}:=\frac{1}{|D_i|} C_{ij},$$ which has been weighted to account for the different sized resonators (see *e.g.* [@ammari2017double; @davies2020close; @ammari2020exceptional] for other variants in slightly different settings). Secondly, we will want the capacitance sums contained in the matrix $C^{\mathrm{sum}}=(C_{ij}^{\mathrm{sum}})$, given by $$C^{\mathrm{sum}}:=JC,$$ where $C=(C_{ij})$ is the matrix of capacitance coefficients and $J$ is the $N\times N$ matrix of ones (*i.e.* $J_{ij}=1$ for all $i,j=1,\dots,N$). 
We define the functions $S_n^\omega$, for $n=1\dots,N$, as $$S_n^\omega(x) := \begin{cases} {\mathcal{S}}_{D}^{k}[\psi_n](x), & x\in{\mathbb{R}^3\setminus \overline{D}},\\ {\mathcal{S}}_{D}^{k_b}[\psi_n](x), & x\in D.\\ \end{cases}$$ \[lem:modal\] The solution to the scattering problem can be written, for $x\in\mathbb{R}^3$, as $$u(x)-Ae^{{\mathrm{i}}kx_1} = \sum_{n=1}^N q_nS_n^\omega(x) - {\mathcal{S}}_D^k\left[{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]\right](x) + O(\omega),$$ for constants $q_n$ which satisfy, up to an error of order $O(\delta \omega+\omega^3)$, the problem $$\label{eq:eval_C} \left({\omega^2}I-{v_b^2\delta}\,{C^\mathrm{vol}}\right)\begin{pmatrix}q_1\\ \vdots\\q_N\end{pmatrix} = {v_b^2\delta}\begin{pmatrix} \frac{1}{|D_1|} \int_{{\partial D}_1}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \\ \vdots\\ \frac{1}{|D_N|}\int_{{\partial D}_N}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \end{pmatrix}.$$ The solutions can be represented as $$\label{eq:layer_potential_representation} u(x) = \begin{cases} Ae^{{\mathrm{i}}kx_1}+{\mathcal{S}}_{D}^{k}[\psi](x), & x\in{\mathbb{R}^3\setminus \overline{D}},\\ {\mathcal{S}}_{D}^{k_b}[\phi](x), & x\in D, \end{cases}$$ for some surface potentials $(\phi,\psi)\in L^2({\partial D})\times L^2({\partial D})$, which must be chosen so that $u$ satisfies the transmission conditions across ${\partial D}$. Using , we see that in order to satisfy the transmission conditions on ${\partial D}$, the densities $\phi$ and $\psi$ must satisfy $$\begin{aligned} {\mathcal{S}}_{D}^{k_b}[\phi](x)-{\mathcal{S}}_{D}^{k}[\psi](x)&=Ae^{{\mathrm{i}}kx_1}, \qquad x\in {\partial D}, \\ \left(-\frac{1}{2}I+{\mathcal{K}}_{D}^{k_b,*}\right)[\phi](x)-\delta\left(\frac{1}{2}I+{\mathcal{K}}_{D}^{k,*}\right)[\psi](x)&=\delta {\frac{\partial}{\partial\nu}}(Ae^{{\mathrm{i}}kx_1}), \qquad x\in{\partial D}. 
\end{aligned}$$ Using the asymptotic expansions of the single layer potential and the Neumann–Poincaré operator, we can see that $$\label{eq:psi} \psi=\phi-{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]+O(\omega),$$ and, further, that $$ \left(-\frac{1}{2}I+{\mathcal{K}}_D^*+\frac{\omega^2}{{v}_b^2}{\mathcal{K}}_{D,2}-\delta\left(\frac{1}{2}I+{\mathcal{K}}_D^*\right)\right)[\phi]=-\delta \left(\frac{1}{2}I+{\mathcal{K}}_D^*\right){\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]+O(\delta\omega+\omega^3). \label{eq:phi}$$ Then, integrating over ${\partial D}_i$, for $1\leq i\leq N$, and using the properties recalled above gives us that $$-\omega^2\int_{D_i}{\mathcal{S}}_D[\phi]{\: \mathrm{d}}x -{v}_b^2\delta\int_{{\partial D}_i}\phi{\: \mathrm{d}}\sigma=-{v_b^2\delta}\int_{{\partial D}_i}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma+O(\delta\omega+\omega^3). \label{eq:D}$$ At leading order, this says that $\left(-\frac{1}{2}I+{\mathcal{K}}_D^{*}\right)[\phi]=0$ so, in light of the fact that $\{\psi_1,\dots,\psi_N\}$ forms a basis for $\ker\left(-\frac{1}{2}I+{\mathcal{K}}_D^{*}\right)$, the solution can be written as $$\label{eq:psi_basis} \phi=\sum_{n=1}^N q_n\psi_n+O(\omega^2+\delta),$$ for constants $q_1,\dots,q_N=O(1)$. Making this substitution we reach, up to an error of order $O(\delta \omega+\omega^3)$, the problem $$\label{eq:eval_C_proof} \left(-\omega^2I_N+v_b^2\delta{C^\mathrm{vol}}\right)\begin{pmatrix}q_1\\ \vdots\\q_N\end{pmatrix} =- {v_b^2\delta}\begin{pmatrix} \frac{1}{|D_1|} \int_{{\partial D}_1}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \\ \vdots\\ \frac{1}{|D_N|}\int_{{\partial D}_N}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \end{pmatrix}.$$ The result now follows by combining the above.
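The finite-dimensional problem derived in this proof can be explored numerically. With a made-up symmetric $3\times3$ weighted capacitance matrix (purely illustrative values, not computed from any geometry), the response $|\underline{q}|$ of the linear system blows up as $\omega$ approaches $\sqrt{v_b^2\lambda_n\delta}$:

```python
import numpy as np

# Toy stand-ins (not from the paper): a symmetric weighted capacitance
# matrix, a small contrast delta, and an interior wave speed v_b.
Cvol = np.array([[2.0, -1.0,  0.0],
                 [-1.0, 2.0, -1.0],
                 [0.0, -1.0,  2.0]])
delta, v_b = 1e-3, 1.0
forcing = np.ones(3)  # stands in for the boundary-averaged source terms

lam = np.linalg.eigvalsh(Cvol)                 # ascending eigenvalues
omega_res = np.sqrt(v_b**2 * delta * lam)      # leading-order resonances

def response(omega):
    # Solve (omega^2 I - v_b^2 delta Cvol) q = v_b^2 delta * forcing
    q = np.linalg.solve(omega**2 * np.eye(3) - v_b**2 * delta * Cvol,
                        v_b**2 * delta * forcing)
    return np.linalg.norm(q)

# Amplification close to the first resonance versus well away from it.
near, far = response(1.001 * omega_res[0]), response(2.0 * omega_res[0])
print(near > 100 * far)  # -> True
```

The sharp amplification near each $\sqrt{v_b^2\lambda_n\delta}$ previews the asymptotic formula for the resonant frequencies stated next.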
\[thm:res\] As $\delta \rightarrow 0$, the subwavelength resonant frequencies satisfy the asymptotic formula $$\omega_n^\pm = \pm\sqrt{v_b^2\lambda_n\delta} -{\mathrm{i}}\tau_n\delta+ O(\delta^{3/2}),$$ for $n = 1,\dots,N$, where $\lambda_n$ are the eigenvalues of the weighted capacitance matrix ${C^\mathrm{vol}}$ and $ \tau_n$ are real numbers that depend on $D$, $v$ and $v_b$. If $u^{in} = 0$, we find from the problem derived above that there is a non-zero solution $q_1,\dots,q_N$ to the eigenvalue problem when $\omega^2/v_b^2\delta$ is an eigenvalue of ${C^\mathrm{vol}}$, at leading order. To find the imaginary part, we adopt the ansatz $$\label{eq:omega_ansatz} \omega_n^\pm=\pm\sqrt{v_b^2\lambda_n\delta} -{\mathrm{i}}\tau_n\delta+ O(\delta^{3/2}).$$ Using the expansions above with the layer potential representation, we have that $$\psi=\phi+\frac{k_b-k}{4\pi{\mathrm{i}}}\left(\sum_{n=1}^N\psi_n\right)\int_{{\partial D}}\phi{\: \mathrm{d}}\sigma+O(\omega^2),$$ and, hence, that $$\left(-\frac{1}{2}I+{\mathcal{K}}_D^*+k_b^2{\mathcal{K}}_{D,2}+k_b^3{\mathcal{K}}_{D,3}-\delta\left(\frac{1}{2}I+{\mathcal{K}}_D^*\right)\right)[\phi]-\frac{\delta(k_b-k)}{4\pi{\mathrm{i}}}\left(\sum_{n=1}^N\psi_n\right)\int_{{\partial D}}\phi{\: \mathrm{d}}\sigma=O(\delta\omega^2+\omega^4).$$ We then substitute the decomposition of $\phi$ and integrate over ${\partial D}_i$, for $i=1,\dots,N$, to find that $$\left(-\frac{\omega^2}{v_b^2}I-\frac{\omega^3}{v_b^3}\frac{{\mathrm{i}}}{4\pi}C^{\mathrm{sum}}+\delta{C^\mathrm{vol}}+\delta\omega\bigg(\frac{1}{v_b}-\frac{1}{v}\bigg)\frac{{\mathrm{i}}}{4\pi}{C^\mathrm{vol}}C^{\mathrm{sum}} \right) \underline{q}=O(\delta\omega^2+\omega^4).$$ Then, using the ansatz for $\omega_n$ and setting $\underline{q}=\underline{v}_n$ (the eigenvector corresponding to $\lambda_n$) we reach that $$\left(\tau_n I- \frac{v_b\lambda_n}{8\pi}C^{\mathrm{sum}}+\bigg(1-\frac{v_b}{v}\bigg)\frac{v_b}{8\pi}{C^\mathrm{vol}}C^{\mathrm{sum}}\right)\underline{v}_n=\underline{0}.$$ The resonant frequencies will have negative
imaginary parts, due to the loss of energy (e.g. to the far field), thus $\tau_n\geq0$ for all $n=1,\dots,N$. Note that in some cases $\tau_n=0$ for some $n$, meaning the imaginary parts exhibit higher-order behaviour in $\delta$. For example, the second (dipole) frequency for a pair of identical resonators is known to be $O(\delta^{2})$ [@ammari2017double]. ![The resonant frequencies $\{\omega_n^+,\omega_n^-:n=1,\dots,N\}$ in the complex plane, for an array of 22 subwavelength resonators.[]{data-label="fig:spectrum"}](res_spec-eps-converted-to.pdf){width="0.6\linewidth"} The numerical simulations presented in this work were all carried out on an array of 22 cylindrical resonators. We approximate the problem by studying the two-dimensional cross section using the multipole expansion method, as in [@davies2020hopf]. The array of 22 resonators simulated in this work measures 35 , has material parameters corresponding to air-filled resonators surrounded by water and has subwavelength resonant frequencies within the range 500  – 10 . Thus, this structure has similar dimensions to the human cochlea, is made from realistic materials and experiences subwavelength resonance in response to frequencies that are audible to humans. It is more illustrative to rephrase in terms of basis functions that are associated with the resonant frequencies. Denote by $\underline{v}_n=(v_{1,n},\dots,v_{N,n})$ the eigenvector of ${C^\mathrm{vol}}$ with eigenvalue $\lambda_n$. Then, we have a modal decomposition with coefficients that depend on the matrix $V=(v_{i,j})$, provided the system is such that $V$ is invertible. The invertibility of $V$ is a subtle issue and depends only on the geometry of the inclusions $D=D_1\cup\dots\cup D_N$. In the case that the resonators are all identical, $V$ is invertible since ${C^\mathrm{vol}}$ is symmetric.
If the size gradient is not too drastic, we expect $V$ to also be invertible (this is supported by our numerical analysis, which typically simulates an array of resonators where each is approximately 1.05 times the size of the previous). \[lem:modal\_res\] Suppose the resonator’s geometry is such that the matrix of eigenvectors $V$ is invertible. Then if $\omega=O(\sqrt{\delta})$ the solution to the scattering problem can be written, for $x\in\mathbb{R}^3$, as $$u(x)-Ae^{{\mathrm{i}}kx_1} = \sum_{n=1}^N a_n u_n(x) - {\mathcal{S}}_D\left[{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]\right](x) + O(\omega),$$ for constants given by $$a_n=\frac{-A\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)},$$ where $\nu_n=\sum_{j=1}^{N} [V^{-1}]_{n,j}$, [i.e.]{} the sum of the $n$^th^ row of $V^{-1}$. In light of , we define the functions $$u_n(x)=\sum_{i=1}^N v_{i,n}\, {\mathcal{S}}_D[\psi_i](x),$$ for $n=1,\dots,N$. Then, by diagonalising ${C^\mathrm{vol}}$ with the change of basis matrix $V$, we see that the solution to the scattering problem can be written as $$u-u^{in} = \sum_{n=1}^N a_n u_n - {\mathcal{S}}_D\left[{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]\right] + O(\omega),$$ for constants $a_n$ given, at leading order, by $$ V\begin{pmatrix} \omega^2-v_b^2\delta\lambda_1 & & \\ & \ddots & \\ & & \omega^2-v_b^2\delta\lambda_N \end{pmatrix} \begin{pmatrix}a_1\\ \vdots\\a_N\end{pmatrix} = {v_b^2\delta} \begin{pmatrix} \frac{1}{|D_1|} \int_{{\partial D}_1}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \\ \vdots\\ \frac{1}{|D_N|}\int_{{\partial D}_N}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \end{pmatrix} +O(\omega^3). 
$$ Now, $\omega^2-v_b^2\delta\lambda_n=(\omega-\omega_n^+)(\omega-\omega_n^-)+O(\omega^3)$ so we have that up to an error of order $O(\omega^3)$ $$ \begin{pmatrix}a_1\\ \vdots\\a_N\end{pmatrix} = {v_b^2\delta} \begin{pmatrix} (\omega-\omega_1^+)^{-1}(\omega-\omega_1^-)^{-1} & & \\ & \ddots & \\ & & (\omega-\omega_N^+)^{-1}(\omega-\omega_N^-)^{-1} \end{pmatrix} V^{-1}\begin{pmatrix} \frac{1}{|D_1|} \int_{{\partial D}_1}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \\ \vdots\\ \frac{1}{|D_N|}\int_{{\partial D}_N}{\mathcal{S}}_D^{-1}[Ae^{{\mathrm{i}}kx_1}]{\: \mathrm{d}}\sigma \end{pmatrix}.$$ In order to simplify this further, we use the fact that $e^{{\mathrm{i}}kx_1}=1+{\mathrm{i}}kx_1+\dots=1+O(\omega)$ to see that $$ \begin{pmatrix}a_1\\ \vdots\\a_N\end{pmatrix} = \begin{pmatrix} \frac{-{\operatorname{Re}}(\omega_1^+)^2}{(\omega-\omega_1^+)(\omega-\omega_1^-)} & & \\ & \ddots & \\ & & \frac{-{\operatorname{Re}}(\omega_N^+)^2}{(\omega-\omega_N^+)(\omega-\omega_N^-)} \end{pmatrix} V^{-1} \begin{pmatrix} A \\ \vdots \\ A \end{pmatrix}+O(\omega).$$ Modal decompositions of signals {#sec:scattering} ------------------------------- Consider, now, the scattering of a more general signal, $s:[0,T]\to\mathbb{R}$, whose frequency support is wider than a single frequency and whose Fourier transform exists. Again, we assume that the radiation is incident parallel to the $x_1$-axis. Consider the Fourier transform of the incoming pressure wave, given for $\omega\in\mathbb{C}$, $x\in\mathbb{R}^3$ by $$\begin{aligned} u^{in}(x,\omega)&=\int_{-\infty}^{\infty} s(x_1/v-t) e^{i\omega t}{\: \mathrm{d}}t\\ &=e^{{\mathrm{i}}\omega x_1/v}\hat{s}(\omega) = \hat{s}(\omega)+O(\omega),\end{aligned}$$ where $\hat{s}(\omega):=\int_{-\infty}^{\infty} s(-u) e^{i\omega u}{\: \mathrm{d}}u$. The resulting pressure field satisfies the Helmholtz equation along with the conditions and . 
Working in the frequency domain, the scattered acoustic pressure field $u$ in response to the Fourier transformed signal $\hat s$ can be decomposed in the spirit of . We write that, for $x\in{\partial D}$, the solution to the scattering problem is given by $$\label{eq:gen_modal_decomp} u(x,\omega)= \sum_{n=1}^N \frac{-\hat{s}(\omega)\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} u_n(x) + r(x,\omega),$$ for some remainder $r$. We are interested in signals whose energy is mostly concentrated within the subwavelength regime. In particular, we want that $$\label{eq:subw_regime} \sup_{x\in\mathbb{R}^3}\int_{-\infty}^\infty |r(x,\omega) | {\: \mathrm{d}}\omega = O(\delta).$$ Note that the condition is satisfied [e.g.]{} by a pure tone within the subwavelength regime, since if $\omega=O(\sqrt{\delta})$ then gives us that $\sup_x|r|=O(\omega)$. Now, we wish to apply the inverse Fourier transform to to obtain a time-domain decomposition of the scattered field. From now on we will simplify the notation for the resonant frequencies by assuming we can write that $\omega_n^+=\omega_n\in\mathbb{C}$ and $\omega_n^-=-{\operatorname{Re}}(\omega_n)+{\mathrm{i}}{\operatorname{Im}}(\omega_n)$ (which, by , is known to hold at least at leading order in $\delta$). \[thm:timedom\] For $\delta>0$ and a signal $s$ which is subwavelength in the sense of the condition , it holds that the scattered pressure field $p(x,t)$ satisfies, for $x\in{\partial D}$, $t\in\mathbb{R}$, $$p(x,t)= \sum_{n=1}^N a_n[s](t) u_n(x) + O(\delta),$$ where the coefficients are given by $a_n[s](t)=\left( s*h_n \right)(t)$ for kernels defined as $$\label{eq:hdef} h_n(t)= \begin{cases} 0, & t<0, \\ c_n e^{{\operatorname{Im}}(\omega_n)t} \sin({\operatorname{Re}}(\omega_n)t), & t\geq0, \end{cases}$$ for $c_n=\nu_n{\operatorname{Re}}(\omega_n)$. 
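Before turning to the proof, the kernels can be checked numerically: each $h_n$ vanishes for $t<0$ and the magnitude of its Lorentzian frequency response peaks near ${\operatorname{Re}}(\omega_n)$, confirming the band-pass interpretation. A pure-Python sketch (the values of $\omega_n$ and $\nu_n$ are illustrative, not taken from the simulated array):

```python
import math

# Illustrative check of the kernel h_n: it is causal (zero for t < 0) and
# its Lorentzian frequency response peaks near Re(omega_n), i.e. it acts
# as a band-pass filter. The values of omega_n and nu_n are hypothetical.
re_w, im_w = 1.0, -0.05          # Re(omega_n) and Im(omega_n) < 0
nu = 1.0                         # nu_n
c = nu * re_w                    # c_n = nu_n * Re(omega_n)
wp = complex(re_w, im_w)         # omega_n^+
wm = complex(-re_w, im_w)        # omega_n^-

def h(t):
    """Causal damped sinusoid from the statement of the theorem."""
    if t < 0:
        return 0.0
    return c * math.exp(im_w * t) * math.sin(re_w * t)

def hhat_abs(w):
    """Magnitude of the Lorentzian integrand defining h_n."""
    return abs(-nu * re_w ** 2 / ((w - wp) * (w - wm)))

grid = [k * 0.001 for k in range(3001)]   # frequencies 0 <= w <= 3
peak = max(grid, key=hhat_abs)            # sits close to Re(omega_n)
```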
Applying the inverse Fourier transform to the modal expansion yields $$p(x,t)= \sum_{n=1}^N a_n[s](t) u_n(x) + O(\delta),$$ where, for $n=1,\dots,N$, the coefficients are given by $$a_n[s](t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{-\hat{s}(\omega)\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-i\omega t} {\: \mathrm{d}}\omega = \left( s*h_n \right)(t),$$ where $*$ denotes convolution and the kernels $h_n$ are defined for $n=1,\dots,N$ by $$\label{eq:h_int_defn} h_n(t)=\frac{1}{2\pi}\int_{-\infty}^\infty \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega.$$ We can use complex integration to evaluate the integral in . For $R>0$, let ${\mathcal{C}}_R^\pm$ be the semicircular arc of radius $R$ in the upper $(+)$ and lower $(-)$ half-plane and let ${\mathcal{C}}^\pm$ be the closed contour ${\mathcal{C}}^\pm={\mathcal{C}}_R^\pm\cup[-R,R]$. Then, we have that $$h_n(t) = \frac{1}{2\pi}\oint_{{\mathcal{C}}^\pm} \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega - \frac{1}{2\pi}\int_{{\mathcal{C}}_R^\pm} \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega.$$ The integral around ${\mathcal{C}}^\pm$ is easy to evaluate using the residue theorem, since it has simple poles at $\omega_n^\pm$. We will make the choice of $+$ or $-$ so that the integral along ${\mathcal{C}}_R^\pm$ converges to zero as $R\to\infty$. For large $R$ we have a bound of the form $$\label{eq:bound} \left|\int_{{\mathcal{C}}_R^\pm} \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega\right|\leq C_n R^{-1} \sup_{\omega\in{\mathcal{C}}_R^\pm} e^{{\operatorname{Im}}(\omega)t},$$ for a positive constant $C_n$. Suppose first that $t<0$. 
Then we choose to integrate over ${\mathcal{C}}_R^+$ in the upper complex plane so that the arc integral converges to zero as $R\to\infty$. Thus, we have that $$h_n(t) = \frac{1}{2\pi}\oint_{{\mathcal{C}}^+} \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega =0, \qquad t<0,$$ since the integrand is holomorphic in the upper half plane. Conversely, if $t\geq0$ then we should choose to integrate over ${\mathcal{C}}_R^-$ in order for the arc integral to vanish. Then, we see that $$\begin{aligned} h_n(t) &= \frac{1}{2\pi}\oint_{{\mathcal{C}}^-} \frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t} {\: \mathrm{d}}\omega \\ &={\mathrm{i}}\,\mathrm{Res}\left(\frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t},\omega_n^+\right) + {\mathrm{i}}\,\mathrm{Res}\left(\frac{-\nu_n{\operatorname{Re}}(\omega_n^+)^2}{(\omega-\omega_n^+)(\omega-\omega_n^-)} e^{-{\mathrm{i}}\omega t},\omega_n^-\right),\qquad t\geq0.\end{aligned}$$ Using the notation $\omega_n^+=\omega_n$, $\omega_n^-=-{\operatorname{Re}}(\omega_n)+{\mathrm{i}}{\operatorname{Im}}(\omega_n)$ we can simplify the expressions for the residues at the two simple poles to reach the result. The fact that $h_n(t)=0$ for $t<0$ ensures the causality of the modal expansion in . The asymmetry of the eigenmodes $u_n(x)$ means that the decomposition from replicates the cochlea’s famous travelling wave behaviour. That is, in response to an incoming wave the position of maximum amplitude moves from left to right in the array, see [@davies2019fully] for details. Subwavelength scattering transforms {#sec:transforms} =================================== ![The frequency support of the band-pass filters $h_n$.
Shown here for the case of 22 resonators.[]{data-label="fig:bandpass"}](hat_phi_n-eps-converted-to.pdf){width="0.7\linewidth"} In we showed that when a subwavelength (*i.e.* audible) sound is scattered by a cochlea-mimetic array of resonators the resulting pressure field is described by a modal decomposition. This decomposition takes the form of convolutions with the basis functions $h_n$ from . Since ${\operatorname{Im}}(\omega_n)<0$, each $h_n$ is a windowed oscillatory mode that acts as a band-pass filter centred at ${\operatorname{Re}}(\omega_n)$. We wish to explore the extent to which these decompositions reveal useful properties of the sound and can be used as a basis for signal processing applications. In order to reveal richer properties of the sound, a common approach is to use the filters $h_n$ in a convolutional neural network. That is, a repeating cascade of alternating convolutions with $h_n$ and some activation function $\Theta$: $$\begin{aligned} a_{n_1}^{(1)}[s](t) &= \Theta\left( s*h_{n_1} \right)(t), \nonumber \\ a_{n_1,n_2}^{(2)}[s](t) &= \Theta\left( a_{n_1}^{(1)}[s]*h_{n_2} \right)(t), \label{eq:adef} \\[-0.6em] & \ \, \vdots \nonumber \\[-0.5em] a_{n_1,\dots,n_k}^{(k)}[s](t) &= \Theta\left( a_{n_1,\dots,n_{k-1}}^{(k-1)}[s]*h_{n_k} \right)(t), \nonumber\end{aligned}$$ where, in each case, the indices are such that $(n_1,n_2,\ldots,n_k)\in\{1,\ldots,N\}^k$. We will use the notation $P_k=(n_1,\ldots,n_k)$ from now on, and refer to the vector $P_k$ as the *path* of $a_{P_k}^{(k)}$. ![The emergence of gammatones at successively deeper layers in the cascade, shown for the first subwavelength resonant frequency in the case of 22 resonators.[]{data-label="fig:gammatones"}](gammatones-eps-converted-to.pdf){width="0.95\linewidth"} Example: identity activation ---------------------------- As an expository example, we consider the case where $\Theta:\mathbb{R}\to\mathbb{R}$ is the identity $Id(x)=x$.
In this case, for any depth $k$ we have that $a_{P_k}^{(k)}[s]=s*h_{P_k}^{(k)}$ for some function $h_{P_k}^{(k)}$ which is the convolution of $k$ functions of the form , indexed by the path $P_k$. This simplification means that a more detailed mathematical analysis is possible. The basis functions $h_{P_k}^{(k)}$ take specific forms. In particular, the diagonal terms contain *gammatones*. A gammatone is a sinusoidal mode windowed by a gamma distribution: $$g(t;m,\omega,\phi)= t^{m-1} e^{{\operatorname{Im}}(\omega)t}\cos({\operatorname{Re}}(\omega) t-\phi), \qquad t\geq0,$$ for some order $m\in\mathbb{N}^+$ and constants $\omega\in\{z\in\mathbb{C}:{\operatorname{Im}}(z)<0\}$, $\phi\in\mathbb{R}$. Gammatones have been widely used to model auditory filters [@lyon2017human]. We notice that $h_n(t)=c_n g(t;1,\omega_n,\pi/2)$ and that higher order gammatones emerge at deeper levels in the cascade . For $k\in\mathbb{N}^+$ and $n\in\{1,\dots,N\}$, there exist non-negative constants $C_m^{n,k}$, $m=1,\dots,k$, such that $$h_{n,\dots,n}^{(k)}(t)=(c_n)^k\sum_{m=1}^{k}C_m^{n,k}g(t;m,\omega_n,m\tfrac{\pi}{2}).$$ In particular, $C_k^{n,k}\neq0$. Let us write $G_n^m(t):=g(t;m,\omega_n,m\tfrac{\pi}{2})$, for the sake of brevity. Firstly, it holds that $h_n(t)=c_n G_n^1(t)$. Furthermore, we have that $$\begin{aligned} (G_n^1*G_n^1)(t)&= \frac{1}{2}G_n^2(t)+\frac{1}{2{\operatorname{Re}}(\omega_n)}G_n^1(t), \end{aligned}$$ as well as, for $m\geq3$, the recursion relation $$\begin{aligned} (G_n^{m-1}*G_n^1)(t) &=\frac{1}{2(m-1)} G_n^{m}(t) +\frac{m-2}{2{\operatorname{Re}}(\omega_n)}(G_n^{m-2}*G_n^1) (t). \end{aligned}$$ The result follows by repeatedly applying this formula. 
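The base case and the first of these convolution identities can be verified numerically. A pure-Python sketch, with an illustrative choice of $\omega$ (here $a={\operatorname{Re}}(\omega)$, $b={\operatorname{Im}}(\omega)<0$):

```python
import math

# Numerical check of (G^1 * G^1)(t) = (1/2) G^2(t) + (1/(2 Re(omega))) G^1(t),
# with an illustrative choice of omega = a + i*b, b < 0.
a, b = 2 * math.pi, -0.5   # Re(omega), Im(omega); hypothetical values

def g(t, m, phi):
    """Gammatone g(t; m, omega, phi), evaluated for t >= 0."""
    return t ** (m - 1) * math.exp(b * t) * math.cos(a * t - phi)

def G(t, m):
    """G_n^m(t) = g(t; m, omega, m*pi/2)."""
    return g(t, m, m * math.pi / 2)

def conv_G1_G1(t, dt=1e-4):
    """Riemann-sum evaluation of the convolution (G^1 * G^1)(t), t >= 0."""
    n = int(round(t / dt))
    return sum(G(j * dt, 1) * G(t - j * dt, 1) for j in range(n + 1)) * dt
```

The discretisation error is tiny here because the integrand vanishes at both endpoints of the convolution integral.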
In particular, we find that $$C_k^{n,k}=\frac{1}{2^{k-1}(k-1)!}>0.$$ While the gammatones appeared here through the cascade of filters, gammatones also arise directly from resonator scattering if higher-order resonators are used: resonators that exhibit higher-order singularities in the frequency domain [@lyon2017human; @neely1986model]. It was recently shown that if sources of energy gain and loss are introduced to an array of coupled subwavelength resonators then such higher-order resonant modes can exist [@ammari2020exceptional]. Since the imaginary part of the lowest frequency is much larger than the others (see ), $h_1$ acts somewhat as a low-pass filter (see ). For any depth $k\in\mathbb{N}$ and path $P_k\in\{1,\dots,N\}^k$ it holds that $h_{P_k}^{(k)}\in L^\infty(\mathbb{R})$ meaning that if $s\in L^1(\mathbb{R})$ then $a_{P_k}^{(k)}[s]\in L^\infty(\mathbb{R})$. If, moreover, $s$ is compactly supported then the decay properties of $h_{P_k}^{(k)}$ mean that $a_{P_k}^{(k)}[s]\in L^p(\mathbb{R})$ for any $p\in[1,\infty]$. Further, we have the following lemmas which characterise the continuity and stability of $s\mapsto a_{P_k}^{(k)}[s]$. \[lem:cty\] Consider the network coefficients given by with $\Theta$ being the identity. 
Given $k_{max}\in\mathbb{N}^+$, there exists a positive constant $C_1$ such that for any depth $k=1,\dots,k_{max}$, any path $P_k\in\{1,\dots,N\}^k$ and any signals $s_1,s_2 \in L^1(\mathbb{R})$ it holds that $$\|a_{P_k}^{(k)}[s_1] - a_{P_k}^{(k)}[s_2]\|_{\infty}\leq C_1 \|s_1-s_2\|_1.$$ It holds that $$C_1:=\sup_{k\in\{1,\dots,k_{max}\}} \sup_{P_k\in\{1,\dots,N\}^k} \sup_{x\in\mathbb{R}}\left|h_{P_k}^{(k)}(x)\right|<\infty.$$ Then, the result follows from the fact that $$\left|a_{P_k}^{(k)}[s_1](t) - a_{P_k}^{(k)}[s_2](t)\right|\leq \int_{-\infty}^\infty |s_1(u)-s_2(u)|\left|h_{P_k}^{(k)}(t-u)\right|{\: \mathrm{d}}u.$$ The continuity property proved in implies, in particular, that the representation of a signal $s$ is stable with respect to additive noise. Consider the network coefficients given by with $\Theta$ being the identity. For $\tau\in C^0(\mathbb{R};\mathbb{R})$, let $T_\tau$ be the associated time warping operator, given by $T_\tau f (t) = f(t+\tau(t))$. Then, given $k_{max}\in\mathbb{N}^+$ there exists a positive constant $C_2$ such that for any depth $k=1,\dots,k_{max}$, any path $P_k\in\{1,\dots,N\}^k$ and any signal $s \in L^1(\mathbb{R})$ it holds that $$\left\| a_{P_k}^{(k)}[s] - a_{P_k}^{(k)}[T_\tau s]\right\|_\infty \leq C_2 \|s\|_1 \|\tau\|_\infty.$$ Let $(h_{P_k}^{(k)})'$ denote the first derivative of $h_{P_k}^{(k)}$ (which is zero on $(-\infty,0)$ and does not exist at 0). Then, we see that $$C_2:=\sup_{k\in\{1,\dots,k_{max}\}} \sup_{P_k\in\{1,\dots,N\}^k} \sup_{x\in(0,\infty)}\left|(h_{P_k}^{(k)})'(x)\right|<\infty,$$ and, by the mean value theorem, that for $t\in\mathbb{R}$ $$\left|h_{P_k}^{(k)} (t-\tau(t)) - h_{P_k}^{(k)}(t)\right|\leq C_2 |\tau(t)|.$$ Thus, we see that for any $t\in\mathbb{R}$ $$\begin{aligned} |a_{P_k}^{(k)}[s] - a_{P_k}^{(k)}[T_\tau s]| &\leq \int_{-\infty}^\infty |s(t-u)|\left|h_{P_k}^{(k)}(u)-h_{P_k}^{(k)}(u-\tau(u))\right|{\: \mathrm{d}}u,\\ &\leq C_2 \|\tau\|_\infty \int_{-\infty}^\infty |s(t-u)|{\: \mathrm{d}}u.
\end{aligned}$$ A common approach to extracting information from the coefficients is to use their temporal averages. A particular advantage of such an approach is that it gives outputs that are invariant to translation and time-dilation (*cf.* the scattering transform [@bruna2013invariant; @mallat2012group]). Let $\langle a_{P_k}^{(k)}[s] \rangle_{(t_1,t_2)}$ denote the average of $a_{P_k}^{(k)}[s](t)$ over the interval $(t_1,t_2)$, given by $$\label{eq:temporal_average} \langle a_{P_k}^{(k)}[s] \rangle_{(t_1,t_2)}=\frac{1}{t_2-t_1}\int_{t_1}^{t_2} a_{P_k}^{(k)}[s](t) {\: \mathrm{d}}t.$$ \[lem:stab\] Consider the network coefficients given by with $\Theta$ being the identity. For $\tau\in C^1(\mathbb{R};\mathbb{R})$, let $T_\tau$ be the associated time warping operator, given by $T_\tau f (t) = f(t+\tau(t))$. Suppose that $\tau$ is such that $\|\tau'\|_\infty<\frac{1}{2}$. Then, given $k_{max}\in\mathbb{N}^+$ there exists a positive constant $C_2$ such that for any depth $k=1,\dots,k_{max}$, any path $P_k\in\{1,\dots,N\}^k$ and any signal $s \in L^1(\mathbb{R})$ it holds that $$\left|\langle a_{P_k}^{(k)}[s] \rangle_{(t_1,t_2)} - \langle a_{P_k}^{(k)}[T_\tau s] \rangle_{(t_1,t_2)} \right| \leq C_2 \|s\|_1 \left( \frac{2}{t_2-t_1}\|\tau\|_\infty + \|\tau'\|_\infty \right).$$ Since $\|\tau'\|_\infty\leq c<1$, $\varphi(t)=t-\tau(t)$ is invertible and $\varphi'\geq1-c$, $$\begin{aligned} \int_{t_1}^{t_2}\left(h_{P_k}^{(k)}(t-\tau(t))-h_{P_k}^{(k)}(t)\right) {\: \mathrm{d}}t &=\int_{\varphi(t_1)}^{\varphi(t_2)}h_{P_k}^{(k)}(t)\frac{1}{\varphi'(\varphi^{-1}(t))}{\: \mathrm{d}}t-\int_{t_1}^{t_2}h_{P_k}^{(k)}(t) {\: \mathrm{d}}t \\ & = \int_{I_1-I_2}h_{P_k}^{(k)}(t)\frac{1}{\varphi'(\varphi^{-1}(t))}{\: \mathrm{d}}t + \int_{t_1}^{t_2}h_{P_k}^{(k)}(t)\frac{\tau'(\varphi^{-1}(t))}{\varphi'(\varphi^{-1}(t))} {\: \mathrm{d}}t, \end{aligned}$$ for some intervals $I_1,I_2\subset\mathbb{R}$, each of which has length bounded by $\|\tau\|_\infty$.
Now, define the constant $$C_2:=\sup_{k\in\{1,\dots,k_{max}\}} \sup_{P_k\in\{1,\dots,N\}^k} \sup_{x\in(0,\infty)}(1-c)^{-1}\left|h_{P_k}^{(k)}(x)\right|<\infty.$$ Finally, we can compute that $$\begin{aligned} &\langle a_{P_k}^{(k)}[s] \rangle_{(t_1,t_2)} - \langle a_{P_k}^{(k)}[T_\tau s] \rangle_{(t_1,t_2)} = \frac{1}{t_2-t_1} \int_{-\infty}^\infty s(u) \int_{t_1}^{t_2}\left(h_{P_k}^{(k)}(t-u-\tau(t))-h_{P_k}^{(k)}(t-u)\right) {\: \mathrm{d}}t {\: \mathrm{d}}u \\ &\quad= \frac{1}{t_2-t_1} \int_{-\infty}^\infty s(u) \left( \int_{I_1-I_2}h_{P_k}^{(k)}(t-u)\frac{1}{\varphi'(\varphi^{-1}(t-u))}{\: \mathrm{d}}t + \int_{t_1}^{t_2}h_{P_k}^{(k)}(t-u)\frac{\tau'(\varphi^{-1}(t-u))}{\varphi'(\varphi^{-1}(t-u))} {\: \mathrm{d}}t \right) {\: \mathrm{d}}u, \end{aligned}$$ meaning that $$\begin{aligned} \left|\langle a_{P_k}^{(k)}[s] \rangle_{(t_1,t_2)} - \langle a_{P_k}^{(k)}[T_\tau s] \rangle_{(t_1,t_2)} \right| &\leq \frac{1}{t_2-t_1} \|s\|_1 \Big[2\|\tau\|_\infty C_2+(t_2-t_1)C_2\|\tau'\|_\infty\Big]. \end{aligned}$$ This shows that temporal averages are approximately invariant to translations if the length of the window is large relative to the size of the translation ([i.e.]{} if $t_2-t_1\gg \|\tau\|_\infty$). ![The architecture considered in this work cascades the physically-derived subwavelength scattering, extracts the instantaneous amplitude and phase before, finally, estimating the parameters of the associated natural sound distributions.[]{data-label="fig:architecture"}](network_diagram.pdf){width="0.6\linewidth"} Representation of natural sounds ================================ Extracting meaning from the representation of a sound, beyond elementary statements about which tones are more prolific at different stages, is difficult. In this section, we propose a novel approach tailored to the class of natural sounds which exploits their observed statistical properties.
Properties of natural sounds {#sec:nat_properties} ---------------------------- Let us briefly summarise what has been observed about the low-order statistics of natural sounds [@attias1997temporal; @theunissen2014neural; @voss19751; @attias1998coding]. For a sound $s(t)$, let $a_\omega(t)$ be the component at frequency $\omega$ (obtained *e.g.* through the application of a band-pass filter centred at $\omega$). Then we can write that $$a_\omega(t) = A_\omega(t) \cos(\omega t + \phi_\omega(t)),$$ where $A_\omega(t)\geq0$ and $\phi_\omega(t)$ are the instantaneous amplitude and phase, respectively. We view $A_\omega(t)$ and $\phi_\omega(t)$ as stochastic processes and wish to understand their statistics. It has been widely observed that several properties of the frequency components of natural sounds vary according to the inverse of the frequency. In particular, it is well known that the power spectrum (the square of the Fourier transform) of the amplitude satisfies a relationship of the form $$\label{eq:SA} S_{A_\omega}(f)=|\hat{A}_\omega(f)|^2\propto \frac{1}{f^\gamma}, \qquad 0<f<f_{max},$$ for a positive parameter $\gamma$ (which often lies in a neighbourhood of 1) and some maximum frequency $f_{max}$. Further, this property is independent of the frequency band that is studied [@attias1997temporal]. Consider the log-amplitude, $\log_{10} A_\omega(t)$. It has been observed that for a variety of natural sounds (including speech, animal vocalisations, music and environmental sounds) the log-amplitude is locally stationary. Suppose we normalise the log-amplitude so that it has zero mean and unit variance, giving a quantity that is invariant to amplitude scaling. Then, the normalised log-amplitude averaged over some time interval $[t_1,t_2]$ has a distribution of the form [@attias1998coding] $$\label{eq:pA} p_A(x) = \beta \exp(\beta x - \alpha -e^{\beta x-\alpha}),$$ where $\alpha$ and $\beta$ are real-valued parameters and $\beta>0$. 
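This is a Gumbel-type density: substituting $u=e^{\beta x-\alpha}$ shows that it integrates to one for any $\alpha\in\mathbb{R}$ and $\beta>0$. A quick numerical sanity check (the parameter values are illustrative):

```python
import math

# Sanity check: p_A is a Gumbel-type probability density, so it should
# integrate to one for any alpha and beta > 0 (values here are illustrative).
def p_A(x, alpha, beta):
    y = beta * x - alpha
    return beta * math.exp(y - math.exp(y))

def integrate_p_A(alpha, beta, lo=-40.0, hi=40.0, n=16000):
    """Trapezoidal rule; the tails of p_A decay (doubly) exponentially."""
    dx = (hi - lo) / n
    total = 0.5 * (p_A(lo, alpha, beta) + p_A(hi, alpha, beta))
    total += sum(p_A(lo + k * dx, alpha, beta) for k in range(1, n))
    return total * dx
```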
Further, this property is scale invariant in the sense that it is true irrespective of the scale over which the temporal average is taken. It is also known that the curves for different frequency bands fall on top of one another, meaning that $p_A$ does not depend on $\omega$ (the frequency band). Further, the power spectrum $S_{\phi_\omega}$ of the instantaneous phase also satisfies a $1/f$-type relationship, of the same form as . On the other hand, the instantaneous phase is non-stationary (even locally), making it difficult to describe through the above methods. A more tractable quantity is the instantaneous frequency (IF), defined as $$\lambda_\omega = {\frac{\mathrm{d}\phi_\omega}{\mathrm{d}t}}.$$ It has been observed that $\lambda_\omega(t)$ is locally stationary for natural sounds and the temporal mean of its modulus satisfies a distribution $p_\lambda$ of the form [@attias1997temporal] $$\label{eq:pphi} p_\lambda(x) \propto (\zeta^2+x^2)^{-\eta/2},$$ for positive parameters $\zeta$ and $\eta>1$. ![The fitted distributions for a trumpet playing a single note.](trumpet_pA-eps-converted-to.pdf "fig:"){width="\linewidth"} ![The fitted distributions for a trumpet playing a single note.](trumpet_plam-eps-converted-to.pdf "fig:"){width="\linewidth"} ![The fitted distributions for a trumpet playing a single note.](trumpet_SA-eps-converted-to.pdf "fig:"){width="\linewidth"} ![The fitted distributions for a trumpet playing a single note.](trumpet_Sphi-eps-converted-to.pdf "fig:"){width="\linewidth"} Representation algorithm {#sec:algorithm} ------------------------ For a given natural sound, we wish to find the parameters that characterise its global properties, according to –.
Given a signal $s$ we first compute the convolution with the band-pass filter $h_n$ to yield the spectral component at the frequency ${\operatorname{Re}}(\omega_n)$, given by $$a_n[s](t)=A_n(t)\cos({\operatorname{Re}}(\omega_n)t+\phi_n(t)).$$ We extract the functions $A_n$ and $\phi_n$ from $a_n[s]$ using the Hilbert transform [@attias1997temporal; @flanagan1980parametric; @boashash1992estimating]. In particular, we have that $$a_n[s](t)+{\mathrm{i}}H(a_n[s])(t)=a_n[s](t)+\frac{{\mathrm{i}}}{\pi}\int_{-\infty}^\infty \frac{a_n[s](u)}{t-u}{\: \mathrm{d}}u = A_n(t) e^{{\mathrm{i}}({\operatorname{Re}}(\omega_n)t+\phi_n(t))},$$ from which we can extract $A_n$ and $\phi_n$ by taking the complex modulus and argument, respectively. It is not obvious that the Hilbert transform $H(a_n[s])$ is well-defined. Indeed, we must formally take the principal value of the integral. For a signal that is integrable and has finite support, $H(a_n[s])(t)$ exists for almost all $t\in\mathbb{R}$. Given the functions $A_n$ and $\phi_n$, the power spectra $S_{A_n}(f)$ and $S_{\phi_n}(f)$ can be computed by applying the Fourier transform and squaring. We estimate the relationships of the form by first averaging the $N$ power spectra, to give $\overline{S}_A(f):=\frac{1}{N}\sum_n {S}_{A_n}(f)$ and $\overline{S}_\phi(f):=\frac{1}{N}\sum_n {S}_{\phi_n}(f)$ before fitting curves $f^{-\gamma_A}$ and $f^{-\gamma_\phi}$ using least-squares regression. We estimate the parameters of the probability distributions and by normalising both $\log_{10}A_n(t)$ and $\lambda_n(t)$ so that $$\langle\log_{10}A_n\rangle=0, \qquad \langle(\log_{10}A_n)^2\rangle=1,$$ and similarly for $\lambda_n(t)$, before repeatedly averaging the normalised functions over intervals $[t_1,t_2]\subset\mathbb{R}$. 
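The amplitude-extraction step can be sketched in pure Python, with a DFT-based construction of the analytic signal standing in for the Hilbert transform; the test tone (a carrier on bin 32 with amplitude modulation on bin 4, exactly periodic over the window) and all parameters are illustrative:

```python
import cmath
import math

# Sketch of the amplitude-extraction step: the analytic signal a + iH(a)
# is built by zeroing the negative frequencies of the DFT, and its modulus
# recovers the instantaneous amplitude A(t).
N = 256
A = [1.0 + 0.3 * math.cos(2 * math.pi * 4 * n / N) for n in range(N)]
x = [A[n] * math.cos(2 * math.pi * 32 * n / N) for n in range(N)]

# Direct DFT (O(N^2), adequate at this size).
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# Keep DC and Nyquist, double the positive frequencies, drop the rest.
H = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
z = [sum(H[k] * X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

envelope = [abs(zn) for zn in z]   # instantaneous amplitude estimate
```

On this exactly periodic tone the recovered envelope matches the modulation $A(t)$ to machine precision; for real recordings, spectral leakage and edge effects make the recovery approximate.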
Curves of the form and are then fitted to the resulting histograms (which combine the temporal averages from different filters $n=1,\dots,N$ and different time intervals $[t_1,t_2]$) using non-linear least-squares optimisation. ------------------------------- ------- -------- -------- ------- ------- ------- ------- -------- $\gamma_A$ 1.767 1.563 1.528 1.415 1.763 1.808 1.466 1.571 $\alpha$ 1.244 0.375 0.284 0.474 0.517 0.528 0.336 0.649 $\beta$ 2.390 0.783 0.841 0.596 0.747 0.894 0.484 0.896 $\gamma_\phi$ 0.763 0.871 0.6977 0.446 1.192 1.125 1.088 0.908 $\zeta$ [$(\times10^{-6})$]{} 2.878 3.433 6.1149 6.322 4.773 5.176 5.200 4.212 $\eta$ 8.579 11.824 8.679 8.315 9.660 9.358 9.290 10.475 ------------------------------- ------- -------- -------- ------- ------- ------- ------- -------- : Values of the estimated distribution parameters for different samples of natural sounds.[]{data-label="table"} Discussion ---------- The observations of give us six coefficients $(\gamma_A,\alpha,\beta,\gamma_\phi,\zeta,\eta)\in\mathbb{R}^6$ that portray global properties of a natural sound. shows some examples of these parameters, estimated using the approach described in . Our hypothesis is that these parameters capture, in some sense, the quality of a signal. Thus, incorporating these parameters into the representation of a signal, alongside *e.g.* temporal averages , will improve the ‘perceptual’ abilities of any classification algorithm based on this representation. The space of natural sounds that is characterised by the six parameters is likely to have a highly non-trivial (and non-Euclidean) structure which will need to be learnt from data, *cf.* [@qi2017pointnet; @qi2017pointnet++]. Concluding remarks ================== We have studied an array of subwavelength resonators that has similar dimensions to the cochlea and mimics its biomechanical properties. 
We proved, from first principles, that the pressure field scattered by this structure satisfies a modal expansion with spatial eigenmodes and gammatone time dependence. We then explored how these basis functions could be used as kernels in a convolutional signal processing routine. In particular, we proposed an algorithm, tailored to the class of natural sounds, for extracting meaningful global properties from the band-pass coefficients. An advantage of this two-step approach (physical scattering followed by neural processing) is that the subtleties of the auditory system can be readily incorporated, *e.g.* the non-linear amplification that takes place in the cochlea [@hudspeth2008making]. This work studied linear representations of signals followed by a non-linear algorithm for extracting the natural sound parameters. While analyses of non-linear networks (*i.e.* with the activation function different from the identity) have been conducted in other settings [@bruna2013invariant; @mallat2012group], an amplification mechanism based on a compressive non-linearity can be incorporated directly into the resonator-array model, as studied in [@davies2020hopf; @rupin2019mimicking]. Acknowledgements {#acknowledgements .unnumbered} ================ The numerical experiments in this work were carried out on a variety of sound recordings from the University of Iowa’s archive of musical instrument samples[^2] and the McDermott Lab’s natural sounds stimulus set[^3]. The code used for this study is available for download from GitHub[^4]. [^1]: Department of Mathematics, ETH Zurich, Rämistrasse 101, CH-8092 Zürich, Switzerland. <habib.ammari@math.ethz.ch> <bryn.davies@sam.math.ethz.ch> [^2]: [theremin.music.uiowa.edu/MIS.html](http://theremin.music.uiowa.edu/MIS.html) [^3]: [mcdermottlab.mit.edu/svnh/Natural-Sound/Stimuli.html](http://mcdermottlab.mit.edu/svnh/Natural-Sound/Stimuli.html) [^4]: [github.com/davies-b/nat-sounds](https://github.com/davies-b/nat-sounds)
--- abstract: 'We propose UniViLM: a **Uni**fied **Vi**deo and **L**anguage pre-training **M**odel for multimodal understanding and generation. Motivated by the recent success of BERT-based pre-training techniques for NLP and image-language tasks, VideoBERT and CBT have been proposed to exploit the BERT model for video and language pre-training using narrated instructional videos. Different from these works, which only pre-train for the understanding task, we propose a unified video-language pre-training model for both understanding and generation tasks. Our model comprises 4 components, including two single-modal encoders, a cross encoder and a decoder with the Transformer backbone. We first pre-train our model to learn the universal representation for both video and language on a large instructional video dataset. Then we fine-tune the model on two multimodal tasks, including an understanding task (text-based video retrieval) and a generation task (multimodal video captioning). Our extensive experiments show that our method can improve the performance of both understanding and generation tasks and achieves state-of-the-art results.'
author: - | Huaishao Luo^1^[^1] , Lei Ji^2,3,4^, Botian Shi^5^, Haoyang Huang^2^,\ **Nan Duan^2^, Tianrui Li^1^, Xilin Chen^3,4^, Ming Zhou^2^**\ ^1^School of Information Science and Technology, Southwest Jiaotong University, China\ ^2^Microsoft Research Asia, Beijing, China, ^5^Beijing Institute of Technology, Beijing, China\ ^3^Institute of Computing Technology, Chinese Academy of Science, Beijing, China\ ^4^University of Chinese Academy of Sciences, Beijing, China\ `huaishaoluo@gmail.com,{leiji,haohua,nanduan,mingzhou}@microsoft.com`\ `botianshi@bit.edu.cn,trli@swjtu.edu.cn, xlchen@ict.ac.cn`\ bibliography: - 'acl\_vl.bib' title: 'UniViLM: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation' --- Introduction ============ With the recent advances of self-supervised learning, pre-training techniques play a vital role in learning good representations for vision and language. The paradigm is to pre-train the model on large-scale *unlabeled* data, and then fine-tune on downstream tasks using task-specific *labeled* data. Inspired by the success of the BERT [@devlin2019bert] model for NLP tasks, numerous multimodal image-language pre-training models [@lu2019vilbert; @li2019unicoder; @li2019visualbert] have been proposed and have demonstrated their effectiveness on various vision and language tasks such as VQA (visual question answering) and image-text matching. Nevertheless, there are still few works on video-linguistic pre-training. ![A showcase of a video and language pre-training based model for multimodal understanding (retrieval) and generation (captioning).[]{data-label="fig:showcase"}](figures/Demo.pdf){width="48.00000%"} Videos contain rich visual, acoustic and language information for people to acquire knowledge or learn how to perform a task. This motivates researchers to investigate whether AI agents can learn task completion from videos like humans, using both low-level visual and high-level semantic language signals. 
Therefore, multimodal video-language tasks are of great importance to investigate for both research and applications. In this work, we first propose to pre-train a unified video-language model using video and automatic speech recognition (ASR) transcripts of instructional videos to learn a joint representation of both video and language. Then, we fine-tune this model on two typical multimodal tasks: text-based video retrieval for understanding and multimodal video captioning for generation. Figure \[fig:showcase\] presents a showcase of our pre-training and fine-tuning flow; both tasks take video and language as input. Taking multimodal video captioning as an example, the model takes a video and its ASR transcript as input and predicts a captioning sentence. VideoBERT and CBT [@sun2019videobert; @sun2019contrastive] are the first pioneers to investigate video-language pre-training for video representation on instructional videos. They have demonstrated the effectiveness of the BERT based model for capturing video temporal and language sequential features. Our work differs from VideoBERT and CBT in two aspects: 1) previous work only pre-trains the model on the understanding task, while we explore pre-training on both understanding and generation tasks; 2) they fine-tune the downstream tasks for a better video representation with only video as input, while our goal is to learn a joint video and language representation for downstream multimodal tasks. In this paper, we propose UniViLM: a **Uni**fied **Vi**deo and **L**anguage pre-training **M**odel for multimodal understanding and generation. Our UniViLM model adopts the Transformer [@vaswani2017attention] as backbone and has 4 components including two single-modal encoders, a cross encoder and a decoder. In detail, we first encode the text and video separately by two single-modal encoders. 
Then we adopt the Transformer based encoder-decoder model to perform the understanding and generation pre-training with 4 tasks: 1) masked language model (MLM, for language corruption); 2) masked frame model (MFM, for video corruption); 3) video-text alignment; and 4) language reconstruction. As shown in Figure \[fig:showcase\], we fine-tune our pre-trained model on two typical video-language tasks: text-based video retrieval and multimodal video captioning. For the first task, we remove the decoder and fine-tune the alignment task. For the second task, we directly fine-tune the pre-trained encoder-decoder model. We list our contributions below: 1\) We propose a multimodal video-language pre-training model trained on a large scale instructional video dataset, which is a unified model for both video-language understanding and generation tasks. 2\) The pre-training stage consists of 4 tasks including MLM (masked language model), MFM (masked video frame model), video-text alignment, and language reconstruction. 3\) We fine-tune our pre-trained model on two typical multimodal video-language tasks: text-based video retrieval and multimodal video captioning. Extensive experiments demonstrate that our unified pre-trained model is effective on both understanding and generation tasks and achieves state-of-the-art results. Related Works ============= #### Single Modal Pre-Training Self-supervised representation learning has been shown to be effective for sequential data including language and video. Language pre-training models including BERT [@devlin2019bert], GPT [@radford2018improving], RoBERTa [@liu2019roberta], XLNet [@yang2019xlnet], MASS [@song2019mass], UniLM [@dong2019unified], BART [@lewis2019bart] have achieved great success on NLP tasks. BERT [@devlin2019bert] is a denoising auto-encoder network using the Transformer, with MLM (masked language model) and NSP (next sentence prediction) as pre-training tasks, and has strong performance on understanding tasks. 
MASS [@song2019mass] focuses on pre-training for generation tasks. UniLM [@dong2019unified] and BART [@lewis2019bart] further study unified pre-training models for both understanding and generation tasks. Video representation learning mostly focuses on video sequence reconstruction or future frame prediction as pre-training (pretext) tasks. Early works like [@mathieu2015deep; @srivastava2015unsupervised; @han2019video] aim to synthesize video frames from image patches. Similarly, @wang2015unsupervised adopt a Siamese-triplet network to rank continuous patches as more similar than patches of different videos. Other works predict the feature vectors in latent space using auto-regressive models with noise contrastive estimation (NCE) [@lotter2016deep; @oord2018representation]. @sun2019contrastive adopt NCE to make predictions on a corrupted (masked) latent space using an auto-encoder model. #### Multimodal Pre-Training Recently, numerous visual-linguistic pre-training models [@lu2019vilbert; @li2019visualbert; @tan2019lxmert; @li2019unicoder; @zhou2019unified; @lu2019vilbert; @sun2019videobert; @li2019visualbert] have been proposed for multimodal tasks. For image and text pre-training, ViLBERT [@lu2019vilbert] and LXMERT [@tan2019lxmert] adopt two separate Transformers for image and text encoding independently. Other models like Unicoder-VL [@li2019unicoder], VL-BERT [@lu2019vilbert], UNITER [@zhou2019unified] use one shared BERT model. These models employ MLM and image-text matching as pre-training tasks, which are effective for downstream multimodal tasks. VLP [@zhou2019unified] proposes a unified image-language model for understanding and generation tasks. Different from these works, we focus on video and text pre-training for universal representation. VideoBERT [@sun2019videobert] and CBT [@sun2019contrastive] are the first video and language pre-training models and are the most similar works to ours. 
Although VideoBERT and CBT pre-train the model on multimodal data, their downstream tasks only take the video representation for further prediction. We believe that video-language pre-training can learn a universal representation of video and text. Besides, previous works only pre-train the encoder and suffer from an uninitialized decoder for generation tasks. We further pre-train the decoder for generation tasks, and experimental results show that the pre-trained decoder is effective for generation. #### Multimodal Retrieval and Captioning Multimodal video and language learning is a nascent research area. In this work, we fine-tune and evaluate our pre-trained model on two multimodal tasks including text-based video retrieval and multimodal video captioning. The text-based video retrieval task is to predict whether a video and a text query match each other. @yu2018joint densely align each token with each frame. @miech2019howto100m embed text and video into the same latent space through a joint embedding network trained on 1.2 million videos. The multimodal video captioning task is to generate captions given an input video together with its ASR transcript. Different from existing works [@sun2019videobert; @sun2019contrastive; @krishna2017dense; @zhou2018towards; @zhou2018end; @shi2019dense; @palaskar2019multimodal; @hessel2019case] which only use the video signal, recent works [@shi2019dense; @palaskar2019multimodal; @hessel2019case] study multimodal captioning by taking both video and transcript as input, and show that incorporating the transcript can largely improve the performance. Our model achieves state-of-the-art results in both tasks. Method ====== The problem is defined as follows: given input video and corresponding ASR transcript pairs, pre-train a model to learn a joint video and text representation, then fine-tune on downstream tasks. In this section, we describe the details of the architecture and the pre-training tasks. 
Model Architecture ------------------ Figure \[fig:main\_structure\] presents the model structure as an encoder-decoder architecture. First, the model extracts representations of the input text tokens and the video frame sequences using various feature extractors. Then a text encoder adopts the BERT model to embed the text and a video encoder utilizes the Transformer model to embed the video frames. Next, we employ a Transformer based cross encoder for interaction between the text and the video. Finally, another Transformer based decoder learns to reconstruct the input text. ![image](figures/Framework.pdf){width="100.00000%"} #### Pre-processing First we pre-process the video and language before feeding them to the model. For the input text, we tokenize all words by WordPieces [@wu2016google], following the pre-processing method in BERT, to obtain the token sequence $\mathbf{t}=\big\{t_i | i \in [1, n] \big\}$, where $t_i$ is the $i$-th token and $n$ is the length of the token sequence. For each video clip, we sample a frame sequence $\mathbf{v}=\big\{v_j | j \in [1, m] \big\}$ to represent the video clip, where $v_j$ is the $j$-th video frame and $m$ is the length of the frame sequence. #### Single Modal Encoder We encode the text and video separately. First we adopt the BERT-base model to encode the token sequence $\mathbf{t}$. The text encoding is $\mathbf{T}_{BERT} \in \mathbb{R}^{n \times d}$, $$\begin{aligned} \mathbf{T}_{BERT} = \text{BERT}(\mathbf{t}),\end{aligned}$$ where $d$ is the hidden size of the text encoding. Next, we adopt off-the-shelf image feature extractors to generate the input feature matrix for the video frame sequence $\mathbf{v}$ before feeding it to the video encoder. While an image representation only considers spatial features, a video representation encodes both spatial and temporal features. We extract video features using 2D and 3D CNNs for spatial and spatial-temporal representation. 
Then, we concatenate the two features into one unified video feature $\mathbf{F}_v \in \mathbb{R}^{m \times d^f_v}$, where $d^f_v$ is the hidden size of the video feature. Finally, $\mathbf{F}_v$ is fed to the video encoder to embed the contextual information, $$\begin{aligned} \mathbf{V}_{Transformer} = \text{Transformer}(\mathbf{F}_v).\end{aligned}$$ The dimension of $\mathbf{V}_{Transformer}$ is $\mathbb{R}^{m \times d}$. #### Cross Encoder To make the text and video fully interact with each other, we design a cross encoder to fuse these features. We first combine the text encoding $\mathbf{T}_{BERT}$ and the video encoding $\mathbf{V}_{Transformer}$ to get the encoding $\mathbf{M} \in \mathbb{R}^{(n+m) \times d}$. Then, the Transformer based cross encoder takes the encoding $\mathbf{M}$ as input to generate the attended encoding $\mathbf{M}_{attended} \in \mathbb{R}^{(n+m) \times d}$, $$\begin{aligned} &\mathbf{M} = \left[\mathbf{T}_{BERT} ; \mathbf{V}_{Transformer}\right], \\ &\mathbf{M}_{attended} = \text{Transformer}(\mathbf{M}),\end{aligned}$$ where $[;]$ denotes the combination operation. #### Decoder The decoder learns to reconstruct the input text during pre-training, as well as to generate captions during fine-tuning and inference. The input is the attended encoding $\mathbf{M}_{attended}$ of text and video. We again exploit a Transformer to get the decoded feature $\mathbf{D} \in \mathbb{R}^{l \times d}$ from $\mathbf{M}_{attended}$, $$\begin{aligned} \mathbf{D} = \text{Transformer}(\mathbf{M}_{attended}),\end{aligned}$$ where $l$ is the decoder length. Pre-training Objectives ----------------------- We have four pre-training objectives: 1) masked language model (for text corruption); 2) masked frame model (for video corruption); 3) video-text alignment; and 4) language reconstruction. 
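Before detailing the objectives, the shape flow through the architecture described above can be sketched with numpy stand-ins. The random matrices below are placeholders for the actual BERT and Transformer encoder outputs (an assumption of this sketch); only the tensor shapes follow the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions in the paper's notation: n tokens, m frames,
# shared hidden size d (768 for BERT-base), decoder length l.
n, m, d, l = 32, 48, 768, 32

# Random stand-ins for the single-modal encoder outputs.
T_bert = rng.standard_normal((n, d))         # T_BERT in R^{n x d}
V_transformer = rng.standard_normal((m, d))  # V_Transformer in R^{m x d}

# Cross-encoder input: the combination M = [T_BERT ; V_Transformer].
M = np.concatenate([T_bert, V_transformer], axis=0)
assert M.shape == (n + m, d)

# The cross encoder preserves the shape, so M_attended is (n+m) x d,
# and the decoder produces D in R^{l x d}.
```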
#### MLM: Masked Language Model Following BERT, we randomly mask 15% of the tokens in the sentence with the special token $\text{[MASK]}$, and the objective is to reproduce the masked tokens. Since the ASR transcript is automatically extracted from speech, and is thus noisy and of low quality, we further conditionally mask key concepts. Specifically, we conditionally mask 15% of the verbs or nouns in the sentences[^2] to compel the encoder to learn these key concepts. This loss function is defined as: $$\begin{aligned} \mathcal{L}_{MLM}(\theta) = -E_{t_m \sim \mathbf{t}} \log P_{\theta}\left( t_m \mid t_{\neg m}, \mathbf{v} \right),\end{aligned}$$ where $t_{\neg m}$ denotes the contextual tokens surrounding the masked token $t_m$, and $\theta$ denotes the trainable parameters. #### MFM: Masked Frame Model Similarly, we also propose a masked frame model to predict the correct frames given the contextual frames. This loss function uses NCE [@sun2019contrastive]. We randomly mask 15% of the feature vectors (i.e. 15% of the frames) with zeros. The objective is to identify the correct frame compared to negative distractors. The loss is defined as: $$\begin{aligned} &\mathcal{L}_{MFM}(\theta) = -E_{v_m \sim \mathbf{v}} \log \text{NCE}\left( v_m \mid v_{\neg m}, \mathbf{t} \right), \\ &\text{NCE}\left( v_m \mid v_{\neg m}, \mathbf{t} \right) = \frac{\exp(\mathbf{f}_{v_m}\mathbf{m}_{v_m}^{\top})}{\mathcal{Z}}, \\ &\mathcal{Z} = \exp(\mathbf{f}_{v_m}\mathbf{m}_{v_m}^{\top}) +\!\! \sum\nolimits_{v_j \in \mathcal{N}(v_m)}\exp(\mathbf{f}_{v_m}\mathbf{m}_{v_j}^{\top}),\end{aligned}$$ where $v_{\neg m}$ denotes the surrounding frames except $v_m$, $\mathbf{f}_{v_m} \in \mathbb{R}^{1 \times d}$ is a linear output of $\mathbf{f}^{v}_{v_m} \in \mathbf{F}_v$, $\mathbf{F_v}$ is the real-valued matrix of video features, $\mathbf{m}_{v_m} \in \mathbf{M}_{attended}^{(v)}$, and $\mathbf{M}_{attended}^{(v)}$ is the feature matrix of the video part in $\mathbf{M}_{attended}$. 
We take the other frames in the same batch as negative cases, defined as $\mathcal{N}(v_m)$. #### Video-Text Alignment We use the fused representation corresponding to the special token $\text{[CLS]}$ to predict scores for the video-text alignment task. Specifically, a BertPooler layer and a linear layer are designed to project the first hidden state of $\mathbf{M}_{attended}$ to scores, similarly to the BERT sentence-pair classification task. We also adopt the NCE loss to learn to discriminate positive from negative video-text pairs. To enhance this capability, we not only randomly sample negative cases but also re-sample video clips from the same video [@han2019video]; the reason is that frames inside the same video are more similar to each other than frames of different videos. This loss function is defined as follows, $$\begin{aligned} &\mathcal{L}_{Align}(\theta) = -E_{(\mathbf{t}, \mathbf{v}) \sim \mathbf{B}} \log \frac{\exp\big(s(\mathbf{t},\mathbf{v})\big)}{\mathcal{Z}}, \label{loss_align} \\ &\mathcal{Z} = \exp\big(s(\mathbf{t},\mathbf{v})\big) +\!\! \sum\nolimits_{\mathbf{u} \in \mathcal{N}(\mathbf{v})}\exp\big(s(\mathbf{t},\mathbf{u})\big),\end{aligned}$$ where $s(\cdot)$ denotes the BertPooler layer and linear layer operations. We take other video clips in the same batch $\mathbf{B}$ as negative cases $\mathcal{N}(\mathbf{v})$. #### Language Reconstruction An auto-regressive decoder is also involved in our pre-training objective, and the loss function is, $$\begin{aligned} \mathcal{L}_{Decoder}(\theta) = -E_{\hat{t}_i \sim \mathbf{\hat{t}}} \log P_{\theta}\left( \hat{t}_i \mid \hat{t}_{< i}, \mathbf{t}, \mathbf{v} \right).\end{aligned}$$ Note that $\mathbf{t}$ is the masked version of the ground-truth text $\mathbf{\hat{t}}$ during pre-training. As shown in BART [@lewis2019bart], pre-training the decoder benefits generation tasks. 
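Both the masked-frame objective and the alignment objective above are NCE-style softmax losses over one positive score and a set of in-batch negative scores. A minimal numpy sketch (the function name and the log-sum-exp stabilisation are our own implementation choices, not from the paper):

```python
import numpy as np

def nce_loss(pos_score, neg_scores):
    """Generic NCE loss: -log( exp(s+) / (exp(s+) + sum_j exp(s_j)) ).

    For MFM, pos_score plays the role of f_{v_m} m_{v_m}^T and neg_scores
    the products f_{v_m} m_{v_j}^T over the in-batch negatives N(v_m);
    for alignment, the scores are s(t, v) for the matched pair and
    s(t, u) for negative clips u in N(v).
    """
    scores = np.concatenate(([float(pos_score)],
                             np.asarray(neg_scores, dtype=float)))
    scores = scores - scores.max()  # log-sum-exp stabilisation (our choice)
    probs = np.exp(scores)
    return float(-np.log(probs[0] / probs.sum()))
```

With indistinguishable scores the loss reduces to `log(k + 1)` for `k` negatives, and it approaches zero as the positive score dominates.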
#### Loss Function We jointly optimize our model with a weighted loss: $$\begin{aligned} \mathcal{L}_{UniViLM} = &w_{MLM}\mathcal{L}_{MLM} + w_{MFM}\mathcal{L}_{MFM} \notag\\ &+ w_{Align}\mathcal{L}_{Align} + w_{Decoder}\mathcal{L}_{Decoder},\end{aligned}$$ where $w_{MLM}$, $w_{MFM}$, $w_{Align}$, and $w_{Decoder}$ are all set to 1 in this paper. ![image](figures/DownstreamTasks.pdf){width="100.00000%"} Downstream tasks ================ Figure \[fig:main\_downstream\_tasks\] presents the two downstream tasks: text-based video retrieval (left) and multimodal video captioning (right). Text-based Video Retrieval -------------------------- Text-based video retrieval is defined as retrieving a relevant video/clip given an input text query. During inference, the model takes the input text query and each candidate video, calculates a similarity score, and then ranks the candidates to select the best matched video clip. The model encodes the query and video through the text encoder and video encoder respectively, then feeds the embeddings to the cross encoder, and makes the final prediction through the fused representation corresponding to $\text{[CLS]}$ via $s(\cdot)$ in Eq. (\[loss\_align\]). We use $\mathcal{L}_{Align}$ as the loss during the fine-tuning stage. Multimodal Video Captioning --------------------------- Given a video, multimodal video captioning aims to generate a sequence of descriptive sentences. In this work, we focus on generating better captions and use the ground-truth segments in the experiments. Similarly, the model encodes the input video frames as well as the transcripts inside the clips through the video encoder and text encoder respectively, then feeds the embeddings to the cross encoder to get a unified representation, and finally generates the token sequence with the decoder. We use $\mathcal{L}_{Decoder}$ as the loss during the fine-tuning stage. 
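The retrieval inference above (score every candidate clip for a text query, then rank) and the Recall@n / Median R metrics used in the experiments can be sketched as follows. The diagonal ground-truth convention and the function names are our own assumptions for illustration:

```python
import numpy as np

def rank_of_ground_truth(score_matrix):
    """1-based rank of the correct clip for each text query, assuming the
    ground-truth clip of query i is candidate i (the diagonal)."""
    s = np.asarray(score_matrix, dtype=float)
    order = np.argsort(-s, axis=1)  # candidates sorted best-first per query
    return (order == np.arange(len(s))[:, None]).argmax(axis=1) + 1

def retrieval_metrics(ranks, ns=(1, 5, 10)):
    """Recall@n (fraction of queries whose correct clip is in the top n)
    and the median rank of the correct clip."""
    ranks = np.asarray(ranks)
    out = {f"R@{n}": float((ranks <= n).mean()) for n in ns}
    out["MedianR"] = float(np.median(ranks))
    return out
```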
Experiment ========== We first pre-train our model on the large scale dataset HowTo100M [@miech2019howto100m], then fine-tune our pre-trained model on two downstream multimodal tasks: text-based video retrieval and multimodal video captioning. Finally, we evaluate our model on both the In-domain Youcook2 [@zhou2018towards] dataset and the Out-domain MSR-VTT [@xu2016msr] dataset. Dataset ------- #### HowTo100M [@miech2019howto100m] [^3] is the pre-training dataset. We download videos in the *Food and Entertaining* domain with ASR transcripts from the HowTo100M dataset. After filtering out the unavailable ones, we finally get 380K videos for pre-training our model. On average, the duration of each video is 6.5 minutes with 110 clip-text pairs. #### Youcook2 [@zhou2018towards] [^4] is the In-domain dataset for both downstream tasks. It contains 2,000 cooking videos on 89 recipes with 14K video clips. The overall duration is 176 hours (5.26 minutes on average). Each video clip is annotated with one captioning sentence. We evaluate both the text-based video retrieval and the multimodal video captioning task on this dataset. For the first task, we follow the same experimental setting as in [@miech2019howto100m], and use the captions as the input text queries to find the corresponding video clips. For the second task, we use the same setting as in [@shi2019dense]. We filter the data to make sure there is no overlap between pre-training and evaluation data. In all, we have 1,261 training videos and 439 test videos, that is, 9,776 training clip-text pairs and 3,369 test clip-text pairs. #### MSR-VTT [@xu2016msr] is the Out-domain dataset for the downstream task. It has open domain video clips, and each clip has 20 captioning sentences labeled by humans. In all, there are 200K clip-text pairs from 10K videos in 20 categories including sports, music, etc. 
Following JSFusion [@yu2018joint], we randomly sample 1,000 clip-text pairs as test data to evaluate the performance of our model on the text-based video retrieval task. Experimental Details -------------------- #### Text encoding For text encoding, we apply WordPiece embeddings [@wu2016google] with a 30,000 token vocabulary as input to the BERT model. We exploit the BERT-base model [@devlin2019bert] with 12 layers of Transformer blocks. Each block has 12 attention heads and the hidden size is 768. #### Video encoding Similar to Miech’s work [@miech2019howto100m], we extract both 2D and 3D features from the video clips. We use an off-the-shelf ResNet-152 [@he2016deep] pre-trained on the ImageNet dataset to extract the 2D features. For 3D feature extraction, we employ a ResNeXt-101 [@hara2018can] pre-trained on Kinetics. The fps of the 2D and 3D feature extractors are 1 and 1.5, respectively. Then we directly concatenate the 2D and 3D features into one unified 4,096-dimensional vector. For video encoding, we employ a Transformer [@vaswani2017attention] with 1 layer. Each block has 12 attention heads and the hidden size is 768. #### Model setting The model consumes clip-text pairs. The maximal number of input text tokens is 32 and the maximal number of video frames is 48. For short sentences and clips, we concatenate contextual tokens and frames. For the cross encoder and decoder, we use a 2-layer Transformer as the encoder and a 1-layer Transformer as the decoder, each with 12 heads. For the generation task, we use beam search with a beam size of 5 during the inference stage. #### Training time We pre-train our model on 4 NVIDIA Tesla V100 GPUs. The batch size is set to 96 and the model is trained for 12 epochs over 5 days. We use the Adam optimizer [@kingma2014adam] with an initial learning rate of 1e-4, and employ a linear decay learning rate schedule with a warm-up strategy. To speed up pre-training, we adopt a two-stage training scheme. 
In the first stage, we only keep the text BERT and video Transformer and learn their weights using an alignment similarity objective, like the work in [@miech2019howto100m]. Next, we freeze the single-modal encoders with the learned weights and continue to pre-train the subsequent cross encoder and decoder. Task I: Text-based Video Retrieval ---------------------------------- We fine-tune our pre-trained model for the text-based video retrieval task on both the Youcook2 and MSR-VTT datasets. The evaluation metrics are Recall@n (R@n) and Median R. #### Youcook2 provides ground-truth video clip and caption pairs. We use the caption to retrieve the relevant video clip. @miech2019howto100m report baseline methods including Random and HGLMM FV CCA [@klein2015associating] as well as their model results, which we directly adopt as our baselines. Table \[tab:result\_of\_retrieval\_youcook\] lists the results of all baselines and our models. We can see that our model improves the performance over all baseline methods and achieves the state-of-the-art result. Since our 380K videos are all related to the food domain, we investigate whether this domain-specific data biases the model performance. We therefore re-run the HowTo100M model on our 380K dataset and fine-tune it on the Youcook2 dataset. The performance drops considerably, which indicates that the domain-specific data does not bias the model. Comparing our model pre-trained on various data sizes, the performance increases with the amount of pre-training data. #### MSR-VTT Besides the Food domain videos, we also evaluate text-based video retrieval on the open domain MSR-VTT dataset. We present several baseline methods with/without pre-training. On the Out-domain dataset, our pre-trained model (Our model.2^nd^ vs. 3^rd^) generalizes to other domains, although the gains are not as significant as on in-domain data. 
We also notice that, without fine-tuning, our pre-trained model performs worse than the HowTo100M model, which shows that fine-tuning is a very important stage for our model. Our full model (3^rd^) achieves the state-of-the-art results on the R@1 and Median R metrics. The best results on R@5 and R@10 are achieved by the HowTo100M model pre-trained on the 1.2M dataset, which contains more open domain videos that could benefit the results on MSR-VTT. This motivates us to further examine the HowTo100M model pre-trained on our 380K dataset. The experimental results demonstrate that our model.3^rd^ outperforms the HowTo100M model pre-trained on the same dataset (380K) on all metrics. According to our extensive experiments on text-based video retrieval, we find that: 1) our model can largely increase the performance on the video and language understanding task; 2) with the increase of the training data, our model performs consistently better; 3) our model outperforms the baselines on both In-domain and Out-domain data and achieves the state-of-the-art results. The performance boost is more remarkable on In-domain data. Task II: Multimodal Video Captioning ------------------------------------ We adopt corpus-level generation evaluation metrics using an open-source tool[^5], including BLEU [@papineni2002bleu], METEOR [@banerjee2005meteor], ROUGE-L [@lin2004automatic] and CIDEr [@vedantam2015cider]. First we compare our pre-trained model with several baseline methods. We classify the methods by two settings: 1) with/without pre-training; 2) whether the input is video-only or video+transcript. @zhou2018towards propose an end-to-end model for both procedural segmentation and captioning. @sun2019videobert [@sun2019contrastive] adopt the pre-training strategy and evaluate captioning with only video as input. @shi2019dense and @hessel2019case discuss the multimodal input with both video and transcript. 
Table \[tab:caption\_result\] presents the results of the baseline models and the performance of our model in various settings. We study the video-only captioning models and find that our model (our model.1^st^) achieves comparable results with CBT. Furthermore, comparing our model pre-trained on various data sizes (our model.2^nd^, 3^rd^, 5^th^), the performance improves as the pre-training data size increases. Moreover, according to the comparison of our models with or without a pre-trained decoder (our model.4^th^ vs. 5^th^), pre-training the decoder improves the performance on the generation task, and our full model (our model.5^th^), pre-trained on the largest dataset, achieves the best results. According to our extensive experiments on multimodal video captioning, our *key* findings are: 1) our pre-trained model can improve the performance on the generation task with the help of the pre-trained decoder; 2) our model outperforms the baseline models for the multimodal video captioning task and achieves state-of-the-art results. Conclusion and Discussion ========================= In this paper, we study self-supervised learning for video and language representation on large scale videos and pre-train a multimodal model using videos and the corresponding ASR transcripts. We propose a unified pre-training model for both understanding and generation tasks. Then, we conduct extensive experiments evaluating our models on two downstream tasks: text-based video retrieval and multimodal video captioning. From the experiments, we find that: 1) our pre-trained model can improve the performance to a large extent over the baseline models and achieve state-of-the-art results on two typical multimodal tasks; 2) the pre-trained decoder can benefit generation tasks such as captioning. For future work, we will investigate the performance of our model on a larger dataset and more downstream tasks. 
Supplementary Material ====================== Figure \[fig:cs1\] presents two randomly selected case studies comparing our results with the ground-truth captions, from which we notice that most of the results are semantically aligned with the ground-truth sentences. ![Case studies for multimodal video dense captioning[]{data-label="fig:cs1"}](figures/casestudy1.png "fig:"){width="45.00000%"} ![Case studies for multimodal video dense captioning[]{data-label="fig:cs1"}](figures/casestudy2.png "fig:"){width="45.00000%"} [^1]:  This work was done during the first author’s internship in MSR Asia [^2]: We use the package spaCy (https://spacy.io) to extract verbs and nouns automatically. [^3]: https://www.di.ens.fr/willow/research/howto100m/ [^4]: http://youcook2.eecs.umich.edu/ [^5]: https://github.com/Maluuba/nlg-eval
--- abstract: | We consider Mc Kean-Vlasov stochastic differential equations (MVSDEs), which are SDEs where the drift and diffusion coefficients depend not only on the state of the unknown process but also on its probability distribution. This type of SDEs was studied in statistical physics and represents the natural setting for stochastic mean-field games. We will first discuss questions of existence and uniqueness of solutions under an Osgood type condition, improving the well known Lipschitz case. Then we derive various stability properties with respect to initial data, coefficients and driving processes, generalizing known results for classical SDEs. Finally, we establish a result on the approximation of the solution of a MVSDE associated to a relaxed control by the solutions of the same equation associated to strict controls. As a consequence, we show that the relaxed and strict control problems have the same value function. This last property improves known results proved for a special class of MVSDEs, where the dependence on the distribution was made via a linear functional. **Key words**: Mc Kean-Vlasov stochastic differential equation – Stability – Martingale measure – Wasserstein metric – Existence – Mean-field control – Relaxed control. **2010 Mathematics Subject Classification**. 60H10, 60H07, 49N90. author: - '[Khaled Bahlali]{}[^1]' - '[Mohamed Amine Mezerdi ]{}[^2]' - 'Brahim Mezerdi[^3]' title: | Stability of Mc Kean-Vlasov stochastic differential\ equations and applications --- Introduction ============ We will investigate some properties of a particular class of stochastic differential equations (SDEs), called Mc Kean-Vlasov stochastic differential equations (MVSDEs) or mean-field stochastic differential equations. These are SDEs described by $\left\{ \begin{array} [c]{l}dX_{t}=b(t,X_{t},\mathbb{P}_{X_{t}})dt+\sigma(t,X_{t},\mathbb{P}_{X_{t}})dB_{t}\\ X_{0}=x, \end{array} \right. 
$ where $b$ is the drift, $\sigma$ is the diffusion coefficient and $\left( B_{t}\right) $ is a Brownian motion. For this type of equation the drift and diffusion coefficients depend not only on the state variable $X_{t},$ but also on its marginal distribution $\mathbb{P}_{X_{t}}$. This fact brings a nontrivial additional difficulty compared to classical Itô SDEs. The solutions of such equations are known in the literature as nonlinear diffusions. MVSDEs were first studied in statistical physics by M. Kac [@Ka], as a stochastic counterpart of the Vlasov equation of plasma [@Vla]. The probabilistic study of such equations was performed by H.P. Mc Kean [@McK]; see [@Sn] for an introduction to this research field. These equations were obtained as limits of some weakly interacting particle systems as the number of particles tends to infinity. This convergence property is known in the literature as the propagation of chaos. The MVSDE represents, in some sense, the average behavior of the infinite number of particles. One can refer to [@CarDel; @Gra; @JMW] for details on the existence and uniqueness of solutions for such SDEs, see also [@BuDjLiPe; @BuLiPe] for the case of Mc Kean-Vlasov backward stochastic differential equations (MVBSDEs). Existence and uniqueness with less regularity on the coefficients have been established in [@Ch; @ChFr; @Chia; @HSS; @MiVe; @Sheu]. Recently there has been a renewed interest in MVSDEs, in the context of mean-field games (MFG) theory, introduced independently by P.L. Lions and J.M. Lasry [@LasLio] and by Huang, Malhamé and Caines [@HMC] in 2006. MFG theory was introduced to solve the problem of existence of an approximate Nash equilibrium for differential games with a large number of players (see [@Ben]). 
Since the earlier papers, MFG theory and mean-field control theory have attracted a lot of interest, motivated by applications to various fields such as game theory, mathematical finance, communications networks and management of oil resources. One can refer to the most recent and updated reference on the subject [@CarDel] and the complete bibliographical list therein. Our main objective in this paper is to study properties of such equations: existence, uniqueness and stability. In particular, we prove an existence and uniqueness theorem for a class of MVSDEs under an Osgood type condition on the coefficients, improving the well-known globally Lipschitz case. It is well known that stability properties of deterministic or stochastic dynamical systems are crucial in the study of such systems: they mean that the trajectories do not change too much under small perturbations. We study stability with respect to initial conditions, coefficients and driving processes, the latter being continuous martingales and bounded variation processes. These properties will be investigated under a Lipschitz condition with respect to the state variable and the distribution, and generalize known properties of classical Itô SDEs, see [@BMO; @IW]. Furthermore, we prove that in the context of stochastic control of systems driven by MVSDEs, the relaxed and strict control problems have the same value function. As is well known, when the Filippov type convexity condition is not fulfilled, there is no way to prove the existence of an optimal strict control. The idea is then to embed the usual strict controls into the set of measure valued controls, called relaxed controls, which enjoys good compactness properties. So for the relaxed control problem to be a true extension of the initial problem, the value functions of both control problems must be the same. Under the Lipschitz condition we prove that the value functions are equal.
Note that this result extends to general McKean-Vlasov equations known results [@BMM1; @BMM2] established for a special class of MVSDEs, where the dependence of the coefficient on the distribution variable is made via a linear form of the distribution. Formulation of the problem and preliminary results ================================================== Assumptions ----------- Let $(\Omega,\mathcal{F},P)$ be a probability space, equipped with a filtration $\left( \mathcal{F}_{t}\right) $ satisfying the usual conditions, and $\left( B_{t}\right) $ a $d$-dimensional $\left( \mathcal{F}_{t},P\right) $-Brownian motion. Let us consider the following McKean-Vlasov stochastic differential equation, also called mean-field stochastic differential equation (MVSDE), $$\left\{ \begin{array} [c]{l}dX_{t}=b(t,X_{t},\mathbb{P}_{X_{t}})dt+\sigma(t,X_{t},\mathbb{P}_{X_{t}})dB_{t}\\ X_{0}=x \end{array} \right. \label{MVSDE}$$ Note that for this kind of SDE, the drift $b$ and diffusion coefficient $\sigma$ depend not only on the position, but also on the marginal distribution of the solution. The following assumptions will be considered throughout this paper. Let us denote by $\mathcal{P}_{2}(\mathbb{R}^{d})$ the space of probability measures with finite second order moment.
That is, for each $\mu\in \mathcal{P}_{2}(\mathbb{R}^{d})$, ${\displaystyle\int} \left\vert x\right\vert ^{2}\mu(dx)<+\infty.$ (**H**$_{\mathbf{1}}$**)** Assume that $\begin{array} [c]{c}b:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\longrightarrow\mathbb{R}^{d}\\ \sigma:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\longrightarrow\mathbb{R}^{d}\otimes\mathbb{R}^{d}\end{array} $ are Borel measurable functions and there exists $C>0$ such that for every $(t,x,\mu)\in\lbrack0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d}):$ $|b(t,x,\mu)|+|\sigma(t,x,\mu)|\leq C\left( 1+\left\vert x\right\vert \right) $ (**H**$_{\mathbf{2}}$**)** There exists $L>0$ such that for any $t\in\lbrack0,T],x,$ $x^{\prime}\in\mathbb{R}^{d}$ and $\mu,$ $\mu^{\prime}\in\mathcal{P}_{2}(\mathbb{R}^{d}),$ $|b(t,x,\mu)-b(t,x^{\prime},\mu^{\prime})|\leq L[|x-x^{\prime}|+W_{2}(\mu ,\mu^{\prime})]$ $|\sigma(t,x,\mu)-\sigma(t,x^{\prime},\mu^{\prime})|\leq L[|x-x^{\prime }|+W_{2}(\mu,\mu^{\prime})]$ where $W_{2}$ denotes the 2-Wasserstein metric. Wasserstein metric ------------------ Let $\mathcal{P}(\mathbb{R}^{d})$ be the space of probability measures on $\mathbb{R}^{d}$ and for any $p\geq1$, denote by $\mathcal{P}_{p}(\mathbb{R}^{d})$ the subspace of $\mathcal{P}(\mathbb{R}^{d})$ of probability measures with finite moment of order $p$. For $\mu,\nu\in\mathcal{P}_{p}(\mathbb{R}^{d}),$ define the $p$-Wasserstein distance $W_{p}(\mu,\nu)$ by: $$W_{p}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\left[ \int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left\vert x-y\right\vert ^{p}d\pi(x,y)\right] ^{1/p}$$ where $\Pi(\mu,\nu)$ denotes the set of probability measures on $\mathbb{R}^{d}\times\mathbb{R}^{d}$ whose first and second marginals are respectively $\mu$ and $\nu$.
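On the real line, and for empirical measures with the same number of atoms, the infimum over couplings is attained by the monotone pairing of order statistics, so $W_p$ can be computed by sorting. The sketch below (our own illustration, not part of the paper) also checks numerically the coupling inequality $W_p(\mu,\nu)^p\le\mathbb{E}[|X-Y|^p]$ used repeatedly in what follows:

```python
import numpy as np

def wasserstein_p(xs, ys, p=2):
    """W_p between two empirical measures on R with the same number of
    atoms: the optimal coupling on the line pairs order statistics."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))

# translating a sample by c moves it by exactly |c| in W_p
a = np.array([0.0, 1.0, 2.0])
assert abs(wasserstein_p(a, a + 3.0) - 3.0) < 1e-12

# coupling inequality: the cost of any coupling dominates the optimal one
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
y = x + rng.standard_normal(10_000)          # a (non-optimal) coupling
coupling_cost = float(np.mean((x - y) ** 2))  # E|X - Y|^2 for this coupling
assert wasserstein_p(x, y) ** 2 <= coupling_cost
```

In higher dimensions no sorting trick is available and one resorts to linear programming (optimal transport) solvers.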
In the case where $\mu=\mathbb{P}_{X}$ and $\nu=\mathbb{P}_{Y}$ are the laws of $\mathbb{R}^{d}$-valued random variables $X$ and $Y$ with finite moments of order $p$, then $$W_{p}(\mu,\nu)^{p}\leq\mathbb{E}[\left\vert X-Y\right\vert ^{p}].$$ Indeed $$\begin{aligned} W_{p}(\mu,\nu)^{p} & =\inf_{\pi\in\Pi(\mu,\nu)}\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left\vert x-y\right\vert ^{p}d\pi(x,y)\\ & \leq\int_{\mathbb{R}^{d}\times\mathbb{R}^{d}}\left\vert x-y\right\vert ^{p}d\mathbb{P}_{\left( X,Y\right) }(x,y)\\ & =\mathbb{E}[\left\vert X-Y\right\vert ^{p}]\end{aligned}$$ since $\mathbb{P}_{(X,Y)}$ is a particular coupling of $\mu$ and $\nu$. In the literature the term Wasserstein metric usually refers to $W_{2}$, while $W_{1}$ is often called the Kantorovich-Rubinstein distance because of the role it plays in optimal transport. Existence and uniqueness of solutions ===================================== The globally Lipschitz case --------------------------- The following theorem states that under a global Lipschitz condition, (\[MVSDE\]) admits a unique solution. Its complete proof is given in [@Sn] for a drift depending linearly on the law of $X_{t}$, that is $b(t,x,\mu)={\displaystyle\int\limits_{\mathbb{R}^{d}}} b^{\prime}(t,x,y)\mu(dy)$, and a constant diffusion. The general case as in (\[MVSDE\]) is treated in [@CarDel] Theorem 4.21 or [@JMW] Proposition 1.2 and is based on a fixed point theorem on the space of continuous functions with values in $\mathcal{P}_{2}(\mathbb{R}^{d})$. Note that in [@Gra; @JMW] the authors consider MVSDEs driven by a general Lévy process instead of a Brownian motion. Under assumptions $\mathbf{(H_{1})}$, $\mathbf{(H_{2})}$, (\[MVSDE\]) admits a unique solution such that $E[\sup_{t\leq T}|X_{t}|^{2}]<+\infty$. Let us give the outline of the proof.
Let $\mu\in\mathcal{C}(\left[ 0,T\right] ,\mathcal{P}_{2}(\mathbb{R}^{d}))$ be fixed; the classical Itô theorem gives the existence and uniqueness of a solution, denoted by $\left( X_{t}^{\mu}\right) $, satisfying $E[\sup_{t\leq T}|X_{t}^{\mu}|^{2}]<+\infty.$ Now let us consider the mapping $\Psi:\mathcal{C}(\left[ 0,T\right] ,\mathcal{P}_{2}(\mathbb{R}^{d}))\longrightarrow\mathcal{C}(\left[ 0,T\right] ,\mathcal{P}_{2}(\mathbb{R}^{d}))$, $\mu\longmapsto\Psi(\mu)=\left( \mathcal{L}(X_{t}^{\mu})\right) _{t\in\left[ 0,T\right] },$ the flow of marginal distributions of $X^{\mu}.$ $\Psi$ is well defined as $X_{t}^{\mu}$ has continuous paths and $E[\sup_{t\leq T}|X_{t}^{\mu}|^{2}]<+\infty.$ To prove the existence and uniqueness for (\[MVSDE\]), it is sufficient to prove that the mapping $\Psi$ has a unique fixed point. By using standard arguments from stochastic calculus and the above property of the Wasserstein metric, it is easy to show that: $\sup_{t\leq T}W_{2}(\left( \Psi^{k}(\mu)\right) _{t},\left( \Psi^{k}(\nu)\right) _{t})^{2}\leq C\dfrac{T^{k}}{k!}\sup_{t\leq T}W_{2}(\mu_{t},\nu_{t})^{2}$ For large $k,$ $\Psi^{k}$ is a strict contraction, which implies that $\Psi$ admits a unique fixed point in the complete metric space $\mathcal{C}(\left[ 0,T\right] ,\mathcal{P}_{2}(\mathbb{R}^{d})).$ The following version of MVSDEs is also considered in the control literature $$\left\{ \begin{array} [c]{l}dX_{t}=b(t,X_{t},{\displaystyle\int} \varphi(y)\mathbb{P}_{X_{t}}(dy))dt+\sigma(t,X_{t},{\displaystyle\int} \psi(y)\mathbb{P}_{X_{t}}(dy))dB_{t}\\ X_{0}=x \end{array} \right. \label{MVSDE1}$$ where $\mathbf{(H}_{\mathbf{3}}\mathbf{)}$ $b,$ $\sigma,\varphi$ and $\psi$ are Borel measurable bounded functions such that $b(t,.,.),$ $\sigma(t,.,.),$ $\varphi$ and $\psi$ are globally Lipschitz functions on $\mathbb{R}^{d}\times\mathbb{R}^{d}$. Under assumptions $\mathbf{(H}_{\mathbf{1}}\mathbf{)}$ and $\left( \mathbf{H}_{\mathbf{3}}\right) $ the MVSDE (\[MVSDE1\]) has a unique strong solution.
Moreover for each $p>0$ we have $E(\left\vert X_{t}\right\vert ^{p})<+\infty.$ Let us define $\overline{b}(t,x,\mu)$ and $\overline{\sigma}(t,x,\mu)$ on $\left[ 0,T\right] \times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})$ by $\overline{b}(t,x,\mu)=b(t,x,{\displaystyle\int} \varphi(y)d\mu(y))$, $\overline{\sigma}(t,x,\mu)=\sigma(t,x,{\displaystyle\int} \psi(y)d\mu(y)).$ According to the last theorem it is sufficient to check that $\overline{b}$ and $\overline{\sigma}$ are Lipschitz in $\left( x,\mu\right) $. Indeed since the coefficients $b$ and $\sigma$ are Lipschitz continuous in $x,$ then $\overline{b}$ and $\overline{\sigma}$ are also Lipschitz in $x.$ Moreover one can verify easily that $\overline{b}$ and $\overline{\sigma}$ are also Lipschitz continuous in $\mu,$ with respect to the Wasserstein metric. Recall the Kantorovich-Rubinstein duality $\begin{array} [c]{l}W_{1}(\mu,\nu)=\inf\left\{ E^{Q}\left\vert X-Y\right\vert ;Q\in\mathcal{P}_{1}(\mathbb{R}^{d}\times\mathbb{R}^{d}),\text{ with marginals }\mu,\nu\right\} \\ =\sup\left\{ {\displaystyle\int} hd\left( \mu-\nu\right) ;\text{ }\left\vert h(x)-h(y)\right\vert \leq\left\vert x-y\right\vert \right\} , \end{array} $ together with the comparison $W_{1}(\mu,\nu)\leq W_{2}(\mu,\nu)$. Note that the second equality is given by the Kantorovich-Rubinstein theorem [@CarDel].
Since the mappings $b$ and $\varphi$ in the MVSDE are Lipschitz continuous in $y$, we have $\begin{array} [c]{l}\left\vert b(t,x,{\displaystyle\int} \varphi(y)d\mu(y))-b(t,x,{\displaystyle\int} \varphi(y)d\nu(y))\right\vert \\ \leq K\left\vert {\displaystyle\int} \varphi(y)d(\mu(y)-\nu(y))\right\vert \\ \leq K^{\prime}W_{1}\left( \mu,\nu\right) \leq K^{\prime}W_{2}\left( \mu,\nu\right) \end{array} $ by the duality formula, since $\varphi$ is Lipschitz. Therefore $\overline{b}(t,.,.)$ is Lipschitz continuous in the variable $(x,\mu)\in\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})$, uniformly in $t\in\left[ 0,T\right] .$ Similar arguments can be used for $\sigma.$ The uniqueness under Osgood type condition ------------------------------------------ In this section we relax the global Lipschitz condition in the state variable. We will prove the existence and uniqueness of a solution when the coefficients are globally Lipschitz in the distribution variable and satisfy an Osgood type condition in the state variable. To be more precise let us consider the following MVSDE $$\left\{ \begin{array} [c]{c}dX_{t}=b(t,X_{t},\mathbb{P}_{X_{t}})dt+\sigma(t,X_{t})dB_{t}\\ X_{0}=x \end{array} \right.
\label{MVSDE2}$$ Assume that $b$ and $\sigma$ are real valued bounded Borel measurable functions satisfying: $\mathbf{(H}_{\mathbf{4}}\mathbf{)}$ There exists $C>0$ such that for every $x\in\mathbb{R}$ and $\left( \mu,\nu\right) \in\mathcal{P}_{1}(\mathbb{R})\times\mathcal{P}_{1}(\mathbb{R}):$ $|b(t,x,\mu)-b(t,x,\nu)|\leq CW_{1}(\mu,\nu)$ $\mathbf{(H}_{\mathbf{5}}\mathbf{)}$ There exists a strictly increasing function $\rho(u)$ on $[0,+\infty)$ such that $\rho(0)=0$ and $\rho^{2}$ is convex, satisfying ${\displaystyle\int\limits_{0^{+}}} \rho^{-2}(u)du=+\infty,$ and such that for every $(x,y)\in\mathbb{R}\times\mathbb{R}$, $|\sigma (t,x)-\sigma(t,y)|\leq\rho(|x-y|).$ $\mathbf{(H}_{\mathbf{6}}\mathbf{)}$ There exists a strictly increasing concave function $\kappa(u)$ on $[0,+\infty)$ such that $\kappa(0)=0$, satisfying ${\displaystyle\int\limits_{0^{+}}} \kappa^{-1}(u)du=+\infty,$ and such that for every $(x,y)\in\mathbb{R}\times\mathbb{R}$ and $\mu\in\mathcal{P}_{1}(\mathbb{R}),$ $|b(t,x,\mu )-b(t,y,\mu)|\leq\kappa(|x-y|).$ In the next theorem we derive the pathwise uniqueness for (\[MVSDE2\]) under an Osgood type condition in the state variable. This result improves [@IW], Theorem 3.2, established for classical Itô SDEs, and [@CarDel], Theorem 4.21, at least for MVSDEs with a diffusion coefficient not depending on the distribution variable. Under assumptions $\mathbf{(H_{4})}-\mathbf{(H_{6})}$, the MVSDE (\[MVSDE2\]) enjoys the property of pathwise uniqueness. The following proof is inspired by [@CarDel], Theorem 4.21.
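Concrete examples (ours, not from the paper) satisfying these assumptions are $\rho(u)=\sqrt{u}$, the classical Yamada-Watanabe modulus (so $\rho^{2}(u)=u$ is convex and $\int_{0^{+}}du/u=+\infty$), and, near the origin, the non-Lipschitz modulus $\kappa(u)=u\log(1/u)$. The divergence of the two Osgood integrals can be checked numerically:

```python
import math

def integral_of_reciprocal(f, eps, upper=0.5, n=100_000):
    """Midpoint rule for int_eps^upper du / f(u)."""
    h = (upper - eps) / n
    return sum(h / f(eps + (k + 0.5) * h) for k in range(n))

rho_sq = lambda u: u                      # rho(u) = sqrt(u), rho^2 convex
kappa = lambda u: u * math.log(1.0 / u)   # concave near 0, kappa(0+) = 0

# the integrals grow without bound as eps -> 0, as (H5), (H6) require
i_coarse = integral_of_reciprocal(rho_sq, 1e-2)
i_fine = integral_of_reciprocal(rho_sq, 1e-6)
k_coarse = integral_of_reciprocal(kappa, 1e-2)
k_fine = integral_of_reciprocal(kappa, 1e-6)
assert i_fine > i_coarse + 5     # grows like log(1/eps)
assert k_fine > k_coarse + 0.5   # grows like log log(1/eps)
```

For $\rho(u)=\sqrt{u}$ the sequence $(a_{n})$ constructed in the proof below has the closed form $a_{n}=e^{-n(n+1)/2}$, since $\log(a_{n-1}/a_{n})=n$.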
Since ${\displaystyle\int\limits_{0^{+}}} \rho^{-2}(u)du=+\infty$, there exists a decreasing sequence $(a_{n})$ of positive real numbers with $1>a_{1}$ satisfying ${\displaystyle\int\limits_{a_{1}}^{1}} \rho^{-2}(u)du=1$, ${\displaystyle\int\limits_{a_{2}}^{a_{1}}} \rho^{-2}(u)du=2,\ldots,{\displaystyle\int\limits_{a_{n}}^{a_{n-1}}} \rho^{-2}(u)du=n,\ldots$ Clearly $\left( a_{n}\right) $ converges to $0$ as $n$ tends to $+\infty$. The properties of $\rho$ allow us to construct a sequence of functions $\psi_{n}(u),$ $n=1,2,\ldots$, such that i\) $\psi_{n}(u)$ is a continuous function whose support is contained in $\left( a_{n},a_{n-1}\right) $ ii\) $0\leq\psi_{n}(u)\leq\dfrac{2}{n}\rho^{-2}(u)$ and ${\displaystyle\int\limits_{a_{n}}^{a_{n-1}}} \psi_{n}(u)du=1.$ Let $\varphi_{n}(x)={\displaystyle\int\limits_{0}^{\left\vert x\right\vert }} dy{\displaystyle\int\limits_{0}^{y}} \psi_{n}(u)du,$ $x\in\mathbb{R}.$ It is clear that $\varphi_{n}\in\mathcal{C}^{2}(\mathbb{R})$, $\left\vert \varphi_{n}^{\prime}\right\vert \leq1$ and $\left( \varphi _{n}\right) $ is an increasing sequence converging pointwise to $\left\vert x\right\vert .$ Let $X_{t}^{1}$ and $X_{t}^{2}$ be two solutions of (\[MVSDE2\]) corresponding to the same Brownian motion, so that $X_{t}^{1}-X_{t}^{2}={\displaystyle\int\limits_{0}^{t}} \left( \sigma(s,X_{s}^{1}) -\sigma(s,X_{s}^{2})\right) dB_{s}+{\displaystyle\int\limits_{0}^{t}} \left( b(s,X_{s}^{1},\mathbb{P}_{X_{s}^{1}}) -b(s,X_{s}^{2},\mathbb{P}_{X_{s}^{2}})\right) ds$ By using Itô's formula we obtain $\begin{array} [c]{cl}\varphi_{n}(X_{t}^{1}-X_{t}^{2})= & {\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime}(X_{s}^{1}-X_{s}^{2})\left( \sigma(s,X_{s}^{1}) -\sigma(s,X_{s}^{2})\right) dB_{s}\\ & +{\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime}(X_{s}^{1}-X_{s}^{2})\left( b(s,X_{s}^{1},\mathbb{P}_{X_{s}^{1}})-b(s,X_{s}^{2},\mathbb{P}_{X_{s}^{2}})\right) ds\\ & +\dfrac{1}{2}{\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime\prime}(X_{s}^{1}-X_{s}^{2})\left( \sigma(s,X_{s}^{1})-\sigma(s,X_{s}^{2})\right) ^{2}ds \end{array} $ Since $\varphi_{n}^{\prime}$ and $\sigma$ are bounded, the integrand of the stochastic integral is square integrable, hence the first term is a true martingale and its expectation is $0.$ Therefore $\begin{array} [c]{cl}E\left( \varphi_{n}(X_{t}^{1}-X_{t}^{2})\right) = & E\left[ {\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime}(X_{s}^{1}-X_{s}^{2})\left( b(s,X_{s}^{1},\mathbb{P}_{X_{s}^{1}})-b(s,X_{s}^{2},\mathbb{P}_{X_{s}^{2}})\right) ds\right] \\ & +\dfrac{1}{2}E\left[ {\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime\prime}(X_{s}^{1}-X_{s}^{2})\left( \sigma(s,X_{s}^{1})-\sigma(s,X_{s}^{2})\right) ^{2}ds\right] \\ & =I_{1}+I_{2}\end{array} $ But we know that $W_{1}(\mathbb{P}_{X_{s}^{1}},\mathbb{P}_{X_{s}^{2}})\leq E\left( \left\vert X_{s}^{1}\mathbb{-}X_{s}^{2}\right\vert \right) $ Then $\left\vert I_{1}\right\vert \leq E{\displaystyle\int\limits_{0}^{t}} \kappa(\left\vert X_{s}^{1}\mathbb{-}X_{s}^{2}\right\vert )ds+{\displaystyle\int\limits_{0}^{t}} CE\left( \left\vert X_{s}^{1}\mathbb{-}X_{s}^{2}\right\vert \right) ds$ Then by the Gronwall lemma, there exists a constant $M$ such that $\left\vert I_{1}\right\vert \leq M\,E{\displaystyle\int\limits_{0}^{t}} \kappa(\left\vert X_{s}^{1}\mathbb{-}X_{s}^{2}\right\vert )ds$ On the other hand $\begin{array} [c]{cc}\left\vert I_{2}\right\vert = & \dfrac{1}{2}E\left[ {\displaystyle\int\limits_{0}^{t}} \varphi_{n}^{\prime\prime}(X_{s}^{1}-X_{s}^{2})\left( \sigma(s,X_{s}^{1})-\sigma(s,X_{s}^{2})\right) ^{2}ds\right] \\ & \leq\dfrac{1}{2}E\left[ {\displaystyle\int\limits_{0}^{t}} \dfrac{2}{n}\rho^{-2}(X_{s}^{1}-X_{s}^{2})\rho^{2}(X_{s}^{1}-X_{s}^{2})ds\right] =\dfrac{t}{n}\end{array} $ Hence $\left\vert I_{2}\right\vert $ tends to $0$ as $n$ tends to $+\infty.$ Letting $n$ tend to $+\infty$, it holds that: $E\left( \left\vert X_{t}^{1}-X_{t}^{2}\right\vert \right) \leq
M\,E{\displaystyle\int\limits_{0}^{t}} \kappa(\left\vert X_{s}^{1}\mathbb{-}X_{s}^{2}\right\vert )ds$. Since ${\displaystyle\int\limits_{0^{+}}} \kappa^{-1}(u)du=+\infty$, we conclude that $E\left( \left\vert X_{t}^{1}-X_{t}^{2}\right\vert \right) =0.$ **Remark.** *The continuity and boundedness of the coefficients imply the existence of a weak solution (see [@JMW] Proposition 1.10). Then by the well-known Yamada-Watanabe theorem applied to equation (\[MVSDE2\]) (see [@Kur] Example 2.14, page 10), the pathwise uniqueness proved in the last theorem implies the existence and uniqueness of a strong solution.* Convergence of the Picard successive approximations ======================================================= Assume that $b(t,x,\mu)$ and $\sigma(t,x,\mu)$ satisfy assumptions $\mathbf{(H_{1})}$, $\mathbf{(H_{2}).}$ We will prove the convergence of the Picard iteration scheme. This scheme is useful for numerical computations of the unique solution of (\[MVSDE\]). Let $X_{t}^{0}=x$ for all $t\in\left[ 0,T\right] $ and define $\left( X_{t}^{n+1}\right) $ as the solution of the following SDE $\left\{ \begin{array} [c]{c}dX_{t}^{n+1}=b(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dt+\sigma(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dB_{t}\\ X_{0}^{n+1}=x \end{array} \right.
$ Under assumptions $\mathbf{(H_{1})}$, $\mathbf{(H_{2})}$, the sequence $\left( X^{n}\right) $ converges to the unique solution of (\[MVSDE\]): $$E[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}]\rightarrow0$$ Let $n\geq1$; by applying usual arguments such as the Cauchy-Schwarz inequality and the Burkholder-Davis-Gundy inequality for the martingale part, we get $$\begin{aligned} |X_{t}^{n+1}-X_{t}^{n}|^{2} & \leq2(\int_{0}^{t}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s}^{n-1},\mathbb{P}_{X_{s}^{n-1}})|ds)^{2}\\ & +2|\int_{0}^{t}(\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s}^{n-1},\mathbb{P}_{X_{s}^{n-1}}))dB_{s}|^{2}\\ E[\sup_{t\leq T}|X_{t}^{n+1}-X_{t}^{n}|^{2}] & \leq2TE[\int_{0}^{T}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s}^{n-1},\mathbb{P}_{X_{s}^{n-1}})|^{2}ds]\\ & +2C_{2}E[\int_{0}^{T}|\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s}^{n-1},\mathbb{P}_{X_{s}^{n-1}})|^{2}ds]\end{aligned}$$ the coefficients $b$ and $\sigma$ being Lipschitz continuous in $\left( x,\mu\right) $, we get $$\begin{aligned} E[\sup_{t\leq T}|X_{t}^{n+1}-X_{t}^{n}|^{2}] & \leq2(T+C_{2})L^{2}\int _{0}^{T}\left( E[|X_{s}^{n}-X_{s}^{n-1}|^{2}]+W_{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}^{n-1}})^{2}\right) ds\\ & \leq4(T+C_{2})L^{2}\int_{0}^{T}E[|X_{s}^{n}-X_{s}^{n-1}|^{2}]ds\\ & \leq4(T+C_{2})L^{2}\int_{0}^{T}E[\sup_{u\leq s}|X_{u}^{n}-X_{u}^{n-1}|^{2}]ds\end{aligned}$$ Moreover, for $n=0$ and $t\leq T$ $$\begin{aligned} E[\sup_{t\leq T}|X_{t}^{1}-X_{t}^{0}|^{2}] & \leq2T\int_{0}^{T}|b(s,x,\mathbb{P}_{X_{s}^{0}})|^{2}ds+2C_{2}\int_{0}^{T}|\sigma(s,x,\mathbb{P}_{X_{s}^{0}})|^{2}ds\\ & \leq2(C_{2}+T)M(1+E(|x|^{2}))T\\ & \leq A_{1}T\end{aligned}$$ where the constant $A_{1}$ only depends on $C_{2},M,T$ and $E[|x|^{2}]$. So by induction on $n$ we obtain $$E[\sup_{t\leq T}|X_{t}^{n+1}-X_{t}^{n}|^{2}]\leq\frac{A_{2}^{n+1}T^{n+1}}{(n+1)!}$$ This implies in particular that $\left( X_{t}^{n}\right) $ is a Cauchy sequence in the complete space $L^{2}(\Omega,\mathcal{C}(\left[ 0,T\right] ,\mathbb{R}^{d}))$.
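The factorial decay of the successive differences can be observed numerically. The sketch below (a toy model of ours with the illustrative coefficients $b(x,\mu)=-(x-\int y\,\mu(dy))$ and $\sigma=1$) reuses the same Brownian increments at every Picard iteration and approximates $\mathbb{P}_{X_{t}^{n}}$ by the empirical law of many simulated paths:

```python
import numpy as np

def picard_iterates(n_iter=6, n_paths=2000, T=1.0, dt=0.01, seed=3):
    """Picard scheme X^{n+1} = x + int b(X^n, law(X^n)) dt + int dB,
    with the same Brownian increments reused at every iteration.
    Toy coefficients (our assumption): b(x, mu) = -(x - <mu, id>), sigma = 1.
    Returns the averaged pathwise sup of |X^{n+1} - X^n| per iteration."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    dB = np.sqrt(dt) * rng.standard_normal((steps, n_paths))
    x_prev = np.ones((steps + 1, n_paths))          # X^0_t = x = 1
    sup_diffs = []
    for _ in range(n_iter):
        x_next = np.empty_like(x_prev)
        x_next[0] = 1.0
        for k in range(steps):
            drift = -(x_prev[k] - x_prev[k].mean())  # b evaluated at X^n
            x_next[k + 1] = x_next[k] + drift * dt + dB[k]
        sup_diffs.append(float(np.abs(x_next - x_prev).max(axis=0).mean()))
        x_prev = x_next
    return sup_diffs

d = picard_iterates()   # successive differences shrink rapidly
```

Because the Brownian part cancels in $X^{n+1}-X^{n}$, only the Lipschitz drift contributes, which is exactly the mechanism behind the $T^{n}/n!$ bound.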
Therefore $\left( X_{t}^{n}\right) $ converges to a limit $\left( X_{t}\right) $ which is the unique solution of (\[MVSDE\]). Stability with respect to the initial condition =============================================== In this section, we will study the stability of MVSDEs with respect to small perturbations of the initial condition. We denote by $\left( X_{t}^{x}\right) $ the unique solution of (\[MVSDE\]) such that $X_{0}^{x}=x$ $$\left\{ \begin{array} [c]{c}dX_{t}^{x}=b(t,X_{t}^{x},\mathbb{P}_{X_{t}^{x}})dt+\sigma(t,X_{t}^{x},\mathbb{P}_{X_{t}^{x}})dB_{t}\\ X_{0}^{x}=x \end{array} \right.$$ Assume that $b(t,x,\mu)$ and $\sigma(t,x,\mu)$ satisfy $\mathbf{(H_{1})}$, $\mathbf{(H_{2})}$; then the mapping $\Phi:\mathbb{R}^{d}\longrightarrow L^{2}(\Omega,\mathcal{C}(\left[ 0,T\right] ,\mathbb{R}^{d}))$ defined by $\left( \Phi(x)_{t}\right) =\left( X_{t}^{x}\right) $ is continuous. Let $\left( x_{n}\right) $ be a sequence in $\mathbb{R}^{d}$ converging to $x.$ Let us prove that $\underset{n\longrightarrow+\infty}{\lim}E\left[ \sup\limits_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] =0,$ where $X_{t}^{n}=X_{t}^{x_{n}}.$ We have $$\begin{aligned} |X_{t}^{n}-X_{t}|^{2} & =|x_{n}-x+\int_{0}^{t}(b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s},\mathbb{P}_{X_{s}}))ds\\ & +\int_{0}^{t}(\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}}))dB_{s}|^{2}\\ & \leq3|x_{n}-x|^{2}+3(\int_{0}^{t}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s},\mathbb{P}_{X_{s}})|ds)^{2}\\ & +3|\int_{0}^{t}(\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma (s,X_{s},\mathbb{P}_{X_{s}}))dB_{s}|^{2}\end{aligned}$$ $$\begin{aligned} E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] & \leq3|x_{n}-x|^{2}+3E[\sup_{t\leq T}(\int_{0}^{t}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s},\mathbb{P}_{X_{s}})|ds)^{2}]\\ & +3E[\sup_{t\leq T}|\int_{0}^{t}(\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}}))dB_{s}|^{2}]\end{aligned}$$ We
apply the Cauchy-Schwarz and Burkholder-Davis-Gundy inequalities to obtain $$\begin{aligned} E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] & \leq3|x_{n}-x|^{2}+3TE[\int_{0}^{T}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s},\mathbb{P}_{X_{s}})|^{2}ds]\\ & +3C_{2}E[\int_{0}^{T}|\sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}})|^{2}ds]\end{aligned}$$ The Lipschitz condition implies that $$E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] \leq3|x_{n}-x|^{2}+3(T+C_{2})L^{2}\int_{0}^{T}[E|X_{s}^{n}-X_{s}|^{2}+W_{2}^{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})]ds$$ Since$$W_{2}^{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})\leq E[|X_{s}^{n}-X_{s}|^{2}],$$ then $$\begin{aligned} E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] & \leq3|x_{n}-x|^{2}+6(T+C_{2})L^{2}\int_{0}^{T}E|X_{s}^{n}-X_{s}|^{2}ds\\ & \leq3|x_{n}-x|^{2}+6(T+C_{2})L^{2}\int_{0}^{T}E[\sup_{u\leq s}|X_{u}^{n}-X_{u}|^{2}]ds.\end{aligned}$$ Finally we apply the Gronwall lemma to conclude that $$E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] \leq3|x_{n}-x|^{2}\exp[6(T+C_{2})L^{2}T]$$ Therefore $\lim_{n\rightarrow\infty}x_{n}=x$ implies that $\lim_{n\rightarrow \infty}E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] =0.$ Stability with respect to the coefficients ============================================== In this section, we will establish the stability of the MVSDE with respect to small perturbations of the coefficients $b$ and $\sigma.$ Let us consider sequences of functions $\left( b_{n}\right) $ and $\left( \sigma _{n}\right) $ and the corresponding MVSDEs:$$\begin{aligned} dX_{t}^{n} & =b_{n}(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dt+\sigma_{n}(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dB_{t}\label{MVSDEn}\\ X_{0}^{n} & =x\nonumber\end{aligned}$$ The following theorem gives the continuous dependence of the solution with respect to the coefficients.
Assume that the functions $b(t,x,\mu),$ $b_{n}(t,x,\mu),$ $\sigma(t,x,\mu)$ and $\sigma_{n}(t,x,\mu)$ satisfy $\mathbf{(H_{1})}$, $\mathbf{(H_{2}).}$ Further suppose that for each $T>0$ and each compact set $K$ there exists $C>0$ such that $i)\ \sup_{t\leq T}(|b_{n}(t,x,\mu)|+|\sigma_{n}(t,x,\mu)|)\leq C(1+|x|),$ $ii)\ \lim_{n\rightarrow\infty}\sup_{t\leq T}\sup_{x\in K}\sup_{\mu \in\mathcal{P}_{2}(\mathbb{R}^{d})}\left( ||b_{n}(t,x,\mu)-b(t,x,\mu)||+||\sigma _{n}(t,x,\mu)-\sigma(t,x,\mu)||\right) =0.$ Then $$\lim_{n\rightarrow\infty}E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] =0$$ where $\left( X_{t}^{n}\right) $ and $\left( X_{t}\right) $ are respectively the solutions of (\[MVSDEn\]) and (\[MVSDE\]). For each $n\in\mathbb{N}$, let $\left( X_{t}^{n}\right) $ be the solution of (\[MVSDEn\]); then $$\begin{aligned} |X_{t}^{n}-X_{t}|^{2} & \leq3(\int_{0}^{t}|b_{n}(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b_{n}(s,X_{s},\mathbb{P}_{X_{s}})|ds)^{2}\\ & +3(\int_{0}^{t}|b_{n}(s,X_{s},\mathbb{P}_{X_{s}})-b(s,X_{s},\mathbb{P}_{X_{s}})|ds)^{2}\\ & +3\left\vert \int_{0}^{t}\left( \sigma_{n}(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma_{n}(s,X_{s},\mathbb{P}_{X_{s}})\right) dB_{s}\right\vert ^{2}\\ & +3\left\vert \int_{0}^{t}\left( \sigma_{n}(s,X_{s},\mathbb{P}_{X_{s}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}})\right) dB_{s}\right\vert ^{2}\end{aligned}$$ By using the Lipschitz continuity and the Burkholder-Davis-Gundy inequality, it holds that $$\begin{aligned} E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] & \leq3(T+C_{2})L^{2}\int_{0}^{T}\left( E[|X_{s}^{n}-X_{s}|^{2}]+W_{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})^{2}\right) ds\\ & +3(T+C_{2})E[\int_{0}^{T}|b_{n}(s,X_{s},\mathbb{P}_{X_{s}})-b(s,X_{s},\mathbb{P}_{X_{s}})|^{2}ds]\\ & +3(T+C_{2})E[\int_{0}^{T}|\sigma_{n}(s,X_{s},\mathbb{P}_{X_{s}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}})|^{2}ds]\\ & \leq6(T+C_{2})L^{2}\int_{0}^{T}E[|X_{s}^{n}-X_{s}|^{2}]ds+K_{n}\\ & \leq6(T+C_{2})L^{2}\int_{0}^{T}E\left[ \sup_{u\leq s}|X_{u}^{n}-X_{u}|^{2}\right] ds+K_{n}\end{aligned}$$ where
$$K_{n}=3(T+C_{2})E[\int_{0}^{T}\left( |b_{n}(s,X_{s},\mathbb{P}_{X_{s}})-b(s,X_{s},\mathbb{P}_{X_{s}})|^{2}+|\sigma_{n}(s,X_{s},\mathbb{P}_{X_{s}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}})|^{2}\right) ds]$$ An application of the Gronwall lemma allows us to get $$E\left[ \sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}\right] \leq K_{n}\exp\left( 6(T+C_{2})L^{2}T\right)$$ By using assumptions i) and ii) it is easy to see that $K_{n}\longrightarrow0$ as $n\longrightarrow+\infty,$ which completes the proof. Stability with respect to the driving processes =================================================== In this section, we consider McKean-Vlasov SDEs driven by continuous semi-martingales. Let $b:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d}$ and $\sigma:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\rightarrow\mathbb{R}^{d\times d}$ be bounded continuous functions. We consider MVSDEs driven by continuous semi-martingales of the following form $$\left\{ \begin{array} [c]{c}dX_{t}=b(t,X_{t},\mathbb{P}_{X_{t}})dA_{t}+\sigma(t,X_{t},\mathbb{P}_{X_{t}})dM_{t}\\ X_{0}=x \end{array} \right. \label{MVSDE3}$$ where $A_{t}$ is an adapted continuous process of bounded variation and $M_{t}$ is a continuous local martingale. Let us consider the following sequence of MVSDEs $$\left\{ \begin{array} [c]{c}dX_{t}^{n}=b(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dA_{t}^{n}+\sigma(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}})dM_{t}^{n}\\ X_{0}^{n}=x \end{array} \right. \label{MVSDE3n}$$ where $(A^{n})$ is a sequence of $\mathcal{F}_{t}$-adapted continuous processes of bounded variation and the $M^{n}$ are continuous $(\mathcal{F}_{t},\mathbb{P)}$-local martingales.
Let us assume that $(A,A^{n},M,M^{n})$ satisfy: (**H**$_{\mathbf{7}}$) 1\) The family $(A,A^{n},M,M^{n})$ is bounded in $\mathbb{C} ([0,T])^{4}.$ 2\) $\left( M^{n}-M\right) $ converges to $0$ in probability in $\mathbb{C} ([0,T])$ as $n$ tends to $+\infty.$ 3\) The total variation of $(A^{n}-A)$ converges to $0$ in probability as $n$ tends to $+\infty.$ Let $b(t,x,\mu)$ and $\sigma(t,x,\mu)$ satisfy $\mathbf{(H_{1})}$, $\mathbf{(H_{2})}$. Further assume that $(A,A^{n},M,M^{n})$ satisfy ($\mathbf{H}_{\mathbf{7}}$). Then $$\lim_{n\rightarrow\infty}E[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}]=0$$ where $\left( X_{t}^{n}\right) $ and $\left( X_{t}\right) $ are respectively the solutions of (\[MVSDE3n\]) and (\[MVSDE3\]). Let $n\in\mathbb{N}$; then by using similar arguments as in the preceding theorems, we have $$\begin{aligned} \mathbb{E}[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}] & \leq3E[(\sup_{t\leq T}\int_{0}^{t}|b(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-b(s,X_{s},\mathbb{P}_{X_{s}})|dA_{s}^{n})^{2}]\\ & +3E\left[ \sup_{t\leq T}\left\vert \int_{0}^{t}\left( \sigma(s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}})-\sigma(s,X_{s},\mathbb{P}_{X_{s}})\right) dM_{s}^{n}\right\vert ^{2}\right] \\ & +3E[(\sup_{t\leq T}\int_{0}^{t}\left\vert b(s,X_{s},\mathbb{P}_{X_{s}})\right\vert d\left\vert A_{s}^{n}-A_{s}\right\vert )^{2}+\sup_{t\leq T}\left\vert \int_{0}^{t}\sigma(s,X_{s},\mathbb{P}_{X_{s}})d(M_{s}^{n}-M_{s})\right\vert ^{2}]\end{aligned}$$ Let$$K_{n}=3E[(\sup_{t\leq T}\int_{0}^{t}\left\vert b(s,X_{s},\mathbb{P}_{X_{s}})\right\vert d\left\vert A_{s}^{n}-A_{s}\right\vert )^{2}+\sup_{t\leq T}\left\vert \int_{0}^{t}\sigma(s,X_{s},\mathbb{P}_{X_{s}})d(M_{s}^{n}-M_{s})\right\vert ^{2}]$$ By using the Cauchy-Schwarz and Burkholder-Davis-Gundy inequalities along with the Lipschitz condition, we obtain $$\begin{aligned} \mathbb{E}[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}] & \leq C(T)[\int_{0}^{T}(E\left( \sup_{u\leq s}|X_{u}^{n}-X_{u}|^{2}\right)
+W_{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})^{2})(dA_{s}^{n}+d<M^{n},M^{n}>_{s})]+K_{n}\\ & \leq2C(T)\int_{0}^{T}E[\sup_{u\leq s}|X_{u}^{n}-X_{u}|^{2}](dA_{s}^{n}+d<M^{n},M^{n}>_{s})+K_{n}\end{aligned}$$ where $C(T)$ is a positive constant which may change from line to line. Since $(A_{s}^{n}+<M^{n},M^{n}>_{s})$ is an increasing process, then according to the stochastic Gronwall lemma [@Met], Lemma 29.1, page 202, we have $$\mathbb{E}[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}]\leq2K_{n}CE(A_{T}^{n}+<M^{n},M^{n}>_{T})<+\infty,$$ where $C$ is a constant. By using assumption $\left( \mathbf{H}_{\mathbf{7}}\right) $ it is easy to see that $$\lim_{n\rightarrow\infty}K_{n}=0$$ Therefore $$\lim_{n\rightarrow\infty}\mathbb{E}[\sup_{t\leq T}|X_{t}^{n}-X_{t}|^{2}]=0$$ Approximation of relaxed control problems ========================================= It is well known that in deterministic as well as in stochastic control problems, an optimal control does not necessarily exist in the space of strict controls, in the absence of convexity conditions. The classical method is then to introduce measure valued controls, which describe the introduction of a stochastic parameter, see [@EKNJ] and the references therein. These measure valued controls, called relaxed controls, generalize the strict controls in the sense that the set of strict controls may be identified as a dense subset of the set of relaxed controls. The relaxed control problem is a true extension of the strict control problem if they have the same value function. That is, the infimum among strict controls is equal to the infimum among relaxed controls. This last property is based on the continuity of the dynamics and the cost functional with respect to the control variable. We show that, under a Lipschitz condition and continuity of the coefficients with respect to the control variable, the strict and relaxed control problems have the same value function.
Our result extends those in [@BMM1; @BMM2] to general MVSDEs of the type (\[CMFSDE\]). Let $\mathbb{A}$ be some compact metric space called the action space. A strict control $\left( u_{t}\right) $ is a measurable, $\mathcal{F}_{t}$-adapted process with values in the action space $\mathbb{A}$. We denote by $\mathcal{U}_{ad}$ the space of strict controls. The state process corresponding to a strict control is the unique solution of the following MVSDE $$\left\{ \begin{array} [c]{l}dX_{t}=b(t,X_{t},\mathbb{P}_{X_{t}},u_{t})dt+\sigma(t,X_{t},\mathbb{P}_{X_{t}},u_{t})dB_{t}\\ X_{0}=x \end{array} \right. \label{CMFSDE}$$ and the corresponding cost functional is given by $J(u)=E\left[ \int_{0}^{T}h(t,X_{t},\mathbb{P}_{X_{t}},u_{t})dt+g(X_{T},\mathbb{P}_{X_{T}})\right] .$ The problem is to minimize $J(u)$ over the space $\mathcal{U}_{ad}$ of strict controls, that is to find $u^{\ast}\in$ $\mathcal{U}_{ad}$ such that $J(u^{\ast})=\inf\left\{ J(u),u\in\mathcal{U}_{ad}\right\} .$ Let us consider the following assumptions in this section.
$\mathbf{(H}_{\mathbf{4}}\mathbf{)}$ $b:\left[ 0,T\right] \times \mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{A}\longrightarrow\mathbb{R}^{d}$, $\sigma:\left[ 0,T\right] \times \mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{A}\longrightarrow\mathbb{R}^{d}\otimes\mathbb{R}^{d}$ are continuous and bounded functions. $\mathbf{(H}_{\mathbf{5}}\mathbf{)}$ $b(t,.,.,a)$ and $\sigma(t,.,.,a)$ are Lipschitz continuous, uniformly in $(t,a)\in\left[ 0,T\right] \times \mathbb{A}.$ $\mathbf{(H}_{\mathbf{6}}\mathbf{)}$ $h:\left[ 0,T\right] \times \mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\times\mathbb{A}\longrightarrow\mathbb{R}$ and $g:\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})\longrightarrow\mathbb{R}$ are bounded continuous functions, such that $h(t,.,.,a)$ is Lipschitz in $(x,\mu).$ It is clear that under assumptions $\mathbf{(H}_{\mathbf{4}}\mathbf{)}$ and $\mathbf{(H}_{\mathbf{5}}\mathbf{)}$, and according to Theorem 3.1, for each $u\in\mathcal{U}_{ad}$ the MFSDE (\[CMFSDE\]) has a unique strong solution, such that for every $p>0$, $E(\left\vert X_{t}\right\vert ^{p})<+\infty.$ Moreover, for each $u\in\mathcal{U}_{ad}$, $\left\vert J(u)\right\vert <+\infty$. Let $\mathbb{V}$ be the set of product measures $\mu$ on $[0,T]\times \mathbb{A}$ whose projection on $\left[ 0,T\right] $ coincides with the Lebesgue measure $dt$. $\mathbb{V}$, as a closed subspace of the space of positive Radon measures $\mathbb{M}_{+}([0,T]\times\mathbb{A})$, is compact for the topology of weak convergence. A relaxed control on the filtered probability space $\left( \Omega ,\mathcal{F},\mathcal{F}_{t},P\right) $ is a random variable $\mu=dt\,\mu _{t}(da)$ with values in $\mathbb{V}$, such that $\mu_{t}(da)$ is progressively measurable with respect to $(\mathcal{F}_{t})$ and such that, for each $t$, $1_{(0,t]}.\mu$ is $\mathcal{F}_{t}$-measurable.
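Under $\mathbf{(H_{4})}$-$\mathbf{(H_{5})}$, the controlled state equation (\[CMFSDE\]) can be approximated numerically by an interacting particle system, replacing the law $\mathbb{P}_{X_{t}}$ with the empirical measure of $N$ particles. The sketch below is ours, not part of the paper; the coefficients $b(x,\mu,u)=-(x-\operatorname{mean}(\mu))+u$, the constant diffusion $\sigma=0.3$, and the constant control are hypothetical illustrative choices.

```python
import numpy as np

def simulate_mfsde(n_particles=2000, n_steps=200, T=1.0, seed=0):
    """Euler-Maruyama scheme for dX = b(X, law, u) dt + sigma dB, with the
    law P_{X_t} replaced by the empirical measure of the particle system.
    Illustrative coefficients: b(x, mu, u) = -(x - mean(mu)) + u, sigma = 0.3."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.normal(2.0, 1.0, n_particles)   # X_0 ~ N(2, 1)
    u = 0.0                                 # a constant strict control
    for _ in range(n_steps):
        mean_field = x.mean()               # empirical stand-in for P_{X_t}
        drift = -(x - mean_field) + u
        x = x + drift * dt + 0.3 * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

x_T = simulate_mfsde()
```

With this mean-reverting-to-the-mean drift, the empirical mean stays near its initial value while the particles contract around it, so the terminal cloud is visibly tighter than the initial one.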
The set $\mathcal{U}_{ad}$ of strict controls is embedded into the set of relaxed controls by identifying $u_{t}$ with $dt\delta_{u_{t}}(da).$ It was proved in [@EM] for classical control problems, and in [@BMM2], that the relaxed state process corresponding to a relaxed control must satisfy a MFSDE driven by a martingale measure instead of a Brownian motion. That is, the relaxed state process satisfies $$\left\{ \begin{array} [c]{c}dX_{t}={\textstyle\int_{\mathbb{A}}} b(t,X_{t},\mathbb{P}_{X_{t}},a)\,\mu_{t}(da)dt+{\textstyle\int_{\mathbb{A}}} \sigma(t,X_{t},\mathbb{P}_{X_{t}},a)\,M(da,dt)\\ X_{0}=x, \end{array} \right. \label{REMFSDE}$$ where $M$ is an orthogonal continuous martingale measure with intensity $dt\,\mu_{t}(da)$. Using the same tools as in Theorem 3.1, it is not difficult to prove that (\[REMFSDE\]) admits a unique strong solution. The following lemma, known in the control literature as the Chattering Lemma, states that the set of strict controls is dense in the set of relaxed controls. i\) Let $(\mu_{t})$ be a relaxed control. Then there exists a sequence of adapted processes $(u_{t}^{n})$ with values in $\mathbb{A}$, such that the sequence of random measures $\left( \delta_{u_{t}^{n}}(da)\,dt\right) $ converges in $\mathbb{V}$ to $\mu_{t}(da)\,dt$, $P$-a.s. ii\) For any $g$ continuous on $\left[ 0,T\right] \times\mathbb{M}_{1}(\mathbb{A})$ such that $g(t,.)$ is linear, we have $\underset{n\rightarrow+\infty}{\lim}\int_{0}^{t}g(s,\delta_{u_{s}^{n}})ds=\int_{0}^{t}g(s,\mu_{s})ds$ uniformly in $t\in\left[ 0,T\right] ,$ $P$-a.s. See [@EKNJ]. Let $X_{t}^{n}$ be the solution of the state equation (\[CMFSDE\]) corresponding to $u^{n},$ where $u^{n}$ is a strict control defined as in the last lemma.
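The Chattering Lemma can be illustrated numerically (our own toy example, not from the paper): take the relaxed control $\mu_{t}=\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{1}$ on $\mathbb{A}=\{0,1\}$, approximate it by strict controls that switch faster and faster between the two actions, and test the convergence in part ii) on the hypothetical linear functional $g(s,m)=\int s\,a\,m(da)$.

```python
import numpy as np

def chattering_control(n, t):
    """Strict control switching between actions 0 and 1 on intervals of
    length 1/(2n); it approximates the relaxed control (delta_0 + delta_1)/2."""
    return (np.floor(2 * n * t) % 2).astype(float)

T, m = 1.0, 100_000
t = np.linspace(0.0, T, m, endpoint=False)
dt = T / m
# g(s, mu) = integral of s*a dmu(a): linear in the measure, as in part ii)
relaxed_value = np.sum(t * 0.5 * dt)        # integral of s/2 ds over [0,1] = 1/4
errors = []
for n in (1, 10, 100):
    strict_value = np.sum(t * chattering_control(n, t) * dt)
    errors.append(abs(strict_value - relaxed_value))
```

The error decays like $O(1/n)$ as the switching rate grows, which is the weak convergence $\delta_{u_{t}^{n}}(da)\,dt\rightarrow\mu_{t}(da)\,dt$ seen through one test functional.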
If we denote $M^{n}(t,F)=\int\nolimits_{0}^{t}\int\nolimits_{F}\delta_{u_{s}^{n}}(da)dW_{s},$ then $M^{n}(t,F)$ is an orthogonal martingale measure and $X_{t}^{n}$ may be written in a relaxed form as follows $\left\{ \begin{array} [c]{l}dX_{t}^{n}={\displaystyle\int\limits_{\mathbb{A}}} b(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}},a)\delta_{u_{t}^{n}}(da)dt+{\displaystyle\int\limits_{\mathbb{A}}} \sigma(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}},a)M^{n}(dt,da)\\ X_{0}=x \end{array} \right. $ Therefore $X_{t}^{n}$ may be viewed as the solution of (\[REMFSDE\]) corresponding to the relaxed control $\mu^{n}=dt\delta _{u_{t}^{n}}(da).$ Since $\left( \delta_{u_{t}^{n}}(da)\,dt\right) $ converges weakly to $\mu_{t}(da)\,dt$, $P$-a.s., then for every bounded predictable process $\varphi:\Omega\times\left[ 0,T\right] \times\mathbb{A}\rightarrow\mathbb{R}$ such that $\varphi(\omega,t,.)$ is continuous, we have $$E\left[ \left( \int\nolimits_{0}^{T}\int\nolimits_{\mathbb{A}}\varphi (\omega,t,a)M^{n}(dt,da)-\int\nolimits_{0}^{T}\int\nolimits_{\mathbb{A}}\varphi(\omega,t,a)M(dt,da)\right) ^{2}\right] \rightarrow0\text{ as }n\longrightarrow+\infty; \label{martin-measure}$$ see [@BDM; @Mel]. The following proposition gives the continuity of the dynamics (\[REMFSDE\]) with respect to the control variable. i\) If $X_{t}$, $X_{t}^{n}$ denote the solutions of the state equation (\[REMFSDE\]) corresponding to $\mu$ and $\mu^{n},$ then for each $t\leq T,$ $\underset{n\rightarrow+\infty}{\lim }E(\left\vert X_{t}^{n}-X_{t}\right\vert ^{2})=0.$ ii\) Let $J(u^{n})$ and $J(\mu)$ be the expected costs corresponding respectively to $u^{n}$ and $\mu;$ then $\left( J\left( u^{n}\right) \right) $ converges to $J\left( \mu\right) .$ 1\) Let $X_{t}$, $X_{t}^{n}$ be the solutions of the MFSDE (\[REMFSDE\]) corresponding to $\mu$ and $u^{n}$.
We have $$\begin{array} [c]{cl}\left\vert X_{t}-X_{t}^{n}\right\vert & \leq\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \mu_{s}(da)\,ds-\int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}},a\right) \delta_{u_{s}^{n}}(da)ds\right\vert \\ & +\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) M(ds,da)-\int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}},a\right) M^{n}(ds,da)\right\vert \\ & \leq\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \mu_{s}(da)\,ds-\int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \delta_{u_{s}^{n}}(da)ds\right\vert \\ & +\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \delta_{u_{s}^{n}}(da)\,ds-\int \nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s}^{n},\mathbb{P}_{X_{s}^{n}},a\right) \delta_{u_{s}^{n}}(da)ds\right\vert \\ & +\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( v,X_{v},\mathbb{P}_{X_{v}},a\right) M(dv,da)-\int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( v,X_{v},\mathbb{P}_{X_{v}},a\right) M^{n}(dv,da)\right\vert \\ & +\left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( v,X_{v},\mathbb{P}_{X_{v}},a\right) M^{n}(dv,da)-\int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( v,X_{v}^{n},\mathbb{P}_{X_{v}^{n}},a\right) M^{n}(dv,da)\right\vert \end{array}$$ Then, by using the Burkholder-Davis-Gundy inequality for the martingale part and the fact that all the coefficients in equation (\[REMFSDE\]) are Lipschitz continuous, it holds that $$E\left( \left\vert X_{t}-X_{t}^{n}\right\vert ^{2}\right) \leq C\int\nolimits_{0}^{T}E\left( \left\vert X_{s}-X_{s}^{n}\right\vert ^{2}+\mathbb{W}_{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})^{2}\right) ds+K_{n},$$ where $C$ is a nonnegative constant and $\begin{array} [c]{c}K_{n}=E\left( \left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \mu_{s}(da)ds-\int \nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}b\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) \delta_{u_{s}^{n}}(da)ds\right\vert ^{2}\right) \\ +E\left( \left\vert \int\nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) M(ds,da)-\int \nolimits_{0}^{t}\int\nolimits_{\mathbb{A}}\sigma\left( s,X_{s},\mathbb{P}_{X_{s}},a\right) M^{n}(ds,da)\right\vert ^{2}\right) \\ =I_{n}+J_{n}\end{array} $ Using the fact that $\mathbb{W}_{2}(\mathbb{P}_{X_{s}^{n}},\mathbb{P}_{X_{s}})^{2}\leq E\left( \left\vert X_{s}-X_{s}^{n}\right\vert ^{2}\right) $, we get $$E\left( \left\vert X_{t}-X_{t}^{n}\right\vert ^{2}\right) \leq 2C\int\nolimits_{0}^{T}E\left( \left\vert X_{s}-X_{s}^{n}\right\vert ^{2}\right) ds+K_{n}.$$ Since the sequence $\left( \delta_{u_{t}^{n}}(da)\,dt\right) $ converges weakly to $\mu_{t}(da)\,dt$, $P$-a.s., and $b$ is bounded and continuous in the control variable, then by applying the Lebesgue dominated convergence theorem we get $\underset{n\rightarrow+\infty}{\lim }I_{n}=0$. On the other hand, since $\sigma$ is bounded and continuous in $a$, applying (\[martin-measure\]) we get $\underset{n\rightarrow+\infty}{\lim }J_{n}=0.$ We conclude by using Gronwall’s Lemma.
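The last step is the classical Gronwall mechanism: an estimate of the form $f(t)\leq K+C\int_{0}^{t}f(s)ds$ forces $f(t)\leq Ke^{Ct}$, so once $K_{n}\rightarrow0$ the whole bound vanishes. A quick numerical check of this mechanism (ours, with arbitrary constants $K$, $C$):

```python
import numpy as np

def gronwall_check(K=0.3, C=2.0, T=1.0, m=10_000):
    """If f(t) <= K + C * int_0^t f(s) ds, then f(t) <= K * exp(C*t).
    Build the extremal f saturating the inequality on a grid (explicit
    Euler for the integral equation) and compare with the bound."""
    dt = T / m
    f = np.empty(m)
    acc = 0.0                       # running value of int_0^t f(s) ds
    for i in range(m):
        f[i] = K + C * acc          # saturate the integral inequality
        acc += f[i] * dt
    t = (np.arange(m) + 1) * dt
    return f, K * np.exp(C * t)

f, bound = gronwall_check()
```

The extremal solution hugs the exponential bound from below, confirming that no admissible $f$ can exceed $Ke^{Ct}$.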
ii\) Let $u^{n}$ and $\mu$ be as in i). Then $$\begin{array} [c]{cl}\left\vert J\left( u^{n}\right) -J\left( \mu\right) \right\vert & \leq E\left[ \int\limits_{0}^{T}\int_{\mathbb{A}}\left\vert h(t,X_{t}^{n},\mathbb{P}_{X_{t}^{n}},a)-\,h(t,X_{t},\mathbb{P}_{X_{t}},a)\right\vert \delta_{u_{t}^{n}}(da)\,dt\right] \\ & +E\left[ \left\vert \int\limits_{0}^{T}\int_{\mathbb{A}}h(t,X_{t},\mathbb{P}_{X_{t}},a)\delta_{u_{t}^{n}}(da)\,dt-\int\limits_{0}^{T}\int_{\mathbb{A}}h(t,X_{t},\mathbb{P}_{X_{t}},a)\mu_{t}(da)\,dt\right\vert \right] \\ & +E\left[ \left\vert g(X_{T}^{n},\mathbb{P}_{X_{T}^{n}})-g(X_{T},\mathbb{P}_{X_{T}})\right\vert \right] \end{array}$$ The first assertion implies that the sequence $\left( X_{t}^{n}\right) $ converges to $X_{t}$ in probability; then, by using the assumptions on the coefficients $h$ and $g$ and the dominated convergence theorem, it is easy to conclude. According to the last proposition, it is clear that the infimum among relaxed controls is equal to the infimum among strict controls, which implies that the value functions for the relaxed and strict models are the same. [99]{} Bahlali, S., Djehiche, B., Mezerdi, B., Approximation and optimality necessary conditions in relaxed stochastic control problems. *J. Appl. Math. Stoch. Anal., Vol. 2006, Article ID 72762, 1–23*. Bahlali, K., Mezerdi, M., Mezerdi, B., On the relaxed mean-field stochastic control problem. *Stoch. Dyn. 18 (2018), No. 3, 1850024, 20 pp.* Bahlali, K., Mezerdi, M., Mezerdi, B., Existence and optimality conditions for relaxed mean-field stochastic control problems. *Systems Control Lett. **102** (2017), 1–8.* Bahlali, K., Mezerdi, B., Ouknine, Y., Pathwise uniqueness and approximation of stochastic differential equations. *Sém. de Probabilités, Vol. XXXII (1998), Eds. J. Azéma, M. Yor, P.A. Meyer, Lect.
Notes in Math. 1651, Springer Verlag.* Bensoussan, A., Frehse, J., Yam, P., Mean-field games and mean-field type control theory. *SpringerBriefs in Mathematics (2013), Springer Verlag.* Buckdahn, R., Djehiche, B., Li, J., Peng, S., Mean-field backward stochastic differential equations. A limit approach. *The Annals of Probab. **37**(4) (2009), 1524–1565.* Buckdahn, R., Li, J., Peng, S., Mean-field backward stochastic differential equations and related partial differential equations. *Stoch. Proc. Appl., 119 (2009), 3133–3154.* Carmona, R., Delarue, F., Probabilistic theory of mean field games with applications. I. Mean field FBSDEs, control, and games. *Probability Theory and Stochastic Modelling, **83**. Springer, Cham, 2018.* Chaudru de Raynal, P.E., Strong well-posedness of McKean-Vlasov stochastic differential equations with Hölder drift. *ArXiv e-prints arXiv:1512.08096v2, 2015.* Chaudru de Raynal, P.E., Frikha, N., Well-posedness for some non-linear diffusion processes and related PDE on the Wasserstein space. *ArXiv:1811.06904v1, 2018*. Chiang, T.S., McKean-Vlasov equations with discontinuous coefficients. *Soochow J. Math., 20(4):507–526, 1994. Dedicated to the memory of Professor Tsing-Houa Teng.* El Karoui, N., Méléard, S., Martingale measures and stochastic calculus. *Probab. Th. and Rel. Fields 84 (1990), no. 1, 83–101.* El Karoui, N., Nguyen, D.H., Jeanblanc-Picqué, M., Compactification methods in the control of degenerate diffusions: existence of an optimal control. *Stochastics, 20 (1987), No. 3, 169–219.* Graham, C., McKean-Vlasov Itô-Skorohod equations, and nonlinear diffusions with discrete jump sets. *Stoch. Proc. Appl., 40(1):69–82, 1992.* Hammersley, W., Šiška, D., Szpruch, L., McKean-Vlasov SDEs under measure dependent Lyapunov conditions. *Preprint, arXiv:1802.03974, 2018.* Huang, M., Malhamé, R. P., Caines, P.
E., Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. *Comm. in Inf. and Systems, 6(3) (2006), 221–252.* Ikeda, N., Watanabe, S., Stochastic differential equations and diffusion processes. *2nd Edition (1989), North-Holland Publishing Company, Japan.* Jourdain, B., Méléard, S., Woyczynski, W., Nonlinear SDEs driven by Lévy processes and related PDEs. *Alea 4 (2008), 1–29.* Kac, M., Foundations of kinetic theory. *In Proceedings of the 3rd Berkeley Symposium on Mathematical Statistics and Probability, Vol. 3 (1956), 171–197.* Lasry, J.M., Lions, P.L., Mean-field games. *Japan. J. Math., 2 (2007), 229–260.* Kurtz, T., Weak and strong solutions of general stochastic models. *Electron. Commun. Probab., Volume 19 (2014), paper no. 58, 16 pp.* McKean, H.P., A class of Markov processes associated with nonlinear parabolic equations. *Proc. Nat. Acad. Sci. U.S.A., 56:1907–1911, 1966.* Méléard, S., Representation and approximation of martingale measures. *Stoch. Partial Diff. Equ. and Their Appl., Lect. Notes in Control and Inf. Sc., Vol. 176, 1992, 188–199.* Métivier, M., Semimartingales: a course on stochastic processes. *De Gruyter, Berlin, New York, 1982.* Mishura, Y.S., Veretennikov, A.Y., Existence and uniqueness theorems for solutions of McKean–Vlasov stochastic equations. *Preprint, arXiv:1603.02212, 2018.* Scheutzow, M., Uniqueness and non-uniqueness of solutions of Vlasov-McKean equations. *J. of the Austr. Math. Soc. (Series A), 43:246–256.* Sznitman, A.S., Topics in propagation of chaos. *In Ecole de Probabilités de Saint Flour, XIX-1989. Lecture Notes in Math. 1464, pp. 165–251. Springer, Berlin (1989).* Vlasov, A.A., The vibrational properties of an electron gas. *Physics-Uspekhi, 10(6):721–733, 1968*. [^1]: Laboratoire IMATH, Université du Sud-Toulon-Var, B.P 20132, 83957 La Garde Cedex 05, France.
*(E-mail: bahlali@univ-tln.fr)* [^2]: Laboratory of Applied Mathematics, University of Biskra, P.O. Box 145, Biskra (07000), Algeria. *(E-mail: mohamed@live.com)* [^3]: King Fahd University of Petroleum and Minerals, Department of Mathematics and Statistics, P.O. Box 1916, Dhahran 31261, Saudi Arabia. *(E-mail: brahim.mezerdi@kfupm.edu.sa)*
--- abstract: 'We prove that finding an $\epsilon$-Nash equilibrium in a succinctly representable game with many players is PPAD-hard for constant $\epsilon$. Our proof uses [*succinct games*]{}, i.e. games whose (exponentially-large) payoff function is represented by a smaller circuit. Our techniques build on a recent query complexity lower bound by Babichenko [@Bab13_query_complexity].' author: - 'Aviad Rubinstein[^1]' bibliography: - 'succint\_games.bib' title: Computational Complexity of Approximate Nash Equilibrium in Large Games --- Introduction ============ Nash equilibrium is the central concept in Game Theory. Much of its importance and attractiveness comes from its [*universality*]{}: by Nash’s Theorem [@Nash], every finite game has at least one. The result that finding a Nash equilibrium is PPAD-complete, and therefore intractable [@NASH-is-PPAD-hard_DGP09; @2-player_nash_CDT09], casts this universality in doubt, since it suggests that there are games whose Nash equilibria, though existent, are in any practical sense inaccessible. Can approximation repair this problem? Chen et al. [@2-player_nash_CDT09] proved that it is also hard to find an $\epsilon$-Nash equilibrium for any $\epsilon$ that is polynomially small, even for two-player games. The only remaining hope is a PTAS, i.e. an approximation scheme for every constant $\epsilon>0$. [*Whether there is a PTAS for the Nash equilibrium problem is the most important remaining open question in equilibrium computation.*]{} When we say “$\epsilon$-Nash equilibrium,” we mean the additive sort; for multiplicative $\epsilon$-Nash equilibria, Daskalakis [@Das13_multiplicative_hardness] shows that the problem is PPAD-hard even for two-player games; notice that such a result is unlikely for additive approximation and a constant number of players, since a quasi-polynomial time approximation algorithm exists [@LMM03_quasi_poly].
### Our results {#our-results .unnumbered} In this paper we make a modest step towards the inapproximability, in the standard additive sense, of Nash equilibrium: \[thm:pwsn\]There exist constants $\epsilon,k>0$, such that given a game with $m$ players and $k$ actions, and a poly-size circuit that computes the vector of payoffs for each vector of pure strategies, it is PPAD-hard to compute an $\epsilon$-Nash equilibrium. Even though it is the first result establishing inapproximability of an additive notion of Nash equilibrium, it has two deficiencies: (a) it is about [*$\epsilon$-Nash-equilibrium*]{} (sometimes[^2] also called [*$\epsilon$-well-supported Nash equilibrium*]{}) [@NASH-is-PPAD-hard_DGP09], requiring that all actions in the support be approximately best responses (instead of only the mixture being an approximate best response, as in $\epsilon$-approximate Nash equilibrium); and (b) it holds for a somewhat awkward class of multiplayer games we call [*succinct games*]{}[^3], in which the utility of each player for an action profile is calculated by a circuit. For example, on such games even computing the exact best response is intractable. En route to proving Theorem \[thm:pwsn\], we also prove a similar statement about the computational complexity of finding an approximate fixed point for a continuous (in fact, Lipschitz) function, which may be of separate interest. \[thm:pafp\]There exist constants $\epsilon,M>0$, such that given an $M$-Lipschitz, poly-time computable function $f\colon\left[0,1\right]^{n}\rightarrow\left[0,1\right]^{n}$, it is PPAD-hard to find an $\epsilon$-approximate fixed point of $f$. #### Pure equilibria and PPAD-hardness It is interesting to note that all our hard game instances [*have pure equilibria*]{}. To the best of our knowledge, this is the first setting where PPAD-hardness is proved for games which have pure equilibria. Naturally, in any game with a poly-size payoff matrix, it is easy to find a pure equilibrium when one exists.
Related works ------------- Schoenebeck and Vadhan [@SV12_succint_games] studied comprehensively the computational complexity of Nash equilibria in succinct games and other concise representations. They characterized the computational complexity, for different levels of approximation, of the following questions: is a given strategy profile a Nash equilibrium? does there exist a pure equilibrium? does there exist a Nash equilibrium in which the payoff to a given player meets a given guarantee? They also prove that the problem of finding an $\epsilon$-Nash equilibrium (for constant $\epsilon$) in circuit games where each player has two strategies is hard for a promise-problem version of BPP, and belongs to $\P^{\MA}$. Finding a tighter complexity classification is stated as an open question. We note that different variants of succinct games were also studied by [@PY94_bounded-rationality; @FKS95_poly-definable-games; @FIKU08_succinct-zero-sum]. ### Related works on query complexity {#related-works-on-query-complexity .unnumbered} There are several interesting results on the query complexity of approximate Nash equilibria, where the algorithm is assumed to have black-box access to the exponential-size payoff function. In other words, in this setting the payoff function is allowed to be arbitrarily complex. A recent paper by Hart and Nisan [@HN13_query_complexity_correlated] proves that any deterministic algorithm needs at least an exponential number of queries to compute any $\epsilon$-Nash equilibrium, and even any $\epsilon$-correlated equilibrium. For $\epsilon$-correlated equilibrium, on the other hand, Hart and Nisan show a randomized algorithm that uses a number of queries polynomial in $n$ and $\epsilon^{-1}$.
Of particular interest to us is a very recent paper by Babichenko [@Bab13_query_complexity], which extends the hardness result of Hart and Nisan to show that any randomized algorithm still requires an exponential number of queries to find an $\epsilon$-Nash equilibrium. Our proof is inspired by Babichenko’s work and builds on his techniques. Finally, a yet newer paper by Goldberg and Roth [@GR14_query_complexity_concise-WSNE] characterizes the query complexity of approximate coarse correlated equilibrium in games with many players. More important for our purpose is their polynomial upper bound on the query complexity of $\epsilon$-Nash equilibria for any family of games that have [*any*]{} concise representation. This result is to be contrasted with (a) Babichenko’s query complexity lower bound, which uses a larger family of games, and (b) our result, which applies exactly to this setting and gives a lower bound on the [*computational complexity*]{}. A much older yet very interesting and closely related result is that of Hirsch, Papadimitriou, and Vavasis [@HPV89]. Hirsch et al. show that any deterministic algorithm for computing a Brouwer fixed point in the oracle model must make an exponential (in the dimension $n$ and the approximation $\epsilon$) number of queries for values of the function. The techniques in [@HPV89] have proven particularly useful both in Babichenko’s work and in ours. ### Related works on succinct representations of other objects {#related-works-on-succinct-representations-of-other-objects .unnumbered} Although our notion of succinct representation is somewhat non-standard in game theory, similar succinct problems have been considered before. Galperin and Wigderson [@GW84_succint_graphs] and Papadimitriou and Yannakakis [@PY86_succint_graphs] studied the computation of graph properties for exponential-size graphs given access to a circuit that locally computes the adjacency matrix.
Much more recently, Dobzinski and Vondrak studied optimization of succinctly-represented objects for submodular optimization [@DV12_computational_submodular] and for combinatorial auctions [@DV12_computational_auctions]. As in our case, in both of those settings similar hardness results were previously known in the value oracle model. Techniques\[sec:Techniques\] ---------------------------- Our techniques in this paper are significantly different from previous works on PPAD-hardness in games such as Daskalakis et al. [@NASH-is-PPAD-hard_DGP09] and Chen et al. [@2-player_nash_CDT09]. In particular, two ingredients of our reduction are fundamentally different from the so-called “DGP framework”: the construction of the Brouwer function, and the reduction from Brouwer to Nash. We note that in both cases our techniques are much closer to the recent work of Babichenko [@Bab13_query_complexity]. #### From Brouwer to Nash Our reduction from Brouwer to Nash follows an argument presented in Eran Shmaya’s blog [@Shm12_blog]. Each player has actions in $\left[0,1\right]$, and the players are divided into two groups: the first group tries to imitate the second group, while the second group tries to imitate the Brouwer function applied to the actions chosen by the first group. In [@NASH-is-PPAD-hard_DGP09; @2-player_nash_CDT09], per contra, the probability assigned to each action corresponds, roughly, to a variable in $\left[0,1\right]$. While this construction is more stringent on the number of actions, it requires a relatively complex averaging gadget. More importantly, this averaging gadget as implemented by Chen et al. seems to require a polynomial blow-up of the error. Thus, it is not clear how to achieve a constant hardness of approximation in this way.
#### The Brouwer function In order to construct the hard Brouwer function we use a construction due to Hirsch, Papadimitriou, and Vavasis [@HPV89] that embeds paths as mappings over $\left[0,1\right]^{n}$, in a delicate way which we describe later. Chen et al. [@2-player_nash_CDT09], on the other hand, divide the unit hypercube into subcubes of constant edge-length, and specify a color for each subcube. The $i$-th color corresponds to $\xi_{i}$, i.e. the $i$-th unit vector, whereas a special red color corresponds to $-\sum\xi_{i}$. Any vector of players’ mixed strategies corresponds to a distribution over neighboring subcubes, and we get a fixed point whenever the expectation of the corresponding vectors is $0^{n}$. Clearly, any distribution corresponding to an exact fixed point must have support over a panchromatic neighborhood, i.e. a neighborhood with subcubes corresponding to all $n+1$ colors. Finally, it is shown that it is PPAD-hard to find such a panchromatic neighborhood. In fact, a panchromatic neighborhood is still necessary for any $\Theta\left(1/n\right)$-approximate[^4] fixed point. However, a $\left(1/n\right)$-approximate fixed point can be achieved from a distribution over only $n$ colors: for example, taking each of the colors $i\in\left[n\right]$ with probability $1/n$ results in the expected vector $\left(1/n,\dots,1/n\right)$. Preliminaries ============= Throughout this paper we use the max-norm as the default measure of distance. In particular, when we say that $f$ is $M$-Lipschitz we mean that for every $\mathbf{x}$ and $\mathbf{y}$ in the domain of $f$, $\left\Vert f\left(\mathbf{x}\right)-f\left(\mathbf{y}\right)\right\Vert _{\infty}\leq M\left\Vert \mathbf{x}-\mathbf{y}\right\Vert _{\infty}$. ### The [EndOfTheLine]{} problem {#the-endoftheline-problem .unnumbered} Our reduction starts from the [EndOfTheLine]{} problem. This problem was implicit in [@PPAD_Pap94], and explicitly defined by Daskalakis et al. [@NASH-is-PPAD-hard_DGP09].
[EndOfTheLine]{}: ([@NASH-is-PPAD-hard_DGP09]) Given two circuits $S$ and $P$, with $n$ input bits each, such that $P\left(0^{n}\right)=0^{n}\neq S\left(0^{n}\right)$, find an input $x\in\left\{ 0,1\right\} ^{n}$ such that $P\left(S\left(x\right)\right)\neq x$ or $S\left(P\left(x\right)\right)\neq x\neq0^{n}$. (Essentially [@PPAD_Pap94]) [EndOfTheLine]{} is PPAD-complete (for poly-size $S$ and $P$). ### Succinct normal form games {#succinct-normal-form-games .unnumbered} We consider games with $n$ players. Player $i$ chooses one of $k$ actions $a_{i}\in A_{i}$. The utility of player $i$ for each vector of actions is given by $u_{i}\colon\times_{j}A_{j}\rightarrow\left[0,1\right]$. Explicitly describing the $u_{i}$’s requires space exponential in $n$. In this paper we restrict our attention to games that can be described [*succinctly*]{}; namely, there is a poly-size circuit that computes each of the $u_{i}$’s given a vector of actions $\mathbf{a}\in\times_{j}A_{j}$. ### $\epsilon$-Nash equilibrium vs $\epsilon$-approximate Nash equilibrium {#epsilon-nash-equilibrium-vs-epsilon-approximate-nash-equilibrium .unnumbered} A mixed strategy of player $i$ is a distribution $x_{i}\in\Delta A_{i}$. We say that a vector of mixed strategies $\mathbf{x}\in\times_{j}\Delta A_{j}$ is a [*Nash equilibrium*]{} if every strategy $a_{i}$ in the support of $x_{i}$ is a best response to the mixed strategies of the rest of the players, $x_{-i}$.
Formally, $$\mathbb{E}_{a_{-i}\sim x_{-i}}\left[u_{i}\left(a_{i},a_{-i}\right)\right]=\max_{a'\in A_{i}}\mathbb{E}_{a_{-i}\sim x_{-i}}\left[u_{i}\left(a',a_{-i}\right)\right]\,.$$ Equivalently, $\mathbf{x}$ is a Nash equilibrium if each mixed strategy $x_{i}$ is a best mixed response to $x_{-i}$: $$\mathbb{E}_{\mathbf{a}\sim\mathbf{x}}\left[u_{i}\left(\mathbf{a}\right)\right]=\max_{x_{i}'\in\Delta A_{i}}\mathbb{E}_{\mathbf{a}\sim\mathbf{x'}}\left[u_{i}\left(\mathbf{a}\right)\right]\,.$$ Each of those equivalent definitions can be generalized to include approximation in a different way. (Of course, there are also other interesting generalizations of Nash equilibria to approximate settings.) We say that $\mathbf{x}$ is an [*$\epsilon$-approximate Nash equilibrium*]{} ([*$\epsilon$-ANE*]{}) if each $x_{i}$ is an $\epsilon$-best mixed response to $x_{-i}$: $$\mathbb{E}_{\mathbf{a}\sim\mathbf{x}}\left[u_{i}\left(\mathbf{a}\right)\right]\geq\max_{x_{i}'\in\Delta A_{i}}\mathbb{E}_{\mathbf{a}\sim\mathbf{x'}}\left[u_{i}\left(\mathbf{a}\right)\right]-\epsilon\,.$$ On the other hand, we generalize the first definition of Nash equilibrium by saying that $\mathbf{x}$ is an [*$\epsilon$-Nash equilibrium*]{} ([*$\epsilon$-NE*]{}; sometimes also $\epsilon$-well-supported Nash equilibrium) if each $a_{i}$ in the support of $x_{i}$ is an $\epsilon$-best response to $x_{-i}$: $$\mathbb{E}_{a_{-i}\sim x_{-i}}\left[u_{i}\left(a_{i},a_{-i}\right)\right]\geq\max_{a'\in A_{i}}\mathbb{E}_{a_{-i}\sim x_{-i}}\left[u_{i}\left(a',a_{-i}\right)\right]-\epsilon\,.$$ It is easy to see that every $\epsilon$-NE is also an $\epsilon$-ANE, but the converse is false. Given an $\epsilon$-ANE it is possible to find a $\Theta\left(\sqrt{\epsilon}n\right)$-NE (see e.g. [@NASH-is-PPAD-hard_DGP09]); however, computational hardness of $n^{-c}$-approximate Nash equilibrium is a corollary of [@2-player_nash_CDT09].
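For intuition, both notions can be verified mechanically given explicit payoffs. The sketch below is ours (it uses two-player bimatrix games rather than the succinct $n$-player games of this paper); it checks both conditions and exhibits a profile that is a $0.05$-ANE but not a $0.05$-NE.

```python
import numpy as np

def is_eps_ane(A, B, x1, x2, eps):
    """eps-approximate NE: each player's mixed strategy is an eps-best mixed response."""
    return (x1 @ A @ x2 >= (A @ x2).max() - eps) and (x1 @ B @ x2 >= (x1 @ B).max() - eps)

def is_eps_ne(A, B, x1, x2, eps, tol=1e-12):
    """eps-(well-supported) NE: every action in the support is an eps-best response."""
    v1, v2 = A @ x2, x1 @ B
    return bool(np.all(v1[x1 > tol] >= v1.max() - eps)
                and np.all(v2[x2 > tol] >= v2.max() - eps))

# Matching pennies: the uniform profile is an exact equilibrium (0-ANE and 0-NE).
A = np.array([[1.0, -1.0], [-1.0, 1.0]]); B = -A
half = np.array([0.5, 0.5])
# Row action 0 dominates action 1 by 1.  Putting weight 0.01 on action 1 loses
# only 0.01 in expectation (an ANE), but that supported action is 1 away from
# the best response, so the profile is not a well-supported eps-NE.
A2 = np.array([[1.0, 1.0], [0.0, 0.0]]); B2 = np.zeros((2, 2))
skew = np.array([0.99, 0.01]); e1 = np.array([1.0, 0.0])
```

The `skew` profile witnesses the "converse is false" claim above: an $\epsilon$-ANE need not be an $\epsilon$-NE.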
Main result =========== Given a game over $n$ players with $k$ actions, and a poly-size circuit that computes the vector of payoffs for each vector of pure strategies, [SuccinctNash]{}$\left(n,k,\epsilon\right)$ is the problem of computing an $\epsilon$-NE for this game. [\[thm:pwsn\]]{} [SuccinctNash]{}$\left(n,10^{4},10^{-8}\right)$ is PPAD-hard. ### Proof overview {#proof-overview .unnumbered} We begin our proof with the [EndOfTheLine]{} problem on $\left\{ 0,1\right\} ^{n}$ [@NASH-is-PPAD-hard_DGP09]. In the first step, we embed the [EndOfTheLine]{} graph as a collection $H$ of vertex-disjoint paths on the $\left(2n+1\right)$-dimensional hypercube graph. Given $H$, our second step is to construct a continuous mapping $f\colon\left[0,1\right]^{2n+2}\rightarrow\left[0,1\right]^{2n+2}$ whose fixed points correspond to ends of paths in $H$. This step is done using a technique introduced by Hirsch et al. [@HPV89]. Our third and final step is to reduce the problem of finding approximate fixed points of $f$ to the problem of finding approximate NE in a $\left(4n+4\right)$-player game via a reduction which appeared in Shmaya’s blog [@Shm12_blog]. Embedding the [EndOfTheLine]{} graph as paths in $\left\{ 0,1\right\} ^{2n+1}$ ------------------------------------------------------------------------------ Our first step in the reduction is to embed an [EndOfTheLine]{} graph as vertex-disjoint paths on the $\left(2n+1\right)$-dimensional hypercube graph. We first recall that the input to the [EndOfTheLine]{} problem is given as two circuits $S$ and $P$, which define a directed graph $G$ over $\left\{ 0,1\right\} ^{n}$. Given $S$ and $P$, we construct a collection $H$ of vertex-disjoint paths over the $\left(2n+1\right)$-dimensional hypercube graph, such that each starting or end point of a path in $H$ corresponds to a unique starting or end point of a line in $G$.
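For concreteness, here is a toy [EndOfTheLine]{} instance (our own illustration, not part of the reduction): the "circuits" $S$ and $P$ are simple functions encoding a single line $0\rightarrow1\rightarrow\dots\rightarrow2^{n}-1$, and the naive algorithm walks the line from $0^{n}$, which takes exponentially many steps in $n$ in the worst case.

```python
def S(x, n=6):                 # successor "circuit": one line 0 -> 2^n - 1
    return min(x + 1, 2 ** n - 1)

def P(x, n=6):                 # predecessor "circuit"
    return max(x - 1, 0)

def end_of_the_line(S, P):
    """Walk the line from 0^n.  A solution is any x with P(S(x)) != x,
    or S(P(x)) != x != 0.  The walk is exponential in n in the worst
    case, which is exactly why the search problem is believed hard."""
    x = 0
    while True:
        if P(S(x)) != x or (x != 0 and S(P(x)) != x):
            return x
        x = S(x)

sol = end_of_the_line(S, P)    # here: the far end of the line, 2^6 - 1 = 63
```

Note that $P(0)=0\neq S(0)=1$ as the definition requires, and the unique solution is the other end of the line.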
In order to construct our embedding we divide the $2n+1$ coordinates as follows: the first $n$ coordinates store the current vertex $\mathbf{u}$, the next $n$ coordinates store the next vertex in the line, $\mathbf{v}$, and finally, the last coordinate $b$ stores a compute-next vs copy bit. When $b=0$, the path proceeds to update $\mathbf{v}\leftarrow S\left(\mathbf{u}\right)$, bit-by-bit. When this update is complete, the value of $b$ is changed to $1$. Whenever $b=1$, the path proceeds by copying $\mathbf{u}\leftarrow\mathbf{v}$ bit-by-bit, and then changes the value of $b$ again. Finally, when $\mathbf{u}=\mathbf{v}=S\left(\mathbf{u}\right)$ and $b=0$, the path reaches an end point. Notice that the paths in $H$ do not intersect. Furthermore, given a vector $\mathbf{p}\in\left\{ 0,1\right\} ^{2n+1}$, we can output in polynomial time whether $\mathbf{p}$ belongs to a path in $H$, and if so which are the previous and consecutive vectors in the path. It is therefore PPAD-hard to find a starting or end point of any path in $H$ other than $0^{2n+1}$. Continuous mapping on $\left[0,1\right]^{2n+2}$ ----------------------------------------------- Our second step, which constructs a continuous mapping given $H$, is probably the most technically involved. Fortunately, almost all of the technical work we need was already done by Hirsch et al. [@HPV89]. Given an $M$-Lipschitz, poly-time computable function $f\colon\left[0,1\right]^{n}\rightarrow\left[0,1\right]^{n}$, [SuccinctBrouwer]{}$\left(n,M,\epsilon\right)$ is the problem of computing an $\epsilon$-approximate fixed point of $f$. [\[thm:pafp\]]{} [SuccinctBrouwer]{}$\left(n,80,1/88\right)$ is PPAD-hard. We begin with a quick overview of the mapping constructed by Hirsch et al. [@HPV89]. We will then show how to adapt their construction to fit our reduction. In the following, we denote $g\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)-\mathbf{x}$.
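The bit-by-bit successor rule of the embedding above can be sketched as follows (our own illustration; `S2` is a hypothetical 2-bit successor circuit encoding the line $00\rightarrow10\rightarrow11$, and the predecessor circuit is omitted):

```python
def embed_successor(state, S, n):
    """One step along the embedded path on {0,1}^(2n+1).
    state = (u, v, b): u holds the current vertex, v the next vertex,
    b the compute-next vs copy bit.  End points are fixed points."""
    u, v, b = state
    if b == 0:
        target = S(u)
        for i in range(n):                        # update v <- S(u) bit by bit
            if v[i] != target[i]:
                return (u, v[:i] + (target[i],) + v[i + 1:], 0)
        if u == v:                                # u = v = S(u): end of the path
            return state
        return (u, v, 1)                          # update complete: flip b
    for i in range(n):                            # copy u <- v bit by bit
        if u[i] != v[i]:
            return (u[:i] + (v[i],) + u[i + 1:], v, 1)
    return (u, v, 0)                              # copy complete: flip b back

def S2(u):  # hypothetical 2-bit successor circuit: the line 00 -> 10 -> 11
    return {(0, 0): (1, 0), (1, 0): (1, 1), (1, 1): (1, 1)}[u]

path = [((0, 0), (0, 0), 0)]
while embed_successor(path[-1], S2, 2) != path[-1]:
    path.append(embed_successor(path[-1], S2, 2))
```

Each step flips exactly one of the $2n+1$ bits, so consecutive states are hypercube neighbors, and the walk halts at the embedded end point $\mathbf{u}=\mathbf{v}=S(\mathbf{u})$, $b=0$.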
### The HPV mapping {#the-hpv-mapping .unnumbered} Given a path in $\left\{ 0,1\right\} ^{2n+1}$, Hirsch et al. [@HPV89] construct a mapping $f\colon\left[0,1\right]^{2n+2}\rightarrow\left[0,1\right]^{2n+2}$ that satisfies[^5]:

1.  $g$ is $79$-Lipschitz (thus, $f$ is $80$-Lipschitz);

2.  $\left\Vert g\left(\mathbf{x}\right)\right\Vert _{\infty}\geq1/88$ for every $\mathbf{x}$ that does not correspond to the end point of the path;

3.  the value of $g$ at each point $\mathbf{x}$ depends only on whether the path passes through the subcube corresponding to $\mathbf{x}$, and in which direction.

However, for our purposes, it does not suffice to embed just a single path. In particular, we have one path whose starting point we know (and which must not correspond to a fixed point), and many other paths (and cycles), whose end points (and starting points) correspond to additional fixed points. Luckily, the same construction of [@HPV89] continues to work in our case, with only minor changes. In order to explain those modifications, we first briefly recall some of the details of the original construction. We divide the $\left[0,1\right]^{2n+2}$ hypercube into smaller subcubes of edge-size $h$. In [@HPV89], a single path over $\left\{ 0,1\right\} ^{2n+1}$ corresponds to a $\left(2n+1\right)$-dimensional sequence of subcubes, called the [*tube*]{}, that all lie in a special designated [*slice*]{} of $\left[0,1\right]^{2n+2}$. The [*home subcube*]{}, the subcube that corresponds to the beginning of the path, is special: the flow from all subcubes that do not belong to the path leads to this subcube. For the purpose of adapting this construction, the most important property is that on (and near) the outer facets of the tube, i.e. the facets that do not face other subcubes in the path, $g\left(\mathbf{x}\right)=\delta\mathbf{\xi}_{2n+2}$, where $\mathbf{\xi}_{2n+2}$ is the $\left(2n+2\right)$-th unit vector, and $\delta$ is some small parameter (constant in this paper).
The same also holds for all points in the slice that do not belong to the tube. Intuitively, this means that [*all subcubes, whether they belong to the tube or not, look the same from the outside*]{} (except for the two facets that continue the path). ### Embedding multiple paths {#embedding-multiple-paths .unnumbered} Given a collection $H$ of non-intersecting paths in $\left\{ 0,1\right\} ^{2n+1}$, we construct a mapping $f\colon\left[0,1\right]^{2n+2}\rightarrow\left[0,1\right]^{2n+2}$ in a similar fashion. Essentially, we construct many tubes (some of which may form closed cycles) in the same slice. If any path in $H$ passes through some $\mathbf{p}\in\left\{ 0,1\right\} ^{2n+1}$, then $\mathbf{p}$ corresponds to a subcube on which $f$ is defined exactly the same as in [@HPV89]. Likewise, every end point of a path in $H$ corresponds to a subcube on which $f$ is defined exactly the same as the unique end point in [@HPV89]. Our construction differs in the starting points of the paths in $H$. In [@HPV89], the starting point corresponds to the home subcube, which is *universally unique*: the flow from any subcube outside the path leads towards that point. Indeed, we cannot imitate this behaviour for the starting point of every path in $H$. However, this does not complicate our reduction, because all other starting points are also [PPAD]{}-hard to find. In particular, we can construct $f$ in a similar fashion to the end points of the paths, thereby creating additional fixed points. For each starting point, consider the corresponding subcube of $\left[0,1\right]^{2n+2}$. The values of $f$ on the facet $F_{1}$ in the direction of the path are already determined by the next subcube in the path. Now, we let $g\left(\mathbf{x}\right)=\delta\mathbf{\xi}_{2n+2}$ uniformly for every point $\mathbf{x}$ on the opposite facet, $F_{0}$.
For any $\mathbf{x}'$ between the facets we interpolate by taking the weighted average of the values $f\left(\mathbf{x}_{0}^{'}\right)$ and $f\left(\mathbf{x}_{1}^{'}\right)$ on the projections of $\mathbf{x}'$ on $F_{0}$ and $F_{1}$, respectively. (This corresponds to [@HPV89]’s “Cartesian interpolation”.) Notice that this subcube also satisfies $g\left(\mathbf{x}\right)=\delta\mathbf{\xi}_{2n+2}$ for all points on facets that do not face the rest of the path. Outside the subcubes corresponding to the additional paths, all the properties of the mapping are trivially preserved. On the interface between any subcube in an additional path, and any other subcube which is not consecutive in the same path, all the properties are again preserved since adding the paths does not change the value of $f$ near those facets. Finally, within each path, $f$ is constructed exactly the same way as the single path in [@HPV89], and therefore it is easy to see that all the properties continue to hold, except of course for the new fixed points at the starting points of the additional paths. We conclude that it is [PPAD]{}-hard to find an approximate fixed point of $f$. From Brouwer to WSNE -------------------- Our third and final step in the proof reduces the problem of finding approximate fixed points of $f$ to that of finding approximate well-supported Nash equilibria. The reduction we use is based on [@Shm12_blog] and appears almost exactly in this format in [@Bab13_query_complexity]. (Essentially [@Bab13_query_complexity]) [SuccinctBrouwer]{}$\left(n,M,\epsilon\right)$ $\leq_{P}$ [SuccinctNash]{}$\left(2n,k+1,\frac{3}{4k^{2}}\right)$, where $k=\lceil\frac{3+M}{\epsilon}\rceil$. We construct a game with two groups of $2n+2$ players each. The action set of each player corresponds to $\left\{ 0,1/k,\dots,1\right\} $. We denote by $\mathbf{a}=\left(a_{1},\dots,a_{2n+2}\right)$ the strategies chosen in the first group, and by $\mathbf{b}=\left(b_{1},\dots,b_{2n+2}\right)$ those chosen in the second group.
Each player in the first group attempts to imitate the behaviour of the corresponding player in the second group. Her utility is given by $$u_{i}\left(a_{i},b_{i}\right)=-\left|a_{i}-b_{i}\right|^{2}$$ The players in the second group attempt to imitate the value of $f$, when applied to the vector of actions taken by all the players in the first group. The utility of the $i$-th player is $$v_{i}\left(b_{i},\mathbf{a}\right)=-\left|f_{i}\left(\mathbf{a}\right)-b_{i}\right|^{2}$$ Observe that when the $i$-th player in the second group (henceforth, player $\left(i,2\right)$) applies a mixed strategy, the expected utility for player $\left(i,1\right)$ (the $i$-th player in the first group) is given by: $$\mathbb{E}\left[u_{i}\left(a_{i},b_{i}\right)\right]=-\left|a_{i}-\mathbb{E}\left(b_{i}\right)\right|^{2}-\mathbf{Var}\left(b_{i}\right)$$ Let $\alpha_{i}\in\left\{ 0,1/k,\dots,1\right\} $ be such that $\mathbb{E}\left(b_{i}\right)\in\left[\alpha_{i},\alpha_{i}+1/k\right]$, and assume w.l.o.g. that $\mathbb{E}\left(b_{i}\right)\in\left[\alpha_{i},\alpha_{i}+1/(2k)\right]$. Then we can lower-bound the expected utility of player $\left(i,1\right)$ when playing $\alpha_{i}$: $$\mathbb{E}\left[u_{i}\left(\alpha_{i},b_{i}\right)\right]\geq-\frac{1}{4k^{2}}-\mathbf{Var}\left(b_{i}\right)$$ On the other hand, for any $\gamma\notin\left\{ \alpha_{i},\alpha_{i}+1/k\right\} $, $$\mathbb{E}\left[u_{i}\left(\gamma,b_{i}\right)\right]\leq-\frac{1}{k^{2}}-\mathbf{Var}\left(b_{i}\right)$$ Therefore, in every $\frac{3}{4k^{2}}$-NE, the support of the mixed strategy of player $\left(i,1\right)$ is restricted to $\left\{ \alpha_{i},\alpha_{i}+1/k\right\} $.
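The support-restriction bound just derived can be checked numerically. The sketch below uses an arbitrary illustrative grid size $k$ (not the value from the theorem) and exploits the fact that the variance term is common to all actions and therefore cancels from utility differences; it confirms that deviating to $\alpha_{i}$ gains strictly more than $\frac{3}{4k^{2}}$ over any action outside $\left\{ \alpha_{i},\alpha_{i}+1/k\right\}$.

```python
# Numerical check of the support-restriction argument: if E[b_i] lies in
# [alpha, alpha + 1/(2k)], then playing alpha beats any grid action gamma
# outside {alpha, alpha + 1/k} by more than 3/(4 k^2).
k = 50  # illustrative grid size, an arbitrary choice

def exp_utility(a, Eb):
    # E[u_i(a, b_i)] = -(a - E[b_i])^2 - Var(b_i); the variance term is the
    # same for every action a, so it cancels from utility differences.
    return -((a - Eb) ** 2)

worst_gain = min(
    exp_utility(ia / k, ia / k + s / (40 * k)) - exp_utility(ig / k, ia / k + s / (40 * k))
    for ia in range(k)          # alpha = ia / k
    for s in range(21)          # E[b_i] sweeps [alpha, alpha + 1/(2k)]
    for ig in range(k + 1)      # gamma = ig / k
    if ig not in (ia, ia + 1)   # gamma outside {alpha, alpha + 1/k}
)

# Deviating to alpha always gains strictly more than 3/(4 k^2), so such a
# gamma cannot be in the support of any 3/(4 k^2)-NE.
assert worst_gain > 3 / (4 * k * k)
```

The minimum gain over the sweep is $1/k^{2}$ (attained when $\mathbb{E}(b_{i})=\alpha_{i}$ and $\gamma=\alpha_{i}-1/k$), comfortably above the $\frac{3}{4k^{2}}$ threshold.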
For the second group of players, we have $$\mathbb{E}\left[v_{i}\left(b_{i},\mathbf{a}\right)\right]=-\left|\mathbb{E}\left(f_{i}\left(\mathbf{a}\right)\right)-b_{i}\right|^{2}-\mathbf{Var}\left(f_{i}\left(\mathbf{a}\right)\right)$$ Let $\beta_{i}\in\left\{ 0,1/k,\dots,1\right\} $ be such that $\mathbb{E}\left(f_{i}\left(\mathbf{a}\right)\right)\in\left[\beta_{i},\beta_{i}+1/k\right]$; then in every $\frac{3}{4k^{2}}$-NE, the support of the mixed strategy of player $\left(i,2\right)$ is restricted to $\left\{ \beta_{i},\beta_{i}+1/k\right\} $. Finally, given a $\frac{3}{4k^{2}}$-NE, we use the Lipschitz property of $f$ to derive an approximate fixed point. Notice that for $\mathbf{\alpha}$ and $\mathbf{\beta}$ as defined above, $$\left|\alpha_{i}-\beta_{i}\right|\leq\left|\alpha_{i}-\mathbb{E}\left(b_{i}\right)\right|+\left|\mathbb{E}\left(b_{i}\right)-\beta_{i}\right|\leq\frac{2}{k}$$ Likewise, $$\left|\beta_{i}-f_{i}\left(\mathbf{\alpha}\right)\right|\leq\left|\beta_{i}-\mathbb{E}\left(f_{i}\left(\mathbf{a}\right)\right)\right|+\left|\mathbb{E}\left(f_{i}\left(\mathbf{a}\right)\right)-f_{i}\left(\mathbf{\alpha}\right)\right|\leq\frac{1}{k}+\frac{M}{k}\mbox{ ,}$$ where $M$ is the Lipschitz constant of $f$. Therefore $\left|\mathbf{\alpha}-f\left(\mathbf{\alpha}\right)\right|_{\infty}\leq\frac{3+M}{k}$, so $\alpha$ is a $\frac{3+M}{k}$-approximate fixed point of $f$. Open problems ============= We proved that finding an $\epsilon$-well-supported Nash equilibrium is [PPAD]{}-hard even for constant $\epsilon$. This is a modest step towards a better understanding of the computational complexity of approximating Nash equilibrium in games with a large number of players. Many important questions remain open: #### NE vs WSNE As mentioned earlier, in the domain of constant approximations, finding Nash equilibria may be strictly easier than finding well-supported Nash equilibria.
[*What is the complexity of finding $\epsilon$-ANE?*]{} We note that a similar obstacle was encountered by Babichenko [@Bab13_query_complexity], in the case of query complexity, for essentially the same reasons. #### Simpler games Other, more restrictive forms of succinct games have been studied quite extensively before (see e.g. [@Pap07_succint-games_inbook]). It would be interesting to improve our understanding of the complexity of those games. In particular, [*What is the complexity of finding an $\epsilon$-NE in bounded-degree graphical games?*]{} #### Upper bound on complexity Embarrassingly, we do not have a (non-trivial) upper bound on the complexity of finding an $\epsilon$-NE. One particular obstacle to showing that this problem belongs to the class , is that it is not even clear how to find an approximate best response to mixed strategies without using randomness. (In particular, by [@SV12_succint_games], this problem is also -hard; we do not know whether is contained in .) [*What is the right complexity classification of the problem of finding an $\epsilon$-NE?*]{} Addendum ======== Since the first version of this paper was posted, we found solutions to the first two open problems mentioned above [@Rub14b_simpler-games]. [^1]: UC Berkeley. I am grateful to Christos Papadimitriou for inspiring discussions, comments, and advice. I would also like to thank Constantinos Daskalakis and Paul Goldberg for pointing out important missing references in an earlier version. [^2]: See [@Das13_multiplicative_hardness] for a short discussion on terminology. [^3]: There is some disagreement in the literature about the terminology: a very similar definition appeared in [@SV12_succint_games] as [*circuit games*]{}; [@FKS95_poly-definable-games] discuss [*polynomially definable games*]{}; while [@FIKU08_succinct-zero-sum] did use the term succinct games. [^4]: Approximation here is in the sense of $L^{\infty}$. 
In fact, it seems to give a constant hardness of approximation in $L^{1}$, but unfortunately it is not clear how to use that for a reduction to Nash equilibria. [^5]: See also [@Bab13_query_complexity] for the choice of constants. Please note the change in variable names: $h,M,2^{-p}$ in [@HPV89] correspond to $\delta,\lambda,\epsilon$, respectively, in [@Bab13_query_complexity].
--- abstract: | It has been suggested based on analytic theory that even in non-rotating supernova progenitors stochastic spin-up by internal gravity waves (IGWs) during the late burning stages can impart enough angular momentum to the core to result in neutron star birth spin periods below $100 \, \mathrm{ms}$, and a relatively firm upper limit of $500 \, \mathrm{ms}$ for the spin period. We here investigate this process using a 3D simulation of oxygen shell burning in a $3M_\odot$ He star. Our model indicates that stochastic spin-up by IGWs is less efficient than previously thought. We find that the stochastic angular momentum flux carried by waves excited at the shell boundary is significantly smaller for a given convective luminosity and turnover time than would be expected from simple dimensional analysis. This can be explained by noting that the waves launched by overshooting convective plumes contain modes of opposite angular wave number with similar amplitudes, so that the net angular momentum of excited wave packets almost cancels. We find that the wave-mediated angular momentum flux from the oxygen shell follows a random walk, but again dimensional analysis overestimates the random walk amplitudes since the correlation time is only a fraction of the convective turnover time. Extrapolating our findings over the entire life time of the last burning stages prior to collapse, we predict that the core angular momentum from stochastic spin-up would translate into long birth spin periods of several seconds for low-mass progenitors and no less than $100\, \mathrm{ms}$ even for high-mass progenitors. author: - | Lucy O. McNeill$^{1}$[^1] and Bernhard Müller$^{1}$\ $^{1}$ School of Physics and Astronomy, Monash University, Victoria 3800, Australia bibliography: - 'refs.bib' date: 'Accepted XXX. 
Received YYY; in original form ZZZ' title: 'Stochastic Core Spin-Up in Massive Stars – Implications of 3D Simulations of Oxygen Shell Burning' --- \[firstpage\] waves — hydrodynamics — stars: evolution — stars: massive — stars: interiors — stars: neutron Introduction ============ The birth spin periods of pulsars potentially provide a window into the inner workings of angular momentum transport in massive stars and of the core-collapse supernova explosion mechanism. The bulk of the observed solitary neutron star population has birth spin periods of hundreds of ms, and some as fast as tens of ms [@Fau2006; @Popov2010; @Popov2012; @Gullon2014]. Slow birth spin periods of several hundred ms can be inferred from age and spin-down measurements (e.g., PSR J1955+5059 in [@Noutsos2013]), but without independent measures of the age and braking index, the slow end of the population cannot be tightly constrained. Even for young pulsars with spin periods of $\mathord{>}100\, \mathrm{ms}$ (e.g. PSR J0248+6021, which has a spin period of $217\,\mathrm{ms}$ and age $62\, \mathrm{kyr}$ [@Theureau2011]), uncertainties in possible instabilities and spin-down mechanisms during the first years mean that one needs to be careful in drawing conclusions about the birth spin period from such low spin rates. Despite long-standing efforts to understand the pre-collapse spin periods of massive stars [e.g., @Heger2000; @hirschi_04; @Heger2005] and spin-up/spin-down processes during the supernova explosion [@Blondin2007; @Rantsiou2011; @Wongwathanarat2013; @Kazeroni2016; @BM2019a; @stockinger_20], it is still not clear what shapes the observed period distribution. In particular, there are still considerable uncertainties concerning the angular momentum transport processes in stellar interiors that determine the pre-collapse rotation profiles of massive stars.
Stellar evolution models incorporating magnetic torques [e.g., @Heger2005; @Suijs2008; @Cantiello2014] from the Tayler-Spruit dynamo [@Spruit2002] have long defined the state of the art for the treatment of stellar rotation in massive stars. In recent years, however, asteroseismology has revealed unexpectedly slow core rotation rates in low-mass red giants [@Beck2012; @Beck2014; @Deh2012; @Deh2014; @Mosser2012], which cannot be explained by the classic Tayler-Spruit dynamo, suggesting that even more efficient angular momentum transport mechanisms must operate in nature [@Fuller2014; @Cantiello2014; @Wheeler2015]. Different solutions have been proposed to resolve this problem. @Fuller2019 have proposed a modification of the Tayler-Spruit dynamo which efficiently slows down progenitors to be roughly consistent with both the observed neutron star birth spin periods and, unlike the Tayler-Spruit dynamo, the core rotation rates of red giants. It cannot, however, explain the rotation rates of intermediate mass stars [@Denhartough2020]. As an alternative solution that does not rely on magnetic torques, many studies have investigated angular momentum transport by internal gravity waves (IGWs) in low- and high-mass stars during various evolutionary phases [e.g., @Zahn1997; @Kumar1999; @Talon2003; @Charbonnel2005; @Rogers2006; @Fuller2014; @Belkacem2015; @Pincon2016; @Pincon2017] and suggested that IGWs can maintain strong core-envelope coupling only up to the subgiant branch. Angular momentum transport by IGWs could, however, again play an important role during more advanced burning stages in massive stars because the enormous convective luminosities and relatively high Mach numbers lead to a strong excitation of IGWs at convective boundaries [@Fuller2015]. 
The familiar wave filtering mechanism for prograde/retrograde modes could then effectively limit the degree of differential rotation in the interior and spin down the core on rather short time scales if rotation is fast to begin with [@Fuller2015]. They find that for initially rapidly rotating progenitors, angular momentum transport via g-modes is not efficient enough to explain the pulsar population. They find that spin-down imposes an upper limit on the spin frequency, corresponding to a minimum spin period of $\sim 3\, \mathrm{ms}$, which is consistent with the range of possible birth spins for the fastest-spinning solitary pulsar ([@Marshall1998], with a spin period of 16 milliseconds today). Additionally, @Fuller2015 pointed out that IGWs could lead to a stochastic core *spin-up* in initially non-spinning progenitors. The idea here is that stochastically excited IGWs from the convective boundaries of the Si, O, or C shell propagate into the core, where they break and deposit their randomly varying angular momentum. The core angular momentum executes a random walk, as long as there is an influx of IGWs from strong burning shells. In principle, IGWs can also be excited by core convection or shells that end up in the core, and carry angular momentum *outwards*. This mirror process was found to be subdominant in the case studied by @Fuller2015, however. @Fuller2015 argue that this stochastic spin-up will result in typical neutron star birth periods scattering over a range from about $40\, \mathrm{ms}$ up to an effective upper limit of $\mathord{\sim} 500\,\mathrm{ms}$ without the need to assume any progenitor rotation. If the core angular momentum is indeed determined by this stochastic spin-up process, this could explain the paucity of long spin periods of $\mathord{\gtrsim} 500\,\mathrm{ms}$ among young neutron stars.
Aside from the implications for neutron star spin periods, the excitation of IGWs during the late convective burning stages is also of interest because it could drive mass loss in supernova progenitors shortly before collapse [@QS2012; @Fuller2017] or lead to envelope inflation [@Mcley2014]. This could explain observations of circumstellar material (CSM) from late pre-collapse outbursts in a significant number of observed supernovae [for an overview, see @Foley2011; @Smith2014; @Bilinski2015; @Smith2017], although other mechanisms such as flashes from degenerate shell burning [@Woosley2015] or the pulsational pair instability [@Woosley2007] may be required to account for more spectacular cases with several solar masses of CSM in Type IIn supernovae. In this paper, we further investigate the excitation of IGWs during late-stage convective burning and its implication for stochastic core spin-up. Different from the problem of wave excitation by convection during early evolutionary stages, this problem cannot be addressed by asteroseismic measurements. The extant studies of @QS2012 [@Fuller2015; @Fuller2017] strongly rely on the analytic theory of wave excitation by turbulent convection that has been developed over decades [@Lighthill1952; @Townsend1966; @GK1990; @LQ2013]. Numerical simulations of IGW excitation at convective boundaries have been conducted for earlier evolutionary stages in 2D [@Rogers2013] and 3D [@Alvan2014; @Alvan2015; @Edelmann2019], but cannot be easily extrapolated to the late, neutrino-cooled burning stages where the Péclet number is lower and compressibility effects play a bigger role. During the early evolutionary stages, asteroseismology can be used to directly test the underlying theories for IGW excitation by convection [@Aerts2015; @Aerts2017b]. 
On the other hand, a number of studies have already addressed the late burning stages in 3D [e.g., @Meakin2007; @Arnett2008; @BM2016; @Jones2017; @Cristini2017; @Andrassy2018; @BM2019a; @Yadav2019], but without addressing the problem of stochastic spin-up. In this study, we use a full $4 \pi$–3D numerical simulation to quantitatively study the idea of stochastic spin-up for the first time. We consider a $3 M_\odot$ He star model from @BM2019a, which we evolved for 7.8 minutes, or $\mathord{\sim} 35$ convective turnover times. We consider the excitation of waves by vigorous convection in the oxygen burning shell of this model to check the assumptions underlying the theory of @Fuller2015 [F15 henceforth] for stochastic core spin-up. We focus on the outward flux of angular momentum from the oxygen shell since wave excitation at the outer boundaries of a convective shell or core is more tractable from the numerical point of view than the excitation of inward-propagating waves from the inner boundary of a convective shell. Since the physical process of wave excitation is the same, this approach nonetheless allows us to study the process in the relevant physical regime and check the scaling relations and dimensionless parameters used in the theory of F15. The paper is structured as follows: In Section \[sec:theory\], we review key elements of the theory of F15 for the stochastic spin-up of supernova progenitor cores by IGWs. In Section \[sec:setup\] we describe the 3D progenitor model. In Section \[sec:results\] we analyze the simulation using spherical Favre decomposition to determine the energy and angular momentum flux in convectively stable zones. We compare these results with the random walk model of @Fuller2015, and finally comment on the implications of our results for neutron star birth spins. 
In Section \[sec:ang\_cons\] we discuss potential numerical issues that may affect our results, such as numerical angular momentum conservation errors, and then summarise our findings in Section \[sec:conclusions\]. Theory of Stochastic Spin-Up {#sec:theory} ============================ Gravity waves ($g$-modes) are generated by turbulent convection at convective boundaries, and propagate in the neighboring convectively stable regions. The theory of F15 relies on a few key assumptions about the wave excitation process to model the stochastic spin-up of the core in non-rotating or slowly rotating progenitors. The first of these assumptions concerns the (time-averaged) IGW energy flux $\dot{E}$ in the stable regions. Following established analytic theory for $g$-mode excitation [@Lighthill1952; @Townsend1966; @GK1990; @LQ2013], F15 assume that the wave energy flux $\dot{E}$ depends on the convective luminosity $L_\mathrm{conv}$ in the driving motions and the convective Mach number $\mathcal{M}$, $$\label{eq:e_wave} \dot{E}= L_\mathrm{conv} \mathcal{M}^\alpha,$$ where the power-law exponent $\alpha$ is expected to lie in the range $5/8 \leq \alpha \leq 1$. While the wave energy flux can be assumed to vary only mildly in time, the overshooting convective plumes will continuously excite different modes, and the overall angular momentum flux $\dot{\mathbf{J}}$ carried by the waves will fluctuate according to the wave numbers of the excited modes. F15 assume that the direction of the angular momentum flux vector remains correlated over roughly one convective turnover time $\tau$ and varies randomly on longer time scales, so that the time-integrated spin-up $\Delta \mathbf{J} =\int\dot{\mathbf{J}}\,{\mathrm{d}}t$ can be approximated as a random walk whose step is the typical angular momentum $\delta J$ carried by a correlated wave packet per turnover time $\tau$.
Thus, after $\mathcal{N}$ convective turnover times, the expectation value $\langle \Delta J^2 \rangle$ is [^2] $$\langle \Delta J^2 \rangle = \mathcal{N} \delta J^2.$$ At this stage, one still needs to specify the typical angular momentum $\delta J$ carried by one correlated wave packet. For a single mode of spherical harmonics degree $\ell$ and order $m$, the $z$-component $\dot{J}_z$ of the flux of angular momentum is related to the wave energy flux as [@Zahn1997; @Kumar1999], $$\dot{J}_z=\frac{m}{\omega}\dot{E}, \label{eq:GKE}$$ where $\omega$ is the mode frequency. F15 suggest replacing $m$ and $\omega$ by appropriate averages $\bar{m}$ and $\bar{\omega}$ for the wave packet so that $$\dot{J}_\mathrm{F15}=\frac{\bar{m}}{\bar{\omega}}\dot{E} =\frac{\bar{m}}{\bar{\omega}}\mathcal{M}^\alpha\, L_\mathrm{conv}. \label{eq:RWJ}$$ They argue that the excited modes will predominantly have periods of the order of the convective turnover time so that $\bar{\omega}=2\pi/\tau$. For the average wave number, F15 choose $\bar{m}=1$, arguing that this is the most conservative estimate provided that each wave packet consists only of modes of positive or negative $m$. We shall revisit this assumption later in Section \[sec:results\]. Integrating over one turnover time and using Equation (\[eq:e\_wave\]), one then finds $$\delta J_\mathrm{F15} = \tau \dot{J}_\mathrm{F15} = \frac{\bar{m}\, \tau^2 \mathcal{M}^\alpha L_\mathrm{conv}}{2\pi},$$ and the expected spin-up after $\mathcal{N}$ turnovers is $$\langle \Delta J_\mathrm{F15}^2 \rangle^{1/2} = \frac{\sqrt{\mathcal{N}}\,\delta J_\mathrm{F15}}{\sqrt{3}} = \frac{\sqrt{\mathcal{N}}\, \bar{m}\, \tau^2 \mathcal{M}^\alpha L_\mathrm{conv}}{2\pi\sqrt{3} } \label{eq:RWMa},$$ where the factor $1/\sqrt{3}$ accounts for the random orientation of the vectorial angular momentum carried by different wave packets.
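The random-walk scalings above are easy to check by Monte Carlo. The sketch below (with arbitrary illustrative values for $\mathcal{N}$, $\delta J$, and the ensemble size) draws isotropically oriented steps of fixed magnitude and verifies $\langle \Delta J^2 \rangle = \mathcal{N}\, \delta J^2$, together with the factor of $1/3$ per component that gives rise to the $1/\sqrt{3}$ in Equation (\[eq:RWMa\]).

```python
import numpy as np

# Monte Carlo check of the random-walk scalings: N steps of fixed magnitude
# delta_J in isotropically random directions.  All numbers (N, delta_J,
# ensemble size) are arbitrary illustrative choices.
rng = np.random.default_rng(42)
n_walkers, N, delta_J = 4000, 400, 1.0

# Isotropic unit vectors from normalised Gaussian draws
steps = rng.normal(size=(n_walkers, N, 3))
steps /= np.linalg.norm(steps, axis=-1, keepdims=True)

Delta_J = delta_J * steps.sum(axis=1)        # Delta J after N turnovers

msq = np.mean(np.sum(Delta_J**2, axis=-1))   # <Delta J^2>
msq_z = np.mean(Delta_J[:, 2] ** 2)          # single-component spin-up

assert abs(msq / (N * delta_J**2) - 1) < 0.1        # <Delta J^2> = N dJ^2
assert abs(msq_z / (N * delta_J**2 / 3) - 1) < 0.1  # factor 1/3 per axis
```

The per-axis variance is one third of the total, which is the origin of the $1/\sqrt{3}$ when quoting the spin-up about a fixed rotation axis.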
Shell Burning Simulation {#sec:setup} ======================== The challenge is now to determine whether the key approximations in Equations (\[eq:e\_wave\]), (\[eq:RWJ\]), and (\[eq:RWMa\]) are borne out by full 3D simulations. We analyse a 3D model of convective shell burning in a non-rotating $3 M_\odot$ helium star from @BM2019a. Following the methodology of @BM2016, we map the underlying 1D stellar evolution model from the <span style="font-variant:small-caps;">Kepler</span> code [@weaver_78; @heger_10] into the finite-volume code <span style="font-variant:small-caps;">Prometheus</span> [@fryxell_89; @fryxell_91] at a late pre-collapse stage about 8 minutes before the onset of collapse. Our simulation includes the mass shells initially located between $2,420\,\mathrm{km}$ and $16,500\, \mathrm{km}$, which covers three distinct convective zones, as illustrated (in turquoise) by the progenitor entropy profile in Figure \[fig:midpoints\]. Initially there are three convective burning zones, (I), (II), and (III), which are the oxygen, neon, and carbon shells, respectively. The neon burning region (II) is consumed before collapse and is so thin that $g$-modes will in fact be able to propagate through it for much of the simulation. By $344\, \mathrm{s}$ (5.77 minutes) region (II) has disappeared completely, and there is just one convectively stable region between (I) and (III), shown in Figure \[fig:lateS\]. The inner boundary of the grid is contracted in accordance with the mass shell trajectory from the 1D <span style="font-variant:small-caps;">Kepler</span> model. We use uniform spacing in $\log r$ for the radial coordinate and the axis-free overset Yin-Yang grid [@kageyama_04] as implemented by @melson_15a to cover the full sphere. The grid resolution is $400 \times 56 \times 148$ zones in radius $r$, latitude $\theta$, and longitude $\varphi$ on each of the Yin-Yang patches, which corresponds to an angular resolution of $2^\circ$.
Extremum-preserving 6th-order PPM reconstruction has been implemented in <span style="font-variant:small-caps;">Prometheus</span> following @colella_08. \[section:progenitor\] ![Spherically averaged profile of the specific entropy $s$ (solid black), against our defined regions at the early time $t=144\, \mathrm{s}$. Convective (turquoise) and convectively stable (coral) regions are labelled (I)–(III), with increasing darkness from the inner boundary to the outer boundary. Turbulent fluxes between the convective regions (I) and (III) are evaluated at the radius $R_\mathrm{flux} =R_{m=1.65 M_\odot}$, which is in the convectively stable region between zones (II) and (III) up until around $400\, \mathrm{s}$. The second convective region disappears completely at $344\, \mathrm{s}$; the shell structure after the disappearance of region (II) is shown in Figure \[fig:lateS\]. []{data-label="fig:earlyS"}](rad-conv-zones.pdf){width="\linewidth"} ![Spherically averaged profile of the specific entropy $s$ (solid black) at the late time $t=368\, \mathrm{s}$, after the disappearance of the second convective region (II), with the same colour scheme as Figure \[fig:earlyS\]. []{data-label="fig:lateS"}](rad-conv-zones-late.pdf){width="\linewidth"} Results {#sec:results} ======= Visual identification of excited modes {#sec:flow} -------------------------------------- We run the model for 7.8 minutes during oxygen burning.
It takes about one minute for convection to fully develop in region (I) from the 1D initial conditions, so we only consider the phase *after* the first minute in our subsequent analysis. In Figure \[fig:visit\] we show 2D slices of the absolute radial velocity $|v_r|$ and specific entropy $s$. We have restricted the logarithmic velocity scale to $5$–$620\, \mathrm{km}\, \mathrm{s}^{-1}$ to show both the convective motions in region (I) and the excited waves in the surrounding stable region. The boundary between region (I) and the stable region is clearly identifiable in the entropy plot as a relatively sharp discontinuity at a radius of about $4000\, \mathrm{km}$; it is evident that the convective plumes do not overshoot strongly into the stable regions. This clearly identifies the slower, laminar motions of a few $10\, \mathrm{km}\, \mathrm{s}^{-1}$ as excited modes. The excited modes are of moderately high wave number and clearly not dominated by dipole or quadrupole modes. ![image](vr_plot_final){width="\linewidth"} ![image](sto_both){width="\linewidth"} Expected IGW frequencies ------------------------ We compute the Brunt–Väisälä frequency $\omega_\mathrm{BV}$ in the convectively stable regions using $$\omega_\mathrm{BV}^2 = g \left( \frac{1}{\Gamma}\frac{\mathrm{d \ ln} P}{\mathrm{d}r} - \frac{\mathrm{d \ ln}\rho}{\mathrm{d}r}\right), \label{eq:BV}$$ where $g$ is the local gravitational acceleration $$g(r) = \frac{G M(r)}{r^2},$$ and $\Gamma$ is the adiabatic exponent $\Gamma = (\partial \mathrm{ln} P/ \partial \mathrm{ln} \rho)_s = \rho c_s^{2}/P$ in terms of the sound speed $c_s$, the pressure $P$, and the density $\rho$ at radius $r$. ![Brunt–Väisälä frequency $\omega_\mathrm{BV}$ as a function of radius $r$ at $368\, \mathrm{s}$ (same as Figure \[fig:lateS\]), after the middle convective region has disappeared.
Based on this profile, gravity waves may propagate in the coral convectively stable zones at frequencies below $\mathord{\sim} 0.1\, \mathrm{Hz}$. []{data-label="fig:BV"}](BV_368.pdf){width="50.00000%"} In Figure \[fig:BV\] we plot $f_\mathrm{BV} = \omega_{\mathrm{BV}}/2\pi$ for the simulated region as a function of radius at a late time of $368\, \mathrm{s}$ (same time as Figure \[fig:lateS\]). In the first convectively stable region, $f_\mathrm{BV}$ peaks between $0.1\texttt{-}0.4\, \mathrm{Hz}$ during the whole simulation. After about $350\, \mathrm{s}$ it stays at $\mathord{\sim} 0.1\,\mathrm{Hz}$. Waves with frequencies lower than these may propagate as gravity waves in this region. Convective and convectively stable regions (defined by the entropy gradient of Figure \[fig:lateS\]) at this time step are included with the same colour scheme as Figure \[fig:earlyS\]. The expected mode frequencies are of a similar order as the convective turnover frequency, and hence the conditions for IGW excitation by convective motions should be close to optimal. Note that, strictly speaking, the Brunt–Väisälä frequencies computed with Equation (\[eq:BV\]) are only valid in the linear approximation, where the runaway (convectively unstable) or oscillation (convectively stable) of a stochastically displaced fluid parcel is determined by the local buoyancy excess upon displacement. The fact that there are real Brunt–Väisälä frequencies outside of our convectively stable regions (defined by the entropy gradient) can be explained by the deceleration of fluid parcels as they approach the boundary, where they can penetrate (overshoot) it (see [@Mocak2008] for a detailed explanation). Angular momentum flux in excited waves {#section:reynolds} -------------------------------------- In order to quantify the energy and angular momentum flux carried by the waves excited at convective boundaries, we make use of a spherical Reynolds/Favre decomposition of the flow [@Favre1965].
We use hats (or, alternatively, angled brackets) and primes for the volume-weighted Reynolds averages and fluctuating components of extensive quantities such as the density $\rho$ and pressure $P$ (e.g., $\hat{\rho}$ and $\rho'$). These are defined for any such quantity $X$ as $$\begin{aligned} \hat{X}(r) &=& \langle X \rangle = \frac{1}{4\pi}\int X \, d \Omega, \\ X'(r,\theta,\varphi)&=&X-\hat{X}(r).\end{aligned}$$ Mass-weighted Favre averages and fluctuating components of intensive quantities like the internal energy density $\epsilon$ are denoted by tildes and double primes (e.g., $\tilde{\epsilon}$, and $\epsilon''$). For any such quantity $Y$, we have $$\begin{aligned} \tilde{Y} (r) &=& \frac{ \int \rho Y \, d \Omega}{\int \rho \, d \Omega}, \\ Y''(r,\theta,\varphi)&=&Y-\tilde{Y}(r).\end{aligned}$$ Disregarding subdominant terms for the work done by turbulent Reynolds stresses, the Favre-averaged energy equation reads [@BM2019b], $$\label{eq:favre} \begin{split} & \frac{\partial}{\partial t} \left( \hat{\rho} \tilde{\epsilon} + \hat{\rho} \frac{|\tilde{\mathbf{v}}|^2}{2} + \hat{\rho} \frac{\widetilde{|\mathbf{v}''|^2}}{2}\right) + \nabla \cdot \left[ \left( \hat{\rho} \tilde{\epsilon} + {\hat{\rho}\frac{|\tilde{\mathbf{v}}|^2}{2}} +\hat{P} \right)\tilde{\mathbf{v}} \right] \\ & + \nabla \cdot \left[ \langle \rho \epsilon'' \mathbf{v}'' \rangle + \langle P' \mathbf{v}'' \rangle + \langle \rho \mathbf{v}'' \frac{|\mathbf{v}''|^2}{2}\rangle \right] =0. \end{split}$$ The relevant terms containing the energy flux carried by waves excited at convective boundaries are the convective energy flux $\mathbf{F}_\mathrm{conv} = \langle \rho \epsilon'' \mathbf{v}''\rangle$ (for the turbulent transport of internal energy), the “acoustic” energy flux $\mathbf{F}_\mathrm{sound} = \langle P' \mathbf{v}'' \rangle$, and the turbulent kinetic energy flux $\mathbf{F}_\mathrm{kin} = \langle \rho \mathbf{v}'' \frac{|\mathbf{v}''|^2}{2}\rangle$.
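As an aside, the defining properties of the two averages ($\langle \rho' \rangle = 0$ for the Reynolds fluctuation and $\langle \rho Y'' \rangle = 0$ for the Favre fluctuation) are easy to verify on discretised shell data. The sketch below is our own illustration; the equal-solid-angle grid and the synthetic profiles are assumptions, not the simulation's actual setup.

```python
import numpy as np

def reynolds_avg(X, w):
    """Volume-weighted (Reynolds) average over one spherical shell.
    w holds the solid-angle weights of the angular cells."""
    return np.sum(w * X) / np.sum(w)

def favre_avg(rho, Y, w):
    """Mass-weighted (Favre) average over one spherical shell."""
    return np.sum(w * rho * Y) / np.sum(w * rho)

rng = np.random.default_rng(0)
n = 10_000                                        # angular cells in the shell
w = np.full(n, 4.0 * np.pi / n)                   # equal solid angles (assumed)
rho = 1e5 * (1 + 0.1 * rng.standard_normal(n))    # g cm^-3, with fluctuations
ur = 1e6 * rng.standard_normal(n)                 # cm s^-1 radial velocity

rho_hat = reynolds_avg(rho, w)                    # \hat{rho}
rho_prime = rho - rho_hat                         # rho'
ur_tilde = favre_avg(rho, ur, w)                  # \tilde{u}_r
ur_pp = ur - ur_tilde                             # u_r''

# Defining properties of the two decompositions (exact up to round-off):
assert np.isclose(reynolds_avg(rho_prime, w), 0.0, atol=1e-3)
assert np.isclose(favre_avg(rho, ur_pp, w), 0.0, atol=1e-3)
```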
Strictly speaking, however, one can only properly separate the flux carried by gravity waves and acoustic waves using a decomposition of the fluctuating components into eigenmodes. When using the Favre decomposition of the energy equation, the wave energy flux is split between the three turbulent flux components. At the same time, acoustic waves and entrainment contribute to the turbulent fluxes; this is particularly problematic in the case of entrainment, which produces a *negative* energy flux near the convective boundary. Due to these difficulties in isolating the wave energy flux $\dot{E}$ from region (I), we do not test Equation (\[eq:e\_wave\]), for which there is strong justification from theory and simulations anyway [@GK1990; @LQ2013; @Pincon2016]. Instead, we assume $\dot{E} = \mathcal{M} L_\mathrm{conv}$, and directly test the dependence of $\dot{J}$ on $L_\mathrm{conv}$ and $\mathcal{M}$ in Equation (\[eq:RWJ\]). To obtain $\dot{J}$, we consider the Favre-averaged equation for the transport of angular momentum, which only contains two flux terms $\mathbf{F}_{\mathrm{adv}}$ and $\mathbf{F}_{\mathrm{turb}}$ for the mean (advective) and turbulent angular momentum flux, $$\frac{{\partial}\langle \rho \mathbf{l} \rangle}{{\partial}t} + \nabla\cdot \mathbf{F}_\mathrm{adv} + \nabla\cdot \mathbf{F}_\mathrm{turb} =0,$$ where $$\mathbf{F}_{\mathrm{turb}} = \langle \rho \mathbf{l}'' u_r''\rangle,$$ and $$\mathbf{F}_{\mathrm{adv}} = \langle \rho \rangle \tilde{\mathbf{l}} \tilde{u}_r = \hat{\rho} \tilde{\mathbf{l}} \tilde{u}_r .$$ For a full derivation, we refer to Appendix \[sec:favre\_j\]. For our purpose, only the turbulent angular momentum flux $\mathbf{F}_{\mathrm{turb}}$ is of interest, which contains the flux carried by gravity waves and acoustic waves. We evaluate the angular momentum flux out of region (I) at a fixed mass coordinate of $m = 1.65 M_\odot$.
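The split into $\mathbf{F}_{\mathrm{adv}}$ and $\mathbf{F}_{\mathrm{turb}}$ obeys the exact identity $\langle \rho l u_r \rangle = \hat{\rho}\tilde{l}\tilde{u}_r + \langle \rho l'' u_r'' \rangle$, which follows from $\langle \rho X'' \rangle = 0$. A quick numerical check on synthetic shell data (our own illustration; the random fields are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
rho = 1e5 * (1 + 0.1 * rng.standard_normal(n))  # density on one shell
ur = 1e6 * rng.standard_normal(n)               # radial velocity
l = 1e14 * rng.standard_normal(n)               # specific angular momentum (one component)

avg = lambda X: X.mean()                        # Reynolds average, equal weights
favg = lambda X: avg(rho * X) / avg(rho)        # Favre average

rho_hat, l_tilde, ur_tilde = avg(rho), favg(l), favg(ur)
F_adv = rho_hat * l_tilde * ur_tilde                    # mean (advective) flux
F_turb = avg(rho * (l - l_tilde) * (ur - ur_tilde))     # turbulent flux

# The total flux decomposes exactly into mean plus turbulent parts.
total = avg(rho * l * ur)
assert np.isclose(total, F_adv + F_turb, rtol=1e-6, atol=1e15)
```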
We choose a fixed mass shell rather than a fixed radius because this mass shell stays inside the convectively stable region until around $400\, \mathrm{s}$; a fixed analysis radius would be problematic because of the contraction of the shells during the simulation. Using the numerically computed turbulent angular momentum flux, we can now proceed to assess Equation (\[eq:RWJ\]). To evaluate the right-hand side (RHS) of Equation (\[eq:e\_wave\]), we use the maximum of the ratio of RMS velocity fluctuation $v_r''$ to the sound speed $c_\mathrm{s}$ in the convective region (I), $$\mathcal{M}=\mathrm{max}_\mathrm{I} \left( \frac{(\widetilde{v_r''^2})^{1/2}} {\tilde{c_s}}\right),$$ and use the volume-integrated nuclear energy generation rate $\dot{Q}_\mathrm{nuc}$ in region (I) as a proxy for the convective luminosity $L_\mathrm{conv}$, which is well justified under steady state conditions [@Arnett2008; @BM2016][^3]. Finally, we need the convective turnover time to evaluate Equation (\[eq:RWJ\]), which is computed from the width of the convective region (I) and the maximum convective velocity fluctuation $v_r''$, $$\tau = \Delta r/ \mathrm{max}_\mathrm{I} \left( v_r'' \right).$$ ![Ratio of the turbulent angular momentum flux $\dot{J} = 4\pi r^2 |\mathbf{F}_\mathrm{turb}|$ from our 3D simulation (MM20) to the prediction from Equation (\[eq:RWJ\]), based on our calculations of the convective Mach number $\mathcal{M}$ and convective burning luminosity $\dot{Q}_\mathrm{nuc}$ (and hence the energy flux via Equation (\[eq:e\_wave\])) and convective turnover frequency. Our computed angular momentum flux is smaller by a factor of $\mathord{\sim}10^2$ than what the model in F15 suggests when using the typical wave number $\bar{m}=1$.
After around $400\,\mathrm{s}$ (grey dashed line), the mass coordinate where we measure the angular momentum flux resides *inside* the inner convective region (I), so the ratio is no longer reliable.[]{data-label="fig:Eq3scaling"}](F15_eq_3.pdf){width="50.00000%"} In Figure \[fig:Eq3scaling\], we plot the ratio of the turbulent angular momentum flux $\dot{J}=4\pi r^2|\mathbf{F}_{\mathrm{turb}}|$ to the value $\dot{J}_\mathrm{F15}=\tau \mathcal{M} L_\mathrm{conv}/(2\pi) $ predicted by F15. Once convection in region (I) has reached a steady state, the angular momentum flux $\dot{J}$ only reaches a fraction of a few $\mathord{\sim} 10^{-2}$ of the flux predicted by F15. If we accept the scaling $\dot{E}=\mathcal{M}^{\alpha} L_\mathrm{conv}$ for the wave energy flux, this result implies that the effective average wave number of the excited wave packets must be *much smaller* than $\bar{m}=1$ despite the fact that the excited modes have rather *high* angular wave numbers as discussed in Section \[sec:flow\].[^4] It is, in fact, natural to expect that $\bar{m}=1$ is not a lower bound for the average wave number if one considers the dynamics of wave excitation at convective boundaries. Convective plumes are driven by radial buoyancy forces and therefore hit the convective boundary almost perpendicularly. Any overshooting plume will therefore tend to launch gravity waves propagating in all directions away from the plume (i.e., with wave numbers of opposing sign) with similar amplitude. A small asymmetry between modes of different $m$ can still arise if the plume wanders around, but the convective updrafts and downdrafts tend to be rather stationary in simulations of convection during the neutrino-cooled burning stages, and hence there is only a small asymmetry in amplitude between excited modes of opposite $m$. 
Contrary to the assumption of F15, $\bar{m}=1$ is therefore not a strict lower bound for a single wave packet launched by an overshooting plume; $\bar{m}<1$ is easily possible and in fact expected for waves excited by buoyancy-driven convection. Testing the Random Walk Assumption ---------------------------------- The relatively small values of the turbulent angular momentum flux already indicate that the mechanism for stochastic spin-up is considerably less efficient than F15 estimated. However, there is yet another potential problem that could further reduce the efficiency of stochastic spin-up. One might suspect that the excitation of modes by overshooting plumes is not a time-independent stochastic process that results in a random walk of the angular momentum of the convective region that launches the waves, and of the angular momentum in the destination region. Instead, there could be a regression to zero if the stochastic excitation were to predominantly produce prograde modes, which would drive the angular momentum of the driving region towards zero. Such a mechanism could operate independently of the familiar filtering mechanism for prograde and retrograde modes and limit stochastic fluctuations of the angular momentum of convective shells even when there is no strong differential rotation. In order to determine whether there is such a restoring mechanism, or whether the random walk assumption of F15 is valid, we first need to generalise Equation (\[eq:RWMa\]) to accommodate the slow, secular changes in convective luminosity and turnover time as the oxygen burning shell contracts. We can then compare the actual time-integrated angular momentum flux to this generalised random walk model. 
It is straightforward to determine the root-mean-square expectation value $\Delta J_\mathrm{RW}$ for a continuous random walk process by integrating the variance of each angular momentum component $\delta J_i^2=(\tau \dot{J}_i)^2=|\tau \dot{J}|^2/3$ of each wave packet times the assumed correlation frequency of the random walk process (i.e. the convective turnover frequency $\tau^{-1}$ in the model of F15), $$\Delta J_\mathrm{RW}(t) = \left[ \sum_{i=1}^{3} \int_0^t (\delta J_i)^2 \frac{{\mathrm{d}}t'}{\tau} \right]^{1/2} =\left[ \sum_{i=1}^{3} \int_0^t \tau \dot{J}_i(t')^2 \, {\mathrm{d}}t' \right]^{1/2}. \label{eq:J2}$$ ![The change in angular momentum, $\Delta J$, inside the mass coordinate $m=1.65 M_\odot$ based on the three different formulations of $\Delta J$ in Equations (\[eq:J2\]),(\[eq:J3\]), and (\[eq:J1\]). The angular momentum computed from the simulation (Equation \[eq:J3\]) is in red and the angular momentum computed assuming a random walk via Equation (\[eq:J2\]) is in black. The angular momentum calculated from Equation (\[eq:J1\]) assuming the relation (\[eq:RWJ\]) between the angular momentum flux and the convective luminosity is in cyan. After $400\, \mathrm{s}$ (dashed grey line) the mass coordinate where we measure the flux lies inside the convective region (I) (see Figure \[fig:earlyS\]). The significant growth of $\Delta J$ in the last $70\,\mathrm{s}$ is the result of stochastic angular momentum redistribution *inside* the convective zone, and not associated with a flux of IGWs. The wave packet random walk model is no longer applicable during this phase and cannot be used to infer spin–up of the region from IGWs. []{data-label="fig:randomwalk2"}](Random_walks_mass.pdf){width="50.00000%"} Figure \[fig:randomwalk2\] compares $\Delta J_\mathrm{RW}$ (in black) to the actual time-integrated angular momentum flux (in red) $\Delta J$, $$\Delta J(t)= \left[ \sum_{i=1}^{3} \left( \int_0^t \dot{J}_i(t') \, {\mathrm{d}}t' \right)^2 \right]^{1/2}. 
\label{eq:J3}$$ For illustrative purposes, we also show the predicted spin-up $\Delta J_\mathrm{F15}(t)$ for the original random walk model of F15 with $\dot{J}=\dot{E} \tau/2\pi $ (i.e. $\bar{m}=1$) and $\dot{E}=\mathcal{M} L_\mathrm{conv}$ in cyan, $$\Delta J_\mathrm{F15}(t) = \frac{1}{\sqrt{3}} \left[ \int_0^{t} \tau^{-1} \left(\frac{\tau^2 \mathcal{M} L_\mathrm{conv}}{2\pi}\right)^2 \, {\mathrm{d}}t' \right] ^{1/2}. \label{eq:J1}$$ Interestingly, even though $\Delta J_\mathrm{RW}$ (black) still overpredicts the actual time-integrated angular momentum flux $\Delta J$ (red) by a factor of 5 on average, this is not a huge discrepancy. The growth of $\Delta J$ appears roughly compatible with a random walk, only with an effective correlation time that is $4\%$ of what is assumed in F15, since the spin-up scales with the square root of the correlation time, $\Delta J_\mathrm{RW}\propto \sqrt{\tau}$. This short correlation time may be related to the rather high wave numbers of the excited modes. The effective correlation time of the random walk must reflect the characteristic time scale of the forcing motions with similar wave numbers, or in other words, of relatively small-scale structures in the convective flow, which evolve on significantly shorter time scales than $\tau_\mathrm{conv}$. Thus, our simulation actually *supports* the assumption that stochastic wave excitation will lead to the core angular momentum executing a random walk, albeit with a shorter correlation time. However, because the assumption of $\bar{m}=1$ is not valid and because the correlation time of the random walk is somewhat shorter than the convective turnover time in our simulations, the original model of F15 overpredicts $\Delta J$ by several orders of magnitude. On the other hand, Figure \[fig:randomwalk2\] suggests that the late convective burning stages might stochastically affect the remnant angular momentum in a different manner. 
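The two time integrals in Equations (\[eq:J2\]) and (\[eq:J3\]) are straightforward to discretise. The sketch below (our own illustration, not the analysis code used for the figures) implements both and checks the limiting case of a constant flux, where the coherent integral grows linearly in time while the random-walk estimate grows only as $\sqrt{\tau t}$:

```python
import numpy as np

def delta_J_rw(Jdot, tau, dt):
    """Random-walk estimate, Eq. (eq:J2): [sum_i int tau * Jdot_i^2 dt]^(1/2).
    Jdot has shape (nsteps, 3); tau is the correlation time (scalar here)."""
    return np.sqrt(np.sum(tau * Jdot**2) * dt)

def delta_J(Jdot, dt):
    """Coherent time integral, Eq. (eq:J3): |int Jdot dt|."""
    return np.linalg.norm(np.sum(Jdot, axis=0) * dt)

# Limiting case: a constant torque along one axis for time T.
dt, T, tau = 0.1, 1000.0, 10.0
nsteps = int(T / dt)
Jdot = np.zeros((nsteps, 3))
Jdot[:, 0] = 1e42                 # g cm^2 s^-2 (illustrative value)

# Analytically: delta_J = |Jdot| T and delta_J_rw = |Jdot| sqrt(tau T),
# so the ratio of the two is sqrt(T / tau) = 10 for these parameters.
ratio = delta_J(Jdot, dt) / delta_J_rw(Jdot, tau, dt)
assert np.isclose(ratio, np.sqrt(T / tau), rtol=1e-9)
```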
During the last $70\, \mathrm{s}$ of the simulation, the analysis radius lies *inside* the oxygen shell, and the measured $\Delta J$ indicates stochastic *internal* redistribution of angular momentum within the shell. This stochastic redistribution of angular momentum could potentially spin up the inner regions of the shell considerably. In the present example, the time-integrated flux of angular momentum $\Delta J$ *within* the oxygen shell during the last $70\, \mathrm{s}$ is ten times bigger than the time-integrated wave-mediated flux of angular momentum out of the oxygen shell. If only part of the oxygen shell is ejected during the supernova explosion, the total angular momentum within the “mass cut” may be considerable. If the mass cut were placed at $1.65 M_\odot$ (which is much larger than is realistic for this model, cp. [@BM2019a]), the resulting neutron star spin period would be of order $100\, \mathrm{ms}$. Naturally, the theory of stochastic IGW excitation cannot be applied to stochastic variations within convective regions close to the mass cut, so one cannot make generic predictions about the importance of this phenomenon. However, 3D supernova progenitor models, which naturally include this effect, are becoming more widely used. Estimate of neutron star spin periods ------------------------------------- Figure \[fig:randomwalk2\] shows that during the simulation time of about 7 minutes, the angular momentum transported out of the oxygen shell from the outer boundary is about $2\times 10^{45}\, \mathrm{g}\, \mathrm{cm}^2\, \mathrm{s}^{-1}$. If a similar amount of angular momentum is transported inwards into the core, and the angular momentum is conserved during the collapse to a neutron star, this would result in a neutron star rotation rate $\omega\approx 1\, \mathrm{rad}\, \mathrm{s}^{-1}$ for a typical neutron star moment of inertia of $\mathord{\sim}1.5 \times 10^{45}\, \mathrm{g}\, \mathrm{cm}^2$.
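The arithmetic behind this estimate, using the numbers quoted above:

```python
import math

# Angular momentum transported out of the shell (from Figure fig:randomwalk2)
dJ = 2e45      # g cm^2 s^-1
I_ns = 1.5e45  # typical neutron star moment of inertia, g cm^2

omega = dJ / I_ns           # rad/s, assuming full conservation during collapse
period = 2 * math.pi / omega
print(omega, period)        # ~1.3 rad/s, i.e. a spin period of ~5 s
```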
However, our simulation only covers the last phase in the life of the second oxygen shell. To gauge the actual impact of stochastic spin-up, we need to extrapolate our results to the entire lifetime of a shell as in F15, but with different efficiency factors. For this purpose, it is convenient to express all relevant quantities in Equation (\[eq:RWMa\]) in terms of the shell mass $\Delta M$, inner radius $R$ and width $\Delta R$, the average $Q$-value of the nuclear reaction per unit mass, $q$, and the nuclear time scale $\tau_\mathrm{nuc}$ for the respective burning phase. Using $L_\mathrm{conv}= \Delta M\, q/\tau_\mathrm{nuc}$, we find the convective velocity from mixing-length theory as calibrated against 3D simulations [@BM2016], $$v_\mathrm{conv} = \frac{1}{2} \left(\frac{L_\mathrm{conv} h_P} {\Delta M}\right)^{1/3} =\frac{1}{2}\left(\frac{q R/4}{\tau_\mathrm{nuc}}\right)^{1/3},$$ where we have taken the pressure scale height to be $h_P=P/(\rho g) \approx R/4$, which is a good approximation during advanced burning stages in massive stars. The sound speed at the bottom of the shell is approximately $$c_\mathrm{s} = \left(\frac{G M}{3 R} \right)^{1/2},$$ so that we can write the convective Mach number as $$\mathcal{M} = \frac{v_\mathrm{conv}}{c_\mathrm{s}} = \frac{1}{2} \left( \frac{q R} {4\tau_\mathrm{nuc}}\right)^{1/3} \left(\frac{G M}{3 R} \right)^{-1/2} \label{eq:Mach},$$ and find the convective turnover time to be $\tau_\mathrm{conv}=\Delta R/v_\mathrm{conv}$.
Putting all this together in Equation (\[eq:RWMa\]) and noting that we have $\mathcal{N}=\tau_\mathrm{nuc}/\tau_\mathrm{conv}$ turnovers over the lifetime of the shell, we find $$\begin{aligned} \Delta J &= \frac{\alpha}{\sqrt{3}} \sqrt{\frac{\tau_\mathrm{nuc}}{\tau_\mathrm{conv}}} \left(\frac{\tau_\mathrm{conv}^2 \mathcal{M} \Delta M q}{2\pi \tau_\mathrm{nuc}}\right) \nonumber \\ &\approx 3.4\times 10^{-3} \times \frac{\Delta R^{3/2} \Delta M q^{5/6} {R}^{1/3}}{\sqrt{G} \sqrt{M} \tau_\mathrm{nuc}^{1/3}}, \label{eq:F15burn} \end{aligned}$$ where we have replaced the average wave number $\bar{m}$ in Equation (\[eq:RWMa\]) with an efficiency factor $\alpha\sim 10^{-2}$ in line with our simulation results. This can be used for any of the neutrino-cooled shell burning phases, but as pointed out by F15, one expects the last shell burning stage to determine the stochastic spin-up at the time of collapse because deterministic angular momentum transport can spin down the core again if the shell is extinguished before collapse. Typically, this will be the second or third oxygen shell [@Collins2018], though in some cases, the most violent active shell at or close before collapse is the silicon shell as assumed by F15. Since the last oxygen shell typically has a higher mass, larger radius and width, shorter duration, and a higher average $Q$-value of $q\approx 0.4\, \mathrm{MeV}/m_\mathrm{u}$ than the silicon shell, most of the factors in Equation (\[eq:F15burn\]) have slightly more favorable values than used by F15 for silicon shells. 
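Equation (\[eq:F15burn\]) is easy to evaluate numerically. The sketch below is our own consistency check in CGS units, using the fiducial parameter values of the scaling relation below and assuming a neutron star moment of inertia $I=0.36\,M R_\mathrm{NS}^2$ with $R_\mathrm{NS}=12\,\mathrm{km}$:

```python
import math

# CGS constants
G, Msun = 6.674e-8, 1.989e33
erg_per_g_per_MeV_mu = 1.602e-6 / 1.6605e-24  # (MeV per m_u) in erg/g

# Fiducial oxygen-shell parameters (normalisations of Eq. eq:Jredq)
M = 1.4 * Msun                 # enclosed mass
dM = 0.1 * Msun                # shell mass
R = 3.0e8                      # inner radius, cm
dR = 1.0e8                     # shell width, cm
tau_nuc = 1.0e4                # shell lifetime, s
q = 0.4 * erg_per_g_per_MeV_mu # energy release per unit mass, erg/g

# Mixing-length velocity, sound speed, Mach number, turnover time
v_conv = 0.5 * (q * R / 4.0 / tau_nuc) ** (1.0 / 3.0)
c_s = math.sqrt(G * M / (3.0 * R))
mach = v_conv / c_s
tau_conv = dR / v_conv

# Stochastic spin-up, Eq. (eq:F15burn), with the efficiency factor absorbed
dJ = 3.4e-3 * dR**1.5 * dM * q**(5.0 / 6.0) * R**(1.0 / 3.0) \
     / (math.sqrt(G) * math.sqrt(M) * tau_nuc**(1.0 / 3.0))

# Implied neutron star spin period
I_ns = 0.36 * M * (1.2e6) ** 2
P_ns = 2.0 * math.pi * I_ns / dJ
print(f"Mach={mach:.3f}, tau_conv={tau_conv:.1f} s, P_NS={P_ns:.1f} s")
```

The resulting Mach number ($\mathord{\sim}0.016$), turnover time ($\mathord{\sim}14\,\mathrm{s}$) and spin period ($\mathord{\sim}13\,\mathrm{s}$) agree with the values quoted in the text to within the rounding of the $3.4\times10^{-3}$ prefactor.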
With a neutron star moment of inertia $I=0.36 M R_\mathrm{NS}^2$ and a neutron star radius of $R_\mathrm{NS}=12\, \mathrm{km}$, Equation (\[eq:F15burn\]) translates into a characteristic spin period of $$\begin{aligned} P_\mathrm{NS} =& 13.5\,\mathrm{s} \times \left(\frac{M}{1.4 M_\odot}\right)^{3/2} \times \left(\frac{\Delta M}{0.1 M_\odot}\right)^{-1} \times \left(\frac{R}{3000\, \mathrm{km}}\right) \nonumber \\ \label{eq:Jredq} & \times \left(\frac{\Delta R}{1000\, \mathrm{km}}\right)^{-3/2} \times \left(\frac{\tau_\mathrm{nuc}}{10^4\, \mathrm{s}}\right)^{1/3} \times \left(\frac{q}{0.4\, \mathrm{MeV}/m_\mathrm{u}}\right)^{5/6},\end{aligned}$$ in terms of typical parameters for oxygen shells in low-mass supernova progenitors. Based on the efficiency parameters in our simulations, we therefore expect stochastic spin-up to play a negligible role in low-mass progenitors compared to stochastic spin-up during the early explosion phase [@Wongwathanarat2013; @BM2019a] and by late-time fallback [@chan_20; @stockinger_20], which can achieve spin periods from a few seconds down to milliseconds. For high-mass progenitors with $\Delta M\sim 1 M_\odot$ and wider oxygen shells $\Delta R \sim 5000\, \mathrm{km}$, spin periods of order $100 \, \mathrm{ms}$ remain within reach even with a much lower efficiency factor than assumed by F15, and stochastic spin-up by IGWs might still be a relevant factor in determining the neutron star spin period along with other angular momentum processes during the progenitor evolution and spin-up/spin-down processes during the explosion. Effects of Numerical Conservation Errors {#sec:ang_cons} ======================================== Finite-volume methods as used in <span style="font-variant:small-caps;">Prometheus</span> generally cannot conserve total angular momentum to machine accuracy (unlike smoothed-particle hydrodynamics).
Moreover, angular momentum transport in any grid-based or particle-based simulation will be affected to some degree by numerical dissipation. One may justifiably ask whether non-conservation of total angular momentum or numerical dissipation have any bearing on our simulation of stochastic spin-up. In the context of the preceding discussion, it is especially pertinent whether the convective region (I) is indeed spun up in the opposite direction to the angular momentum that is lost by waves. This specific problem can already be addressed here, though more extensive resolution studies and simulations with different reconstruction and discretisation schemes naturally remain desirable for the future. As a simple check on the quality of angular momentum conservation in our simulation, we compare the time-integrated angular momentum flux from region (I) to the volume-integrated angular momentum. Unlike when we calculated the angular momentum flux for the random walk at a fixed mass coordinate, we compute the angular momentum dissipation at a fixed radial coordinate, so that we can formulate the angular momentum “budget” for region (I) using Gauss’ theorem. We define the analysis region C1 as the sphere contained within $r=4337\,\mathrm{km}$, i.e. the boundary of C1 is in the middle of the convectively stable region between regions (I) and (II) once 3D burning and convection develop. Analytically, we have $$\label{eq:int_cons} \int\limits_\mathrm{C1} \rho \textbf{l} \, {\mathrm{d}}V = - \int\limits_0^t \oint\limits_{{\partial}\mathrm{C1}} \rho \mathbf{l} \left(\mathbf{v}\cdot \mathbf{dA}\right)\,{\mathrm{d}}t'.$$ In Figure \[fig:AMxyz\], we compare the left-hand side and the right-hand side of Equation (\[eq:int\_cons\]) for the $x$-, $y$-, and $z$-component of the angular momentum. For all three components, at the end of the simulation the convective shell angular momentum indeed has the opposite sign to the angular momentum leaving at the flux boundary.
Early on, the evolution of $\mathbf{J}$ is dominated by numerical errors; the $y$-component in particular shows a significant drift over the first $200 \, \mathrm{s}$. As soon as convection is fully developed and the excited waves from the convective boundary carry a significant angular momentum flux, the evolution of the total angular momentum in the analysis region clearly follows the time-integrated angular momentum flux. This suggests that angular momentum conservation errors do not significantly affect our analysis at times later than $\mathord{\sim} 200 \, \mathrm{s}$. Of course, more subtle numerical issues, such as artificial damping of the excited waves, may limit the accuracy of our results and need to be further investigated in the future. However, we believe that the key findings of this study will likely remain robust. The code clearly has the ability to evolve relatively weak waves, and the waves that would need to be resolved for the mechanism of F15 to work effectively are not of extremely short wavelength, so that excessive dissipation is likely not a critical problem. ![Time-integrated angular momentum flux at the fixed boundary C1 (red) and the (negative) integrated angular momentum over the volume enclosed by C1 (black) in the $x$, $y$ and $z$ directions. For each component, the angular momentum in the spun-up convective shell volume is consistent with the angular momentum flux leaving the volume. []{data-label="fig:AMxyz"}](AM-joined-2){width="50.00000%"} Conclusions {#sec:conclusions} =========== Using the 3D hydrodynamics code <span style="font-variant:small-caps;">Prometheus</span>, we studied the angular momentum flux from stochastically excited IGWs during oxygen shell burning in an initially non-rotating massive star. Our findings allow us to better assess the efficiency of stochastic wave excitation as a process for core spin-up, which has been suggested by @Fuller2015 based on analytic theory.
In agreement with the theory of @Fuller2015, we find that the IGW-mediated flux of angular momentum out of a convective shell in a non- or slowly-rotating progenitor can indeed be described by a random-walk process. However, we find that the correlation time of this random-walk process is only a fraction of the convective turnover time and hence significantly shorter than assumed by @Fuller2015. This reduces the amplitude of the random walk by a factor of several. We also find that the ratio between the angular momentum flux carried by the excited IGWs and the convective luminosity is more than an order of magnitude smaller than assumed by @Fuller2015. We ascribe this discrepancy to the fact that the assumption of an average zonal wave number $\bar{m}=1$ of individual IGW “wave packets” is too optimistic, since IGWs excited by overshooting convective plumes will generally contain modes with opposite zonal wave number $m$ of almost equal amplitude. Therefore the average zonal wave number $\bar{m}$ of each wave packet will be much smaller than unity and the net angular momentum carried by the wave packet will be very small. The combination of a shorter effective correlation time and a smaller instantaneous angular momentum flux results in a time-integrated stochastic angular momentum flux from the oxygen shell that is about a factor of $\mathord{\sim}10^2$ smaller than predicted by the model of @Fuller2015. It should be pointed out, however, that our simulation *is* compatible with their analytic scaling laws; it merely suggests that the relevant dimensionless parameters of the model may be quite small. The characteristic spin period from stochastic spin-up of the core by IGWs over the entire life of a convective burning shell can be estimated based on the mass, radius, width, and lifetime of the shell from 1D stellar evolution models using a simple scaling relation (Equation \[eq:Jredq\]).
Extrapolating our simulation results over the entire lifetime of convective shells, we estimate that stochastic spin-up by IGWs alone would result in neutron star birth spin periods of several seconds for low-mass stars and down to $\mathord{\sim}0.1\, \mathrm{s}$ for high-mass stars with thick oxygen shells. These spin rates are slower than predicted by @Fuller2015 by one to two orders of magnitude. Thus our findings suggest that stochastic spin-up of progenitor cores will usually not play a major role in determining the core spin rates of massive stars and neutron star birth periods, because stochastic spin-up processes during the supernova explosion will impart more angular momentum onto the neutron star. It remains to be seen, however, to what extent the efficiency factors for IGW excitation and stochastic core spin-up found in our simulation depend on the detailed properties of convective zones and the structure of the convective boundary. For example, the correlation time of the random walk and the average zonal wave number will depend on the typical size of the convective eddies. Convection zones with pronounced large-scale flow patterns and fewer but stronger plume impact events may provide more favorable conditions for stochastic core spin-up. In the future, one should also investigate stiffer convective boundaries with sharp entropy jumps; the inner convective boundaries, which are of highest relevance for the problem of stochastic spin-up, are considerably stiffer than the convective boundary considered in this study. Moreover, the dependence of stochastic spin-up on the convective Mach number should be investigated numerically to confirm the predicted scaling laws. We also note that stochastic angular momentum redistribution *within* convective zones close to the mass cut could be relevant for predicting neutron star birth spin periods. This effect is already implicitly included in modern 3D supernova progenitor models.
Better understanding convection, wave excitation, and angular momentum transport by 3D simulations will remain challenging because of the technical difficulties of low-Mach number flow, stringent resolution requirements at stiff convective boundaries, and the numerical problem of angular momentum conservation. However, 3D simulations of convective burning are clearly proving useful in understanding angular momentum transport processes in supernova progenitors. Acknowledgements {#acknowledgements .unnumbered} ================ We acknowledge fruitful discussions with A. Heger and I. Mandel. LM acknowledges support by an Australian Government Research Training Program (RTP) Scholarship. BM has been supported by the Australian Research Council through Future Fellowship FT160100035 and partly as an Associate Investigator of the ARC Centre of Excellence *OzGrav* (CE170100004). This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government and was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
Favre-averaged Angular Momentum Transport Equation {#sec:favre_j} ================================================== To derive the Favre-averaged angular momentum transport equation, we cross $\mathbf{r}$ with the momentum equation [@shu; @Pope2000], $$\mathbf{r} \times \left [ \frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho \mathbf{u} \otimes \mathbf{u}) + \nabla P \right] = \mathbf{r} \times \rho \mathbf{g}.$$ It is convenient to write this in component form as, $$\frac{\partial (\epsilon_{ijk} r_j \rho u_k)}{\partial t} + \epsilon_{ijk} r_j \nabla_l (\rho u_l u_k +\delta_{lk} P) = \epsilon_{ijk} r_j \rho g_k,$$ which leads to $$\frac{\partial (\epsilon_{ijk} r_j \rho u_k)}{\partial t} + \nabla_l [\epsilon_{ijk} r_j (\rho u_l u_k + P \delta_{lk})] - \epsilon_{ijk} (\rho u_l u_k+ P \delta_{lk}) \underbrace{\nabla_l r_j}_{=\delta_{lj}} = \epsilon_{ijk} r_j \rho g_k,$$ and hence $$\frac{\partial (\epsilon_{ijk} r_j \rho u_k)}{\partial t} + \nabla_l [\epsilon_{ijk} r_j ( \rho u_l u_k+ P \delta_{lk})] - \underbrace{\epsilon_{ijk} (\rho u_j u_k+P \delta_{jk}) }_{=0} = \epsilon_{ijk} r_j \rho g_k.$$ This can again be written in component-free notation $$\frac{\partial \rho \mathbf{l}}{\partial t} + \nabla \cdot (\rho \mathbf{u}\otimes \mathbf{l}) +\nabla \cdot (*\mathbf{r} P ) = \mathbf{r}\times \rho \mathbf{g},$$ where $\mathbf{l}=\mathbf{r}\times\mathbf{u}$ is the specific angular momentum and $*$ denotes the Hodge star operator. The cross product on the right-hand side vanishes if we assume a monopole potential, and the second divergence term on the left-hand side vanishes when we average over a thin spherical shell $$\int \nabla \cdot (*\mathbf{r} P ) \,{\mathrm{d}}V = \int_{{\partial}V} P \mathbf{r} \times \mathbf{dA} = \int_{{\partial}V} P \mathbf{r} \times \mathbf{n} \,{\mathrm{d}}A =0.$$ Hence we need only take into account the first two terms on the left-hand side when performing a spherical Favre decomposition.
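The vanishing of the underbraced contraction relies on the antisymmetry of $\epsilon_{ijk}$ against the symmetric tensors $u_j u_k$ and $\delta_{jk}$; a short numerical check (our own) confirms this:

```python
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k] from its cyclic permutations
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

rng = np.random.default_rng(2)
u, rho, P = rng.standard_normal(3), 1.3, 0.7

# eps_ijk (rho u_j u_k + P delta_jk) = 0 for every i, since the
# contraction of an antisymmetric with a symmetric tensor vanishes.
T = rho * np.outer(u, u) + P * np.eye(3)
contraction = np.einsum('ijk,jk->i', eps, T)
assert np.allclose(contraction, 0.0)
```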
Decomposing the velocity and averaging over spherical surfaces yields $$\frac{\partial \langle \rho \mathbf{l}\rangle}{\partial t} + \nabla_r \cdot \langle \rho (\tilde{u}_r+u_r'') (\mathbf{\tilde{l}}+\mathbf{l''}) \rangle = 0,$$ where $\nabla_r$ denotes the radial part of the divergence operator. Using the usual rules $\langle\rho \tilde{X} \tilde{Y}\rangle= \hat{\rho} \tilde{X} \tilde{Y}$ and $\langle\rho \tilde{X} Y''\rangle=0$ for correlation terms containing no or only one fluctuating intensive quantity, only two terms remain, $$\frac{\partial \langle \rho \mathbf{l}\rangle}{\partial t} + \nabla_r \cdot ( \hat{\rho} \tilde{u}_r \mathbf{\tilde{l}} ) + \nabla_r \cdot \langle \rho u_r'' \mathbf{l''} \rangle = 0.$$

[^1]: E-mail: lucy.mcneill@monash.edu

[^2]: Properly speaking, each component $\Delta J_i$ will execute an independent random walk with step size $\delta J_i=\delta J/ \sqrt{3}$ if the angular momentum of the wave packets is randomly oriented, but the factor $1/\sqrt{3}$ cancels again when we consider the expectation value of $\Delta J^2$ since $\langle \Delta J^2\rangle = 3\langle \Delta J_i^2\rangle$.

[^3]: One could also define $L_\mathrm{conv}$ as the maximum of the turbulent energy flux $\dot{E}$ inside a convective region. This yields similar, but more noisy results.

[^4]: Alternatively, one might surmise that the excited modes have frequencies higher than $\omega=2\pi/\tau$, but unless there is resonant excitation it is unavoidable that the frequency spectrum of the excited modes roughly reflects the frequency spectrum of the turbulent driving motions.
--- abstract: 'In 1976, Dekking showed that there exists an infinite binary word that contains neither squares $yy$ with $|y| \geq 4$ nor cubes $xxx$. We show that ‘cube’ can be replaced by any fractional power $> 5/2$. We also consider the analogous problem where ‘$4$’ is replaced by any integer. This results in an interesting and subtle hierarchy.' author: - | Jeffrey Shallit\ School of Computer Science\ University of Waterloo\ Waterloo, ON, N2L 3G1\ CANADA\ [shallit@graceland.uwaterloo.ca]{} title: Simultaneous Avoidance of Large Squares and Fractional Powers in Infinite Binary Words --- Introduction ============ A [*square*]{} is a nonempty word of the form $yy$, as in the English word [murmur]{}. It is easy to see that every word of length $\geq 4$ constructed from the symbols $0$ and $1$ contains a square, so it is impossible to avoid squares in infinite binary words. However, in 1974, Entringer, Jackson, and Schatz proved the surprising fact that there exists an infinite binary word containing no squares $yy$ with $|y| \geq 3$. Further, the bound $3$ is best possible. A [*cube*]{} is a nonempty word of the form $xxx$, as in the English sort-of-word [shshsh]{}. An [*overlap*]{} is a word of the form $axaxa$, where $a$ is a single letter and $x$ is a (possibly empty) word, as in the French word [entente]{}. Dekking [@Dekking:1976] showed that there exists an infinite binary word that contains neither squares $yy$ with $|y| \geq 4$ nor cubes $xxx$. Furthermore, the bound $4$ is best possible. He also proved that every overlap-free word contains arbitrarily large squares. 
These two results suggest the following natural question: for each length $l \geq 1$, determine the fractional exponent $p$ (if it exists) such that

- there is no infinite binary word simultaneously avoiding squares $yy$ with $|y| \geq l$ and fractional powers $x^e$ with $e \geq p$;

- there is an infinite binary word simultaneously avoiding squares $yy$ with $|y| \geq l$ and fractional powers $x^e$ with $e > p$.

Here we say a word $w$ is an $e$’th power ($e$ rational) if there exist words $y, y'\in \Sigma^*$ such that $w = y^n y'$, and $y'$ is a prefix of $y$ with $n + |y'|/|y| = e$. For example, the English word [abracadabra]{} is an ${{11} \over 7}$-power. We say a word [*avoids $p$ powers*]{} if it contains no subword of the form $y^e$ with $e \geq p$. We say a word [*avoids $p^+$ powers*]{} if it contains no subword of the form $y^e$ with $e > p$. In this paper we completely resolve this question. It turns out there is a rather subtle hierarchy depending on $l$. The results are summarized in Table 1.

  minimum length $l$ of square avoided   avoidable power   unavoidable power
  -------------------------------------- ----------------- -------------------
  $2$                                    none              all
  $3$                                    $3^+$             $3$
  $4,5,6$                                $(5/2)^+$         $5/2$
  $\geq 7$                               $(7/3)^+$         $7/3$

More precisely, we have

(a) There are no infinite binary words that avoid all squares $yy$ with $|y| \geq 2$.

(b) There are no infinite binary words that simultaneously avoid all squares $yy$ with $|y| \geq 3$ and cubes $xxx$.

(c) There is an infinite binary word that simultaneously avoids all squares $yy$ with $|y| \geq 3$ and $3^+$ powers.

(d) There is an infinite binary word that simultaneously avoids all squares $yy$ with $|y| \geq 4$ and ${5 \over 2}^+$ powers.

(e) There are no infinite binary words that simultaneously avoid all squares $yy$ with $|y| \geq 6$ and ${5 \over 2}$ powers.

(f) There is an infinite binary word that simultaneously avoids all squares $yy$ with $|y| \geq 7$ and ${7 \over 3}^+$ powers.
(g) For all $t \geq 1$, there are no infinite binary words that simultaneously avoid all squares $yy$ with $|y| \geq t$ and ${7 \over 3}$ powers.

The result (a) is originally due to Entringer, Jackson, and Schatz. The result (b) is due to Dekking [@Dekking:1976]. The result (g) appears in a recent paper of the author and J. Karhumäki. We mention them for completeness. The remaining results are new.

Proofs of the negative results
==============================

We say a word avoids $(l,p)$ if it simultaneously avoids squares $yy$ with $|y| \geq l$ and $p$ powers. The negative results (a), (b), and (e) can be proved purely mechanically. The idea is as follows. Given $l$ and $p$, we can create a tree $T = T(l,p)$ of all binary words avoiding $(l,p)$ as follows: the root of $T$ is labeled $\epsilon$. If a node is labeled $w$ and avoids $(l,p)$, then it is an internal node with two children, where the left child is labeled $w0$ and the right child is labeled $w1$. If it does not avoid $(l,p)$, then it is an external node (or “leaf”). It is now easy to see that no infinite word avoiding $(l,p)$ exists if and only if $T(l,p)$ is finite. In this case, a breadth-first search will suffice to resolve the question. Furthermore, certain parameters of $T(l,p)$ correspond to information about the finite words avoiding $(l,p)$:

- the number of leaves $n$ is one more than the number of internal nodes, and so $n-1$ represents the total number of finite words avoiding $(l,p)$;

- if the height of the tree (i.e., the length of the longest path from the root to a leaf) is $h$, then $h$ is the smallest integer such that there are no words of length $\geq h$ avoiding $(l,p)$;

- the internal nodes at depth $h-1$ give all the words of maximal length avoiding $(l,p)$.
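Both the exponent of a word and the breadth-first search just described are easy to make concrete. The following Python sketch (function names are ours, not from the paper) computes exponents via the smallest period, and runs the search for the simplest case $(l,p) = (2,\infty)$, where only the square condition matters:

```python
from fractions import Fraction

def exponent(w):
    """|w| divided by the smallest period of w, as an exact Fraction."""
    q = next(p for p in range(1, len(w) + 1)
             if all(w[k] == w[k - p] for k in range(p, len(w))))
    return Fraction(len(w), q)

def avoids(w, l):
    """True iff w contains no square yy with |y| >= l."""
    return not any(w[i:i + q] == w[i + q:i + 2 * q]
                   for q in range(l, len(w) // 2 + 1)
                   for i in range(len(w) - 2 * q + 1))

def search(l):
    """BFS of T(l, infinity): (number of leaves, height, deepest internal labels)."""
    frontier, depth, leaves, height, deepest = [""], 0, 0, 0, [""]
    while frontier:
        children = []
        for w in frontier:
            for c in "01":
                if avoids(w + c, l):
                    children.append(w + c)      # internal node
                else:
                    leaves += 1                 # external node ("leaf")
                    height = max(height, depth + 1)
        if children:
            deepest = children                  # internal nodes at depth h-1
        frontier, depth = children, depth + 1
    return leaves, height, deepest
```

For $l = 2$ this reproduces the first row of the table in the next section: $478$ leaves, height $19$, and $t = 2$ maximal words of length $18$.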
The following table lists $(l, p, n, h, t, S)$, where

- $l$ is the minimum length of the squares $yy$ one is trying to avoid;

- $p$ is the fractional exponent one is trying to avoid;

- $n$ is the number of leaves of $T(l,p)$;

- $h$ is the height of the tree $T(l,p)$;

- $t$ is the number of internal nodes at depth $h-1$ in the tree;

- $S$ is the set of labels of the internal nodes at depth $h-1$ that start with $0$. (The other words can be obtained simply by interchanging $0$ and $1$.)

For completeness, we give the results for the optimal exponents for $2 \leq l \leq 7$. As mentioned above, the case $l = 2$ is due to Entringer, Jackson, and Schatz and the case $l = 3$ is due to Dekking [@Dekking:1976].

  $l$   $p$        $n$     $h$   $t$   $S$
  ----- ---------- ------- ----- ----- -----------------------------------
  2     $\infty$   478     19    2     $\lbrace 010011000111001101 \rbrace$
  3     3          578     30    2     $\lbrace 00110010100110101100101001100 \rbrace$
  4     $5/2$      6860    84    4     $\scriptscriptstyle\lbrace$[00101101001011001001101100101101001101100100110100101100100110110010110100110110011]{}, [00110010011010010110010011011001011010011011001001101001011001001101100101101001011]{}$\scriptscriptstyle\rbrace$
  5     $5/2$      15940   93    2     $\scriptscriptstyle\lbrace$[00100101100110100101100100110110010110100110110010011010010110010011011001011010011001011011]{}$\scriptscriptstyle\rbrace$
  6     $5/2$      15940   93    2     $\scriptscriptstyle\lbrace$[00100101100110100101100100110110010110100110110010011010010110010011011001011010011001011011]{}$\scriptscriptstyle\rbrace$
  7     $7/3$      3548    43    2     $\lbrace 001011001011010011001011001101001011001011 \rbrace$

Proof of (c) {#proofc}
============

In this section we prove that there is an infinite binary word that simultaneously avoids $yy$ with $|y| \geq 3$ and $3^+$ powers. We introduce the following notation for alphabets: $\Sigma_k := \lbrace 0, 1, \ldots, k-1 \rbrace$.
Let the morphism $f: \Sigma_3^* \rightarrow \Sigma_2^*$ be defined as follows. $$\begin{aligned} 0 &\rightarrow& 0010111010 \\ 1 &\rightarrow& 0010101110 \\ 2 &\rightarrow& 0011101010 \end{aligned}$$ We will prove If $w$ is any squarefree word over $\Sigma_3$, then $f(w)$ avoids $yy$ with $|y| \geq 3$ and $3^+$ powers. \[square3\] We argue by contradiction. Let $w = a_1 a_2 \cdots a_n$ be a squarefree string such that $f(w)$ contains a square $yy$ with $|y| \geq 3$, i.e., $f(w) = xyyz$ for some $x, z \in \Sigma_2^*$ and $y \in \Sigma_2^*$ with $|y| \geq 3$. Without loss of generality, assume that $w$ is a shortest such string, so that $0 \leq |x|, |z| < 10$. Case 1: $|y| \leq 20$. In this case we can take $|w| \leq 5$. To verify that $f(w)$ has no squares $yy$ with $|y| \geq 3$, it therefore suffices to check each of the 30 squarefree words $w \in \Sigma_3^5$. Case 2: $|y| > 20$. First, we establish the following result. - (inclusion property) Suppose $f(ab) = t f(c) u$ for some letters $a, b, c \in \Sigma_3$ and strings $t, u \in \Sigma_2^*$. Then this inclusion is trivial (that is, $t = \epsilon$ or $u = \epsilon$). - (interchange property) Suppose there exist letters $a, b, c$ and strings $s, t, u, v$ such that $f(a) = st$, $f(b) = uv$, and $f(c) = sv$. Then either $a = c$ or $b = c$. \[ming\] - A short computation verifies there are no $a, b, c$ for which the equality $f(ab) = t f(c) u$ holds nontrivially. - This can also be verified with a short computation. If $|s| \geq 6$, then no two distinct letters share a prefix of length $6$. If $|s|\leq 5$, then $|v| \geq 5$, and no two distinct letters share a suffix of length $5$. Once Lemma \[ming\] is established, the rest of the argument is fairly standard. It can be found, for example, in , but for completeness we repeat it here. For $i = 1, 2, \ldots, n$ define $A_i = f(a_i)$.
Then if $f(w) = xyyz$, we can write $$f(w) = A_1 A_2 \cdots A_n = A'_1 A''_1 A_2 \cdots A_{j-1} A'_j A''_j A_{j+1} \cdots A_{n-1} A'_n A''_n$$ where $$\begin{aligned} A_1 &=& A'_1 A''_1 \\ A_j &=& A'_j A''_j \\ A_n &=& A'_n A''_n \\ x &=& A'_1 \\ y &=& A''_1 A_2 \cdots A_{j-1} A'_j = A''_j A_{j+1} \cdots A_{n-1} A'_n \\ z &=& A''_n, \\\end{aligned}$$ where $|A''_1|, |A''_j| > 0$. See Figure \[fig1\]. If $|A''_1| > |A''_j|$, then $A_{j+1} = f(a_{j+1})$ is a subword of $A''_1 A_2$, hence a subword of $A_1 A_2 = f(a_1 a_2)$. Thus we can write $A_{j+2} = A'_{j+2} A''_{j+2}$ with $$A''_1 A_2 = A''_j A_{j+1} A'_{j+2}.$$ See Figure \[fig2\]. But then, by Lemma \[ming\] (a), either $|A''_j| = 0$, or $|A''_1| = |A''_j|$, or $A'_{j+2}$ is not a prefix of any $f(d)$. All three conclusions are impossible. If $|A''_1| < |A''_j|$, then $A_2 = f(a_2)$ is a subword of $A''_j A_{j+1}$, hence a subword of $A_j A_{j+1} = f(a_j a_{j+1})$. Thus we can write $A_3 = A'_3 A''_3$ with $$A''_1 A_2 A'_3 = A''_j A_{j+1} .$$ See Figure \[fig3\]. By Lemma \[ming\] (a), either $|A''_1| = 0$ or $|A''_1| = |A''_j|$ or $A'_3$ is not a prefix of any $f(d)$. Again, all three conclusions are impossible. Therefore $|A''_1| = |A''_j|$. Hence $A''_1 = A''_j$, $A_2 = A_{j+1}$, $\ldots$, $A_{j-1} = A_{n-1}$, and $A'_j = A'_n$. Since $f$ is injective, we have $a_2 = a_{j+1}, \ldots, a_{j-1} = a_{n-1}$. It also follows that $|y|$ is divisible by $10$ and $A_j = A'_j A''_j = A'_n A''_1$. But by Lemma \[ming\] (b), either (1) $a_j = a_n$ or (2) $a_j = a_1$. In the first case, $a_2 \cdots a_{j-1} a_j = a_{j+1} \cdots a_{n-1} a_n$, so $w$ contains the square $(a_2 \cdots a_{j-1} a_j)^2$, a contradiction. In the second case, $a_1 \cdots a_{j-1} = a_j a_{j+1} \cdots a_{n-1}$, so $w$ contains the square $(a_1 \cdots a_{j-1})^2$, a contradiction. It now follows that if $w$ is squarefree then $f(w)$ avoids squares $yy$ with $|y| \geq 3$.
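The “short computations” cited in the proof of Lemma \[ming\], and the finite check of Case 1, can be reproduced mechanically. A Python sketch (our function names; the pair $ab$ is restricted to $a \neq b$, matching the squarefree setting in which the lemma is applied):

```python
from itertools import product

f = {"0": "0010111010", "1": "0010101110", "2": "0011101010"}

def apply(m, w):
    return "".join(m[c] for c in w)

def squarefree(w):
    return not any(w[i:i + q] == w[i + q:i + 2 * q]
                   for q in range(1, len(w) // 2 + 1)
                   for i in range(len(w) - 2 * q + 1))

def inclusion_trivial(m):
    """f(ab) contains f(c) only with t or u empty (checked for a != b)."""
    for a, b, c in product(m, m, m):
        if a == b:
            continue
        w, k = m[a] + m[b], len(m[c])
        # any interior occurrence would be a nontrivial inclusion
        if any(w[i:i + k] == m[c] for i in range(1, len(w) - k)):
            return False
    return True

def interchange(m):
    """If f(a) = st, f(b) = uv, f(c) = sv, then a = c or b = c."""
    for a, b, c in product(m, m, m):
        if a == c or b == c:
            continue
        for k in range(len(m[c]) + 1):
            s, v = m[c][:k], m[c][k:]
            if m[a].startswith(s) and m[b].endswith(v):
                return False
    return True

def case1(m):
    """Images of the 30 squarefree words in Sigma_3^5: no square yy, |y| >= 3."""
    words = ["".join(t) for t in product("012", repeat=5)
             if squarefree("".join(t))]
    return len(words) == 30 and all(
        not any(u[i:i + q] == u[i + q:i + 2 * q]
                for q in range(3, len(u) // 2 + 1)
                for i in range(len(u) - 2 * q + 1))
        for u in (apply(m, w) for w in words))
```

All three checks succeed for the morphism $f$ above, in agreement with the lemma.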
It remains to see that $f(w)$ avoids $3^+$ powers. If $f(w)$ contained $x^e$ for some fractional exponent $e > 3$, then it would contain $x^2$, so from above we have $|x| \leq 2$. Thus it suffices to show that $f(w)$ avoids the words $0000, 1111, 0101010, 1010101$. This can be done by a short computation. There is an infinite binary word avoiding squares $yy$ with $|y| \geq 3$ and $3^+$ powers. As is very well-known, there are infinite squarefree words over $\Sigma_3$ [@Thue:1906; @Berstel:1995]. Take any such word $\bf w$ (for example, the fixed point of the morphism $2 \rightarrow 210$, $1 \rightarrow 20$, $0 \rightarrow 1$), and apply the map $f$. The resulting word $f({\bf w})$ avoids $(3, 3^+)$. It may be of some interest to explain how the morphism $f$ was discovered. We iteratively generated all words of length $1, 2, 3, \ldots$ (up to some bound) that avoid $(3, 3^+)$. We then guessed such words were the image of a $k$-uniform morphism applied to a squarefree word over $\Sigma_3$. For values of $k = 2,3, \ldots$, we broke up each word into contiguous blocks of size $k$, and discarded any word for which there were more than $3$ distinct blocks. For certain values of $k$, this procedure eventually resulted in $0$ words fitting the criteria. At this point we knew a $k$-uniform morphism could not work, so we increased $k$ and started over. Eventually a $k$ was found for which the number of such words appeared to increase without bound. We then examined the possible sets of $3$ $k$-blocks to see if any satisfied the requirements of Lemma \[ming\]. This gave our candidate morphism $f$. Let $A_n$ denote the number of binary words of length $n$ avoiding $yy$ with $|y| \geq 3$ and $3^+$ powers. Then $A_n = \Omega(1.01^n)$ and $A_n = O(1.49^n)$. \[avoid3\] Grimm [@Grimm:2001] has shown there are $\Omega(\lambda^n)$ squarefree words over $\Sigma_3$, where $\lambda = 1.109999$. Since the map $f$ is $10$-uniform, it follows that $A_n = \Omega(\lambda^{n/10}) = \Omega(1.01^n)$.
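The construction above can also be carried out explicitly. The sketch below builds a prefix of the squarefree fixed point of $2 \rightarrow 210$, $1 \rightarrow 20$, $0 \rightarrow 1$, applies $f$, and confirms both avoidance properties on that prefix (a finite spot check only, of course):

```python
f = {"0": "0010111010", "1": "0010101110", "2": "0011101010"}
m = {"2": "210", "1": "20", "0": "1"}

def ternary_prefix(n):
    """Prefix of the squarefree fixed point of 2->210, 1->20, 0->1."""
    w = "2"                      # the morphism is prolongable on 2
    while len(w) < n:
        w = "".join(m[c] for c in w)
    return w[:n]

def has_square(v, l):
    """True iff v contains a square yy with |y| >= l."""
    return any(v[i:i + q] == v[i + q:i + 2 * q]
               for q in range(l, len(v) // 2 + 1)
               for i in range(len(v) - 2 * q + 1))

w = ternary_prefix(60)
u = "".join(f[c] for c in w)     # a 600-letter prefix of f(bold w)
```

By the argument in the proof, avoiding $3^+$ powers reduces to avoiding the four short words $0000$, $1111$, $0101010$, $1010101$ once large squares are excluded.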
For the upper bound, we reason as follows. The set of binary words of length $n$ avoiding $yy$ with $|y| \geq 3$ and $3^+$ powers is a subset of the set of binary words avoiding $0000$ and $1111$. The number $A'_n$ of words avoiding $0000$ and $1111$ satisfies the linear recurrence $A'_n = A'_{n-1} + A'_{n-2} + A'_{n-3}$ for $n \geq 4$. It follows that $A'_n = O(\alpha^n)$, where $\alpha$ is the largest zero of $x^3 - x^2 -x -1$, the characteristic polynomial of the recurrence. Here $\alpha < 1.84$, so $A_n = O(1.84^n)$. This reasoning can be extended using a symbolic algebra package such as Maple. Noonan and Zeilberger have written a Maple package [DAVID\_IAN]{} that allows one to specify a list $L$ of forbidden words, and computes the generating function enumerating words avoiding members of $L$. We used this package for a list $L$ of $62$ words of length $\leq 12$: $$0000, 1111, \ldots, 111010111010$$ obtaining a characteristic polynomial of degree $67$ with dominant zero $\doteq 1.4895$. Proof of (d) {#proofe} ============ In this section we prove that there is an infinite binary word that simultaneously avoids $yy$ with $|y| \geq 4$ and ${5 \over 2}^+$ powers. Let $g_1: \Sigma_8^* \rightarrow \Sigma_2^*$ be defined as follows. $$\begin{aligned} 0 &\rightarrow& 0011010010110 \\ 1 &\rightarrow& 0011010110010 \\ 2 &\rightarrow& 0011011001011 \\ 3 &\rightarrow& 0100110110010 \\ 4 &\rightarrow& 0110100101100 \\ 5 &\rightarrow& 1001101011001 \\ 6 &\rightarrow& 1001101100101 \\ 7 &\rightarrow& 1010011011001\end{aligned}$$ Let $g_2: \Sigma_4^* \rightarrow \Sigma_8^*$ be defined as follows. $$\begin{aligned} 0 &\rightarrow& 03523503523453461467 \\ 1 &\rightarrow& 03523503523453467167 \\ 2 &\rightarrow& 16703523503523461467 \\ 3 &\rightarrow& 03523503523461467167 \end{aligned}$$ Let $g_3: \Sigma_3^* \rightarrow \Sigma_4^*$ be defined as follows.
$$\begin{aligned} 0 &\rightarrow& 010203 \\ 1 &\rightarrow& 010313 \\ 2 &\rightarrow& 021013\end{aligned}$$ Finally, define $g: \Sigma_3^* \rightarrow \Sigma_2^*$ by $g = g_1 \circ g_2 \circ g_3$. Note that $g$ is $1560$-uniform. We will prove If $w$ is any squarefree word over $\Sigma_3$, then $g(w)$ avoids $yy$ with $|y| \geq 4$ and ${5 \over 2}^+$ powers. The proof is very similar to the proof of Theorem \[square3\], and we indicate only what must be changed. First, it can be checked that Lemma \[ming\] also holds for the morphism $g$. As before, we break the proof up into two parts: the case where $g(w) = xyyz$ for some $y$ with $4 \leq |y| \leq 2 \cdot 1560$, and the case where $g(w) = xyyz$ for some $y$ with $|y| \geq 2 \cdot 1560$. The former can be checked by examining the images of the 30 squarefree words in $\Sigma_3^5$ under $g$. The latter is handled as we did in the proof of Theorem \[square3\]. We checked these conditions with programs written in Pascal; these are available from the author on request. There is an infinite binary word avoiding squares $yy$ with $|y| \geq 4$ and ${5 \over 2}^+$ powers. It may be of some interest to explain how the morphisms $g_1$, $g_2$, $g_3$, were discovered. We used a procedure analogous to that described above in Section \[proofc\]. However, since it was not feasible to generate all words avoiding $(4, {5\over2}^+)$ and having at most $3$ contiguous blocks of length $1560$, we increased the alphabet size and tried various $k$-blocks until we found a combination of alphabet size and block size for which the number of words appeared to increase without bound. We then obtained a number of possible candidates for blocks. Next, we determined the necessary avoidance properties of the blocks given by images of letters under $g_1$. For example, $g_1(0)$ cannot be followed by $g_1(1)$, because this results in the subword $000$, which is a 3rd power (and $3 > 2.5$).
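The composition and the $000$ example just mentioned can be checked directly. A sketch (our names; it verifies only the uniformity arithmetic $13 \cdot 20 \cdot 6 = 1560$ and the forbidden juxtaposition $g_1(0)g_1(1)$):

```python
g1 = {"0": "0011010010110", "1": "0011010110010", "2": "0011011001011",
      "3": "0100110110010", "4": "0110100101100", "5": "1001101011001",
      "6": "1001101100101", "7": "1010011011001"}
g2 = {"0": "03523503523453461467", "1": "03523503523453467167",
      "2": "16703523503523461467", "3": "03523503523461467167"}
g3 = {"0": "010203", "1": "010313", "2": "021013"}

def g(w):
    """g = g1 o g2 o g3 (apply g3 first, then g2, then g1)."""
    s = "".join(g3[c] for c in w)   # Sigma_3 -> Sigma_4
    s = "".join(g2[c] for c in s)   # Sigma_4 -> Sigma_8
    return "".join(g1[c] for c in s)  # Sigma_8 -> Sigma_2
```

Since $g_1(0)$ ends in $0$ and $g_1(1)$ begins with $00$, the concatenation $g_1(0)g_1(1)$ indeed contains the cube $000$.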
The blocks that must be avoided include all words with squares, and $$01,02,04,05,06,07, 10,12,13,17, 20,21,24,25,26,27, 30,31,32,36,37, 40,41,42,43,47,$$ $$51,54,56,57, 60,62,63,64,65, 72,73,74,75,76, 034,145,153,161,353,450,452,535,615,616,$$ $$714,715, 2346703,5234670,5234671, 53467035, 6703523461, 2346146703503, 5234614670350$$ This list was computed purely mechanically, and it is certainly possible that this list is not exhaustive. We now iterated our guessing procedure, looking for a candidate uniform morphism that creates squarefree words avoiding the patterns in the list above. This resulted in the $20$-uniform morphism $g_2$. We then computed the blocks that must be avoided for $g_2$. This was done purely mechanically. Our procedure suggested that arbitrarily large blocks must be avoided, but luckily they (apparently) had a simple finite description: namely, we must avoid $12$, $23$, $32$, and blocks of the form $2x0x1$ and $3x1x0$ for all nonempty words $x$, in addition to words with squares. We then iterated our guessing procedure one more time, looking for a candidate uniform morphism that avoids [*these*]{} patterns. This gave us the morphism $g_3$. Of course, once the morphism $g = g_1 \circ g_2 \circ g_3$ is discovered, we need not rely on the list of avoidable blocks; we can take the morphism as given and simply verify the properties of inclusion and interchange as in Lemma \[ming\]. Let $B_n$ denote the number of binary words of length $n$ avoiding $yy$ with $|y| \geq 4$ and ${5 \over 2}^+$ powers. Then $B_n = \Omega(1.000066^n)$ and $B_n = O(1.122^n)$. \[avoid4\] The proof is analogous to that of Theorem \[avoid3\]. We use the fact that $g$ is $1560$-uniform, which, combined with the result of Grimm [@Grimm:2001], gives the bound $1.109999^{1/1560} \doteq 1.000066899$. For the upper bound, we again use the Noonan-Zeilberger Maple package. We used the $54$ patterns corresponding to words of length $\leq 20$. 
This gave us a polynomial of degree $27$ with dominant zero $\doteq 1.12123967$. Proof of (f) {#proofg} ============ In this section we prove that there is an infinite binary word that simultaneously avoids $yy$ with $|y| \geq 7$ and ${7 \over 3}^+$ powers. Let $h_1: \Sigma_5^* \rightarrow \Sigma_2^*$ be defined as follows. $$\begin{aligned} 0 &\rightarrow& 00110100101100 \\ 1 &\rightarrow& 00110100110010 \\ 2 &\rightarrow& 01001100101100 \\ 3 &\rightarrow& 10011011001011 \\ 4 &\rightarrow& 11010011011001\end{aligned}$$ Let $h_2: \Sigma_3^* \rightarrow \Sigma_5^*$ be defined as follows. $$\begin{aligned} 0 &\rightarrow& 032303241403240314 \\ 1 &\rightarrow& 032314041403240314 \\ 2 &\rightarrow& 032414032303240314 \end{aligned}$$ Finally, define $h: \Sigma_3^* \rightarrow \Sigma_2^*$ by $h = h_1 \circ h_2$. Note that $h$ is $252$-uniform. We will prove If $w$ is any squarefree word over $\Sigma_3$, then $h(w)$ avoids $yy$ with $|y| \geq 7$ and ${7 \over 3}^+$ powers. Again, the proof is quite similar to that of Theorem \[square3\]. We leave it to the reader to verify that the inclusion and interchange properties hold for $h$, and that the images of all the squarefree words of length $\leq 5$ are free of squares $yy$ with $|y| \geq 7$ and of ${7 \over 3}^+$ powers. There is an infinite binary word avoiding squares $yy$ with $|y| \geq 7$ and ${7 \over 3}^+$ powers. The morphisms $h_1, h_2$ were discovered using the heuristic procedure mentioned in Section \[proofc\]. The blocks to be avoided for $h_1$ were heuristically discovered to include $$01, 02, 10, 12, 13, 20, 21, 34, 42, 43, 304, 23031, 24041, 231403141, 232403241$$ as well as blocks containing any squares. Then $h_2$ was constructed to avoid these blocks. Let $C_n$ denote the number of binary words of length $n$ avoiding $yy$ with $|y| \geq 7$ and ${7 \over 3}^+$ powers. Then $C_n = \Omega(1.0004^n)$ and $C_n = O(1.162^n)$. The proof is very similar to that of Theorems \[avoid3\] and \[avoid4\].
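A quick mechanical check of $h$ is possible along the same lines: the uniformity arithmetic $14 \cdot 18 = 252$, and the absence of the cubes $000$ and $111$ (which are ${7 \over 3}^+$ powers, having exponent $3$) from the image of a short squarefree word. A sketch:

```python
h1 = {"0": "00110100101100", "1": "00110100110010", "2": "01001100101100",
      "3": "10011011001011", "4": "11010011011001"}
h2 = {"0": "032303241403240314", "1": "032314041403240314",
      "2": "032414032303240314"}

def h(w):
    """h = h1 o h2 (apply h2 first, then h1)."""
    return "".join(h1[d] for c in w for d in h2[c])

u = h("012")   # image of a squarefree word; 3 * 252 = 756 letters
```

The full inclusion/interchange verification for $h$ runs the same way as the corresponding check for $f$.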
For the lower bound, note that $h$ is $252$-uniform. This, combined with the bound of Grimm [@Grimm:2001], gives a lower bound of $\Omega(\lambda^n)$ for all $\lambda < 1.109999^{1/252} \doteq 1.0004142$. For the upper bound, we again used the Noonan-Zeilberger Maple package. We avoided $58$ words of length $\leq 20$. This resulted in a polynomial of degree $26$, with dominant zero $\doteq 1.1615225$.

Enumeration results
===================

In this section we provide a table of the first values of the sequences $A_n$, $B_n$, and $C_n$, defined in Sections \[proofc\], \[proofe\], and \[proofg\], for $0 \leq n \leq 25$.

  $n$   $A_n$   $B_n$   $C_n$
  ----- ------- ------- -------
  0     1       1       1
  1     2       2       2
  2     4       4       4
  3     8       6       6
  4     14      10      10
  5     26      16      14
  6     42      24      20
  7     68      36      30
  8     100     46      38
  9     154     64      50
  10    234     74      64
  11    356     88      86
  12    514     102     108
  13    768     114     136
  14    1108    124     164
  15    1632    140     196
  16    2348    160     226
  17    3434    178     264
  18    4972    198     322
  19    7222    212     384
  20    10356   230     436
  21    14962   256     496
  22    21630   294     578
  23    31210   342     674
  24    44846   366     754
  25    64584   392     850

Acknowledgments
===============

I would like to thank Jean-Paul Allouche and Matthew Nichols for having suggested the problem, and Narad Rampersad for independently verifying the properties of the morphism $g$.

[1]{} J. Berstel. *Axel Thue’s Papers on Repetitions in Words: a Translation.* Number 20 in Publications du Laboratoire de Combinatoire et d’Informatique Mathématique. Université du Québec à Montréal, February 1995. F. M. Dekking. On repetitions of blocks in binary sequences. *J. Combin. Theory Ser. A* **20** (1976), 292–299. R. C. Entringer, D. E. Jackson, and J. A. Schatz. On nonrepetitive sequences. *J. Combin. Theory Ser. A* **16** (1974), 159–164. U. Grimm. Improved bounds on the number of ternary square-free words. *J. Integer Sequences* **4** (2001), 01.2.7 (electronic). `http://www.math.uwaterloo.ca/JIS/VOL4/GRIMM/words.html` J. Karhumäki and J. Shallit. Polynomial versus exponential growth in repetition-free binary words, 2003. Submitted. Preprint available at `http://www.arxiv.org/abs/math.CO/0304095`. J. Noonan and D. Zeilberger.
The [Goulden-Jackson]{} cluster method: extensions, applications and implementations. *J. Difference Equations Appl.* **5** (1999), 355–377. A. Thue. Über unendliche Zeichenreihen. *Norske Vid. Selsk. Skr. Mat. Nat. Kl.* **7** (1906), 1–22. Reprinted in [*Selected Mathematical Papers of Axel Thue*]{}, T. Nagell, editor, Universitetsforlaget, Oslo, 1977, pp. 139–158.
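The small-$n$ entries of the enumeration table above are easy to reproduce by brute force, which also pins down the avoidance conventions (forbidden are $e$-powers with $e > p$, and squares $yy$ with $|y| \geq l$). A Python sketch, with exponents computed exactly via smallest periods:

```python
from fractions import Fraction
from itertools import product

def min_period(s):
    """Smallest q such that s[k] = s[k-q] for all k >= q."""
    return next(q for q in range(1, len(s) + 1)
                if all(s[k] == s[k - q] for k in range(q, len(s))))

def avoids(w, l, p):
    """No square yy with |y| >= l, and no factor of exponent > p."""
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            sub, m = w[i:j], j - i
            if m % 2 == 0 and m // 2 >= l and sub[:m // 2] == sub[m // 2:]:
                return False
            if Fraction(m, min_period(sub)) > p:
                return False
    return True

def count(n, l, p):
    """Number of binary words of length n avoiding (l, p^+)."""
    return sum(avoids("".join(t), l, p) for t in product("01", repeat=n))
```

For instance, `count(n, 3, Fraction(3))` recomputes $A_n$, and the analogous calls with $(4, 5/2)$ and $(7, 7/3)$ recompute $B_n$ and $C_n$; exhaustive search is only practical for small $n$.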
--- abstract: | The 6dF Galaxy Survey (6dFGS) aims to measure the redshifts of around [150000]{} galaxies, and the peculiar velocities of a 15000-member sub-sample, over almost the entire southern sky. When complete, it will be the largest redshift survey of the nearby universe, reaching out to about $z\sim0.15$, and more than an order of magnitude larger than any peculiar velocity survey to date. The targets are all galaxies brighter than $K_{\rm tot} = 12.75$ in the 2MASS Extended Source Catalog (XSC), supplemented by 2MASS and SuperCOSMOS galaxies that complete the sample to limits of $(H, J, r_F, b_J) = (13.05, 13.75, 15.6, 16.75)$. Central to the survey is the Six-Degree Field (6dF) multi-fibre spectrograph, an instrument able to record 150 simultaneous spectra over the $5.7^\circ$-field of the UK Schmidt Telescope. An adaptive tiling algorithm has been employed to ensure around 95% fibering completeness over the 17046 deg$^2$ of the southern sky with $|\,b\,|>10^\circ$. Spectra are obtained in two observations using separate V and R gratings that together give $R \sim 1000$ over at least 4000 – 7500Å and signal-to-noise ratio $\sim 10$ per pixel. Redshift measurements are obtained semi-automatically, and are assigned a quality value based on visual inspection. The 6dFGS database is available at [http://www-wfau.roe.ac.uk/6dFGS/]{}, with public data releases occurring after the completion of each third of the survey. author: - | \ $^1$Research School of Astronomy & Astrophysics, The Australian National University,\ Weston Creek, ACT 2611, Australia ([heath, lachlan@mso.anu.edu.au]{})\ $^2$Anglo-Australian Observatory, P.O.
Box 296, Epping, NSW 2121, Australia ([will, colless, fgw@aao.gov.au]{})\ $^3$Institute for Astronomy, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, United Kingdom\ $^4$Department of Physics, Macquarie University, Sydney 2109, Australia\ $^5$School of Physics, University of Sydney, NSW 2006, Australia\ $^6$Harvard-Smithsonian Center for Astrophysics, 60 Garden St MS20, Cambridge, MA 02138-1516, USA\ $^7$Infrared Processing and Analysis Center, California Institute of Technology, Mail Code 100-22,\ 770 South Wilson Avenue, Pasadena, CA 91125, USA\ $^8$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, United Kingdom\ $^9$Department of Physics, University of Durham, South Road, Durham DH1 3LE, United Kingdom\ $^{10}$Institut d’Astrophysique de Paris (CNRS UMR 7095), 98 bis Bd Arago, F-75014 Paris, France\ $^{11}$GEPI (CNRS UMR 8111), Observatoire de Paris, F-92195 Meudon, France\ $^{12}$Faculty of Engineering, Gifu University, Gifu 501–1192, Japan\ date: 'Accepted —. Received —; in original form —.' --- surveys — galaxies: clustering — galaxies: distances and redshifts — cosmology: observations — cosmology: large scale structure of universe INTRODUCTION {#sec:introduction} ============ Wide-scale redshift surveys such as the 2dF Galaxy Redshift Survey and Sloan Digital Sky Survey (2dFGRS, Colless et al. 2001b; SDSS, York et al. 2001) have made significant advances in our understanding of the matter and structure of the wider universe. These include the precise determination of the luminosity function of galaxies (Folkes et al. 1999, Cross et al. 2001, Cole et al. 2001, Blanton et al. 2001, Madgwick et al. 2002, Norberg et al. 2002, Blanton et al. 2003), the space density of nearby rich galaxy clusters (De Propris et al. 2002, Goto et al. 2003), and large-scale structure formation and mass density (Peacock et al. 2001, Percival et al. 2001, Efstathiou et al. 2002, Verde et al. 2002, Lahav et al. 2002, Zehavi et al. 2002, Hawkins et al. 2003, Szalay et al. 2003).
While such surveys have also greatly refined our view of the local universe, better determination of several key parameters can be made where a knowledge of galaxy mass can be combined with redshift. The 2dFGRS and SDSS surveys are both optically-selected, inevitably biasing them in favour of currently star-forming galaxies. The 2dF and Sloan spectrographs also have fields of view too small to allow full sky coverage in realistic timescales, limiting their utility for dynamical and cosmographic studies. The 6dF Galaxy Survey (6dFGS) is a dual redshift/peculiar velocity survey that endeavours to overcome the limitations of the 2dFGRS and SDSS surveys in these areas. The primary sample for the 6dFGS is selected in the $K_s$ band from the 2MASS survey (Jarrett et al. 2000). The magnitudes used in the selection are estimated total magnitudes. These features combined mean that the primary sample is as unbiased a picture of the universe, in terms of the old stellar content of galaxies, as is possible at the current time. The near-infrared selection also enables the survey to probe closer to the Galactic equator before extinction becomes an issue; the survey covers the entire southern sky with $|\,b\,|>10^\circ$. The redshift of a galaxy includes both recessional and peculiar velocity components, so that a redshift survey alone does not furnish a true three-dimensional distribution for the galaxies. However, by measuring these components separately, it is possible to determine the three-dimensional distributions of both the galaxies and the underlying mass. Observationally, significantly greater effort is required to obtain the distances and peculiar velocities of galaxies than their redshifts. Distance estimators based on the Fundamental Plane of early-type galaxies (FP; Dressler et al. 1987, Djorgovski & Davis 1987), as used in the 6dFGS, require measurements of the galaxies’ internal velocity dispersions.
Velocity dispersions demand a signal-to-noise ratio (S/N) in the spectrum at least 3 to 5 times higher than do redshift measurements. Furthermore, the FP distance estimators need such spectroscopy to be supported by photometry which can be used to determine the galaxies’ surface brightness profiles. We measure peculiar velocities as the discrepancy between the redshift and the estimated distance. For realistic cosmologies, peculiar velocities increase weakly with distance, but remain $<1000 {\mbox{\,km\,s$^{-1}$}}$. Redshift errors are small, and have little or no dependence on distance. On the other hand, all existing distance estimators have significant intrinsic fractional uncertainties in distance; for the FP estimators this uncertainty is typically about 20% for a single galaxy measurement. This linear increase in the errors with distance, compared with more or less fixed peculiar velocities, means that the uncertainty on a single peculiar velocity becomes dominated by the intrinsic uncertainties at redshifts $cz \sim 5000 {\mbox{\,km\,s$^{-1}$}}$. Consequently, all previous peculiar velocity surveys have traced the velocity field, and hence the mass distribution, only out to distances of about $5000 {\mbox{\,km\,s$^{-1}$}}$ (for relatively dense field samples of individual galaxies; Dressler et al. 1987, Giovanelli et al. 1998, da Costa et al. 2000) or $15000 {\mbox{\,km\,s$^{-1}$}}$ (for relatively sparse cluster samples, where distances for multiple galaxies are combined; Lauer and Postman 1994, Hudson et al. 1999, Colless et al. 2001a). The smaller volumes are highly subject to cosmic variance, while the larger volumes are too sparsely sampled to reveal much information about the velocity field. In order to differentiate cosmological models, and constrain their parameters, both the survey volume and galaxy sampling need to be significantly increased.
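The crossover quoted above is simple arithmetic: a roughly constant fractional distance error translates into a peculiar-velocity error that grows linearly with redshift, overtaking typical peculiar velocities near $cz \sim 5000$ km s$^{-1}$. A sketch with illustrative numbers only (the 20% figure is the single-galaxy FP uncertainty from the text; the 1000 km s$^{-1}$ envelope is likewise from the text):

```python
frac_err = 0.20        # typical single-galaxy FP fractional distance error
v_pec_typ = 1000.0     # km/s, rough upper envelope of peculiar velocities

def sigma_v(cz, frac_err=frac_err):
    """Distance-error contribution to one galaxy's peculiar velocity (km/s)."""
    return frac_err * cz   # grows linearly with redshift cz (km/s)

for cz in (1000.0, 2500.0, 5000.0, 10000.0):
    dominated = sigma_v(cz) >= v_pec_typ
    print(f"cz = {cz:7.0f} km/s  sigma_v = {sigma_v(cz):6.0f} km/s  "
          f"distance errors dominate: {dominated}")
```

This is why dense single-galaxy samples were limited to roughly the nearest $5000$ km s$^{-1}$, and why pushing farther requires averaging distances over many galaxies per cluster or per volume element.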
Moreover, since peculiar velocities amount to at most a few per cent of the velocity over much of the volume, both the underlying sample and the direct distance estimates have to be extraordinarily homogeneous, preferably involving unchanging telescopes, instrumental setups and procedures in each case. The two facets of the 6dF Galaxy Survey are a redshift survey of 150000 galaxies over the southern sky, and a peculiar velocity survey of a subset of some 15000 of these galaxies. The aims of the redshift survey are: (1) To take a near-infrared selected sample of galaxies and determine their luminosity function as a function of environment and galaxy type. From these, the stellar mass function and mean stellar fraction of the local universe can be ascertained and compared to the integrated stellar mass inferred from measures of cosmic star formation history. (2) To measure galaxy clustering on both small and large scales and its relation to stellar mass. In doing so, the 6dFGS will provide new insight into the scale-dependence of galaxy biasing and its relationship to dark matter. (3) To determine the power spectrum of this galaxy clustering on scales similar to those spanned by the 2dF Galaxy Redshift Survey and Sloan Digital Sky Survey. (4) To delineate the distribution of galaxies in the nearby universe. The 6dFGS will not cover as many galaxies as either the 2dF Galaxy Redshift Survey or Sloan Digital Sky Survey. However, because it is a large-volume survey of nearby galaxies, it provides the ideal sample on which to base a peculiar velocity survey. In this respect, the complete 6dFGS will be more than ten times larger in number and span twice the volume of the PSCz Survey (Saunders et al. 2000), previously the largest survey of the local universe. (5) To furnish a complete redshift catalogue for future studies in this regime. As discussed above, precise distance measures impose stronger demands on signal-to-noise than do redshifts.
With such higher quality spectra, it is also possible to infer properties of the underlying stellar population, such as ages and chemical abundances. Having such measurements, along with the galaxy mass and a knowledge of its local environment, will afford unprecedented opportunities to understand the processes driving galaxy formation and evolution. This redshift catalogue will provide the basis for a volume-limited sample of early-type galaxies for the peculiar velocity survey. The aims of the peculiar velocity survey are: (1) A detailed mapping of the density and peculiar velocity fields to around $15000 {\mbox{\,km\,s$^{-1}$}}$ over half the local volume. (2) The inference of the ages, metallicities and star-formation histories of E/S0 galaxies from the most massive systems down to dwarf galaxies. The influence of local galaxy density on these parameters will also be of key interest to models of galaxy formation. (3) To understand the bias of galaxies (number density versus total mass density field) and its variation with galaxy parameters and environment. One novel feature of the 6dF Galaxy Survey compared to earlier redshift and peculiar velocity surveys is its near-infrared source selection. The main target catalogues are selected from the Two Micron All Sky Survey (2MASS; Jarrett et al. 2000) using total galaxy magnitudes in $JHK$. There are several advantages of choosing galaxies in these bands. First, the near-infrared spectral energy distributions (SEDs) of galaxies are dominated by the light of their oldest stellar populations, and hence, the bulk of their stellar mass. Traditionally, surveys have selected target galaxies in the optical where galaxy SEDs are dominated by younger, bluer stars. Second, the E/S0 galaxies that will provide the best targets for Fundamental Plane peculiar velocity measures represent the largest fraction by galaxy type of near-infrared-selected samples. Finally, the effects of dust extinction are minimal at long wavelengths.
In the target galaxies, this means that the total near-infrared luminosity is not dependent on galaxy orientation and so provides a reliable measure of galaxy mass. In our own Galaxy, it means the 6dFGS can map the local universe nearer to the plane of the Milky Way than would otherwise be possible through optical selection. In this paper we describe the key components contributing to the realisation of the 6dF Galaxy Survey. Section \[sec:instrument\] describes the Six-Degree Field instrument; section \[sec:design\] details the compilation of the input catalogues and the optimal placement of fields and fibres on sources therein; section \[sec:implementation\] outlines the methods used to obtain and reduce the spectra, and derive redshifts from them; section \[sec:datarelease\] summarises the First Data Release of 46474 unique galaxy redshifts and describes the 6dF Galaxy Survey on-line database; section \[sec:conclusions\] provides concluding remarks.

THE SIX-DEGREE FIELD SPECTROGRAPH {#sec:instrument}
=================================

Central to the 6dFGS is the Six-Degree Field multi-object fibre spectroscopy facility (hereafter referred to as 6dF), constructed by the Anglo-Australian Observatory (AAO) and operated on the United Kingdom Schmidt Telescope (UKST). This instrument has three major components: (1) an $r$-$\theta$ robotic fibre positioner, (2) two interchangeable 6dF field plates that contain the fibres to be positioned, and (3) a fast Schmidt spectrograph that accepts the fibre-slit from a 6dF field plate. The process of 6dF operation involves configuring a 6dF field plate on target objects, mounting this configured plate at the focal surface of the UKST and feeding the output target fibre bundle into an off-telescope spectrograph. While one target field is being observed the other field plate can be configured. 6dF can obtain up to 150 simultaneous spectra across the $5.7^\circ$-diameter field of the UKST.
Fuller descriptions of the 6dF instrument have been given elsewhere (Parker et al. 1998, Watson et al. 2000, Saunders et al. 2001); here we summarise only those features important to the Galaxy Survey. The 6dF positioner, though building on the expertise and technology successfully developed and employed in the AAO’s 2dF facility (Lewis et al. 2002), and in previous incarnations of fibre-fed spectrographs at the UKST such as FLAIR (Parker & Watson 1995), is nevertheless a significant departure in both concept and operation. 6dF employs an $r$-$\theta$ robotic fibre positioner constructed as a working prototype for the OzPoz fibre positioner built under contract by the AAO and now commissioned on the European Southern Observatory Very Large Telescope. This fibre-placement technology can place fibre-buttons directly and accurately onto the convex focal surface of the UKST, via a curved radial arm matched to the focal surface. This is coupled with a complete $>360^\circ$ $\theta$-travel and with a pneumatically controlled fibre-gripper travelling in the $z$-direction. Gripper positioning is honed (to $<10\mu$m) using an inbuilt small CCD camera to permit centroid measurement from back-illuminated images of each fibre. Unlike 2dF, the 6dF positioning robot is off-telescope, in a special enclosure on the dome floor. Two identical field plate units are available, which allows one to be mounted on the telescope taking observations whilst the other is being configured by the 6dF robot. Each field plate contains a ring of 154 fibre buttons comprising 150 science fibres and four bundles of guide fibres all arranged around the curved field plate. Each 100$\mu$m ($6.7''$) science fibre is terminated at the input end by a 5mm diameter circular button, containing an elongated SF2 prism to deflect the light into the fibre and a strong rare-earth magnet for adhesion to the field plate.
Targets closer than $5.7'$ on the sky cannot be simultaneously configured due to the clearances required to avoid collisions and interference between buttons. The buttons and trailing fibres are incorporated into individual retractors housed within the main body of the field plate under slight elastic tension. The 150 target fibres feed into a fibre cable wrap 11 metres long, and terminate in a fibre slit-block mounted in the 6dF spectrograph. A full field configuration takes around 30 – 40 minutes depending on target disposition and target numbers, plus about half this time to park any prior configuration. This is less than the 1.5 – 2.5 hours (depending on conditions) that a configured field plate spends on the telescope. There is, however, a $25-30$ minute overhead between fields, needed for parking the telescope, taking arcs and flats, manually unloading and loading the field plates, taking new arcs and flats, and acquiring the new field. The acquisition is via four guide fibres, each consisting of 7 $\times$ 100$\mu$m fibres hexagonally packed to permit direct imaging from four guide stars across the field plate. The guide fibres proved extremely fragile in use, and also hard to repair until a partial redesign in 2002; as a result of this a significant fraction of the data was acquired with three, or occasionally even two, guide fibres, with consequent loss of acquisition accuracy and signal-to-noise. The spectrograph is essentially the previous bench-mounted FLAIR II instrument[^1] but upgraded with new gratings, CCD detector and other refinements. The instrument uses a $1032 \times 1056$ pixel Marconi CCD47-10 device, with 13$\mu$m pixels. All 6dFGS data taken prior to October 2002 used 600V and 316R reflection gratings, covering 4000 – 5600Å and 5500 – 8400Å respectively. Subsequent data use Volume-Phase transmissive Holographic (VPH) 580V and 425R gratings from Ralcon Development Laboratory, with improved efficiency, focus, and data uniformity.
The wavelength coverage is 3900 – 5600Å and 5400 – 7500Å, and the grating and camera angles (and hence dimensionless resolutions) are identical. The peak system efficiency (good conditions and acquisition, wavelengths near blaze, good fibres) is 11%, but can be much less. The marginally lower dispersion of the 580V VPH grating, as compared with the 600V reflection grating, is compensated by the better focus allowed by the reduced pupil relief. The UK Schmidt with 6dF is well-suited to low to medium resolution spectroscopy of bright ($V<17$), sparsely distributed sources (1 to 50 deg$^{-2}$). As such, it fills the gap left open by 2dF for large, shallow surveys covering a significant fraction of the total sky. In terms of $A\Omega$ (telescope aperture $\times$ field of view), UKST/6dF is similar to AAT/2dF and Sloan.

SURVEY DESIGN {#sec:design}
=============

Overview
--------

Surveys which cover the sky in a new waveband (such as 2MASS) are invariably shallow and wide-angle, as this maximises the return (in terms of sample numbers) for the intrinsically difficult observations. This also holds true for the IRAS, ROSAT, HIPASS, NVSS and SUMSS surveys. Other projects, such as finding peculiar velocities, are even more strongly driven to being as shallow and wide-angled as possible; and any project using galaxy distributions to predict dynamics requires the greatest possible sky coverage. All such surveys are hence uniquely matched to 6dF, with its ability to map the whole sky in realistic timescales. These arguments have been extended and formalised by Burkey & Taylor (2004), who have recently studied how the scientific returns of 6dFGS should be optimised in light of existing large-scale datasets such as the 2dFGRS and SDSS. Their analysis shows that the combined redshift ($z$) and peculiar velocity ($v$) components of the 6dFGS give it the power to disentangle the degeneracy between several key parameters of structure formation, listed in Table \[tab:params\].
They demonstrate that $A_g$, $\Gamma$ and $\beta$ can be determined to within around 3% if only the redshift survey is used, although $\omega_b$ and $r_g$ are much less well constrained. If the combined $z$ and $v$ data are used, all of $A_g$, $\Gamma$, $\beta$ and $r_g$ can be determined to within 2 – 3%. The change in $\beta$ and $r_g$ on different spatial scales can also be determined to within a few per cent. Clearly the advantage of the 6dFGS in understanding structure formation comes from its large-scale determination of galaxy masses, in addition to distances. Burkey & Taylor also calculate the optimal observing strategy for 6dFGS, and confirm that the dense sampling and widest possible areal coverage are indeed close to optimal for parameter estimation.

  Parameter                      Description
  ------------------------------ ------------------------------------------------
  $b$                            bias parameter
  $A_g = b A_m$                  galaxy power spectrum amplitude
  $A_v = \Omega_m^{0.6} A_m$     velocity field amplitude
  $\Gamma = \Omega_m h$          power spectrum shape parameter
  $\omega_b = \Omega_b h$        mass density in baryons
  $\beta = \Omega_{m}^{0.6}/b$   redshift-space distortion parameter
  $r_g$                          luminous – dark matter correlation coefficient

  : Cosmological parameters readily measurable from the 6dF Galaxy Survey. \[tab:params\]

Observational Considerations
----------------------------

The original science drivers for the 6dF project were an all-southern-sky redshift survey of NIR-selected galaxies, and a large peculiar velocity survey of early-type galaxies. Three instrumental considerations led to the observations for these projects being merged.
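The derived parameters in Table \[tab:params\] are simple combinations of the underlying cosmological quantities. As a quick illustration, with placeholder fiducial values ($\Omega_m = 0.3$, $h = 0.7$, $b = 1.1$ are assumptions for this sketch, not survey results):

```python
# Illustrative evaluation of two Table [tab:params] parameters from
# assumed (placeholder) fiducial values, not survey measurements.
omega_m, h, b = 0.3, 0.7, 1.1

gamma = omega_m * h         # power spectrum shape parameter, Gamma = Omega_m h
beta = omega_m**0.6 / b     # redshift-space distortion parameter, beta = Omega_m^0.6 / b

print(f"Gamma = {gamma:.3f}, beta = {beta:.3f}")
```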
Firstly, the combination of the physical size of the 6dF buttons (5mm), the small plate scale of the Schmidt ($67.14''/$mm), and the strong angular clustering of the shallow and mostly early-type input catalogues, meant that acceptable ($\sim 90$%) completeness could only be achieved by covering the sky at least twice, in the sense that the sum of the areas of all tiles observed be at least twice the actual area of sky covered. Secondly, the spectrograph optics and CCD dimensions did not simultaneously permit an acceptable resolution ($R\sim 1000$) over the required minimal wavelength range 4000 – 7500Å; this meant that each field had to be observed separately with two grating setups. Thirdly, the robot configuring times, and the overheads between fields (parking the telescope, changing field plates, taking calibration frames, acquiring a new field), meant that observations of less than 1 – 2 hours per field were not an efficient use of the telescope. Together, these factors meant that the redshift survey would necessarily take longer than originally envisaged. However, careful consideration of the effects of signal-to-noise and resolution on velocity width measurements (see Wegner et al. 1999) led us to conclude that for the luminous, high surface brightness galaxies expected to dominate the peculiar velocity survey, the resolution and signal-to-noise expected from the redshift survey observations (in some cases repeated to increase S/N) would in general allow velocity widths to be determined to the required accuracy. Therefore, in early 2001 a decision was made to merge the observations for the two surveys. These observational considerations implied that the survey would be of $\sim 1500$ fields, with $\sim 1$ hour integration time per field per grating, and covering 4000 – 7500Å. With 100 – 135 fibres available for targets per field, this meant 150 – 200,000 observations could be made in total.
Given the $\sim 100,000$ targets desired for the primary $K$-selected survey, and an expected 20% contingency for reobservation (either for failures or to increase S/N), there remained the opportunity to include other samples in the survey, especially if they required lower levels of observational completeness than the primary sample. Some of these were selected by the Science Advisory Group to fill out the sample to provide substantial flux-selected samples at $H,J,I,{\mbox{$b_{\rm\scriptscriptstyle J}$}}$, and ${\mbox{$r_{\rm\scriptscriptstyle F}$}}$ wavebands; others were invited from the community as an announcement of opportunity, and resulted in a wide variety of X-ray, radio, optical, near- and far-infrared selected extragalactic samples being included. It is striking that most of these additional samples derive from the first sky surveys in a new waveband; and also that most of them could not be undertaken on any other telescope, being too large for long-slit work, but too sparse for multiplexing in their own right.

The Primary Sample {#sec:redsurvey}
------------------

The primary redshift ($z$-survey) sample is a magnitude-limited selection drawn from the 2MASS Extended Source Catalog, version 3 (2MASS XSC; Jarrett et al. 2000). Since the survey is attempting a ‘census’ of the local Universe, we want to avoid any bias against lower-surface-brightness galaxies, and ideally we would use total magnitudes. The 2MASS data do include total magnitudes, estimated from curves of growth; these are reliable for high galactic latitudes and/or very bright galaxies, but 2MASS has neither the depth nor the resolution to derive robust total magnitudes for galaxies at lower latitudes to our desired flux limit. On the other hand, 2MASS includes very robust isophotal magnitudes ($K_{\rm iso}$) and diameters to an elliptical isophote of $\mu_K=20^m {\rm arcsec}^{-2}$.
We found that we were able to make a simple surface-brightness correction to these standard isophotal magnitudes, which gave an excellent approximation to the total magnitude at high latitudes where they were reliable (Fig. \[fig:Ktotal\]): $$K_{\rm cor} = K_{\rm iso} - 1.5 \exp\left[1.25\,(\overline{\mu_{K20}}-20)\right] .$$ Here, $\overline{\mu_{K20}}$ is the mean surface brightness within the $\mu_K=20$ elliptical isophote; a maximum correction of $0.5^m$ is allowed. This ‘corrected’ isophotal magnitude was also extremely robust to stellar contamination. There remains a smaller second-order bias dependent on the convexity of the profile. Further details are in Burkey (2004). A latitude cut of $|\,b\,| \geq 10^\circ$ was imposed, mostly because extinctions closer to the plane would demand much greater integration times, and a declination cut of $\delta < 0^\circ$ was imposed. Our final selection was then 113988 galaxies with $K_{\rm cor}<12.75$, corresponding approximately to $K_{20}<13^m$ for typical $K$-selected galaxies.

The Additional Samples
----------------------

Thirteen other smaller extragalactic samples are merged with the primary sample. These include secondary 2MASS selections down to $H_{\rm tot} = 13.05$ and $J_{\rm tot}=13.75$ over the same area of sky, constituting an additional $\sim 5\,000$ sources. Optically-selected sources from the SuperCOSMOS catalogue (Hambly et al. 2001) with ${\mbox{$r_{\rm\scriptscriptstyle F}$}}\ < 15.6$ and ${\mbox{$b_{\rm\scriptscriptstyle J}$}}\ < 16.75$, $|\,b\,| > 20^\circ$ were included, constituting a further $\sim 20\,000$ galaxies. The remaining miscellaneous piggy-back surveys contribute a further $\sim 29\,000$ galaxies in various regions of the sky. These samples heavily overlap, greatly increasing the efficiency of the survey: the combined grand sum of all the samples amounts to 500000 sources, but these represent only 174442 different sources when overlap is taken into account.
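The surface-brightness correction quoted above, with its $0.5^m$ cap, can be written as a few lines of code. This is an illustrative sketch; the function name and call signature are ours, not a 2MASS or 6dFGS convention.

```python
import math

# Illustrative implementation of the isophotal-to-total magnitude
# correction quoted in the text, with the stated 0.5 mag cap.
# (Hypothetical helper, not part of any survey pipeline.)

def k_total_estimate(k_iso, mu_k20_mean):
    """Approximate total K magnitude from the 2MASS isophotal magnitude.

    k_iso       : isophotal magnitude within the mu_K = 20 mag/arcsec^2 isophote
    mu_k20_mean : mean surface brightness within that isophote
    """
    correction = 1.5 * math.exp(1.25 * (mu_k20_mean - 20.0))
    return k_iso - min(correction, 0.5)   # maximum allowed correction: 0.5 mag
```

Note that the correction grows as the mean surface brightness approaches the $\mu_K = 20$ isophote, i.e. for lower-surface-brightness galaxies, which is the bias the correction is designed to remove.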
However, at the current rate of completion, we estimate that the eventual number of 6dF galaxy redshifts will be around 150000.

  Sample (Contact, Institution)                  Weight   Total    Sampling
  ---------------------------------------------- -------- -------- ----------
  2MASS $K_s<12.75$ (Jarrett, IPAC)              8        113988   94.1%
  2MASS $H<13.05$ (Jarrett, IPAC)                6        3282     91.8%
  2MASS $J<13.75$ (Jarrett, IPAC)                6        2008     92.7%
  SuperCOSMOS $r_F<15.6$ (Read, ROE)             6        9199     94.9%
  SuperCOSMOS $b_J<16.75$ (Read, ROE)            6        9749     93.8%
  Shapley (Proust, Paris-Meudon)                 6        939      85.7%
  ROSAT All-Sky Survey (Croom, AAO)              6        2913     91.7%
  HIPASS ($>4\sigma$) (Drinkwater, Queensland)   6        821      85.5%
  IRAS FSC $6\sigma$ (Saunders, AAO)             6        10707    94.9%
  DENIS $J<14.00$ (Mamon, IAP)                   5        1505     93.2%
  DENIS $I<14.85$ (Mamon, IAP)                   5        2017     61.7%
  2MASS AGN (Nelson, IPAC)                       4        2132     91.7%
  Hamburg-ESO Survey (Witowski, Potsdam)         4        3539     90.6%
  NRAO-VLA Sky Survey (Gregg, UC Davis)          4        4334     87.6%
  Total                                                   167133   93.3%

  : Source catalogues contributing to the 6dFGS master target list. \[tab:6dFGStargets\]

Table \[tab:6dFGStargets\] summarises the breakdown of source catalogues contributing to the master target list. In total there are 167133 objects with field allocations, of which two-thirds are represented by the near-infrared-selected sample. A further 7309 unallocated sources brings the total target list to 174442. The mean surface density of this primary sample is $7$ deg$^{-2}$. Literature redshifts have been incorporated into the redshift catalogue, 19570 of these from ZCAT (Huchra et al. 1999) and 8444 from the 2000 deg$^{2}$ in common with the 2dF Galaxy Redshift Survey (Colless et al. 2001b). Roughly half the sample is early type. For the primary sample, all galaxies are observed, even where the redshift is already known, to give a complete spectroscopic sample at reasonable resolution ($R\sim 1000$) and signal-to-noise ratio (S/N $\sim 10$). Both the tiling (section 3.3) and configuring (3.4) of targets within individual fields used the weights to assign priorities.
Peculiar Velocity Survey
------------------------

Peculiar velocities are a vital probe of the large scale mass distribution in the local universe that does not depend on the assumption that light traces mass. Early work (Lynden-Bell et al. 1988) made the unexpected discovery of a large ($\sim 600 {\mbox{\,km\,s$^{-1}$}}$) outflow (positive peculiar velocities) in the Centaurus region. This led to the idea of a large extended mass distribution, nicknamed the Great Attractor (GA), dominating the dynamics of the local universe. Lynden-Bell et al. estimated this structure was located at ($l$, $b$, $cz$) $\sim$ (307$^\circ$, 7$^\circ$, $4350\pm350 {\mbox{\,km\,s$^{-1}$}}$) and had a mass of $\sim$5$\times$ 10$^{16}$M$_\odot$. Attempts to measure the expected GA backside infall have proved controversial and some workers have argued for a continuing high amplitude flow beyond the GA distance, perhaps resulting from the more distant gravitational pull of the Shapley concentration (312$^\circ$, 31$^\circ$, $14000 {\mbox{\,km\,s$^{-1}$}}$) (Scaramella et al. 1989, Hudson et al. 1999). The goal of the peculiar velocity ($v$-survey) is to measure peculiar velocities for an all-southern-sky sample of galaxies. Peculiar velocities are measured for early-type galaxies through the Fundamental Plane (FP), using photometric parameters from 2MASS images and velocity dispersions from 6dF spectroscopy. The $v$-survey sample consists of all early-type galaxies from the primary $z$-survey sample that are sufficiently bright to yield precise velocity dispersions. Because we cover the sky twice, suitable candidate galaxies (selected on the basis of either 2MASS morphology or first-pass 6dF spectroscopy) can be observed a second time in order to extend the $v$-survey sample to fainter limits. Based on the high fraction of early-type galaxies in the $K$-selected sample and the signal-to-noise ratio obtained in our first-pass spectroscopy, we expect to measure distances and peculiar velocities for 15000 galaxies with $cz < 15\,000 {\mbox{\,km\,s$^{-1}$}}$.
When linked with the [*predicted*]{} peculiar velocities from all-sky redshift surveys like the PSCz (Branchini et al. 1999), a value for $\Omega$ can be found that is independent of CMB measurements.

Field Placement and Tiling Algorithm {#sec:tiling}
------------------------------------

The survey area is $17\,046$ deg$^2$, meaning that the 1360 6dF fields ($5.7^\circ$-diameter) contain a mean of 124 sources per field and cover the sky twice over. An adaptive tiling algorithm was employed to distribute the fields (“tiles”) across the sky to maximise uniformity and completeness, described in full in Campbell et al. (2004). In brief, this consisted of a merit function, which was the priority-weighted sum ($P=\beta^p$, Sect. \[sec:redsurvey\]) of allocated targets; a method for rapidly determining fibring conflicts between targets; a method of rapidly allocating targets to a given set of tiles so as to maximise the merit function; and a method to make large or small perturbations to the tiling. Tiles were initially allocated at random target positions, and the merit function maximised via the Metropolis algorithm (Metropolis et al. 1953). It quickly became clear that the clusters were too ‘greedy’ under this scheme, in the sense that the completeness was higher in these regions. This is easily seen by considering a tiling with a uniform level of incompleteness everywhere, but with one last tile still to be placed: this will always go to the densest region, since the density of unconfigured targets is also largest there. To counter this effect, we inversely weighted each galaxy by the local galaxy surface density (as determined from the primary sample) on tile-sized scales; in the above example this means the final tile can be placed anywhere with equal merit. This achieved our aim of consistent completeness, at a very small penalty in overall completeness.
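The tiling scheme described above can be caricatured in a few lines. The following toy sketch (ours, not the Campbell et al. code) moves one-dimensional “tiles” at random and keeps perturbations according to the Metropolis acceptance rule, maximising a weighted count of covered targets; the inverse-density weights discussed above would enter through the `weights` table, which is left uniform here.

```python
import math, random

# Toy 1-D sketch of adaptive tiling via the Metropolis algorithm.
# All quantities (tile width, temperature schedule, counts) are arbitrary.

random.seed(1)
targets = [random.random() for _ in range(400)]   # toy "sky" positions in [0, 1]
weights = {t: 1.0 for t in targets}               # inverse-density weights would go here

def merit(tiles, width=0.06):
    """Priority-weighted count of targets covered by at least one tile."""
    covered = {t for t in targets for c in tiles if abs(t - c) <= width / 2}
    return sum(weights[t] for t in covered)

tiles = [random.random() for _ in range(12)]
temperature = 2.0
current = merit(tiles)
for step in range(2000):
    i = random.randrange(len(tiles))
    old = tiles[i]
    tiles[i] = min(1.0, max(0.0, old + random.gauss(0, 0.05)))  # perturb one tile
    new = merit(tiles)
    # Metropolis rule: always accept improvements, sometimes accept losses
    if new < current and random.random() >= math.exp((new - current) / temperature):
        tiles[i] = old                                          # reject the move
    else:
        current = new
    temperature *= 0.999                                        # slow cooling

print(f"covered {current:.0f} of {len(targets)} targets")
```

Without the density weighting, a run like this concentrates tiles where targets cluster, which is precisely the “greedy” behaviour the survey's weighting scheme was introduced to suppress.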
It broke down in the heart of the Shapley supercluster, where galaxy densities are orders of magnitude higher than elsewhere, and we added 10 tiles by hand in this region. Two major tiling runs of the 6dFGS catalogue have been done: the first in April 2002 before commencement of observations ([*version A*]{}), and a second revised tiling in February 2003 after the first year of data ([*version D*]{}). The revision was due to the higher-than-expected rate at which fibres were broken and temporarily lost from service (Fig. \[fig:attrition\]), and a major revision in the primary sample itself from IPAC. Figure \[fig:tiling\] shows the relationships between the full source list ([*top*]{}), those that remained unobserved at the time of the [*version D*]{} tiling allocation ([*middle*]{}), and the optimal tile placement to cover these ([*bottom*]{}). Tests of the two-point correlation function were made on the sample selected through the final tiling allocation, to see what systematic effects might arise from its implementation. Mock catalogues were generated with the correlation function observed by the 2dF Galaxy Redshift Survey (Hawkins et al. 2003); these were tiled like the real data, and the resulting two-point correlation function was determined and compared with the original. This revealed an undersampling on scales up to $\sim 1$, clearly the result of the fibre button proximity limit. No bias was seen on larger scales. Theoretical tiling completenesses of around 95% were achievable for all except the lowest priority samples, and variations in uniformity were confined to $< 5$%. However, fibre breakages have meant that 6dFGS has consistently run with many fewer fibres than anticipated, impacting on the completeness of the lower priority samples in particular. With a fixed timeline for the survey (mid-2005) and a fixed number of fields to observe, there is little choice in the matter.
Fibre Assignment
----------------

Within each tile, targets are assigned to fibres by the same [CONFIGURE]{} software used by the 2dF Galaxy Redshift Survey. This iteratively seeks the assignment that fibres the largest number of targets, giving precedence to the highest-priority ones. Early configurations (until mid-2003) were usually tweaked by hand to improve target yield; after that date a revised version of [CONFIGURE]{} was installed with much improved yields, and little or no further tweaking was generally needed.

SURVEY IMPLEMENTATION {#sec:implementation}
=====================

Observational Technique
-----------------------

Field acquisition with 6dF is carried out using conventional guide-fibre bundles. Four fibre buttons are fitted with coherent bundles of seven fibres rather than a single science fibre. Fibre diameter is 100$\mu$m ($6.7''$) and the guide fibres are in contact at the outer cladding, giving a compact configuration $\sim$20 arcsec in diameter. These fibres are 5 m long and feed the intensified CCD acquisition camera of the telescope. The use of acquisition fibres of the same diameter as the science fibres is sub-optimal, but in practice the four guide-fibre bundles give good alignment, particularly as guide stars near the edge of the field are always chosen. Guide stars are selected from the Tycho-2 catalogue (Hoeg et al. 2000), and have magnitudes typically in the range $8<V<11$. Field acquisition is straightforward in practice, and the distortion modelling of the telescope’s focal surface is sufficiently good that a field rotation adjustment is not usually required, other than a small standard offset. Each field is observed with both V and R gratings; the two sets of spectra are later spliced to reconstruct a single spectrum from the two observations. Integrations are a minimum of 1 hr for the V spectrum and 0.5 hr for the R spectrum, although these times are increased in poor observing conditions.
This gives spectra with typical S/N around 10 pixel$^{-1}$, yielding $>$90% redshift completeness. This observing strategy typically allows 3 – 5 survey fields to be observed on a clear night, depending on season. With 75% of the UKST time assigned to 6dFGS, and an average clear fraction of 60%, we typically observe about 400 fields per year. The observational strategy is to divide the sky into three declination strips. Initially, the survey has concentrated on the $\delta=-30^\circ$ declination strip (actually $-42^\circ<\delta<-23^\circ$); the equatorial strip ($-23^\circ<\delta<0^\circ$) will be done next, and then finally the polar cap ($\delta<-42^\circ$). Observations started in June 2001, though final input catalogues and viable reduction tools were not available until 2002. Early data suffered from various problems, including poorer spectrograph focus due to misalignment within the camera; poorer quality control; and use of preliminary versions of the 2MASS data, leading to many observed sources being dropped from the final sample. The 2001 data are not included in this data release. Initial observations were carried out at mid-latitudes for observational convenience, with the actual band corresponding to one of the Additional Target samples. Excursions from this band were made to target other Additional Target areas, where separate telescope time had been allotted to such a program but the observations could be fruitfully folded into 6dFGS. The observing sequence conventionally begins with R data (to allow a start to be made in evening twilight). With the telescope at access park position, a full-aperture flat-field screen is illuminated with calibration lamps. First of these is a set of quartz lamps to give a continuum in each fibre.
This serves two purposes: (a) the loci of the 150 spectra are defined on the CCD, and (b) the differences between the extracted spectra of the smooth blackbody lamp allow flatfielding of the signatures introduced into the object spectra by pixel-to-pixel variations and fibre-to-fibre chromatic throughput variations. Then the wavelength-calibration lamps are exposed: HgCd + Ne for the R data and HgCd + He for the V data. After the R calibration exposure, the field is acquired and the 3$\times$10-min red frames obtained. Once they are completed, the grating is changed remotely from the control room and the 3$\times$20-min V frames obtained. At the end of the sequence, the V wavelength calibration and flat-field exposures are made. With the change of field comes a change of slit-unit (because of the two 6dF field plates), so all the calibrations must be repeated for the next field. Usually, the reverse waveband sequence is then followed, i.e., beginning with V and ending with R. This process continues throughout the night, as conditions allow.

Reduction of Spectra {#sec:reduction}
--------------------

The reduction of the spectra uses a modified version of the [2DFDR]{} package developed for the 2dF Galaxy Redshift Survey. Unlike 2dF data, tramline fitting is done completely automatically, using the known gaps in the fibres to uniquely identify the spectra with their fibre number. Because of computing limitations, [TRAM]{} rather than [FIT]{} extractions are performed. [FIT]{} extractions would reduce crosstalk between fibres, but this is already small for 6dF compared with 2dF. Scattered light subtraction is not in general performed, unless there is specific reason for concern, such as during periodic oil-contamination episodes within the dewar. Again, scattered-light performance is better with 6dF in general than with 2dF. The extracted spectra for each field are combined, usually weighted by S/N to cope with variable conditions.
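As a minimal illustration of the S/N weighting used when combining repeat exposures, the sketch below implements a simple weighted mean; the actual 2DFDR combination is more involved, and the function here is ours, not part of that package.

```python
import numpy as np

# Minimal sketch (not 2DFDR): S/N-weighted combination of repeat exposures
# of the same fibre, so that poor exposures are down-weighted.

def combine_exposures(fluxes, snrs):
    """Combine repeat spectra of one fibre, weighting each by its S/N.

    fluxes : (n_exposures, n_pixels) array of extracted spectra
    snrs   : per-exposure S/N estimates used as weights
    """
    fluxes = np.asarray(fluxes, dtype=float)
    w = np.asarray(snrs, dtype=float)
    return (w[:, None] * fluxes).sum(axis=0) / w.sum()

# A good exposure (S/N 10) dominates a poor one (S/N 2):
combined = combine_exposures([[1.0, 2.0], [5.0, 6.0]], snrs=[10.0, 2.0])
```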
The S/N is computed at this stage, and a S/N per pixel of 10 in each of the V and R frames usually indicates a satisfactorily observed field. All data are then fluxed using 6dF observations of the standard stars Feige 110 and EG274. This fluxing is inevitably crude, in that the same fixed average spectral transfer function is assumed for each plate for all time. Differences in the transfer function between individual fibres are corrected for by the flat-fielding. The resulting R and V spectra for each source are then spliced together, using the overlapping region to match their relative scaling. In order to avoid a dispersion discontinuity at the join in each spectrum, we also rescrunch the lower dispersion R data onto an exact continuation of the V wavelength dispersion.

Spectral Quality
----------------

Most spectra have no problems, in the sense that: (1) the S/N is reasonable given the magnitude of the source; (2) both V and R frames are available; and (3) there were no problems in the reduction. However, there are significant caveats of which all users should be aware.

- [Many fields were observed in marginal conditions, and have reduced overall S/N as a result. Our philosophy has been to extract what good spectra we can from these fields, and recycle the rest for reobservation. A field was only reobserved in its entirety where the data was valueless.]{}
- [Many fields were observed with three or occasionally even two guide fibres, with consequently lower and more variable S/N.]{}
- [Some fibres have poor throughputs due to misalignment or poor glueing within the button, and variations by factors of two are normal.]{}
- [Many fibres, throughout the duration of the survey, have suffered various kinds of damage in use, short of breakage. Very often, this has resulted in strong fringing in the spectral response of the fibre, due to an internal fracture acting as a Fabry-Perot filter.
This did not often flat-field out completely.]{}
- [The CCD is in any case a thinned blue-sensitive chip; as a result, red data suffer increasing levels of fringing towards longer wavelengths, and this does not always flat-field out.]{}
- [Fibre breakages during configuring, or between blue and red observations, or severe differences in acquisition, can lead to occasional missing or mis-spliced red or blue data.]{}
- [Some fields have missing red data.]{}
- [Though scattered light is not a major problem in general, data at the blue end of the spectra can be corrupted, because the actual counts are so low. In extreme cases, the spectra can become negative. The overall quality of the fluxing is untested, and should be treated with extreme caution.]{}
- [All VPH data suffer from a faint but variable, spurious, spectral feature at wavelengths around 4440Å (10 pixel region) in the V grating, and 6430 and 6470Å (10 pixel regions) in the R. After extensive investigation, the cause was determined to be a ghost: dispersed light reflected back off the grating is recollimated by the camera, undispersed in first-order reflection mode by the VPH grating, and refocused onto the chip as a somewhat out-of-focus (10-20 pixel diameter), undispersed image of the fibre, with an intensity 0.1-1% of the summed dispersed light. Circumventing this problem requires tilting the fringes within the grating (so they are no longer parallel with the normal to the grating) by a degree or two, to throw the ghost image just off the chip.]{}

Redshift Measurement {#sec:redshifts}
--------------------

Accurate redshift measurement is a fundamental component of both the $z$- and $v$-surveys. We started with the semi-automated redshifting [RUNZ]{} software used for the 2dF Galaxy Redshift Survey (Colless et al. 2001b), kindly provided by Will Sutherland. Extensive modifications were made in order to accept 6dF data.
The version used for 2dF determined quasi-independent estimators of the redshift from emission and absorption features; this improved the reliability of the redshift estimates, while reducing their accuracy. Since the line identification of the higher S/N and higher dispersion 6dF spectra was usually not in doubt, we decided in general not to patch out emission features in determining cross-correlation redshifts; and in general the cross-correlation redshift was used in preference to the emission-line redshift. Each automated redshift is checked visually to decide whether the software has made an accurate estimate or been misled by spurious spectral features. Such features are typically due to fibre interference patterns or poor sky subtraction and are difficult to identify through software, although easily recognisable to a human operator. The operator checks the automated redshift by comparing it to the original spectrum, the location of night-sky line features and the cross-correlation peak. In some cases, manual intervention in the form of re-fitting of spectral features or of the correlation peaks yields a new redshift. In the majority of cases, however, the automated redshift value is accepted without change. The final redshift value is assigned a quality, $Q$, from 1 to 5, where $Q=3,4,5$ for redshifts included in the final catalogue. $Q=4$ represents a reliable redshift, while $Q=3$ is assigned to probable redshifts; $Q=2$ is reserved for tentative redshift values and $Q=1$ for spectra of no value. $Q=5$ signifies a ‘textbook’ high signal-to-noise spectrum, although in practice it is rarely used for the 6dFGS. Figure \[fig:examplespect\] shows a few examples of galaxy spectra across the range of redshift quality, for both emission and absorption-line spectra.
The same visual assessment technique was employed for the 2dF Galaxy Redshift Survey and greatly increased the reliability of the final sample: repeat measurements on a set of $\sim 15\,000$ 2dF spectra by two operators were discrepant in only 0.4% of cases (Colless et al. 2001b). Figure \[fig:qualitySN\] shows the relationship between redshift quality $Q$ and the mean signal-to-noise of the spectra that yielded them. The vast majority (76%) of the 6dFGS redshifts have $Q=4$, from spectra with a median signal-to-noise of 9.4. The $Q=5$ sources are too few (24) to show. For $Q=3$ redshifts the median signal-to-noise ratio drops to 5.3, marking the lower end of the range of redshift-yielding spectra. In both cases, note the long tail to higher signal-to-noise values. The median signal-to-noise ratio for $Q=2$ redshifts (6.3) is slightly higher than that for $Q=3$ (5.3). This is due to the significant number of Galactic sources such as stars and planetary nebulae, which produce high signal-to-noise spectra but are assigned $Q=2$ on account of their zero redshift.

FIRST DATA RELEASE {#sec:datarelease}
==================

Statistics and Plots
--------------------

Between January 2002 and July 2003 the 6dF Galaxy Survey Database compiled [52048]{} spectra, from which [46474]{} unique redshifts were derived. The numbers of spectra with redshift quality $Q \ge 3$ were [43945]{} for the full set and [39649]{} for the unique redshifts. Of the [174442]{} total galaxies in the target sample, [28014]{} had existing literature redshifts: [19570]{} from the ZCAT compilation (Huchra et al. 1999) and [8444]{} from the 2dF Galaxy Survey (Colless et al. 2001b). Of the 113988 $K_s$-selected sources, there are 32156 6dF-measured redshifts of quality $Q \ge 3$, plus a further 21151 existing literature redshifts. Table \[tab:breakdown\] summarises these values for the individual sub-samples as they appear in the 6dFGS Database.
  id    survey                     total    $cz\le600$   $cz>600$   6df $z$   lit $z$   6df$>600$   Q345    Q1     Q2     Q3     Q4      Q5   no $z$
  ----- -------------------------- -------- ------------ ---------- --------- --------- ----------- ------- ------ ------ ------ ------- ---- --------
  1     2MASS $K_s<12.75$          113988   1750         53051      33650     21151     32983       32156   1312   1494   2708   29433   15   59187
  3     2MASS $H<13.05$            3282     18           853        526       345       512         492     33     34     58     434     0    2411
  4     2MASS $J<13.75$            2008     17           552        333       236       319         304     14     29     28     276     0    1439
  5     DENIS $J<14.00$            1505     11           259        124       146       117         111     26     13     27     84      0    1235
  6     DENIS $I<14.85$            2017     96           191        150       137       63          63      18     87     10     53      0    1730
  7     SuperCOSMOS $r_F<15.6$     9199     137          3310       1539      1908      1439        1407    46     132    104    1302    1    5752
  8     SuperCOSMOS $b_J<16.75$    9749     35           3718       1973      1780      1961        1900    76     73     173    1726    1    5996
  78    Durham/UKST extension      466      2            73         10        65        8           8       1      2      6      2       0    391
  90    Shapley supercluster       939      9            323        282       50        273         250     22     32     48     202     0    607
  113   ROSAT All-Sky Survey       2913     99           535        395       239       300         223     231    172    53     170     0    2279
  116   2MASS red AGN Survey       2132     9            252        129       132       121         81      106    48     45     36      0    1871
  119   HIPASS ($>4\sigma$)        821      8            268        135       141       130         121     11     14     29     92      0    545
  125   SUMSS/NVSS radio sources   6843     321          709        654       376       347         322     89     332    51     270     1    5813
  126   IRAS FSC ($6\sigma$)       10707    258          2872       1360      1770      1218        1105    303    255    198    906     1    7577
  129   Hamburg-ESO Survey QSOs    3539     73           197        220       50        150         56      204    164    19     37      0    3269
  130   NRAO-VLA Sky Surv. QSOs    4334     342          146        483       5         142         62      303    421    42     20      0    3846
        Total                      174442   3185         67309      41963     28531     40083       38661   2795   3302   3599   35043   19   103948

[**Column Headings:**]{}\
$cz\le600$ — object has a redshift (either 6dF-measured with quality $> 1$ or from the literature) less than or equal to 600.\
$cz>600$ — object has a redshift (either 6dF-measured with quality $> 1$ or from the literature) greater than 600.\
6df $z$ — total number of 6dF-measured redshifts with quality $Q > 1$.\
lit $z$ — total number of literature redshifts.\
6df$>600$ — number of 6dF-measured redshifts greater than 600 with quality $Q > 1$.\
Q345 — total number of (6dF-measured) sources with redshift quality $Q = 3, 4$ or 5.\
Q1, Q2, Q3, ... — total number of sources with redshift quality $Q = 1$, $Q=2$, $Q=3$, etc.\
no $z$ — number of sources in the database with neither a 6dF (quality $Q > 1$) nor a literature redshift.\

Data from [524]{} fields have contributed to the first data release. As shown in Fig. \[fig:obsflds\]([*top*]{}), the majority of these occupy the central declination strip between $-42^\circ < \delta < -23^\circ$. Overall there are [1564]{} fields on the sky: [547]{} in the equatorial strip, [595]{} in the central strip, and [422]{} in the polar region. Figure \[fig:obsflds\]([*bottom*]{}) shows the corresponding distribution of redshift completeness on the sky for the $K$-band sample.
The [*redshift completeness*]{}, $R$, is that fraction of galaxies in the parent catalogue of [174442]{} with acceptable ($Q\ge3$) redshifts in a given area of sky, from whatever source, $$\begin{aligned} R & = & \frac{N_z(\btheta)}{N_p(\btheta)} \nonumber\\ & = & \frac{ N_{{\rm lit}}(\btheta) + N_{{\rm 6dF}}(\btheta) } { N_{{\rm lit}}(\btheta) + N_{{\rm 6dF}}(\btheta) + N_{\rm Gal}(\btheta) + N_{\rm f}(\btheta) + N_{\rm r}(\btheta)} \label{redshiftcompl}\end{aligned}$$ Here, $N_p(\btheta)$ is the number of galaxies from the parent catalogue (per unit sky area) at the location $\btheta$, and $N_z(\btheta)$ is the number with redshifts, either from 6dF ($N_{{\rm 6dF}}(\btheta)$) or the literature ($N_{{\rm lit}}(\btheta)$). Sources in the parent catalogue that have been redshifted and excluded are either stars, planetary nebulae/ISM features (both assigned $Q=2$), or failed spectra ($Q=1$). In Eqn. \[redshiftcompl\] their numbers are denoted by $N_{\rm Gal}(\btheta)$ and $N_{\rm f}(\btheta)$. The remaining sources are those yet to be observed, $N_{\rm r}(\btheta)$. Of the first $\sim 41\,000$ sources observed with 6dF, around 3% were stars, 1% were other Galactic sources, and 11% failed to yield a redshift. The [*field completeness*]{} is the ratio of acceptable redshifts in a given field to initial sources, and hence is only relevant to targets observed with 6dF. It also excludes Galactic features like stars and ISM. Figure \[fig:fldcompl\] shows the distribution of field completeness from the first [524]{} fields and its cumulative distribution. This demonstrates that the redshift success rate of 6dF is good, with both the median and mean completeness around 90%. Observe the large difference between the high [*field*]{} completeness values of Fig. \[fig:fldcompl\] and the lower [*redshift*]{} completeness in Fig. \[fig:obsflds\]([*bottom*]{}). This is due to the high degree of overlap in the 6dFGS field allocation.
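The completeness definition above is a simple ratio of counts; a minimal sketch (the function name and argument order are ours):

```python
def redshift_completeness(n_lit, n_6df, n_gal, n_fail, n_remaining):
    """Redshift completeness R = N_z / N_p for one patch of sky.

    N_z counts good (Q >= 3) redshifts from 6dF plus the literature;
    N_p adds the Galactic contaminants, the failed spectra and the
    sources not yet observed.
    """
    n_z = n_lit + n_6df
    n_p = n_z + n_gal + n_fail + n_remaining
    return n_z / n_p
```

For example, a patch with 50 literature and 40 6dF redshifts, 5 Galactic contaminants, 5 failures and nothing left to observe has $R = 90/100 = 0.9$, in line with the $\sim$90% field completeness quoted above.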
The large variance in the density of targets has meant that most parts of the sky need to be tiled two or more times over. This is not at all obvious in Fig. \[fig:obsflds\]([*top*]{}), which superimposes all fields, giving the impression of a single layer of tiles. While much of the central strip contains observed and redshifted fields, it also contains other fields in this same region, as yet unobserved. The distribution of 6dFGS redshifts exhibits the classic shape for magnitude-limited surveys of this kind (Fig. \[fig:nz\]). The median survey redshift, $\langle cz \rangle = 16\,008$ ($\bar{z} = 0.055$), is less than half that of the 2dFGRS or SDSS surveys. Figure \[fig:radplot\] shows the radial distribution of galaxies across the southern sky, projected across the full range of southerly declinations ($\delta = 0$ to $-90^\circ$). Projecting in this way has the drawback of taking truly separate 3D space structures and blending them on the 2D page. Figure \[fig:cylplot\] shows the same data plotted in $\Delta \delta = 10^\circ$ declination slices, together with a magnified view of the lowest redshift galaxies within $-40^\circ < \delta < -30^\circ$. Variations in galaxy density apparent in Figs. \[fig:radplot\] and \[fig:cylplot\] are due to the incomplete coverage of observed fields and the projection of the Galactic Plane. No 6dFGS galaxies lie within galactic latitude $|\,b\,| \le 10^\circ$. The 6dFGS is also clumpier than optically-selected redshift surveys such as 2dFGRS and SDSS. This is because the near-infrared selection is biased towards early-type galaxies, which cluster more strongly than spirals. The 6dFGS provides the largest sample of near-infrared selected galaxies with which to determine the fraction of mass in the present-day universe existing in the form of stars.
To this end, Jones et al. (2004) are deriving the $J$, $H$ and $K_{\rm s}$-band luminosity functions from the first [75000]{} redshifts of the 6dF Galaxy Survey, combining data from both before and after the First Data Release. Using the near-infrared luminosity functions and stellar population synthesis models, the galaxy stellar-mass function for the local universe can be estimated. When this is integrated over the full range of galaxy masses, the total mass of the present-day universe in stars can be expressed in units of the critical density.

6dFGS Online Database {#sec:database}
---------------------

Data from the 6dF Galaxy Survey are publicly accessible through an online database at [http://www-wfau.roe.ac.uk/6dFGS/]{}, maintained by the Wide Field Astronomy Unit of the Institute for Astronomy, University of Edinburgh. An early data release of around 17000 redshifts was made in December 2002, along with the opening of the web site and tools for catalogue access. This paper marks the First Data Release of [52048]{} total redshifts measured between January 2002 and July 2003. The design of the database is similar to that used for the 2dF Galaxy Redshift Survey, in that parameterised data are stored in a relational database. Each [TARGET]{} object is also represented by a multi-extension FITS file which holds thumbnail images of the object and the spectra. The database is accessed/queried using Structured Query Language (SQL). A combined 6dF-literature redshift catalogue is provided in a separate single master catalogue. The 6dFGS database is housed under Microsoft’s relational database software, [*SQL Server 2000*]{}. The data are organised in several tables (Table \[tab:database1\]). The master target list used to configure 6dFGS observations is represented by the [TARGET]{} table. Spectral observations are stored in the [SPECTRA]{} table.
The input catalogues that were merged to make up the master target list are also held in individual tables ([TWOMASS]{}, [SUPERCOS]{}, etc.). The [TARGET]{} table forms the hub of the database. Every table is interlinked via the parameters targetid and targetname. These parameters are unique in the master [TARGET]{} table but are not necessarily unique in the other tables ([*e.g.*]{} [SPECTRA]{}), as objects can and have been observed more than once. The [SPECTRA]{} table holds all the observational and redshift related data. Parameters are recorded for both the V and R frames (many of these values are the same for both frames), and redshift information is derived from the combined VR frame. The [TWOMASS]{} table contains the $K$, $J$ and $H$-selected samples originating from the 2MASS extended source catalogue. The $K$-selected sample represents the primary 6dFGS input catalogue. Table \[tab:database1\] lists the programme details for the other contributing samples. Initially every FITS file, representing each target ([targetname.fits]{}), holds thumbnail images of the target. As data are ingested into the database, the reduced spectra are stored as additional FITS image extensions. Table \[tab:database2\] summarises the content within each FITS extension. The first 5 extensions contain the thumbnail images and each has a built-in World Coordinate System (WCS). The optical $B$ and $R$ images come from SuperCOSMOS scans of blue and red survey plates. The 2MASS $J$, $H$ and $K$ images were extracted from datacubes supplied by IPAC. Note that although some objects in [TARGET]{} do not have 2MASS images, the corresponding extensions still exist in the FITS file but contain small placeholder images. The remaining extensions contain the spectra. Each 6dFGS observation will usually result in a further 3 extensions: the V grating spectrum, the R spectrum and the combined/spliced VR spectrum.
  Table name     Description                                    Programme ID Numbers
  -------------- ---------------------------------------------- ----------------------
  [TARGET]{}     the master target list                         [progid]{}
  [SPECTRA]{}    redshifts and observational data               $-$
  [TWOMASS]{}    2MASS input catalogue $K$, $H$, and $J$        1, 3, 4
  [SUPERCOS]{}   SuperCOSMOS bright galaxies $b_J$ and $r_F$    7, 8
  [FSC]{}        sources from the IRAS FAINT Source Catalogue   126
  [RASS]{}       candidate AGN from the ROSAT All-Sky Survey    113
  [HIPASS]{}     sources from the HIPASS HI survey              119
  [DURUKST]{}    extension to Durham/UKST galaxy survey         78
  [SHAPLEY]{}    galaxies from the Shapley supercluster         90
  [DENISI]{}     galaxies from DENIS $I < 14.85$                6
  [DENISJ]{}     galaxies from DENIS $J < 13.85$                5
  [AGN2MASS]{}   candidate AGN from the 2MASS red AGN survey    116
  [HES]{}        candidate QSOs from the Hamburg/ESO Survey     129
  [NVSS]{}       candidate QSOs from NVSS                       130
  [SUMSS]{}      radio source IDs from SUMSS and NVSS           125

The V and R extensions are images with 3 rows. The 1st row is the observed reduced [SPECTRUM]{}, the 2nd row is the associated variance, and the 3rd row stores the SKY spectrum as recorded for each data frame. Wavelength information is provided in the header keywords [CRVAL1]{}, [CDELT1]{} and [CRPIX1]{}, such that $$\begin{aligned} {\rm wavelength\,(\AA)} & = & {\tt CRVAL1} - ({\tt CRPIX1} - {\rm pixel\,number}) \nonumber\\ & & \times \, {\tt CDELT1} .\end{aligned}$$ \[crvaleqn\] Additional WCS keywords are also included to ensure the wavelength information is displayed correctly when using image browsers such as Starlink’s GAIA or SAOimage DS9. The VR extension also has an additional 4th row that represents the [WAVELENGTH]{} axis, which has a continuous dispersion, achieved through the continuation of the V dispersion into the R half from rescrunching.
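The header-keyword relation above can be applied directly to build the wavelength axis. A minimal sketch, assuming the usual FITS convention that pixel numbering starts at 1 (the function name is ours):

```python
import numpy as np

def wavelength_axis(crval1, crpix1, cdelt1, n_pixels):
    """Wavelength (Angstrom) for each pixel, following the header relation
    wavelength = CRVAL1 - (CRPIX1 - pixel_number) * CDELT1.
    """
    pixels = np.arange(1, n_pixels + 1)   # assumed 1-based FITS pixel numbering
    return crval1 - (crpix1 - pixels) * cdelt1
```

For instance, with `CRVAL1 = 4000`, `CRPIX1 = 1` and `CDELT1 = 2`, the first three pixels map to 4000, 4002 and 4004 Å.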
  FITS Extension   Contents
  ---------------- ------------------------------------------
  1st              SuperCOSMOS  image ($1 \times 1$ arcmin)
  2nd              SuperCOSMOS  image ($1 \times 1$ arcmin)
  3rd              2MASS $J$ image (variable size)
  4th              2MASS $H$ image (variable size)
  5th              2MASS $K$ image (variable size)
  6th              V-spectrum extension
  7th              R-spectrum extension
  8th              combined VR-spectrum extension
  $n$th            additional V, R, and VR data

  : Contents of each extension in the database FITS files \[tab:database2\]

Access to the database is through two different Hypertext Mark-up Language (HTML) entry forms. Both parse the user input and submit an SQL request to the database. For users unfamiliar with SQL, the menu driven form provides guidance in constructing a query. The SQL query box form allows users more comfortable with SQL access to the full range of SQL commands and syntax. Both forms allow the user to select different types of output (HTML, comma separated value (CSV) or a TAR save-set of FITS files). There are online examples of different queries using either the menu or SQL form at [http://www-wfau.roe.ac.uk/6dFGS/examples.html]{}. More information about the database is available directly from the 6dFGS database website.

CONCLUSIONS {#sec:conclusions}
===========

The 6dF Galaxy Redshift Survey (6dFGS) is designed to measure redshifts for approximately [150000]{} galaxies and the peculiar velocities of 15000. The survey uses the 6dF multi-fibre spectrograph on the United Kingdom Schmidt Telescope, which is capable of observing up to 150 objects simultaneously over a $5.7^\circ$-diameter field of view. The 2MASS Extended Source Catalog (Jarrett et al. 2000) is the primary source from which targets have been selected. The primary sample has been selected with $K_{\rm tot} \leq 12.75$, where $K_{\rm tot}$ denotes the total $K$-band magnitude as derived from the isophotal 2MASS $K$ photometry.
Additional galaxies have been selected to complete the target list down to $(H, J, r_F, b_J) = (13.05, 13.75, 15.6, 16.75)$. Thirteen miscellaneous surveys complete the total target list. The survey covers the entire southern sky (declination $\delta < 0^\circ$), save for the regions within $|\,b\,| \leq 10^\circ$ of the Galactic Plane. This area has been tiled with around 1500 fields that effectively cover the southern sky twice over. An adaptive tiling algorithm has been used to provide a uniform sampling rate of 94%. In total the survey covers some 17046 deg$^2$ and has a median depth of $\bar{z} = 0.05$. There are three stages to the observations, which initially target the declination strip $-42^\circ<\delta<-23^\circ$, followed by the equatorial region $-23^\circ<\delta<0^\circ$, and conclude with the polar cap ($\delta<-42^\circ$). Spectra are obtained through separate V and R gratings and later spliced to produce combined spectra spanning 4000 – 8400Å. The spectra have 5 – 6Å FWHM resolution in V and 9 – 12Å resolution in R. Software is used to estimate redshifts from both cross-correlation with template absorption-line spectra, and linear fits to the positions of strong emission lines. Each of these automatic redshift estimates is checked visually and assigned a quality $Q$ on a scale of 1 to 5, where $Q \ge 3$ covers the range of reliable redshift measurements. The median signal-to-noise ratio is 9.4 for redshifts with quality $Q=4$, and 5.3 for $Q=3$ redshifts. The data in this paper constitute the First Data Release of [52048]{} observed spectra and the [46474]{} unique extragalactic redshifts from this set. The rates of contamination by Galactic and failed spectra are 4% and 11% respectively. Data from the 6dF Galaxy Survey are publicly available through an online database at [http://www-wfau.roe.ac.uk/6dFGS/]{}, searchable through either SQL query commands or an online WWW form.
The main survey web site can be found at [http://www.mso.anu.edu.au/6dFGS]{}.

Acknowledgements {#acknowledgements .unnumbered}
================

We acknowledge the efforts of the staff of the Anglo-Australian Observatory, who have undertaken the observations and developed the 6dF instrument. We are grateful to P. Lah for his help in creating Fig. \[fig:radplot\]. D. H. Jones is supported as a Research Associate by Australian Research Council Discovery–Projects Grant (DP-0208876), administered by the Australian National University. T. Jarrett and J. Huchra acknowledge the support of NASA. They are grateful to the other members of the 2MASS extragalactic team, M. Skrutskie, R. Cutri, T. Chester and S. Schneider, for help in producing the major input catalog for the 6dFGRS. They also thank NASA, the NSF, the USAF and USN, and the State of Massachusetts for the support of the 2MASS project, and NASA for the support of the 6dF observational facility. The DENIS project has been partly funded by the SCIENCE and the HCM plans of the European Commission under grants CT920791 and CT940627. It is supported by INSU, MEN and CNRS in France, by the State of Baden-Württemberg in Germany, by DGICYT in Spain, by CNR in Italy, by FFwFBWF in Austria, by FAPESP in Brazil, by OTKA grants F-4239 and F-013990 in Hungary, and by the ESO C&EE grant A-04-046.

Blanton, M.R. et al., 2001, AJ, 121, 235\
Blanton, M.R. et al., 2003, AJ, 125, 2276\
Branchini, E. et al., 1999, MNRAS, 308, 18\
Burkey, D., 2004, PhD dissertation, in prep.\
Burkey, D. & Taylor, A., 2004, MNRAS, submitted\
Campbell, L.A. et al., 2004, MNRAS, in press\
Cole, S. et al. (2dFGRS team), 2001, MNRAS, 326, 255\
Colless, M.M. et al., 2001a, MNRAS, 321, 277\
Colless, M.M. et al. (2dFGRS team), 2001b, MNRAS, 328, 1039\
Cross, N. et al. (2dFGRS team), 2001, MNRAS, 324, 825\
da Costa, L.N. et al., 2000, ApJ, 537, L81\
De Propris, R. et al. (2dFGRS team), 2002, MNRAS, 329, 87\
Djorgovski, S. & Davis, M., 1987, ApJ, 313, 59\
Dressler, A. et al., 1987, ApJ, 313, 42\
Efstathiou, G. et al. (2dFGRS team), 2002, MNRAS, 330, 29\
Folkes, S. et al. (2dFGRS team), 1999, MNRAS, 308, 459\
Giovanelli, R. et al., 1998, AJ, 116, 2632\
Goto, T. et al., 2003, PASJ, 55, 739\
Hambly, N.C. et al., 2001, MNRAS, 326, 1279\
Hawkins, E. et al. (2dFGRS team), 2003, MNRAS, 346, 78\
Hoeg, E. et al., 2000, A&A, 355, L27\
Huchra, J. et al., 1999, ApJS, 121, 287\
Hudson, M.J. et al., 1999, ApJ, 512, L79\
Jarrett, T.-H. et al., 2000, AJ, 120, 298\
Jones, D.H. et al., 2004, in prep.\
Lahav, O. et al. (2dFGRS team), 2002, MNRAS, 333, 961\
Lauer, T.R. & Postman, M., 1994, ApJ, 425, 418\
Lewis, I.J. et al., 2002, MNRAS, 333, 279\
Lynden-Bell, D. et al., 1988, ApJ, 326, 19\
Madgwick, D.S. et al. (2dFGRS team), 2002, MNRAS, 333, 133\
Metropolis, N. et al., 1953, J. Chem. Phys., 21\
Norberg, P. et al., 2002, MNRAS, 336, 907\
Parker, Q.A. et al., 1998, in [*Fiber Optics in Astronomy III*]{}, ASP Conf Series 152, p80\
Parker, Q.A. & Watson, F.G., 1995, in [*Fiber Optics in Astronomical Applications*]{}, Proc SPIE v2476, ed. S. Barden, p34\
Peacock, J.A. et al. (2dFGRS team), 2001, Nature, 410, 169\
Percival, W.J. et al. (2dFGRS team), 2001, MNRAS, 327, 1297\
Saunders, W. et al., 2000, MNRAS, 317, 55\
Saunders, W. et al., 2001, [*AAO Newsletter*]{}, 97, 14\
Scaramella, R. et al., Nature, 338, 562\
Szalay, A. et al., 2003, ApJ, 591, 1\
Verde, L. et al. (2dFGRS team), 2002, MNRAS, 335, 432\
Watson, F.G. et al., 2000, in [*Optical and IR Telescope Instrumentation and Detectors*]{}, Proc SPIE vol 4008, eds. M. Iye, A.F. Moorwood, p123\
Wegner, G.A. et al., 1999, MNRAS, 305, 259\
York, D.G. et al., 2001, AJ, 120, 1579\
Zehavi, I. et al. (SDSS team), 2002, ApJ, 571, 172\

[^1]: Fibre-Linked Array-Image Reformatter, Parker & Watson (1995)
---
abstract: 'In the present work, we investigate the thermoelectric properties of a T-shaped double quantum dot system coupled to two metallic leads, incorporating the intra-dot Coulomb interaction. We explore the role of the interference effects and Coulomb blockade on the thermoelectric efficiency of the system in the linear and nonlinear regimes. We also study the effect of a Van-Hove singularity of the leads density of states (DOS) in the neighborhood of the Fermi energy, a situation that can be obtained using a carbon nanotube, a graphene nano-ribbon or other contacts with one-dimensional properties. The system is studied above the Kondo temperature. The Coulomb blockade of the electronic charges is treated within the Hubbard III approximation, which properly describes the transport properties of this regime. In the linear response, our results show an enhancement of the thermopower and the figure of merit of the system. For a nonlinear situation, we calculate the thermoelectric efficiency and power output, concluding that the T-shaped double quantum dot is an efficient thermoelectric device. Moreover, we demonstrate the great importance of a DOS Van-Hove singularity in the neighborhood of the Fermi energy to obtain a very significant increase of the thermoelectric efficiency of the system.'
author:
- 'G. Gómez-Silva'
- 'P. A. Orellana'
- 'E. V. Anda'
nocite: '[@*]'
title: 'Enhancement of the thermoelectric efficiency in a T-shaped quantum dot system in the linear and nonlinear regimes'
---

\[sec:level1\] Introduction
===========================

Thermoelectric effects in low dimensional systems have attracted significant attention in the last decade. Materials with excellent thermoelectric properties can convert heat into electricity (Seebeck effect) or electricity into a temperature gradient (Peltier effect).
The performance of a thermoelectric device, in the linear regime, is estimated by the figure of merit $ZT=\mathcal{G}S^2T/\kappa$, where $\mathcal{G}$ is the electronic conductance, $S$ is the thermopower or Seebeck coefficient, $T$ is the temperature and $\kappa$ is the thermal conductivity, which includes contributions from electrons as well as phonons. For practical applications $ZT$ must be as large as possible, so we look for materials characterized by an excellent electronic conductance and, at the same time, a small thermal conductivity. In bulk materials, these properties are constrained by the well known Wiedemann-Franz (WF) law $L=\kappa/\mathcal{G}T=L_0$, where $L_0=\pi^2k_B^2/3e^2$ is the Lorenz number, $k_B$ the Boltzmann constant and $e$ the electronic charge. This relationship expresses the fact that charge and heat transport are supported by the same scattering processes, with a weak dependence on the energy, as a consequence of Fermi liquid theory. The best bulk thermoelectric materials show $ZT<1$, although, to be competitive with conventional generators and refrigerators, $ZT>3$ is required[@Majumdar]. However, nanostructured systems exhibit higher efficiencies than bulk materials according to theoretical predictions [@Hicks; @Hicks2; @Dubi] as well as experiments[@Venka; @Harman], which also imply the violation of the WF law[@Kubala; @Dutta]. One of the phenomena that explains the improvement of the efficiency is the decrease of the thermal conductivity due to the increase of phonon scattering in low dimensional systems[@Kithun]. Moreover, Mahan and Sofo[@Mahan] showed that the efficiency can be improved by increasing the density of states (DOS) at the Fermi level. They suggest a maximization of $ZT$ in materials with a $\delta$-function form of the DOS. For this reason, quantum dot (QD) systems are ideal candidates for good thermoelectric performance.
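The definitions of $ZT$ and the Lorenz number above can be checked numerically. A minimal sketch (function names are ours; for simplicity $\mathcal{G}$ and $\kappa$ are treated as a conductance in S and a thermal conductance in W/K, so that $L_0$ carries its usual SI value):

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
E_CH = 1.602176634e-19  # elementary charge, C
L0 = math.pi**2 * K_B**2 / (3 * E_CH**2)  # Lorenz number, ~2.44e-8 W Ohm K^-2

def figure_of_merit(g, s, kappa, t):
    """ZT = G S^2 T / kappa (G in S, S in V/K, kappa in W/K, T in K)."""
    return g * s**2 * t / kappa

def lorenz_ratio(g, kappa, t):
    """L = kappa / (G T); equals L0 for a Wiedemann-Franz conductor."""
    return kappa / (g * t)
```

As a consistency check, a conductor obeying the WF law ($\kappa = L_0 \mathcal{G} T$) with $S = \sqrt{L_0}$ gives exactly $ZT = 1$, which illustrates why a small $\kappa$ or a large $S$ is needed to push $ZT$ above unity.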
The figure of merit $ZT$ is a quantity defined in the linear response regime[@Hershfield]. This is appropriate for bulk materials, where the local temperature gradient is small even when the temperature difference across the whole sample is large. Nevertheless, in nanostructures, especially quantum dot systems, extremely large bias voltages and temperature gradients can be applied. To consider these systems as power generators or cooling devices it is necessary to study the thermoelectric properties in the nonlinear response regime. Due to electronic confinement, transport in nanoscale systems is governed by electron-electron interactions and coherence, which gives rise to electronic interference effects. These are fundamental ingredients to understand the thermoelectric properties, as is the case of Fano resonances and Coulomb blockade phenomena. Thermoelectric properties in systems fabricated with one, two or more QDs have been extensively studied[@Boese; @Zianni; @Swirkowicz; @Wierzbicki; @Fu; @Kennes; @Yan; @Thierschmann] in different regimes, mostly in the linear regime and, to a lesser extent, in the nonlinear one. In the linear regime, it was found that interference effects can significantly improve the figure of merit. On the other hand, in the nonlinear regime, several authors [@Svensson; @Sierra] reported a negative differential thermoconductance, which generates zero thermocurrent at a given temperature gradient. In particular, in the linear regime a T-shaped double quantum dot system was studied in the presence of electron-electron interaction [@Hershfield; @Monteros; @Wojcik; @Xu; @Wojcik2] using methods such as unrestricted Hartree-Fock and Hubbard I (H$_{\text{I}}$) approaches. The system has two possible conduction channels, which allows the observation of interference effects in the conduction. It is important to mention that the same characteristics that increase the figure of merit could improve the efficiency at a finite bias voltage.
The present paper focuses on the study of the thermoelectric transport through a T-shaped double-quantum-dot (DQD) system coupled to two metallic leads (see Fig. \[system\]). For a more realistic description of the problem, we consider an intra-dot Coulomb interaction $U$ in the quantum dots. We explore both the linear and nonlinear regimes. To incorporate the electronic correlations the Hubbard III (H$_{\text{III}}$) approximation is used[@Anda], which provides a reliable description of the Coulomb blockade regime and of the transport and thermoelectric properties of the system (see Appendix \[app:hubb\]). This approximation is adequate when the system is above the Kondo temperature. We also study devices with contacts possessing Van-Hove singularities in their DOS in the neighborhood of the Fermi level, as is the case in several one-dimensional systems[@Charlier; @Hu; @Nakada]. We show this to be an essential ingredient, which significantly enhances the thermoelectric efficiency of these systems. The paper is organized as follows. In Sec. \[sec:model\] we describe the model adopted to study the nanosystem. We also outline the H$_{\text{III}}$ approach, as well as the theoretical framework based on the non-equilibrium Green function (NEGF) techniques. In Sec. \[sec:res\], we discuss the numerical results obtained, and finally, a summary is given. In Appendix \[app:hubb\] we conceptually discuss the shortcomings of the H$_{\text{I}}$ approximation. Model {#sec:model} ===== We consider two single-level QDs connected to metallic leads, as depicted in Fig. \[system\]. 
We model the system by a two-impurity Anderson Hamiltonian, which can be written as, ![Schematic view of the T-shaped DQD system coupled to left ($L$) and right ($R$) metallic leads with an inter-dot coupling denoted by $t_c$.[]{data-label="system"}](fig01.pdf){width="58mm"} $$\label{ham:tot} H=H_{DQD}+H_{leads}+H_{tunnel}.$$ The first term, $H_{DQD}$, describes the DQD molecule and is given by, $$\begin{aligned} \label{ham:mol} H_{DQD}&=&\sum_{i=0,1;\sigma}\varepsilon_{i\sigma}d_{i\sigma}^{\dag}d_{i\sigma}+\sum_{i=0,1}U_in_{i\uparrow}n_{i\downarrow}\nonumber\\ &&+\sum_{\sigma}t_c\left(d_{0\sigma}^{\dag}d_{1\sigma}+\text{H.c.}\right) ,\end{aligned}$$ where $\varepsilon_{i\sigma}$ is the level energy of QD$i$ ($i=0,1$), $d_{i\sigma}$ ($d^{\dag}_{i\sigma}$) is the annihilation (creation) operator of an electron in QD$i$ with spin index $\sigma$ ($\sigma=\downarrow,\uparrow$), $U_i$ is the local electron-electron interaction energy at QD$i$, $n_{i\sigma}=d_{i\sigma}^\dag d_{i\sigma}$ is the number operator and $t_c$ is the inter-dot tunneling coupling. The second term in Eq. describes the electrons in the metallic leads and is given by, $$\label{ham:lead} H_{leads}=\sum_{k_\alpha,\sigma}\left(\varepsilon_{k_\alpha\sigma}c_{k_\alpha\sigma}^{\dag}c_{k_\alpha\sigma}+\text{H.c.}\right),$$ where $\varepsilon_{k_\alpha\sigma}$ is the energy of the electron described by the state of quantum number $k_\alpha$ and spin index $\sigma$ at the contact $\alpha$ ($\alpha=L,R$) and $c_{k_\alpha\sigma}$ ($c^{\dag}_{k_\alpha\sigma}$) is the operator that annihilates (creates) it. Finally, the third term is the tunneling Hamiltonian between the leads and QD0 and is written as, $$\label{ham:tun} H_{tunnel}=\sum_{k_\alpha,\sigma}\left(V_{k_\alpha}d_{0\sigma}^\dag c_{k_\alpha\sigma}+\text{H.c.}\right),$$ where $V_{k_L}$ ($V_{k_R}$) is the coupling between the embedded QD and the left (right) lead. 
The model we propose does not include the inter-dot Coulomb repulsion, which for the parameters taken is at least an order of magnitude smaller than the intra-dot repulsion. The treatment neglects the splitting between the singlet and triplet configurations of the dots, as it is of the order of $t_c^2/U$, an energy much smaller than the intra-dot Coulomb repulsion, which is the dominant many-body interaction in the parameter region where we study the system. Besides, no magnetic field is applied, so time reversal symmetry is preserved. To study the physical and in particular the thermoelectric transport properties of this system, we use the NEGF formalism. The intra-dot Coulomb repulsion is treated within the H$_{\text{III}}$ approximation extended to the case of two impurities [@Anda]. This approximation correctly describes the electronic and thermoelectric transport in the Coulomb blockade regime. It is important to mention that this is not the case of the H$_{\text{I}}$ approximation[@Hubbard], which, particularly at resonance, loses the Coulomb blockade effect when analyzing the transport properties of the system (see Appendix \[app:hubb\]). The retarded Green function at QD$0$ is given by, $$\label{green:alph} G_{00}^r=\sum_{i,j=1}^{2}\frac{p_{0,i}p_{1,j}(\varepsilon-\varepsilon_{1,j})}{(\varepsilon-\varepsilon_{0,i})(\varepsilon-\varepsilon_{1, j})-t_c^2+\text{i}\Gamma(\varepsilon)(\varepsilon-\varepsilon_{1,j})/2},$$ where $p_{0,1}=1-\langle n_0\rangle$, $p_{0,2}=\langle n_0\rangle$, $p_{1,1}=1-\langle n_1\rangle$ and $p_{1,2}=\langle n_1\rangle$. Within the H$_{\text{III}}$ approximation, these quantities can be interpreted as the probabilities for QD0 or QD1 to be singly or doubly occupied, with single-particle energies $\varepsilon_{0,1}=\varepsilon_0$, $\varepsilon_{0,2}=\varepsilon_0+U_0$, $\varepsilon_{1,1}=\varepsilon_1$ and $\varepsilon_{1,2}=\varepsilon_1+U_1$, respectively. 
$\Gamma(\varepsilon)=\Gamma_L(\varepsilon)+\Gamma_R(\varepsilon)$ is the broadening of the dot energy levels due to the connection with the continuum, given by $\Gamma_{L(R)}(\varepsilon)=\pi\sum_k\delta(\varepsilon-\varepsilon_{k_{L(R)}})V_{k_{L(R)}}^2$. The density of states (DOS) of the QD is given by $\rho=-(1/\pi)\text{Im}G_{00}^r$. As we can observe from Eq. , the DOS of the embedded QD has eight poles, two for each of the four fractions. Similarly, we calculate the retarded Green’s function at QD1, which can be written as a function of the same probabilities and energies as follows, $$\label{green:beta} G_{11}^r=\sum_{i,j=1}^{2}\frac{p_{0,i}p_{1,j}[(\varepsilon-\varepsilon_{0,i})+\text{i}\Gamma(\varepsilon)/2]}{(\varepsilon-\varepsilon_{0, i})(\varepsilon-\varepsilon_{1,j})-t_c^2+\text{i}\Gamma(\varepsilon)(\varepsilon-\varepsilon_{1,j})/2}.$$ These Green’s functions require a self-consistent calculation to obtain the occupation numbers given by, $$\langle n_{i}\rangle=\frac{1}{2\pi}\int_{-\infty}^{\infty}G^{<}_{ii}\text{d}\varepsilon,$$ where $G^{<}_{00}=[\Gamma_Lf_L+\Gamma_Rf_R]|G_{00}^r|^2$ and $G^{<}_{11}=[\Gamma_Lf_L+\Gamma_Rf_R]|G_{01}^r|^2$ are the lesser Green’s functions calculated using the Keldysh formalism, the subindex $i$ corresponds to QD site 0 or 1, $f_{L(R)}=1/\{1+\exp[(\varepsilon-\mu_{L(R)})/(k_BT_{L(R)})]\}$ is the Fermi-Dirac distribution, $k_B$ is the Boltzmann constant, and $T_{L(R)}$ and $\mu_{L(R)}$ are the temperature and the electrochemical potential of lead $L(R)$. 
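The Green function above lends itself to direct numerical evaluation. The following is a minimal sketch (our own, not the authors' code), assuming a constant broadening $\Gamma$ and given occupation numbers; it also makes visible that every term of the sum has a non-positive imaginary part, so the DOS $\rho=-(1/\pi)\operatorname{Im}G_{00}^r$ is non-negative:

```python
import numpy as np

def G00_r(eps, e0, e1, U0, U1, n0, n1, Gamma, tc):
    """Retarded Green function of the embedded dot QD0 in the
    Hubbard III form: four pole pairs weighted by occupation factors."""
    p0 = (1.0 - n0, n0)          # weights of the eps_0 and eps_0 + U_0 branches
    p1 = (1.0 - n1, n1)
    E0 = (e0, e0 + U0)
    E1 = (e1, e1 + U1)
    g = 0.0 + 0.0j
    for i in range(2):
        for j in range(2):
            num = p0[i] * p1[j] * (eps - E1[j])
            den = ((eps - E0[i]) * (eps - E1[j]) - tc**2
                   + 1j * Gamma * (eps - E1[j]) / 2.0)
            g += num / den
    return g

def dos(eps, *args):
    """DOS of the embedded dot, rho = -(1/pi) Im G_00^r."""
    return -G00_r(eps, *args).imag / np.pi
```

For $U_0=U_1=0$ the four terms collapse onto the familiar noninteracting T-shape Green function, a convenient sanity check; in practice $\langle n_0\rangle$ and $\langle n_1\rangle$ must of course be iterated to self-consistency with the $G^<$ integrals above.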
Finally, $G_{01}^r$ can be expressed as, $$G_{01}^r=\sum_{i,j=1}^{2}\frac{p_{0,i}p_{1,j}t_c}{(\varepsilon-\varepsilon_{0,i})(\varepsilon-\varepsilon_{1, j})-t_c^2+\text{i}\Gamma(\varepsilon)(\varepsilon-\varepsilon_{1,j})/2}.$$ Linear response regime ---------------------- In the linear response regime, when the temperature gradient and the bias voltage tend to zero, the electric and the heat currents, $I$ and $J$, respectively, are given by, $$\begin{aligned} I&=&-e^2\mathcal{L}_0V+\frac{e}{T}\mathcal{L}_1\Delta T,\nonumber\\ J&=&e\mathcal{L}_1V-\frac{1}{T}\mathcal{L}_2\Delta T,\\ &&\nonumber\end{aligned}$$ where $e$ is the electron charge, $\Delta T$ and $V$ are, respectively, the infinitesimal temperature gradient and applied potential between the contacts and $\mathcal{L}_n$ are the kinetic transport coefficients. They can be calculated by integrating the transmission of the system $\tau(\varepsilon)$ as follows[@Sivan], $$\mathcal{L}_n=\frac{2}{h}\int_{-\infty}^{\infty}\left(-\frac{\partial f}{\partial\varepsilon}\right)(\varepsilon-\mu)^n\tau(\varepsilon)\text{d}\varepsilon,$$ where $h$ is the Planck constant and $f$ is the equilibrium Fermi-Dirac distribution. We obtain the transmission function using the Fisher-Lee relation [@FisherLee] $\tau(\varepsilon)=\text{Tr}[\Gamma_LG_{00}^r\Gamma_RG_{00}^a]$, which can be expressed as, $$\label{trans:fun} \tau(\varepsilon)=-\Gamma(\varepsilon)\text{Im}G_{00}^r.$$ The observable quantities can be expressed as functions of the kinetic coefficients. The electronic conductance, at zero temperature gradient, is obtained as $\mathcal{G}=e^2\mathcal{L}_0$. The thermopower, $S=(-1/eT)(\mathcal{L}_1/\mathcal{L}_0)$, is defined as the voltage drop induced by a temperature gradient when the electric current is zero. The electronic thermal conductance, $\kappa_e=1/T(\mathcal{L}_2-\mathcal{L}_1^2/\mathcal{L}_0)$, is the ratio between the heat current and the temperature gradient when the electric current vanishes. 
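The kinetic coefficients and the derived observables reduce to simple quadratures once $\tau(\varepsilon)$ is known. A minimal sketch (our own, in units $2/h=e=k_B=1$, with the sign convention $-\partial f/\partial\varepsilon$ so that $\mathcal{L}_0\ge0$):

```python
import numpy as np

def kinetic_coefficients(tau, mu, kBT, span=40.0, npts=20001):
    """L_n = integral of (-df/de) (e - mu)^n tau(e) for n = 0, 1, 2,
    in units 2/h = 1; -df/de is written in the stable sech^2 form."""
    eps = np.linspace(mu - span * kBT, mu + span * kBT, npts)
    x = (eps - mu) / kBT
    mdfde = 0.25 / (kBT * np.cosh(x / 2.0)**2)     # -df/de, peaked at mu
    return [np.trapz(mdfde * (eps - mu)**n * tau(eps), eps) for n in range(3)]

def thermo_linear(tau, mu, kBT):
    """Conductance G = L0, thermopower S = -L1/(T L0) and electronic
    thermal conductance kappa_e = (L2 - L1^2/L0)/T, with e = k_B = 1."""
    L0, L1, L2 = kinetic_coefficients(tau, mu, kBT)
    return L0, -L1 / (kBT * L0), (L2 - L1**2 / L0) / kBT
```

For a constant transmission this reproduces the Sommerfeld results $\mathcal{L}_0=\tau$, $\mathcal{L}_1=0$ and $\mathcal{L}_2=(\pi^2/3)(k_BT)^2\tau$, hence the Lorenz number $\kappa_e/\mathcal{G}T=\pi^2/3$ of the WF law; sharp structure in $\tau(\varepsilon)$ near $\mu$ is what drives the deviations discussed below.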
Finally, the thermoelectric efficiency at equilibrium can be described by the figure of merit $ZT=\mathcal{G}S^2T/(\kappa_e+\kappa_{ph})$. The phononic thermal conductance, $\kappa_{ph}$, is neglected in this model. At low temperatures, the thermopower can be obtained using the Mott formula[@Jonson], which is expressed in terms of the electronic conductance and is given by $S=(\pi^2/3)(k_B^2T/e)(\text{d}\ln\mathcal{G}/\text{d}\varepsilon)_{\varepsilon=\mu}$. However, this formula is no longer valid[@Gomez-Silva] in the presence of Fano antiresonances, which cause the conductance to vanish and would imply a divergence of the thermopower. Nonlinear regime ---------------- In the nonlinear regime, the electric current can be written as, $$I=\frac{e}{h}\int_{-\infty}^{\infty}\tau(\varepsilon)[f_L(\varepsilon)-f_R(\varepsilon)] \text{d}\varepsilon.$$ We can also derive an expression for the heat current using the first law of thermodynamics $$\label{firstlaw} \text{d}U_\alpha=\text{d}Q_\alpha+\text{d}W_\alpha,$$ where $\alpha=L,R$; $\text{d}W_\alpha=\mu_\alpha\text{d}N_\alpha$ is the work done by the reservoir $\alpha$ and $\text{d}Q_\alpha$ is the heat transmitted between the reservoirs. We write the rate of change of the quantities in Eq. as $J_E=J_\alpha+\mu_\alpha I$. Finally, the expression for the heat current is[@Yamamoto], $$J_\alpha=\frac{1}{h}\int_{-\infty}^{\infty}(\varepsilon-\mu_\alpha)\tau(\varepsilon)[f_L(\varepsilon)-f_R(\varepsilon)] \text{d}\varepsilon.$$ In this regime, we can regard this system as a heat engine. We set $\mu_R$ higher than $\mu_L$. The work done by the reservoir per unit time is the power output $dW/dt=P=IV$, with $V=(\mu_R-\mu_L)/e$. The efficiency is defined as the ratio between the power output and the heat current extracted from the high-temperature reservoir, $\eta=P/J_L$. 
Therefore, $$\eta=\frac{(\mu_R-\mu_L)\int_{-\infty}^{\infty}\tau(\varepsilon)[f_L(\varepsilon)-f_R(\varepsilon)]\text{d}\varepsilon} {\int_{-\infty}^{\infty}(\varepsilon-\mu_L)\tau(\varepsilon)[f_L(\varepsilon)-f_R(\varepsilon)]\text{d}\varepsilon}.$$ Results {#sec:res} ======= Linear Response {#lin:res} --------------- In this section, we discuss the thermoelectric properties at finite temperature ($T\ne0$). We consider two types of leads. First, we consider normal metallic leads, for which the wide band limit can be used by taking a constant broadening $\Gamma_{L(R)}$ ($\Gamma_0=\Gamma_L+\Gamma_R$), where we take $\Gamma_0$ as the energy unit. On the other hand, it is known that quasi-1D systems, such as carbon nanotubes or graphene nanoribbons, exhibit Van-Hove singularities in their DOS. These singularities can be taken to be near the Fermi level. In this second case, we assume that the connection with the continuum $\Gamma_{L(R)}(\varepsilon)=\pi V_{k_{L(R)}}^2\rho(\varepsilon)$ is no longer constant and exhibits a Van-Hove singularity. The 1D lead DOS, $\rho(\varepsilon)$, can be represented around the Fermi level as, $$\rho(\varepsilon)=\begin{cases}A/\sqrt{\varepsilon-\varepsilon_{VH}},\,\qquad \text{if $\varepsilon>\varepsilon_{VH}$},\\ B,\quad\qquad\qquad\qquad \text{if $\varepsilon\leq\varepsilon_{VH}$}, \end{cases}$$ where $A$ and $B$ are constants that depend on the geometry of the leads and $\varepsilon_{VH}$ is the energy at which the Van-Hove singularity is localized. For metallic and semiconducting carbon nanotubes, $A$ and $B$ have been explicitly calculated [@Mintmire]. For the sake of simplicity, we take $A=1/(\pi \sqrt{2D})$ and $B=1/(\pi D')$, where $D$ and $D'$ are the bandwidths of two different bands, as would be the case for a nanoribbon. We take the energy unit as the coupling value $\Gamma_0=\Gamma_{L(R)}(\varepsilon_{F})$. Besides, the Fermi level of the 1D contact can be tuned by doping the material[@Kim; @Kongkanand]. 
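The piecewise DOS above and the resulting energy-dependent hybridization are straightforward to encode. A small sketch (our own; parameter names are illustrative), with $\Gamma(\varepsilon)$ normalized so that $\Gamma(\varepsilon_F)=\Gamma_0$:

```python
import numpy as np

def rho_1d(eps, eps_vh, D=1.0, Dp=1.0):
    """Model 1D lead DOS: A/sqrt(eps - eps_vh) above the Van-Hove
    energy, constant B below, with A = 1/(pi sqrt(2D)), B = 1/(pi D')."""
    eps = np.atleast_1d(np.asarray(eps, dtype=float))
    A = 1.0 / (np.pi * np.sqrt(2.0 * D))
    B = 1.0 / (np.pi * Dp)
    out = np.full_like(eps, B)
    above = eps > eps_vh
    out[above] = A / np.sqrt(eps[above] - eps_vh)
    return out

def gamma_1d(eps, eps_vh, eps_F, Gamma0=1.0, D=1.0, Dp=1.0):
    """Hybridization proportional to rho_1d, normalized to Gamma0
    at the Fermi level, as in the text's choice of energy unit."""
    return Gamma0 * rho_1d(eps, eps_vh, D, Dp) / rho_1d(eps_F, eps_vh, D, Dp)[0]
```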
Other parameters to be considered in this section are the inter-dot coupling $t_c=2\Gamma_0$, the temperature of the leads $k_BT_L=k_BT_R=0.1\Gamma_0$ and the bias voltage $\Delta\mu=(\mu_L-\mu_R)\rightarrow0$. Besides, we consider the local Coulomb repulsion satisfying $U_0=U_1=U$, symmetric couplings to the leads, $\Gamma_{R}=\Gamma_{L}=\Gamma_0$ and $\varepsilon_F=0$. We choose a set of parameters ($T$, $t_c$ and $U$) that ensures that the system is above the Kondo temperature ($T_K=\sqrt{\Gamma U}\exp[-\pi|\varepsilon_0||\varepsilon_0+U|/(\Gamma U)]$), since for gate potentials $-U/2<\varepsilon_0<-2\Gamma_0$ the system would otherwise be in the Kondo regime. ![image](fig02L.pdf){width="88mm" height="73mm"}![image](fig02R.pdf){width="88mm" height="73mm"} Figure \[equil1\] displays the electronic conductance $\mathcal{G}$, thermal conductivity $\kappa$, thermopower $S$ and figure of merit $ZT$ as a function of the embedded QD gate voltage $\varepsilon_0$, for normal (left panels) and 1D (right panels) leads. We set the gate voltage of the side-coupled QD at two different values that correspond to the resonance energies $\varepsilon_1=-U$ (black solid line) and $\varepsilon_1=0$ (red dashed line). We observe, in Figure \[equil1\] (a) and (b), two peaks in the linear conductance and thermal conductivity, respectively, when $\varepsilon_0$ is also at resonance with the Fermi level. We could expect the appearance of eight peaks corresponding to the eight poles of Eq. . However, when $\varepsilon_1$ is fixed at resonance, only two fractions in $G_{00}^r$ depend on $\varepsilon_0$ and, therefore, only two peaks are observed in the conductance. The same peaks can be observed in panels (e) and (f), when the system is connected to 1D leads. We note that in this case, in the off-resonance region, the values of $\mathcal{G}$ and $\kappa$ are larger than for the system connected to normal leads. This behavior can be attributed to the Van-Hove singularity present in the lead DOS. 
We see, in Figure \[equil1\] (c) and (g), that the maximum values of the thermopower are essentially the same for the two types of leads. However, the thermopower for 1D leads takes higher values between the resonances. As will be discussed below, this result is relevant for the nonlinear regime. The figure of merit for normal leads assumes small values close to the embedded QD resonances, as shown in Figure \[equil1\] (d). When we take $\varepsilon_0$ far from the resonances, there is a significant increase of $ZT$. We note that when $\varepsilon_1=-U$ (black solid line), the strong enhancement of $ZT$ occurs for positive energies, whereas when $\varepsilon_1=0$, it occurs for negative energies. These opposite situations can be explained as follows. It is known that the figure of merit increases due to abrupt changes in the transmission function, which is proportional to the DOS, as shown in Eq. . For this system, there is always a projection of the side-coupled QD local levels onto the embedded QD DOS. This projection generates a resonance in the transmission with a broadening that is inversely proportional to the difference between $\varepsilon_0$ and $\varepsilon_1$. So, the larger the difference between the gate potentials, the narrower the broadening of the resonance for the side-coupled dot. A narrow resonance implies an abrupt change in the transmission and consequently an enhancement of the figure of merit. When the difference between $\varepsilon_0$ and $\varepsilon_1$ is large enough, this effect on the transmission weakens and the figure of merit begins to decrease. Panel (h) of Figure \[equil1\] shows the figure of merit for 1D leads. We observe an increase of $ZT$ even in the region close to the embedded QD resonance. In this case, it is the projection of the Van-Hove singularity onto the embedded QD DOS that enhances $ZT$. 
In addition, for normal leads we observe that the curves for $\varepsilon_1=-U$ and $\varepsilon_1=0$ are symmetric around $\varepsilon_0=-U/2$. For 1D leads, this symmetry is broken. ![image](fig03L.pdf){width="88mm"}![image](fig03R.pdf){width="88mm"} In the previous analysis we have studied the equilibrium thermoelectric properties of the system assuming the side-coupled dot to be at resonance, $\varepsilon_1=-U$ and $\varepsilon_1=0$. We now study the figure of merit $ZT$ in the parameter space. Figure \[ZT\] displays $ZT$ (on a log scale) for all values of the gate potentials $\varepsilon_{0}$ and $\varepsilon_{1}$, in the case of normal (left panel) and 1D (right panel) leads. For normal leads (Figure \[ZT\] (a)), we identify two regions of high values of $ZT$: the central region with both QDs at resonance and another region with the side-coupled QD at resonance and the embedded QD off resonance. In the latter case, $ZT$ reaches its maximum values, but in a narrow region of $\varepsilon_1$. When both the embedded and side-coupled QDs are off resonance, $ZT$ is essentially zero, and it is possible to observe a splitting of the peaks in the central region. It is important to mention that, even when these values of $ZT$ are low, they are placed in a region where the linear conductance takes non-zero values. We will discuss the significance of the relationship between efficiency and conductance in Sec. \[efficiency\]. Alternatively, for 1D leads (Figure \[ZT\](b)), the enhancement of the figure of merit appears in all the $\varepsilon_{0}$, $\varepsilon_1$ parameter space, except when the embedded QD is at resonance. Here, the contact Van-Hove singularity close to the Fermi level produces a very significant enhancement of the figure of merit. This remarkable result implies that the presence of the Van-Hove singularity is by itself very effective in increasing the efficiency. 
Furthermore, as was also the case for normal leads, the narrow region where the embedded QD is off resonance and the side-coupled QD is at resonance shows the highest values of $ZT$. Even though the Van-Hove singularity by itself increases the efficiency, connecting a side-coupled QD at resonance enhances this effect. ![Lorenz number as a function of the gate potential $\varepsilon_0$ for $t_c=2\Gamma_0$, $k_BT=0.1\Gamma_0$, $U=10\Gamma_0$ and two different values of the gate potential $\varepsilon_1$. Panels (a) and (b) correspond to normal and 1D leads, respectively.[]{data-label="equil3"}](fig04.pdf){width="88mm" height="73mm"} Figure \[equil3\] displays the Lorenz number for (a) normal and (b) 1D leads, for two values of the gate potential $\varepsilon_1$. In panel (a) we can see that for $\varepsilon_1=-U$ (black solid line, the side-coupled dot at resonance) the Wiedemann-Franz law is violated over almost the entire range of $\varepsilon_0$, while this is not the case for $\varepsilon_1=-U/2$ (red dashed line, the side-coupled dot off resonance), for which the Wiedemann-Franz law holds for almost all values of $\varepsilon_0$. The case of panel (b) is different. Here we observe that the Wiedemann-Franz law is violated for all values of $\varepsilon_0$, whether or not the side-coupled dot is at resonance. For normal leads, we see that it is the resonance condition of the QDs that controls whether or not the law is fulfilled (observe that it is violated as well when $\varepsilon_0=-U$ and $\varepsilon_0=0$). As for the enhancement of $ZT$, it is the projection of the states of the side-coupled QD onto the embedded QD that generates a narrow resonance and violates the law. For 1D leads, a narrow resonance is always present around the Fermi level due to the Van-Hove singularity of the lead DOS, and therefore the law is violated independently of the values of $\varepsilon_0$ and $\varepsilon_1$. 
As expected, the strong dependence of the electronic conductance and thermal conductivity on the Fermi energy causes a strong violation of the Wiedemann-Franz law. Non-linear regime {#nonlin:res} ----------------- In this section, we explore the thermoelectric properties in the nonlinear regime, present when the system is under the effect of a finite bias voltage $V$ and a temperature gradient $\Delta T$. We consider $T$ as a background temperature, with $T_R=T$ and $T_L=T+\Delta T$. Similarly, we set the chemical potentials of the leads $\mu_L=eV/2$ and $\mu_R=-eV/2$, $U=10\Gamma_0$, and we take four different values of the local QD energies $\varepsilon_0=\varepsilon_1=\varepsilon_d$. Figure \[noneq1\] (a) shows the electric current as a function of the bias voltage for an inter-dot coupling $t_c=0.5\Gamma_0$. In this case, the local state of the embedded dot predominates because the inter-dot coupling is weak, and we see, in all cases, a plateau characteristic of the Coulomb blockade regime, with an increase in the current when the local levels align with the chemical potential of the leads. The same curves are represented in Figure \[noneq1\] (b) for $t_c=3\Gamma_0$. Here, we observe several increases in the current, generated by the larger coupling to the side-coupled QD. The connection with this QD gives the system more channels for conduction. In this case, we observe Ohm’s law behavior. The figure shows that the current flowing between the leads is zero when no voltage is applied. In both panels, the curves for $\varepsilon_d=0$ and $\varepsilon_d=-U$ are equivalent because the QDs are at resonance with the Fermi energy, $\varepsilon_F=0$. ![Electrical current as a function of the bias voltage $eV$ for different values of the local energies $\varepsilon_d$ at zero temperature gradient, $\Delta T =0$. Other parameters are $k_BT=0.1\Gamma_0$ and $t_c=0.5\Gamma_0$ for panel (a) and $t_c=3\Gamma_0$ for panel (b), respectively. 
The values of the local energies are $\varepsilon_d=0$ (black solid line), $\varepsilon_d=-3U/10$ (red dashed line), $\varepsilon_d=-U/2$ (blue dash-dotted line) and $\varepsilon_d=-U$ (green dotted line).[]{data-label="noneq1"}](fig05.pdf){width="88mm" height="73mm"} ![Electrical current as a function of the temperature gradient $k_B\Delta T$ for different values of the local energies $\varepsilon_d$ at zero bias voltage. Other parameters are $k_BT=0.1\Gamma_0$ and $t_c=0.5\Gamma_0$ for panel (a) and $t_c=3\Gamma_0$ for panel (b), respectively. The values of the local energies are $\varepsilon_d=0$ (black solid line), $\varepsilon_d=-3U/10$ (red dashed line), $\varepsilon_d=-U/2$ (blue dash-dotted line) and $\varepsilon_d=-U$ (green dotted line). Panel (c) shows the DOS of the embedded QD as a function of the energy for $k_BT=0.1\Gamma_0$ and $t_c=3\Gamma_0$.[]{data-label="noneq2"}](fig06.pdf){width="88mm" height="73mm"} ![Heat current as a function of the bias voltage $eV$ for different values of the local energies $\varepsilon_d$ at zero temperature gradient. Other parameters are $k_BT=0.1\Gamma_0$ and $t_c=0.5\Gamma_0$ for panel (a) and $t_c=3\Gamma_0$ for panel (b), respectively. The values of the local energies are $\varepsilon_d=0$ (black solid line), $\varepsilon_d=-3U/10$ (red dashed line), $\varepsilon_d=-U/2$ (blue dash-dotted line) and $\varepsilon_d=-U$ (green dotted line).[]{data-label="noneq3"}](fig07.pdf){width="88mm" height="73mm"} In Figure \[noneq2\], we explore the current as a function of the temperature gradient at zero bias for the same local energies and inter-dot couplings as before. As expected, for a weak inter-dot coupling, $t_c=0.5\Gamma_0$ (see Figure \[noneq2\] (a)), we see a nonlinear behavior of the current. For $\varepsilon_d=-U/2$, the current is always zero because the DOS is symmetric around the Fermi energy $\varepsilon_F=0$. For $\varepsilon_d=-U$ we observe a small increase of the thermocurrent with $\Delta T$. 
However, when we continue heating one lead, the thermocurrent reaches a maximum, decreases and even changes its sign. This situation was already studied and explained by Sierra *et al.*[@Sierra]. For $t_c=3\Gamma_0$ (see Figure \[noneq2\] (b)), the curve for $\varepsilon_d=-3U/10$ (red dashed line) shows that the thermocurrent is positive, then changes to negative values and finally becomes positive again as we increase $\Delta T$. This behavior can be explained by looking at the embedded QD DOS in Figure \[noneq2\] (c). The temperature gradient is applied taking $\varepsilon_F=0$. Then the states with positive energy contribute to the current with electronic carriers, while the negative energy carriers are holes. Since the DOS is not symmetric with respect to $\varepsilon_F$, due to the side-coupled QD and the Coulomb interaction, the current changes its sign twice as we increase $\Delta T$. Similar results were found in a parallel coupled double quantum dot system[@Sierra2]. Figure \[noneq3\] shows the heat current (considering $J\equiv J_L$) for different values of $\varepsilon_d$ and two different values of $t_c$. We observe that $J$ is symmetric around $V=0$ only at the particle-hole symmetry point $\varepsilon_d=-U/2$. Moreover, we see an invariance between $J(eV)$ for $\varepsilon_d=0$ and $J(-eV)$ for $\varepsilon_d=-U$, i.e., when we reflect $\varepsilon_d$ about the particle-hole symmetry point and change $eV$ to $-eV$. The same invariance is observed for a single QD[@Sierra3]. The nonlinearity of the process is also clear for both $t_c=0.5\Gamma_0$ and $t_c=3\Gamma_0$; however, the heat current depends linearly on $eV$ when $\varepsilon_d$ is far from the Fermi energy of the leads, which is easier to observe when $t_c=0.5\Gamma_0$. 
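The nonlinear curves of this subsection follow from direct quadrature of the two Landauer-type integrals given at the beginning of the section. A minimal sketch (our own, in units $e=h=1$), which also makes explicit why the thermocurrent vanishes for a transmission symmetric about $\varepsilon_F=0$:

```python
import numpy as np

def fermi(eps, mu, kBT):
    """Fermi-Dirac distribution, written with tanh for numerical stability."""
    return 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * kBT)))

def currents(tau, muL, muR, kBTL, kBTR, W=50.0, npts=40001):
    """Electric current I = integral of tau (f_L - f_R) and left-lead heat
    current J_L = integral of (eps - mu_L) tau (f_L - f_R); e = h = 1."""
    eps = np.linspace(-W, W, npts)
    F = fermi(eps, muL, kBTL) - fermi(eps, muR, kBTR)
    t = tau(eps)
    return np.trapz(t * F, eps), np.trapz((eps - muL) * t * F, eps)
```

With $\mu_L=\mu_R=0$ and $T_L\ne T_R$, $f_L-f_R$ is odd in $\varepsilon$, so an even $\tau(\varepsilon)$ (the particle-hole symmetric point $\varepsilon_d=-U/2$) gives zero thermocurrent, as observed in Figure \[noneq2\]; for a flat $\tau$ at equal temperatures the electric current reduces to $\mu_L-\mu_R$, a useful check.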
Efficiency calculation {#efficiency} ---------------------- ![image](fig08LU.pdf){width="88mm" height="73mm"} ![image](fig08RU.pdf){width="88mm" height="73mm"} ![image](fig08LD.pdf){width="88mm" height="73mm"} ![image](fig08RD.pdf){width="88mm" height="73mm"} In order to use this system as a heat engine, we consider the temperature of the left lead $T_L=T+\Delta T$, while the right lead remains at the background temperature $T_R=T$. This generates a voltage $eV=\mu_R-\mu_L$ and a power output $P=IV$. Ideal candidates for efficient heat engines are systems whose transmission can be represented by a $\delta$-function. In this case, the system efficiency reaches the Carnot value $\eta=\eta_C$. Unfortunately, the power output goes to zero for this transmission. The problem has been extensively discussed by Hershfield *et al.*[@Hershfield] for a non-interacting model. Whitney[@Whitney] proposes a boxcar form of the transmission as a candidate that would allow reaching a high efficiency with a large power output. In the following calculations, we consider $k_BT_R=0.0862\Gamma_0$, $k_B\Delta T=0.2\Gamma_0$ (Carnot efficiency $\eta_C\approx0.7$), $\varepsilon_F=0$, $U=10\Gamma_0$, $\varepsilon_0=\varepsilon_1=\varepsilon_d$ and symmetric couplings to the leads, $\Gamma_{R}=\Gamma_{L}=\Gamma_0/4$. Figure \[effic0\] displays the contour plots of the efficiency (left panels) and power output (right panels) as a function of the gate voltage $\varepsilon_d$ and bias $eV$, for the case of normal (upper panels) and 1D (lower panels) leads. As we can observe in the figure, for both kinds of leads the system shows high efficiency and power output in different regions of the parameter space ($\varepsilon_d,eV$). In the case of 1D leads, the efficiency and power reach higher values in these regions than with normal leads. In both cases, the efficiency and the power output are optimized when $\varepsilon_d\approx t_c$. 
As we discussed in the previous section, the enhancement of the efficiency is due to the abrupt change of some relevant quantity around the Fermi energy. In the case of normal leads, the Fano antiresonances produce sudden changes in the transmission, while in the 1D leads case the presence of the Van-Hove singularity in the DOS is responsible for the enhancement of the thermoelectric efficiency. Black regions in panels (a) and (c) correspond to situations where the system receives work from outside and thus the efficiency is greater than the Carnot value. Alternatively, the increase of the power output is characterized by the area of the transmission function lying in the positive region of the function $F(\varepsilon)\equiv f_L(\varepsilon)-f_R(\varepsilon)$, described in Figure \[effic2\] (a), which depends on the bias $eV$ and the temperature gradient $\Delta T$. The larger this area, the larger the power output. The appearance of a maximum value in the power output when $\varepsilon_d\approx t_c$ is a consequence of the fact that there is no contribution from the transmission function in the parameter space where $F(\varepsilon)$ takes negative values, i.e., the QD local levels all lie in the positive region of $F(\varepsilon)$. Figure \[effic2\] (b) shows the transmission function for normal (black solid line) and 1D (red dashed line) leads, around the condition $\varepsilon_d\approx t_c$ where both efficiency and power output are enhanced. We see, for normal leads, that for $F(\varepsilon)<0$ the transmission function is essentially zero, while other peaks in the positive region correspond to the QD resonances at $\varepsilon_d+U$. For the case of 1D leads, we observe that the resonance becomes wider, increasing the power output, and more abrupt, which enhances the efficiency. 
The Van-Hove singularity must be located around the energy where the function $F(\varepsilon)$ crosses over from negative to positive values. In this case this energy is $\varepsilon\approx 0.28\Gamma_0$, but in general it is given by, $$\tilde{\varepsilon}=\frac{\mu_RT_L-\mu_LT_R}{T_L-T_R}.$$ ![(a) $F(\varepsilon)\equiv f_L(\varepsilon)-f_R(\varepsilon)$ and (b) transmission function for normal (black solid line) and 1D (red dashed line) leads as a function of the energy for $\varepsilon_d=4.6\Gamma_0$, $t_c=4\Gamma_0$, $k_BT_R=0.0862\Gamma_0$, $k_B\Delta T=0.2\Gamma_0$ and $eV=0.3\Gamma_0$.[]{data-label="effic2"}](fig09.pdf){width="88mm" height="73mm"} ![Efficiency as a function of the power output for normal (black solid lines) and 1D leads (red dashed lines) for $k_BT_R=0.0862\Gamma_0$ and $U=10\Gamma_0$. The local levels of the dots are $\varepsilon_d=4.6\Gamma_0$ for panels (a) and (c), and $\varepsilon_d=\Gamma_0$ for panel (b).[]{data-label="effic1"}](fig10.pdf){width="88mm" height="123mm"} Figure \[effic1\] (a) shows the efficiency vs power output, changing the applied voltage, for normal and 1D leads with $\varepsilon_d=4.6\Gamma_0$. This value optimizes both efficiency and power output almost simultaneously, as we can see in Figure \[effic0\]. For comparison we include, in Figure \[effic1\] (b), the results for a single QD (i.e., $t_c=0$). For normal leads, it is clear that there is a considerable increase in the efficiency when we connect the side-coupled QD. Nevertheless, for 1D leads, the increase of the efficiency and the power output occurs for both the single and the T-shaped QD system. This allows us to conclude that a single QD, which is more scalable than the T-shaped configuration, has great thermoelectric performance when it is connected to 1D leads with a Van-Hove singularity near the Fermi level. 
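The expression for $\tilde{\varepsilon}$ introduced above follows from requiring $f_L(\tilde{\varepsilon})=f_R(\tilde{\varepsilon})$, i.e. $(\tilde{\varepsilon}-\mu_L)/T_L=(\tilde{\varepsilon}-\mu_R)/T_R$. A quick numerical check (our own sketch, using the heat-engine parameters quoted above with $\mu_R=-\mu_L=eV/2=0.15\Gamma_0$):

```python
import numpy as np

def fermi(eps, mu, kBT):
    return 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * kBT)))

def eps_crossing(muL, muR, TL, TR):
    """Energy where F = f_L - f_R changes sign (both Fermi factors equal)."""
    return (muR * TL - muL * TR) / (TL - TR)

# heat-engine parameters (energies in units of Gamma_0, k_B = 1)
TR = 0.0862
TL = TR + 0.2
muL, muR = -0.15, 0.15
e_tilde = eps_crossing(muL, muR, TL, TR)   # ~0.28, as quoted in the text
```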
The Van-Hove singularity transforms the transmission function by introducing a very abrupt resonance, but in the case of the T-shaped system, this abrupt change in the transmission is increased by the presence of the Fano resonance originated by the side-coupled QD. Figure \[effic1\] (c) also shows the efficiency as a function of the power output, modifying the applied potential, for $t_c=4\Gamma_0$ and a larger temperature gradient $\Delta T = 2\Gamma_0$ (Carnot efficiency $\eta_C\approx0.96$). The efficiency shows a slight decrease of its maximum value in comparison with panel (a) for normal and 1D leads; however, the power output is increased by an order of magnitude. ![Maximum efficiency (black squares) and maximum power output (red circles) as a function of the temperature gradient for normal leads, $k_BT_R=0.0862\Gamma_0$, $t_c=4\Gamma_0$, $U=10\Gamma_0$ and $\varepsilon_d=4.6\Gamma_0$.[]{data-label="effic3"}](fig11.pdf){width="88mm"} Figure \[effic3\] shows the maximum values of the efficiency and the power output as a function of the temperature gradient. We observe that the maximum power output grows almost linearly with the temperature gradient, while the maximum efficiency has its highest value for $k_B\Delta T\approx0.6\Gamma_0$. Although the positive region of $F(\varepsilon)$ grows with increasing $\Delta T$, which enhances the power output, the transmission function becomes smoother and the efficiency drops. Summary {#summary .unnumbered} ======= We have studied the thermoelectric properties of a T-shaped double QD, which is shown to possess high thermoelectric efficiency in the linear and nonlinear regimes. To develop a realistic description of this system, we have incorporated the intra-dot Coulomb repulsion always present in a QD. To do so, we use the Green function formalism within the Hubbard III approximation, which properly treats the Coulomb blockade regime. 
The Coulomb repulsion, because it opens additional channels for the electrons to travel along, reduces the thermoelectric efficiency of the system under both equilibrium and out-of-equilibrium conditions. In the nonlinear regime, we carry out a detailed analysis of the thermoelectric efficiency of the system and optimize it as a function of the applied voltage and the temperature gradient between the leads. We obtain a notable enhancement of the efficiency in comparison with the case of a single QD with normal leads. We have analyzed in detail the case in which the Fermi energy is near a Van-Hove singularity of the contacts' DOS. We show that in this case, by adequately manipulating the parameters that define the device, it is possible to obtain a remarkable performance regarding its thermoelectric efficiency. We conclude that the Fano effect and, moreover, Van-Hove singularities near the Fermi energy of contacts with one-dimensional properties are essential ingredients in the design of a thermoelectric device. G. G.-S. and E. V. A. acknowledge financial support from the Brazilian agencies CAPES, CNPq and FAPERJ, and P. A. O. acknowledges FONDECYT grant number 1140571. Linear conductance in the Hubbard I and Hubbard III approximations {#app:hubb} ================================================================= In this appendix, we study the linear conductance of a system with a strongly correlated region connected to leads using the H$_{\text{I}}$ and H$_{\text{III}}$ approximations. We emphasize the shortcomings derived from the H$_{\text{I}}$ treatment. We consider this an important discussion, as the H$_{\text{I}}$ approximation has been extensively used to study the conductance of this type of system for temperatures above the Kondo temperature. For the sake of simplicity, we consider a single QD connected to two metallic leads in the wide band limit.
In order to study the conductance, we calculate the retarded Green function at the QD in the H$_{\text{I}}$ approximation. It is given by $$G^I_{00,\sigma}(\varepsilon)=g^I_\sigma(\varepsilon)/(1+\text{i}g^I_\sigma(\varepsilon)\Gamma),$$ where $$g^I_\sigma(\varepsilon)=\frac{1-\langle n_{\bar\sigma}\rangle}{\varepsilon-\varepsilon_0}+\frac{\langle n_{\bar\sigma}\rangle}{\varepsilon-\varepsilon_0-U},$$ and $\Gamma$ represents the self-energy that results from the connection of the QD to the leads; it is assumed to be frequency independent in the wide-band limit. The linear conductance at zero temperature, $\mathcal{G}_\sigma(\varepsilon_F)$, is proportional to $-\text{Im}\{G_{00,\bar\sigma}(\varepsilon_F)\}$ and, within this approximation, can be written as $$\begin{aligned} &&\mathcal{G}^{I}_\sigma(\varepsilon_F)=\\\nonumber &&\frac{\gamma[\varepsilon_F-\varepsilon_0-U(1-\langle n_{\bar\sigma}\rangle)]^2}{[(\varepsilon_F-\varepsilon_0)(\varepsilon_F-\varepsilon_0-U)]^2+[\varepsilon_F-\varepsilon_0-U(1-\langle n_{\bar\sigma}\rangle)]^2\Gamma^2},\end{aligned}$$ where $\gamma=(2e^2/h)\Gamma^2$. In order to study this approximation, we calculate the conductance at the resonances $\varepsilon_0=0$ and $\varepsilon_0=-U$, taking the Fermi level at $\varepsilon_F=0$. The value of the conductance is the same for these two resonance conditions: $$\mathcal{G}_\sigma^{I}(\varepsilon_0=0,\varepsilon_F=0)=\mathcal{G}_\sigma^{I}(\varepsilon_0=-U,\varepsilon_F=0)=\frac{\gamma}{\Gamma^2}.$$ Surprisingly enough, the conductance does not depend on the occupation number $\langle n_{\bar\sigma}\rangle$. This is an indication that, within this approximation, the electronic spin-$\sigma$ current does not depend upon the QD being charged with electrons of opposite spin. According to this result, all Coulomb blockade effects are eliminated at resonance.
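The occupation-independence of the H$_{\text{I}}$ resonant conductance can be checked numerically from the closed-form expression above; in this minimal sketch the parameter values are illustrative assumptions (units where $\Gamma=1$ and $\gamma=(2e^2/h)\Gamma^2=2$):

```python
def cond_HI(eps_F, eps0, U, n, Gamma, gamma):
    """Closed-form Hubbard-I linear conductance quoted in the text."""
    a = eps_F - eps0 - U * (1.0 - n)
    return gamma * a**2 / (((eps_F - eps0) * (eps_F - eps0 - U))**2
                           + a**2 * Gamma**2)

Gamma, U, gamma = 1.0, 10.0, 2.0   # illustrative values; gamma = (2e^2/h) Gamma^2
for n in (0.1, 0.3, 0.5, 0.9):
    g_res0 = cond_HI(0.0, 0.0, U, n, Gamma, gamma)   # resonance eps_0 = 0
    g_resU = cond_HI(0.0, -U, U, n, Gamma, gamma)    # resonance eps_0 = -U
    print(n, g_res0, g_resU)   # both equal gamma/Gamma^2 for every n
```

Both resonant values collapse to $\gamma/\Gamma^2$ regardless of $\langle n_{\bar\sigma}\rangle$, which is exactly the pathology discussed above.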
The conductance assumes the same value obtained in the one-body limit at resonance, $\varepsilon_0=\varepsilon_F$, with no Coulomb repulsion ($U=0$). This is obviously an incorrect result. For the sake of comparison, we calculate this conductance using the H$_{\text{III}}$ approximation. The Green function $G_{00,\sigma}^{III}(\varepsilon)$ is given by $$G_{00,\sigma}^{III}(\varepsilon)=\frac{1-\langle n_{\bar\sigma}\rangle}{\varepsilon-\varepsilon_0+\text{i}\Gamma}+\frac{\langle n_{\bar\sigma}\rangle}{\varepsilon-\varepsilon_0-U+\text{i}\Gamma}.$$ Then, we write the conductance as $$\mathcal{G}^{III}_\sigma(\varepsilon_F)=\frac{\gamma(1-\langle n_{\bar\sigma}\rangle)}{(\varepsilon_F-\varepsilon_0)^2+\Gamma^2}+\frac{\gamma\langle n_{\bar\sigma}\rangle}{(\varepsilon_F-\varepsilon_0-U)^2+\Gamma^2}.$$ Now, we again set the gate voltage at $\varepsilon_0=\varepsilon_F=0$, aligned with the Fermi level, and evaluate the conductance, $$\label{cond:e0} \mathcal{G}^{III}_\sigma(\varepsilon_0=0,\varepsilon_F=0)=\frac{\gamma(1-\langle n_{\bar\sigma}\rangle)}{\Gamma^2}+\frac{\gamma\langle n_{\bar\sigma}\rangle}{U^2+\Gamma^2}.$$ Finally, for the other resonance condition, $\varepsilon_0=-U$, the conductance is obtained as $$\label{cond:eU} \mathcal{G}^{III}_\sigma(\varepsilon_0=-U,\varepsilon_F=0)=\frac{\gamma\langle n_{\bar\sigma}\rangle}{\Gamma^2}+\frac{\gamma(1-\langle n_{\bar\sigma}\rangle)}{U^2+\Gamma^2}.$$ We note that the expressions for the two resonant conductances are not formally equal. However, for symmetry reasons, the QD occupations $\langle n^1_{\bar\sigma}\rangle$ at $\varepsilon_0=0$ and $\langle n^2_{\bar\sigma}\rangle$ at $\varepsilon_0=-U$ satisfy $\langle n^1_{\bar\sigma}\rangle+\langle n^2_{\bar\sigma}\rangle=1$, in which case Eqs. (\[cond:e0\]) and (\[cond:eU\]) are equivalent. The result for the conductance, using the H$_{\text{III}}$ approximation, depends on the QD occupation number, reflecting the effect of the Coulomb interaction.
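The equivalence of the two H$_{\text{III}}$ resonant conductances under $\langle n^1_{\bar\sigma}\rangle+\langle n^2_{\bar\sigma}\rangle=1$, and their dependence on the occupation number, can be verified with the same kind of sketch (parameter values and the occupation $n_1$ are again illustrative assumptions):

```python
def cond_HIII(eps_F, eps0, U, n, Gamma, gamma):
    """Hubbard-III linear conductance quoted in the text."""
    return (gamma * (1.0 - n) / ((eps_F - eps0)**2 + Gamma**2)
            + gamma * n / ((eps_F - eps0 - U)**2 + Gamma**2))

Gamma, U, gamma = 1.0, 10.0, 2.0   # illustrative values
n1 = 0.3                           # occupation at eps_0 = 0 (assumed)
n2 = 1.0 - n1                      # occupation at eps_0 = -U, by symmetry
g1 = cond_HIII(0.0, 0.0, U, n1, Gamma, gamma)
g2 = cond_HIII(0.0, -U, U, n2, Gamma, gamma)
print(g1, g2)   # equal, and both depend on the occupation number
```

Unlike in the H$_{\text{I}}$ case, changing the occupation changes the resonant conductance, as expected in the Coulomb blockade regime.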
This is a fundamental difference compared to the H$_{\text{I}}$ approximation. Finally, we set the gate voltage at the electron-hole symmetry point ($\varepsilon_0=-U/2$). The conductance using the H$_{\text{I}}$ approximation is given by $$\mathcal{G}^I_\sigma(\varepsilon_0=-U/2,\varepsilon_F=0)=\frac{\gamma(\langle n_{\bar\sigma}\rangle-1/2)^2}{U^2/16+(\langle n_{\bar{\sigma}}\rangle-1/2)^2\Gamma^2}.$$ We see that under the electron-hole symmetry condition, ${\langle n_{\bar\sigma}\rangle}=0.5$, the conductance within this approximation turns out to be discontinuous: it assumes the value $\gamma/\Gamma^2$ for $U=0$ and zero for any $U>0$. This is an incorrect result. On the other hand, the H$_{\text{III}}$ conductance is given by $$\mathcal{G}^{III}_\sigma(\varepsilon_0=-U/2,\varepsilon_F=0)=\frac{\gamma}{U^2/4+\Gamma^2}.$$ We note that this expression is different from zero and goes asymptotically to zero in the limit $U\rightarrow\infty$, which is the qualitatively correct result. [33]{} A. Majumdar, [Science](http://dx.doi.org/10.1126/science.1093164) **304**, 777 (2004). L. D. Hicks and M. S. Dresselhaus, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.47.12727) **47**, 12727 (1993). L. D. Hicks and M. S. Dresselhaus, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.47.16631) **47**, 16631 (1993). Y. Dubi and M. Di Ventra, [Rev. Mod. Phys.](http://dx.doi.org/10.1103/RevModPhys.83.131) **83**, 131 (2011). R. Venkatasubramanian, E. Siivola, T. Colpitts, and B. O’Quinn, [Nature (London)](http://dx.doi.org/10.1038/35098012) **413**, 597 (2001). T. C. Harman, P. J. Taylor, M. P. Walsh, and B. E. LaForge, [Science](http://dx.doi.org/10.1126/science.1072886) **297**, 2229 (2002). B. Kubala, J. König, and J. Pekola, [Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.100.066801) **100**, 066801 (2008). B. Dutta, J. T. Peltonen, D. S. Antonenko, M. Meschke, M. A. Skvortsov, B. Kubala, J. König, C. B. Winkelmann, H. Courtois, J. P. Pekola, [Phys. Rev.
Lett.](http://dx.doi.org/10.1103/PhysRevLett.119.077701) **119**, 077701 (2017). A. Khitun, A. Balandin, J. L. Liu, and K. L. Wang, [J. Appl. Phys.](http://dx.doi.org/10.1063/1.373723) **88**, 696 (2000). G. D. Mahan and J. O. Sofo, [Proc. Natl. Acad. Sci. USA](http://dx.doi.org/10.1073/pnas.93.15.7436) **93**, 7436 (1996). S. Hershfield, K. A. Muttalib, and B. J. Nartowt, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.88.085426) **88**, 085426 (2013). D. Boese and R. Fazio, [Europhys. Lett.](http://dx.doi.org/10.1209/epl/i2001-00559-8) **56**, 576 (2001). X. Zianni, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.75.045344) **75**, 045344 (2007). M. Wierzbicki and R. Świrkowicz, [J. Phys.: Condens. Matter](http://dx.doi.org/10.1088/0953-8984/22/18/185302) **22**, 185302 (2010). M. Wierzbicki and R. Świrkowicz, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.84.075410) **84**, 075410 (2011). H.-H. Fu and K.-L. Yao, [J. Appl. Phys.](http://dx.doi.org/10.1063/1.3653231) **110**, 094502 (2011). D. M. Kennes, D. Schuricht, and V. Meden, [Europhys. Lett.](http://dx.doi.org/10.1209/0295-5075/102/57003) **102**, 57003 (2013). Y. Yan, H. Wu, F. Jiang, and H. Zhao, [Eur. Phys. J. B](http://dx.doi.org/10.1140/epjb/e2014-50312-1) **87**, 244 (2014). H. Thierschmann, R. Sánchez, B. Sothmann, H. Buhmann, and L. W. Molenkamp, [C. R. Phys.](http://dx.doi.org/10.1016/j.crhy.2016.08.001) **17**, 1109 (2016). S. F. Svensson, E. A. Hoffmann, N. Nakpathomkun, P. M. Wu, H. Xu, H. A. Nilsson, D. Sánchez, V. Kashcheyevs, and H. Linke, [New J. Phys.](http://dx.doi.org/10.1088/1367-2630/15/10/105011) **15**, 105011 (2013). M. A. Sierra and D. Sánchez, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.90.115313) **90**, 115313 (2014). A. L. Monteros, G. S. Uppal, S. R. McMillan, M. Crisan, and I. Ţifrea, [Eur. Phys. J. B.](http://dx.doi.org/10.1140/epjb/e2014-50656-4) **87**, 302 (2014). K. P. Wójcik and I. Weymann, [Phys. Rev.
B](http://dx.doi.org/10.1103/PhysRevB.93.085428) **93**, 085428 (2016). L. Xu, Z. Li, Q. Wang, and Y. Nie, [AIP Adv.](http://dx.doi.org/10.1063/1.4971844) **6**, 125012 (2016). K. P. Wójcik and I. Weymann, [J. Phys.: Condens. Matter](http://dx.doi.org/10.1088/1361-648X/29/5/055303) **29**, 055303 (2017). E. V. Anda, [J. Phys. C: Solid State Phys.](http://dx.doi.org/10.1088/0022-3719/14/33/002) **14**, L1037 (1981). J. C. Charlier and J. P. Issi, [App. Phys. A](http://dx.doi.org/10.1007/s003390050741) **67**, 79 (1998). J. Hu, T. W. Odom, and C. M. Lieber, [Acc. Chem. Res.](http://dx.doi.org/10.1021/ar9700365) **32**, 435 (1999). K. Nakada, M. Fujita, G. Dresselhaus, and M. S. Dresselhaus, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.54.17954) **54**, 17954 (1996). J. Hubbard, [Proc. Roy. Soc. A](http://dx.doi.org/10.1098/rspa.1963.0204) **276**, 238 (1963). U. Sivan and Y. Imry, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.33.551) **33**, 551 (1986). D. S. Fisher and P. A. Lee, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.23.6851) **23**, 6851 (1981). M. Jonson and G. D. Mahan, Phys. Rev. B **21**, 4223 (1980). G. Gómez-Silva, O. Ávalos-Ovando, M. L. Ladrón de Guevara, and P. A. Orellana, [J. Appl. Phys.](http://dx.doi.org/10.1063/1.3689817) **111**, 053704 (2012). K. Yamamoto and N. Hatano, [Phys. Rev. E](http://dx.doi.org/10.1103/PhysRevE.92.042165) **92**, 042165 (2015). J. W. Mintmire and C. T. White, [Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.81.2506) **81**, 2506 (1998). K. K. Kim, J. J. Bae, H. K. Park, S. M. Kim, H.-Z. Geng, K. A. Park, H.-J. Shin, S.-M. Yoon, A. Benayad, J.-Y. Choi, and Y. H. Lee, [J. Am. Chem. Soc.](http://dx.doi.org/10.1021/ja8038689) **130**, 12757 (2008). A. Kongkanand and P. V. Kamat, [ACS Nano](http://dx.doi.org/10.1021/nn700036f) **1**, 13 (2007). M. A. Sierra, M. Saiz-Bretín, F. Domínguez-Adame, and D. Sánchez, [Phys. Rev. B](http://dx.doi.org/10.1103/PhysRevB.93.235452) **93**, 235452 (2016). M. A.
Sierra and D. Sánchez, [Mater. Today Proc.](http://dx.doi.org/10.1016/j.matpr.2015.05.066) **2**, 483 (2015). R. S. Whitney, [Phys. Rev. Lett.](http://dx.doi.org/10.1103/PhysRevLett.112.130601) **112**, 130601 (2014).
--- abstract: | We prove that if $p \in [2, \infty)$ and if the activation function is a monotone sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network is a universal approximator in $L^{p}(\mathbb{R} \times [0, 1]^n)$. This generalizes classical universal approximation theorems on $[0,1]^n.$ We also prove that if $p \in [1, \infty)$ and if the activation function is a sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network expresses no non-zero functions in $L^{p}(\mathbb{R} \times \RR^+)$. Consequently a shallow relu network expresses no non-zero functions in $L^{p}(\mathbb{R}^n)(n \ge 2)$. Some authors, on the other hand, have shown that the deep relu network is a universal approximator in $L^{p}(\mathbb{R}^n)$. Together we obtain a qualitative viewpoint which justifies the benefit of depth in the context of relu networks. author: - 'Ming-Xi Wang [^1]' - Yang Qu title: '**Approximation capabilities of neural networks on unbounded domains**' --- Universal approximation theorem, unbounded domain, neural networks, sigmoid, relu, elu, softplus, leaky relu, tail risk, benefit of depth Introduction ============ The universal approximation theorem in the mathematical theory of artificial neural networks was established by Cybenko [@Cybenko], with various versions and different proofs contributed by Hornik-Stinchcombe-White [@Hornik1989] and Funahashi [@Funahashi]. This theory treats the approximation of continuous functions as well as of $L^p$ integrable functions. Classical universal approximation theorems are mainly focused on bounded domains. We list here some articles which have also explored the theory on certain unbounded domains. Some authors, such as [@Ito] and [@Chen1991], studied the approximation capabilities of neural networks in the space of continuous functions on $\RR^n$ vanishing at infinity.
Hornik [@Hornik1991] studied the approximation capabilities of neural networks in $L^p(\RR^n, \mu)$ with respect to a finite input environment measure $\mu$. Regarding $L^p$ approximation on unbounded domains with respect to the Lebesgue measure, Lu et al. [@width] showed that the deep narrow relu network is a universal approximator in $L^1(\RR^n)$; Clevert et al. [@deeprelu] proved that the deep relu network with at most $[\log_2(n+1)]$ hidden layers is a universal approximator in $L^p(\RR^n)$ $(p \in [1, \infty))$; Qu and Wang [@QW] proved that an artificial neural network with a single hidden layer and logistic activation is a universal approximator in $L^{2}(\RR \times [0, 1])$. Qu and Wang [@QW] pointed out the following connection between the universal approximation theory on $\RR \times [0,1]$ and the management of tail risks in option trading: experiments in [@QW] demonstrated that in the design of an option price learning model, a decision function that fits into the approximation capability of networks in $L^2(\RR \times [0, 1])$ yields faster learning and better generalization performance. This suggests that a further study of the $L^p$ approximation capabilities of neural networks on unbounded domains is of value not only in theory but also in practical applications. The main results of this paper are Theorem \[theorem1\] and Theorem \[theorem2\]. We prove that if $p \in [2, \infty)$ and if the activation function is a monotone sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network is a universal approximator in $L^{p}(\RR \times [0, 1]^{n})$. We also prove that if $p \in [1, \infty)$ and if the activation function is a sigmoid, relu, elu, softplus or leaky relu, then the shallow neural network expresses no non-zero function in $L^{p}(\RR \times \RR^+)$. As a corollary, a relu shallow neural network expresses no non-zero function in $L^{p}(\RR^n) (n \ge 2)$.
In contrast, [@width] and [@deeprelu] showed that deep neural networks (even narrow ones) are universal approximators in $L^{p}(\RR^n)$. Together we obtain a qualitative viewpoint which justifies the benefit of depth for relu neural networks. The organization of this article is as follows. In section \[sectionmain\] we prove Theorem \[maintheorem\], which demonstrates that the universal approximation capacity in $L^p(\RR \times [0,1]^n)$ is equivalent to the universal approximation capacity in $L^p(\RR)$. In section \[ridge\] we prove a result which will be used to show that typical shallow networks are not integrable functions on $\RR \times \RR^+$ as well as (as a corollary) on $\RR^{n}$ $(n \ge 2)$. In section \[sectionbounded\] we discuss shallow networks with bounded, eventually monotone activation functions. In section \[unbounded\] we discuss shallow networks with relu and other popular unbounded activation functions. Approximation capabilities of networks on $\RR \times [0, 1]^n$ {#sectionmain} =============================================================== Given any measurable function $\phi: \RR \to \RR$, $y \in \RR^n$ and $\varrho \in \RR$ we define $$\begin{aligned} \phi^{\tau_{\varrho}}:&= x \in \RR \mapsto \phi(x+\varrho) \\ \phi^{\delta_{y}}:&= x \in \RR^{n} \mapsto \phi(\langle y, x\rangle) \\ \phi^{\tau_{\varrho} \delta_{y}} :&= (\phi^{\tau_{\varrho}})^{\delta_{y}}.\end{aligned}$$ These $\phi^{\delta_{y}}$ are called ridge functions. The space of shallow networks on $\RR^n$ with activation function $\phi$ is denoted by $\mathcal{S}_{n}(\phi)$ and consists of $$\begin{aligned} \sum_{i=1}^{k}t_i\phi^{\tau_{\varrho_i} \delta_{y_i}}, \qquad k \in \NN, y_i \in \RR^{n}, t_i \in \RR, \varrho_i \in \RR.\end{aligned}$$ The closure of a subset $\mathcal{S}$ of a metric space is denoted by $\overline{\mathcal{S}}$.
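The definitions above translate directly into code. The following minimal sketch (the relu activation and all parameter values are only illustrative choices) evaluates an element of $\mathcal{S}_{n}(\phi)$ as a finite sum of shifted ridge functions:

```python
import numpy as np

def ridge(phi, y, rho, x):
    """phi^{tau_rho delta_y}(x) = phi(<y, x> + rho), for x of shape (m, n)."""
    return phi(x @ y + rho)

def shallow_net(phi, ts, ys, rhos, x):
    """An element of S_n(phi): sum_i t_i * phi^{tau_{rho_i} delta_{y_i}}."""
    return sum(t * ridge(phi, y, r, x) for t, y, r in zip(ts, ys, rhos))

relu = lambda z: np.maximum(z, 0.0)
x = np.array([[0.5, 0.2], [-1.0, 0.7]])            # two points in R^2
ts = [1.0, -2.0]
ys = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
rhos = [0.0, 0.1]
out = shallow_net(relu, ts, ys, rhos, x)
print(out)   # one scalar per input point: [-0.4, 0.0]
```

Each summand is constant along the directions orthogonal to $y_i$, which is precisely the ridge structure exploited throughout the paper.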
The canonical Lebesgue measure on $\RR^m$ will be denoted by $\lambda_{m}.$ The indicator function of a set $A$ will be denoted by $I_A.$ Before proving Theorem \[maintheorem\] we recall some facts from harmonic analysis. If $f \in L^1(\RR^n)$ then its Fourier transform $\widehat{f}$ is a bounded continuous function with the integral form $$\begin{aligned} \label{f1} \widehat{f}(\xi) = \int_{x \in \RR^n} e^{-2\pi i\langle x, \xi \rangle}f(x)\mathrm{d}x, \ \ \ \xi \in \RR^n.\end{aligned}$$ Let $\mathscr{S}(\RR^n) \subset L^1(\RR^n)$ be the Schwartz space on $\RR^n$ ([@Fourier]), which is a Fréchet space. Let $\mathscr{S}'$ be the space of tempered distributions, which is the dual of $\mathscr{S}$. The Fourier transform $\widehat{u} \in \mathscr{S}'$ of $u \in \mathscr{S}'$ is defined by $$\begin{aligned} \label{f2} \widehat{u}(\varphi) = u(\widehat{\varphi}), \ \ \ \varphi \in \mathscr{S}.\end{aligned}$$ For $p \in [1, \infty)$ there is a natural embedding $L^p(\RR^n) \to \mathscr{S}'$ ([@Fourier]\[p.135\]). If $f \in L^1(\RR^n)$ then its Fourier transform (\[f2\]) agrees with (\[f1\]). To make Cybenko’s strategy work in our case we need to show that a certain $h \in L^{q}(\RR^n)$ is zero. Because the Fourier transform is an isomorphism of $\mathscr{S}'$ ([@Fourier]\[Theorem IX.2\]), it suffices to show that $\widehat{h} =0.$ \[maintheorem\] Let $n \in \NN^{+}$, $p \in [2, \infty)$ and $\Omega = \RR \times [0, 1]^n$. If a measurable function $\phi: \RR \to \RR$ satisfies $\overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR)$ then $\overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).$ Write $q = p/(p-1)$. Suppose that $\overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR)$ but our theorem is not true.
By the Hahn–Banach theorem there exists a nonzero $u \in L^{p}(\Omega)^*$ and a nonzero real valued $h \in L^q(\Omega)$ such that if $\varphi \in \mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)$ then $$\begin{aligned} \label{orthogonal} u(\varphi) = \int_{\Omega} \varphi(x) h(x) dx = 0,\end{aligned}$$ and we have $||u|| = ||h||_{L^q(\Omega)}$. If $y = (y_0, \bold{y^{\perp}_0}) \in \RR \times \RR^n$ satisfies $y_0 \neq 0$ and if $\gamma \in L^{p}(\RR)$, then we claim that $\gamma^{\delta_{y}} \in L^p(\Omega)$. Indeed, by the change of variables $t_0= \langle y, x \rangle, t_i = x_i(1 \leq i \leq n)$ we have $$\begin{aligned} \left|\left|\gamma^{\delta_{y}}\right|\right|_{L^p(\Omega)}^p &= \int_{\RR \times [0,1]^n} \left|\gamma(\langle y, x \rangle)\right|^p dx_0dx_1\ldots dx_n \\ &= |y_0|^{-1} \int_{\RR \times [0,1]^n} |\gamma(t_0)|^p dt_0dt_1\ldots dt_n \\ &= |y_0|^{-1}||\gamma||^p_{L^p(\RR)}.\end{aligned}$$ Moreover, by definition one can check that $\mathcal{S}_{1}(\phi)^{\delta_{y}} \subset \mathcal{S}_{n+1}(\phi)$. Put together, if $y = (y_0, \bold{y^{\perp}_0}) \in \RR \times \RR^n$ satisfies $y_0 \neq 0$ and if $\gamma \in \mathcal{S}_{1}(\phi)$ then $\gamma^{\delta_{y}} \in \mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)$, and then by (\[orthogonal\]) $\int_{\Omega} \gamma^{\delta_{y}}(x)h(x) dx = 0.$ Next we prove a stronger fact: if $y = (y_0, \bold{y^{\perp}_0}) \in \RR \times \RR^n$ satisfies $y_0 \neq 0$ and $\gamma \in L^{p}(\RR)$ then $$\begin{aligned} \label{orthogonal1} u(\gamma^{\delta_{y}}) = \int_{\Omega} \gamma^{\delta_{y}}(x)h(x) dx = 0.\end{aligned}$$ Because $\overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR)$, for any $\epsilon >0$ there exists $\gamma_{y_0, \epsilon} \in \mathcal{S}_{1}(\phi) \cap L^{p}(\RR)$ such that $ ||\gamma-\gamma_{y_0, \epsilon}||_{L^p(\RR)} < ||u||^{-1} |y_0|^{1/p} \epsilon.
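The scaling computation just performed admits a direct numerical sanity check. In the sketch below, the profile $\gamma(t)=e^{-t^2}$, the exponent $p=2$ and the vector $y=(y_0,y_1)=(2,3)$ are illustrative assumptions; the inner integral over $x_0$ equals $|y_0|^{-1}\|\gamma\|_{L^p(\RR)}^p$ for every fixed $x_1 \in [0,1]$:

```python
import numpy as np

p = 2.0
gamma_fn = lambda t: np.exp(-t**2)   # a convenient L^p profile (assumption)
y0, y1 = 2.0, 3.0                    # y = (y0, y1) with y0 != 0 (assumption)

t = np.linspace(-30.0, 30.0, 200001)
dt = t[1] - t[0]
norm_p = (np.abs(gamma_fn(t))**p).sum() * dt   # ||gamma||_p^p by quadrature

for x1 in (0.0, 0.37, 1.0):
    # inner integral of |gamma(y0*x0 + y1*x1)|^p over x0 in R
    inner = (np.abs(gamma_fn(y0 * t + y1 * x1))**p).sum() * dt
    print(x1, inner, norm_p / abs(y0))   # inner integral is x1-independent
```

Integrating the constant value over $x_1 \in [0,1]^n$ then reproduces $\|\gamma^{\delta_y}\|_{L^p(\Omega)}^p = |y_0|^{-1}\|\gamma\|_{L^p(\RR)}^p$.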
$ From (\[orthogonal\]) we have $u(\gamma_{y_0, \epsilon}^{\delta_{y}})=0$ and furthermore $$\begin{aligned} \left|u(\gamma^{\delta_{y}})\right| &= \left|u(\gamma^{\delta_{y}}-\gamma_{y_0, \epsilon}^{\delta_{y}})+ u(\gamma_{y_0, \epsilon}^{\delta_{y}})\right| \\ &= \left|u(\gamma^{\delta_{y}}-\gamma_{y_0, \epsilon}^{\delta_{y}})\right| \\ &\leq ||u||\cdot||(\gamma-\gamma_{y_0, \epsilon})^{\delta_{y}}||_{L^p(\Omega)} \\ &= ||u|| \cdot |y_0|^{-1/p} ||\gamma-\gamma_{y_0, \epsilon}||_{L^p(\RR)} \\ &\leq \epsilon.\end{aligned}$$ The above inequality holds for all $\epsilon >0$, which proves (\[orthogonal1\]). For $k \in \NN^{+}$ write $\Omega_k = (-k, k) \times [0,1]^n$. We identify $u \in L^p(\Omega)^*$ with an element of $\mathscr{S}'$ by assigning $\varphi \in \mathscr{S} \to u(\varphi \cdot I_{\Omega})$, and we define elements $u_k$ of $\mathscr{S}'$ by setting $\varphi \in \mathscr{S} \to u(\varphi \cdot I_{\Omega_k})$. With respect to the topology of $\mathscr{S}'$ we have $\lim\limits_{k \to \infty} u_{k}=u$ and therefore $$\begin{aligned} \lim_{k \to \infty} \widehat{u_{k}} = \widehat{u}.\end{aligned}$$ By setting $h_k = h\cdot I_{\Omega_k}$ we have $h_k \in L^1(\RR^{n+1})$ and $$\begin{aligned} u_k(\varphi) = \int_{\RR^{n+1}}\varphi(x) h_k(x) dx.\end{aligned}$$ The Fourier transform $\widehat{h_k}$ of the $L^1$ integrable $h_k$ is represented by the integral form $$\begin{aligned} \label{fl1} \widehat{h_k}(\xi) = \int_{\Omega_k} e^{-2\pi i \langle x, \xi \rangle}h(x) dx.\end{aligned}$$ As $u_k$ is represented by $h_k$, $\widehat{u_{k}}$ is also represented by $\widehat{h_k}$. Hence if $\varphi \in \mathscr{S}$ then $$\begin{aligned} \widehat{u_k}(\varphi) = \int_{\RR^{n+1}}\varphi(\xi) \widehat{h_k}(\xi)d\xi = \int_{\RR^{n+1}}\varphi(\xi) \int_{\Omega_k} e^{-2\pi i \langle x, \xi \rangle}h(x) dx d\xi.\end{aligned}$$ Let $\mathcal{K}$ be any compact set in $\RR^+\times \RR^{n}$ and let $\kappa=\min{ \{z_0: z = (z_0, \bold{z_0^{\perp}}) \in \mathcal{K}\} }$.
There is a constant $C_1$ such that every element of $\langle \{0\} \times [0,1]^n, \mathcal{K} \rangle$ is bounded in absolute value by $C_1$. Let $K = 2C_1/\kappa$. If $k>K$, $x = (x_0, \bold{x_0^{\perp}}) \in \{k\} \times [0,1]^n$ and $z = (z_0, \bold{z_0^{\perp}}) \in \mathcal{K}$, then $x_0 z_0 > (2C_1/\kappa) \cdot \kappa = 2C_1$ and $|\langle (0,\bold{x_0^{\perp}}), z \rangle| \le C_1.$ Therefore $\langle x, z \rangle = x_0 z_0 + \langle (0, \bold{x_0^{\perp}}), z \rangle > C_1.$ Similarly if $k>K$, $x = (x_0, \bold{x_0^{\perp}}) \in \{-k\} \times [0,1]^n$ and $z \in \mathcal{K}$ then $\langle x, z \rangle < -C_1$. In particular we have proved that if $k >K$ then $$\begin{aligned} \label{1220} \langle \{0\} \times [0,1]^n, \mathcal{K} \rangle \cap \langle \{\pm k\} \times [0,1]^n, \mathcal{K} \rangle = \emptyset.\end{aligned}$$ For any $z \in \mathcal{K}$ let $X_{z, k}^{\pm}$ respectively $\overline{X}_{z, k}^{\pm}$ consist of all points $x \in \Omega_k$ respectively $x \in \Omega$ such that there exists $x^{\partial} \in \{\pm k\} \times [0, 1]^n$ satisfying $\vec{z} \perp x^{\partial}-x$: $$\begin{aligned} X_{z, k}^{\pm} &= \{ x \in \Omega_{k} : \langle z, x \rangle \in \langle z, \{ \pm k\} \times [0,1]^n \rangle \} \\ \overline{X}_{z, k}^{\pm} &= \{x \in \Omega : \langle z, x \rangle \in \langle z, \{ \pm k\} \times [0,1]^n \rangle \}.\end{aligned}$$ ![For the purpose of a rigorous limit argument, instead of a single $X^{\pm}_{z, k}$, our estimation relies on $\bigcup_{z \in \mathcal{K}}X^{\pm}_{z, k}$, where $\mathcal{K}$ is a compact set in $\RR^{\pm} \times \RR^{n}$. If $k$ is sufficiently large $\bigcup_{z \in \mathcal{K}}X^{\pm}_{z, k}$ is stable in the sense of (\[12211503\]). []{data-label="proofpng"}](proof1.png){width="\textwidth"} ![For the purpose of a rigorous limit argument, instead of a single $X^{\pm}_{z, k}$, our estimation relies on $\bigcup_{z \in \mathcal{K}}X^{\pm}_{z, k}$, where $\mathcal{K}$ is a compact set in $\RR^{\pm} \times \RR^{n}$.
If $k$ is sufficiently large $\bigcup_{z \in \mathcal{K}}X^{\pm}_{z, k}$ is stable in the sense of (\[12211503\]). []{data-label="proofpng"}](proof2.png){width="\textwidth"} Because $\overline{X}_{z, k}^{\pm}$ are connected, (\[1220\]) implies that if $k>K$ and $z \in \mathcal{K}$ then $$\begin{aligned} \label{1221} X_{z, k}^{\pm} \subset \overline{X}^{\pm}_{z, k} \subset \RR^{\pm} \times \RR^n.\end{aligned}$$ Let $k_2$ and $k_1$ be integers greater than $K$; we want to show that $$\begin{aligned} \label{1221251} X_{z, k_2}^{\pm} = (\pm(k_2-k_1), \bold{0}) + X_{z, k_1}^{\pm}. \end{aligned}$$ Let $x=(x_0, \bold{x_0^{\perp}} ) \in X_{z, k_1}^{+}$; we have $|x_0|<k_1$ and $\langle z, x \rangle \in \langle z, \{ k_1\} \times [0,1]^n \rangle$. It is obvious that $\langle z, x+ (k_2-k_1, \bold{0}) \rangle \in \langle z, \{k_2\} \times [0,1]^n \rangle$ and $x_0+k_2-k_1<k_2$, which implies $x+ (k_2-k_1, \bold{0}) \in \overline{X}_{z, k_2}^{+}$. By (\[1221\]), we have $x_0+k_2-k_1>0$ and therefore $x+ (k_2-k_1, \mathbf{0}) \in X_{z, k_2}^{+}$. We have proved that $(k_2-k_1, \mathbf{0} ) + X_{z, k_1}^{+} \subset X_{z, k_2}^{+}.$ The same argument proves also $(k_1-k_2, \mathbf{0} ) + X_{z, k_2}^{+} \subset X_{z, k_1}^{+}$ and consequently $X_{z, k_2}^{+} = (k_2-k_1, \mathbf{0} ) + X_{z, k_1}^{+}.$ Similar arguments prove the other part of (\[1221251\]). By (\[1221\]) and by (\[1221251\]), the number $$\begin{aligned} \nu_{\mathcal{K}} &:= \lambda_{n+1}\left(\bigcup_{z \in \mathcal{K}} X_{z, k}^+\right)+\lambda_{n+1}\left(\bigcup_{z \in \mathcal{K}} X_{z, k}^-\right)\end{aligned}$$ is finite and is independent of the choice of $k > K$. Moreover (\[1221251\]) leads to $$\begin{aligned} \label{12211503} \bigcup_{z \in \mathcal{K}} X_{z, k_2}^{\pm} = (\pm(k_2-k_1), \bold{0}) + \bigcup_{z \in \mathcal{K}} X_{z, k_1}^{\pm},\end{aligned}$$ which implies that $\bigcup_{z \in \mathcal{K}} X_{z, k}^{\pm}$ moves to infinity as $k$ goes to infinity.
Write $$\begin{aligned} X^{\dag}_{z, k} &= X_{z, k}^+\cup X_{z, k}^- \\ Y^{\dag}_{z, k} &= \Omega_k \setminus X^{\dag}_{z, k}.\end{aligned}$$ We want to prove that $$\begin{aligned} \label{12211229} \omega \in \Omega ~ \text{and} ~ \langle z, \omega \rangle \in \langle z, Y^{\dag}_{z, k}\rangle \Leftrightarrow \omega \in Y^{\dag}_{z, k}.\end{aligned}$$ By definition the $\Leftarrow$ part of (\[12211229\]) is true. Now assume that $\omega \in \Omega$ and $\langle z, \omega\rangle \in \langle z, Y^{\dag}_{z, k}\rangle$. By assumption there exists $w^{in} \in \Omega_k \setminus X^{\dag}_{z,k}$ such that $\langle z, \omega\rangle = \langle z, w^{in}\rangle.$ Suppose $\omega \notin \Omega_{k}$; then the segment joining $\omega$ and $w^{in}$ intersects the boundary of $\Omega_k$, as $$\begin{aligned} \{t \omega + (1-t) w^{in}: t \in [0, 1] \} \cap \{\pm k\} \times [0,1]^n \neq \emptyset.\end{aligned}$$ Let $w^{\partial}$ be the intersection point. We have $\langle z, w^{\partial}\rangle = \langle z, \omega\rangle = \langle z, w^{in}\rangle$. This together with the definition of $X^{\dag}_{z, k}$ gives $w^{in} \in X^{\dag}_{z, k}$, contradicting $w^{in} \in \Omega_k \setminus X^{\dag}_{z, k}$. Therefore $\omega \notin \Omega_{k}$ is not true; instead $\omega \in \Omega_{k}$. Suppose $\omega \in X^{\dag}_{z, k}$; then there exists $w^{\partial} \in \{\pm k\} \times [0,1]^n$ such that $\langle z, w^{\partial}\rangle = \langle z, \omega\rangle= \langle z, w^{in}\rangle$, which in the same way contradicts $w^{in} \in \Omega_k \setminus X^{\dag}_{z, k}$. Therefore $\omega \in X^{\dag}_{z, k}$ is not true; instead $\omega \notin X^{\dag}_{z, k}$.
Putting things together, we have proved that if $\omega$ satisfies the left hand side of $(\ref{12211229})$ then $\omega \in \Omega_{k} \setminus X^{\dag}_{z, k} = Y^{\dag}_{z, k}.$ This proves $(\ref{12211229})$, which leads to $$\begin{aligned} \label{12211429} I_{Y^{\dag}_{z, k}} = I_{\left\{\omega: \omega \in \Omega, \langle z, \omega \rangle \in \langle z, Y^{\dag}_{z, k}\rangle \right\}}.\end{aligned}$$ For $z \in \mathcal{K}, k > K, \alpha \in L^{\infty}(\RR)$, we define $\alpha_{k} = \alpha\, I_{\langle z, Y^{\dag}_{z,k}\rangle}$. Because $\alpha_{k}$ is bounded with bounded support, $\alpha_{k} \in L^{p}(\RR)$. By $h_k = hI_{\Omega_{k}}$ and (\[12211429\]), for $\alpha \in L^{\infty}(\RR)$, $$\begin{aligned} \int_{Y^{\dag}_{z, k}} \alpha(\langle z, x \rangle) h_k(x) d x &= \int_{\Omega} \alpha(\langle z, x \rangle) h(x)I_{\Omega_{k}}(x) I_{Y^{\dag}_{z, k}}(x) d x \\ &= \int_{\Omega} \alpha(\langle z, x \rangle) h(x)I_{Y^{\dag}_{z, k}}(x) d x \\ &= \int_{\Omega} \alpha(\langle z, x \rangle) h(x) I_{\{\omega: \omega \in \Omega, \langle z, \omega \rangle \in \langle z, Y^{\dag}_{z, k}\rangle \}}(x) d x \\ &= \int_{\Omega} \alpha(\langle z, x \rangle) I_{\{\omega: \langle z, \omega \rangle \in \langle z, Y^{\dag}_{z, k}\rangle \}}(x) \left(h(x) I_{\Omega}(x)\right) d x \\ &= \int_{\Omega} \alpha_k(\langle z, x \rangle) h(x) d x.\end{aligned}$$ Therefore for any $\alpha \in L^{\infty}(\RR)$ we have $$\begin{aligned} \int_{\RR^{n+1}} \alpha(\langle z, x \rangle) h_k(x) d x &= \int_{X^{\dag}_{z, k}} \alpha(\langle z, x \rangle) h_k(x) d x + \int_{Y^{\dag}_{z, k}} \alpha(\langle z, x \rangle) h_k(x) d x \\ &= \int_{X^{\dag}_{z, k}} \alpha(\langle z, x \rangle) h_k(x) d x + \int_{\Omega} \alpha_k(\langle z, x \rangle) h(x) d x\end{aligned}$$ Because $\alpha_k \in L^{p}(\RR)$, by (\[orthogonal1\]) we have $\int_{\Omega} \alpha_k(\langle z, x \rangle) h(x) d x = 0$ and therefore $$\begin{aligned} \int_{\RR^{n+1}} \alpha(\langle z, x \rangle) h_k(x) d x = \int_{X^{\dag}_{z, k}}
\alpha(\langle z, x \rangle) h_k(x) d x.\end{aligned}$$ Take $\alpha(t) := e^{-2\pi i t}$; then by (\[fl1\]) and the above equality, $$\begin{aligned} \label{1515} \left|\widehat{h_k}(z)\right| &= \left|\int_{\RR^{n+1}}\alpha(\langle z, x \rangle) h_k(x) \mathrm{d}x\right| = \left|\int_{X^{\dag}_{z, k}}\alpha(\langle z, x \rangle)h_k(x) \mathrm{d}x\right| \nonumber \\ &\leq ||\alpha^{\delta_z}||_{L^p(X^{\dag}_{z, k})}||h_k||_{L^q(X^{\dag}_{z, k})} \nonumber \\ &\leq \nu_{\mathcal{K}}^{1/p} ||h_k||_{L^q(X^{\dag}_{z, k})}.\end{aligned}$$ Let $\psi \in C^{\infty}(\RR^{n+1})$ with ${\operatorname{supp}}{\psi} \subset \mathcal{K}$, where $\mathcal{K}$ is a compact subset of $\RR^+\times \RR^{n}$. By (\[12211503\]), for any $\epsilon>0$ there exists $N_{\epsilon}$ such that if $k > N_{\epsilon}$ then $$\begin{aligned} ||h_k||_{L^q\left(\bigcup_{z \in \mathcal{K}} X^{\dag}_{z, k} \right)} < ||\psi||^{-1}_{L^{\infty}(\RR^{n+1})} \lambda^{-1}_{n+1}(\mathcal{K}) \nu_{\mathcal{K}}^{-1/p} \epsilon,\end{aligned}$$ and for all such $k$ and $z \in \mathcal{K}$, using (\[1515\]) and the fact that $X^{\dag}_{z, k} \subset \bigcup_{z \in \mathcal{K}} X^{\dag}_{z, k}$, $$\begin{aligned} \left|\widehat{h_k}(z)\right| < ||\psi||^{-1}_{L^{\infty}(\RR^{n+1})} \lambda^{-1}_{n+1}(\mathcal{K}) \epsilon .\end{aligned}$$ For all $k > N_{\epsilon}$ we have $$\begin{aligned} \left|\widehat{u_k}(\psi)\right| &= \left|\int_{\RR^{n+1}}\psi(z) \widehat{h_k}(z) \mathrm{d}z\right| \nonumber \\ &= \left|\int_{\mathcal{K}}\psi(z) \widehat{h_k}(z) \mathrm{d}z\right| \nonumber \\ &\leq \lambda_{n+1}(\mathcal{K}) ||\psi||_{L^{\infty}(\RR^{n+1})} ||\psi||_{L^{\infty}(\RR^{n+1})}^{-1} \lambda^{-1}_{n+1}(\mathcal{K}) \epsilon \\ &= \epsilon.\end{aligned}$$ Consequently for all $\psi \in \mathscr{D}(\RR^+ \times \RR^n)$ we have $$\begin{aligned} \widehat{u}(\psi) = \lim_{k\to \infty}\widehat{u_k}(\psi) = 0.\end{aligned}$$ By similar arguments, for all $\psi \in \mathscr{D}(\RR^- \times \RR^n)$ we have
$$\begin{aligned} \widehat{u}(\psi) = \lim_{k\to \infty}\widehat{u_k}(\psi) = 0.\end{aligned}$$ This implies that $\widehat{u}$ is supported on the hyperplane $\{0\} \times \RR^n \subset \RR^{n+1}$. Because $p \in [2, \infty)$, $q \in (1, 2]$ and $u \in L^q(\RR^{n+1})$, $\widehat{u}$ is represented by some function in $L^p(\RR^{n+1})$; hence $\widehat{u}=0$ and therefore $u = 0$, which contradicts the fact that $h$ is non-zero. Inexpressivity of sums of ridge functions {#ridge} ========================================= It is well known that “The graph of ridge function is a ruled surface, and it is this ruled nature that make ridge functions difficult to use in any standard way in approximation theory”([@ridge]). For instance [@ridge]\[Proposition 1.1\] shows that the space $L^2(\RR^2)$ contains no ridge function except 0. In this section we shall prove a stronger result. \[prop\] Let $n \in \NN, p \in [1, \infty), \Omega = \RR \times \RR^+, 1 \leq k\leq n$, $y_k \in \RR^2$ and $F_k: \RR \to \RR$. Suppose for all $t_k, \beta_k \in \RR$, $$\begin{aligned} \label{star} \lim\limits_{t \to \infty} \sum_{k=1}^n t_kF_k(\beta_k t) \in [-\infty, +\infty]. \end{aligned}$$ If $F =\sum_{k=1}^n F_k^{\delta_{y_k}}$ is uniformly continuous on $\Omega$ and $F|_{\Omega} \in L^p(\Omega)$ then $F =0.$ Throughout the proof, $re^{i\theta}$ refers to the point $(r\cos{\theta}, r\sin{\theta})$ in $\RR^2.$ Because $$\begin{aligned} G^{\delta_{\rho e^{i \theta}}} + H^{\delta_{s e^{i \theta}}} &= (G^{\delta_{\rho}}+H^{\delta_{s}})^{\delta_{e^{i\theta}}} \\ G^{\delta_{e^{i(\theta+\pi)}}} &= (G^{\delta_{-1}})^{\delta_{e^{i\theta}}},\end{aligned}$$ without loss of generality we can assume $$\begin{aligned} \label{F} F =\sum\limits_{k=1}^n F_k^{\delta_{e^{i\theta_k}}},\end{aligned}$$ where $\theta_k$ are different numbers in $[0, \pi)$ and $\lim_{t \to \infty} F_k(t), \lim_{t \to -\infty} F_k(t)$ are all well-defined in $[-\infty, +\infty]$.
We claim that $F_k^{\delta_{e^{i\theta_k}}}|_{\Omega}$ are all constant. If this is not true then pick $j \in \{1,\ldots, n\}$ for which $F_j^{\delta_{e^{i\theta_j}}}$ is not constant. Because $F_j^{\delta_{e^{i\theta_j}}}$ is not constant, there exists $x^\flat=(x^\flat_1, x^\flat_2) \in \Omega$ such that $F_j^{\delta_{e^{i\theta_j}}}(x^\flat) \neq \lim\limits_{t \to \infty}F_j(t)$. The function $\overline{F}: x \in \RR^2 \mapsto F(x+x^\flat)$ satisfies: [***Fact 1.***]{} $\overline{F} \in L^p(\Omega)$ and $\overline{F}|_{\Omega}$ is uniformly continuous. [***Fact 2.***]{} $\overline{F} =\sum_{k=1}^n \overline{F_k}^{\delta_{e^{i\theta_k}}}$ is in the same form (\[F\]) as $F$. [***Fact 3.***]{} $\overline{F_j}(0) \neq \lim_{t \to \infty}\overline{F_j}(t)$. Fact 1 follows from $||\overline{F}||_{L^p(\Omega)} \leq ||F||_{L^p(\Omega)}$. Fact 2 follows from $$\begin{aligned} \overline{F}(x) & = \sum\limits_{k=1}^n F_k( (x_1 + x_1^{\flat}) \cos\theta_k+ (x_2 + x_2^{\flat}) \sin\theta_k) \\ & = \sum_{k=1}^n \overline{F_k}^{\delta_{e^{i\theta_k}}}(x),\end{aligned}$$ where $\overline{F_k}(x) = F_k(x+ x^{\flat}_1\cos{\theta_k}+x^{\flat}_2\sin{\theta_k})$. Fact 3 follows from $\lim\limits_{t \to \infty}\overline{F_j}(t) = \lim\limits_{t \to \infty}F_j(t)$, $\overline{F_j}(0) = F_j(x^{\flat}_1\cos{\theta_j}+x^{\flat}_2\sin{\theta_j}) = F_j^{\delta_{e^{i\theta_j}}}(x^\flat)$ and the choice of $x^{\flat}$. By change of variables $(x_1, x_2) = re^{i\theta}$ we have $$\begin{aligned} \int_{\Omega} \left|\overline{F}(x_1, x_2)\right|^p \mathrm{d}x_1\mathrm{d}x_2 = \int_{0}^{\pi}\int_{0}^{\infty} r \left|\overline{F}(re^{i\theta})\right|^p \mathrm{d}r\mathrm{d}\theta \end{aligned}$$ is finite. By Fubini’s theorem, $$\begin{aligned} \label{is0} \lim\limits_{r \to \infty}\overline{F}(re^{i\theta})=0\end{aligned}$$ for almost every $\theta \in (0, \pi)$. 
For $\theta \in [0, \pi)$ let $\theta^{\perp}$ be the unique number in $[0, \pi)$ such that $\cos{(\theta - \theta^{\perp})} =0$, or equivalently the unique element of $\{ \theta \pm \pi/2 \} \cap [0, \pi).$ There exists a small $\omega_{\theta_j} >0$ such that $(\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j}) \subset (0, \pi)$ and $\{\theta_k^{\perp} | 1 \leq k \leq n\} \cap (\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j}) = \emptyset$. For every $\theta \in (\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j})$ and $k \in \{1, \ldots, n\}$ we must have $\cos(\theta-\theta_k) \neq 0$. Otherwise $\theta = \{\theta_k \pm \pi/2 \} \cap [0, \pi) = \theta_k^{\perp}$, contradicting the fact that $\theta_k^{\perp} \notin (\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j}).$ Therefore $\delta_k := \operatorname{sgn}_{\theta \in (\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j})}\cos{(\theta-\theta_k)}$ is well defined and $\delta_j = 1$. Moreover if $k \neq j$ then $\delta_k = \operatorname{sgn}\cos{(\theta_j^{\perp}-\theta_k)}$, otherwise $\cos{(\theta_j^{\perp}-\theta_k)} = 0$, which contradicts the fact that $\theta_k, \theta_j$ are distinct numbers in $[0, \pi)$.
Let $\theta$ be a generic number in $(\theta_j^{\perp}, \theta_j^{\perp}+\omega_{\theta_j})$. Using the fact that $\lim_{t\to \infty}\overline{F_j}(t)$ exists in $[-\infty, \infty]$, we have $$\begin{aligned} \lim\limits_{r \to \infty} \overline{F_j}(r\cos{(\theta-\theta_j)}) &= \lim_{t \to \infty}\overline{F_j}(t) \\ \lim\limits_{r \to \infty} \overline{F_j}(r\cos{(\theta^{\perp}_j-\theta_j)}) &= \overline{F_j}(0).\end{aligned}$$ The above equalities together with Fact 3 of $\overline{F}$ yield $$\begin{aligned} \label{Fj} \lim\limits_{r \to \infty} \overline{F_j}(r\cos{(\theta-\theta_j)}) \neq \lim\limits_{r \to \infty} \overline{F_j}(r\cos{(\theta^{\perp}_j-\theta_j)}).\end{aligned}$$ For $k \neq j$ we use the fact that $\lim_{t\to \pm\infty}\overline{F_k}(t)$ exist in $[-\infty, \infty]$ and have $$\begin{aligned} \label{Fi} \lim\limits_{r \to \infty} \overline{F_k}(r\cos{(\theta-\theta_k)}) = \lim\limits_{r \to \infty} \overline{F_k}(r\cos{(\theta^{\perp}_j-\theta_k)}),\end{aligned}$$ as they both equal $\lim_{t \to \delta_k\cdot \infty}\overline{F_k}(t)$. The sum of (\[Fj\]) and (\[Fi\]) for all $k$ gives $$\begin{aligned} \sum_{k=1}^n\lim\limits_{r \to \infty} \overline{F_k}(r\cos{(\theta-\theta_k)}) \neq \sum_{k=1}^n \lim\limits_{r \to \infty} \overline{F_k}(r\cos{(\theta^{\perp}_j-\theta_k)}).\end{aligned}$$ By the fact $\overline{F_k}(r\cos{(\theta-\theta_k)})=\overline{F_k}^{\delta_{e^{i\theta_k}}}(re^{i\theta})$, the above relation is equivalent to $$\begin{aligned} \label{Fn} \lim\limits_{r \to \infty} \overline{F}(re^{i\theta}) \neq \lim\limits_{r \to \infty} \overline{F}(re^{i\theta_j^{\perp}}).\end{aligned}$$ It follows from (\[is0\]) that the left hand side of (\[Fn\]) is 0, hence the right hand side of (\[Fn\]) is not zero.
Consequently there exist $\rho > 0$ and $C>0$ such that if $r>\rho$ then $|\overline{F}(re^{i\theta_j^{\perp}})|\ge C.$ As $\overline{F}$ is uniformly continuous there exists $\delta >0$ such that if $|x-y|<\delta$ then $|\overline{F}(x)-\overline{F}(y)|<C/2.$ Let $\eta = \min{\{\rho (\pi-\theta_j^{\perp}), \delta\}}$. If $r > \rho$ and $\theta \in (\theta_j^{\perp}, \theta_j^{\perp}+\eta/r)$ then $re^{i\theta} \in \Omega$, and moreover $$\begin{aligned} |re^{i\theta}-re^{i\theta_j^{\perp}}| = 2r\sin{(\theta/2 - \theta^{\perp}_j/2)}< r(\theta - \theta^{\perp}_j) < \eta \leq \delta,\end{aligned}$$ and therefore $|\overline{F}(re^{i\theta}) - \overline{F}(re^{i\theta_j^{\perp}})|<C/2$. Consequently $$\begin{aligned} |\overline{F}(re^{i\theta})| &\ge |\overline{F}(re^{i\theta_j^{\perp}})|-|\overline{F}(re^{i\theta}) - \overline{F}(re^{i\theta_j^{\perp}})| \\ & \ge C/2.\end{aligned}$$ With this inequality, $$\begin{aligned} \int_{\Omega} \left|\overline{F}(x_1, x_2)\right|^p \mathrm{d}x_1\mathrm{d}x_2 &\ge \int_{\rho}^{\infty} r \int_{\theta_j^{\perp}}^{\theta_j^{\perp}+\eta/r}\left|\overline{F}(re^{i\theta})\right|^p \mathrm{d}\theta\mathrm{d}r \\ &\ge \int_{\rho}^{\infty} \eta (C/2)^p \mathrm{d}r = \infty. \end{aligned}$$ This contradicts the fact that $\overline{F} \in L^p(\Omega).$ Therefore $F_k^{\delta_{e^{i\theta_k}}}|_{\Omega}$ are all constant, and so is $F|_{\Omega}$. As a function in $L^p(\Omega)$, $F$ must be zero as desired. Bounded activation functions {#sectionbounded} ============================ To apply our Theorem \[maintheorem\], we need the following result of [@SW]\[Lemma 3.3\]: \[lemmasw\] Let $\phi:\RR \to \RR$ belong to $L^1(\RR) \cap L^p(\RR), p \in [1, \infty)$. If $\int_{\RR}\phi(t)dt \neq 0$ then $\overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR)$. Park and Sandberg [@Park] generalized this result to $\RR^n$, for integrable bounded and a.e.
continuous $\phi.$ We need Stinchcombe-White’s version to treat more general activation functions. Let $\mathcal{P}$ be a property such as bounded, differentiable, monotone or Lipschitz. We define that a function $\phi: \RR \to \RR$ is essentially $\mathcal{P}$ on $U$ if there exists a zero measure set $Z \subset \RR$ such that $\phi|_{U \setminus Z}$ is $\mathcal{P}$; eventually $\mathcal{P}$ if there exists $X>0$ such that $\phi|_{(X, \infty)}$ and $\phi|_{(-\infty, -X)}$ are $\mathcal{P}$; eventually essentially $\mathcal{P}$ if there exists $X>0$ such that $\phi$ is essentially $\mathcal{P}$ on $(X, \infty)$ as well as on $(-\infty, -X)$. We write $\lim_{x \to t}\phi(x) = c, a.e.$ if there exists a zero measure set $Z \subset \RR$ such that $\lim_{x \notin Z, x \to t}\phi(x) = c$. The following lemma generalizes [@Funahashi]\[Lemma 1\] from the case of monotone functions to the case of eventually monotone ones. This generalization is partially motivated by elu, a popular activation function proposed in [@elu]. In section \[unbounded\] we shall see the benefit of the notion of eventually monotone functions for investigating elu networks. Let $\Delta^1_{\varrho}$ be the first difference operator with step $\varrho$ defined by $\Delta_{\varrho}^1[f](x) = f(x+\varrho)-f(x)$, and let $\Delta^n_{\varrho} = \Delta^1_{\varrho} \circ \Delta^{n-1}_{\varrho}$ for all integer $n>1$. \[integrable unit\] Let $\phi: \RR \to \RR$ be measurable, essentially bounded, eventually essentially monotone and with $\lim_{x \to \infty} \phi(x) \ a.e. \neq \lim_{x \to -\infty} \phi(x) \ a.e.$. If $\varrho \in \RR$ and $p \in [1, \infty]$ then $\Delta_{\varrho}^1[\phi]\in L^p(\RR)$. Moreover if $\varrho \neq 0$ then $\int_{\RR} \Delta_{\varrho}^1[\phi](x) dx \neq 0.$ Without loss of generality we suppose $\lim_{x \to \infty}\phi(x) = 1$, $\lim_{x \to -\infty}\phi(x) = 0$ and $||\phi||_{L^{\infty}(\RR)} = C$. The first claim of our lemma is obvious for $p = \infty$.
Now take $p \in [1, \infty)$ and $\varrho\in\RR$. There exists $X>0$ such that $|\Delta_{\varrho}^1[\phi]|$ is essentially less than 1 on $\RR \setminus (-X,X)$ and that $\phi$ is essentially monotone on $[X-\varrho, \infty)$ as well as on $(-\infty, -X+\varrho]$. The function $\Delta_{\varrho}^1[\phi]$ is bounded in absolute value by $2C$, and $\Delta_{\varrho}^1[\phi]$ is essentially non-positive (or essentially non-negative) on $(-\infty, -X)$ as well as on $(X, \infty)$. For any $p \in [1, \infty)$ $$\begin{aligned} \int_{-\infty}^{+\infty} |\Delta_{\varrho}^1[\phi](x)|^p dx &= \int_{-\infty}^{-X} |\Delta_{\varrho}^1[\phi](x)|^p dx+\int_{-X}^{+X} |\Delta_{\varrho}^1[\phi](x)|^p dx+\int_{X}^{+\infty} |\Delta_{\varrho}^1[\phi](x)|^p dx \\ &\leq \int_{-\infty}^{-X} |\Delta_{\varrho}^1[\phi](x)| dx+2^{p+1}XC^p+\int_{X}^{+\infty} |\Delta_{\varrho}^1[\phi](x)| dx \\ &= \left|\int_{-\infty}^{-X} \Delta_{\varrho}^1[\phi](x) dx\right|+2^{p+1}XC^p+\left|\int_{X}^{+\infty} \Delta_{\varrho}^1[\phi](x) dx\right| \\ &= \lim_{M \to \infty} \left|\int_{-M}^{-X} \Delta_{\varrho}^1[\phi](x) dx\right| + \lim_{M \to \infty} \left|\int_{X}^{M} \Delta_{\varrho}^1[\phi](x) dx\right| + 2^{p+1}XC^p \\ &= \lim_{M \to \infty} \left|\int_{-X}^{-X+\varrho}\phi(x) dx - \int_{-M}^{-M+\varrho}\phi(x) dx \right|+\\ &\lim_{M \to \infty} \left|\int_{M}^{M+\varrho}\phi(x) dx - \int_{X}^{X+\varrho}\phi(x) dx \right| + 2^{p+1}XC^p \\ & = \left|\int_{-X}^{-X+\varrho} \phi(x) dx \right|+\left|\varrho - \int_{X}^{X+\varrho} \phi(x) dx \right| + 2^{p+1}XC^p \\ &< \infty.\end{aligned}$$ Therefore for all $\varrho \in \RR$ and $p \in [1, \infty)$ we have $\Delta_{\varrho}^1[\phi] \in L^p(\RR)$. Moreover $$\begin{aligned} \int_{-\infty}^{+\infty} \Delta_{\varrho}^1[\phi](x) dx &= \lim_{Y \to \infty} \int_{-Y}^{Y} (\phi(x+\varrho) - \phi(x)) dx \\ &= \lim_{Y \to \infty} \left(\int_{Y}^{Y+\varrho} \phi(x) dx - \int_{-Y}^{-Y+\varrho} \phi(x) dx\right) \\ &= \varrho.\end{aligned}$$ This proves our lemma.
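The conclusion of this lemma is easy to check numerically. A minimal sketch in Python, taking the logistic sigmoid as a concrete stand-in for $\phi$ (our choice of example, not one used in the paper): the sigmoid is bounded and monotone with limits $0$ and $1$, so its first difference should be integrable with integral exactly $\varrho$.

```python
import numpy as np

# Hypothetical instance of the lemma: phi is the logistic sigmoid, bounded,
# monotone, with limits 0 at -infinity and 1 at +infinity.
def phi(x):
    return 1.0 / (1.0 + np.exp(-x))

def first_difference(f, rho):
    # Delta^1_rho[f](x) = f(x + rho) - f(x)
    return lambda x: f(x + rho) - f(x)

rho = 1.7
x = np.linspace(-100.0, 100.0, 2_000_001)   # step 1e-4; the tails are negligible
vals = first_difference(phi, rho)(x)

# Composite trapezoidal rule; the lemma predicts the integral equals rho.
dx = x[1] - x[0]
integral = dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
print(abs(integral - rho) < 1e-6)  # True
```

The same check with any other bounded eventually monotone function with distinct limits (and any nonzero step $\varrho$) should likewise return the step value.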
\[sigmoid\] Let $\phi: \RR \to \RR$ be measurable, essentially bounded, eventually essentially monotone and with $\lim \limits_{x \to \infty} \phi(x) \ a.e. \neq \lim\limits_{x \to -\infty} \phi(x) \ a.e.$. For $p \in [1, +\infty)$ $$\begin{aligned} \overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR).\end{aligned}$$ By Lemma \[integrable unit\], there exists $\varrho$ such that $\Delta_{\varrho}^1[\phi] \in L^1(\RR) \cap L^p(\RR)$ and $\int_{-\infty}^{+\infty}\Delta_{\varrho}^1[\phi](x) dx \neq 0.$ By Lemma \[lemmasw\], $\mathcal{S}_{1}(\Delta_{\varrho}^1[\phi])$ is dense in $L^{p}(\RR)$. Together with the simple fact $\mathcal{S}_{1}(\Delta_{\varrho}^1[\phi]) \subset \mathcal{S}_{1}(\phi)$ we conclude that $\overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR).$ This lemma together with Theorem \[maintheorem\] leads to \[theorembounded11\] Let $\Omega = \RR \times [0, 1]^n$, $\phi: \RR \to \RR$ be measurable, essentially bounded, eventually essentially monotone and with $\lim \limits_{x \to \infty} \phi(x) \ a.e. \neq \lim\limits_{x \to -\infty} \phi(x) \ a.e.$. For $p \in [2, +\infty)$, $$\begin{aligned} \overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).\end{aligned}$$ To investigate expressivity of neural networks with sigmoid activations on $L^p(\RR \times \RR^+)$, we need the following lemma. \[lemma1\] If a continuous function $f: \RR \to \RR$ satisfies $\lim_{x \to \infty}f(x) \in \RR$ and $\lim_{x \to -\infty}f(x) \in \RR$, then $f$ is uniformly continuous. If this is not the case then there exist some positive $\epsilon$ and $x_i, y_i \in \RR$ $(i \in \NN_{\ge 1})$ such that $|x_i-y_i|<1/i$ and $|f(x_i)-f(y_i)|>\epsilon$. As $\lim\limits_{x\to \pm\infty}f(x) \in \RR$, there exists $X>1$ such that if $\{x, y\} \subset (-\infty, -X)$ or if $\{x, y\} \subset (X, +\infty)$ then $|f(x)-f(y)|<\epsilon.$ This forces $ \{x_i, y_i\} \subset (-X-1/i, X+1/i)$ for all $i$.
This contradicts the fact that $f$ is uniformly continuous on the compact interval $[-X-1, X+1]$. We use Proposition \[prop\] to prove \[theorembounded111\] Let $\Omega = \RR \times \RR^+$ and $\phi: \RR \to \RR$ be measurable, essentially bounded, with both $\lim_{x \to \infty} \phi(x) \ a.e.$ and $\lim_{x \to -\infty} \phi(x) \ a.e.$ well-defined in $\RR$. For $p \in [1, +\infty)$, $$\begin{aligned} \mathcal{S}_{2}(\phi) \cap L^{p}(\Omega)= {0}.\end{aligned}$$ Suppose there are $t_i, \varrho_i \in \RR$ and non-zero $y_i \in \RR^2$ such that $$\begin{aligned} \Phi =t_0+ \sum_{i=1}^n t_i\phi^{\tau_{\varrho_i}\delta_{y_i}}\end{aligned}$$ and that $\Phi|_{\Omega} \in L^p(\Omega)$. Pick a smooth function $\rho \in L^1(\RR^2)$ that satisfies $\int_{\RR^2}\rho(x)\mathrm{d}x = 1$ and ${\operatorname{supp}}{\rho} \subset \{x=(x_1, x_2) \in \RR \times \RR^- : |x|<1 \},$ and define $\rho_{\epsilon}$ by $\rho_{\epsilon}(x) = \epsilon^{-2}\rho(x/\epsilon)$ for $\epsilon>0$. For all $z = (z_1, z_2) \in \Omega$, $$\begin{aligned} \label{singleconv} \phi^{\tau_{\varrho_i}\delta_{y_i}} * \rho_{\epsilon} (z) &= \int_{\RR^2} \phi^{\tau_{\varrho_i}\delta_{y_i}}(z-x) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \nonumber \\ &= \int_{\RR^2} \phi(\langle y_i, z \rangle -\langle y_i, x\rangle + \varrho_i ) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \nonumber \\ &= \phi_{\epsilon, i}^{\delta_{y_i}}(z)\end{aligned}$$ where $\phi_{\epsilon, i}$ are given by $$\begin{aligned} \phi_{\epsilon, i}(s) = \int_{\RR^2} \phi(s -\langle y_i, x\rangle + \varrho_i ) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2.\end{aligned}$$ Summing (\[singleconv\]) for all $i$ we have $$\begin{aligned} \label{globalconv} \Phi * \rho_{\epsilon} = t_0 + \sum_{i=1}^n t_i\phi_{\epsilon, i}^{\delta_{y_i}}.\end{aligned}$$ Because $\phi$ is essentially bounded and $\rho_{\epsilon}$ is integrable, we can use the dominated convergence theorem to prove that $\phi_{\epsilon, i}$ are continuous and that $\lim_{s \to \pm
\infty}\phi_{\epsilon, i}(s) = \lim_{s \to \pm \infty} \phi(s) \ a.e.$. By Lemma \[lemma1\], $\phi_{\epsilon, i}$ is uniformly continuous. As a composition of $\phi_{\epsilon, i}$ with a linear function (which is uniformly continuous), $\phi_{\epsilon, i}^{\delta_{y_i}}$ is also uniformly continuous. Consequently $\Phi * \rho_{\epsilon}$ is uniformly continuous. Next we want to verify that $\Phi * \rho_{\epsilon}|_{\Omega} \in L^p(\Omega)$. As by assumption we have only $\Phi|_{\Omega} \in L^p(\Omega)$ but not $\Phi \in L^p(\RR^2)$, the argument is slightly longer than one might expect. Define $\overline{\Phi} = \Phi I_{\Omega}$; then $\overline{\Phi} \in L^p(\RR^2)$. By the fact that $\rho_{\epsilon}|_{\Omega}=0$ and the fact that if $z \in \Omega, x \in \RR \times \RR^-$ then $z-x \in \Omega$, for $z \in \Omega$ we have $$\begin{aligned} \overline{\Phi} * \rho_{\epsilon} (z) &= \int_{\RR \times \RR} \overline{\Phi}(z-x) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \\ &= \int_{\RR \times \RR^-} \overline{\Phi}(z-x) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \\ &= \int_{\RR \times \RR^-} \Phi(z-x) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \\ &= \int_{\RR \times \RR} \Phi(z-x) \rho_{\epsilon}(x)\mathrm{d}x_1 \mathrm{d}x_2 \\ &= \Phi * \rho_{\epsilon} (z).\end{aligned}$$ Therefore $\overline{\Phi} * \rho_{\epsilon}|_{\Omega} = \Phi * \rho_{\epsilon}|_{\Omega}$. By [@measure]\[Theorem 3.9.4\], $\overline{\Phi} * \rho_{\epsilon} \in L^p(\RR^2)$, and also $\overline{\Phi} * \rho_{\epsilon}|_{\Omega} \in L^p(\Omega)$. Now we have proved that $\overline{\Phi} * \rho_{\epsilon}$ is uniformly continuous and that $\overline{\Phi} * \rho_{\epsilon}|_{\Omega} \in L^p(\Omega)$. The condition (\[star\]) in Proposition \[prop\] is also satisfied, due to the fact that $\phi$ has finite limits at infinity. Applying the proposition to (\[globalconv\]) we have $ \Phi * \rho_{\epsilon}|_{\Omega} =0$. This leads to $\overline{\Phi} * \rho_{\epsilon}|_{\Omega} =0$.
As $\epsilon$ was arbitrarily chosen, this is true for all $\epsilon>0$. By [@measure]\[Theorem 4.24\] and the fact that $\overline{\Phi} \in L^p(\RR^2)$, $\lim\limits_{\epsilon \to 0} \overline{\Phi} * \rho_{\epsilon} = \overline{\Phi}$ in $L^p(\RR^2)$ and in particular $\lim\limits_{\epsilon \to 0} \overline{\Phi} * \rho_{\epsilon}|_{\Omega} = \overline{\Phi}|_{\Omega}$ in $L^p(\Omega)$. This leads to $\overline{\Phi}|_{\Omega} = 0$. By definition $\overline{\Phi}|_{\Omega} = \Phi|_{\Omega} $ and we have $\Phi|_{\Omega} = 0$ as desired. To sum up, Corollary \[theorembounded11\], Corollary \[theorembounded111\] and Fubini’s theorem give \[theorem1\] Let $K \subset \RR^n$ be compact, $\Omega = \RR \times K$, $\phi: \RR \to \RR$ be measurable, essentially bounded, eventually essentially monotone and with $\lim \limits_{x \to \infty} \phi(x) \ a.e. \neq \lim\limits_{x \to -\infty} \phi(x) \ a.e.$. For $p \in [2, +\infty)$, $$\begin{aligned} \overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).\end{aligned}$$ Let $U \subset \RR^n$ be measurable, $\Omega = \RR^2 \times U$ or $ \Omega = \RR \times \RR^+ \times U$, $\phi: \RR \to \RR$ be measurable, essentially bounded, with both $\lim_{x \to \infty} \phi(x) \ a.e.$ and $\lim_{x \to -\infty} \phi(x) \ a.e.$ well-defined in $\RR$. For $p \in [1, +\infty)$, $$\begin{aligned} \mathcal{S}_{n+2}(\phi) \cap L^{p}(\Omega) = 0.\end{aligned}$$ Popular bounded activation functions used in the literature or in practice are sigmoids ([@Cybenko]), which are measurable $\phi$ with $\lim\limits_{t \to \infty}\phi(t) = 1$ and $\lim\limits_{t \to -\infty}\phi(t) = 0$. The most popular ones are monotone sigmoids ([@Hornik1989] and [@Funahashi]). By Theorem \[theorem1\] we have Let $\Omega = \RR \times [0, 1]^n$ and $\phi$ a monotone sigmoid. For $p \in [2, +\infty)$, $$\begin{aligned} \overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).\end{aligned}$$ Let $\Omega = \RR^2 \times [0, 1]^n$ and $\phi$ a sigmoid.
For $p \in [1, +\infty)$, $$\begin{aligned} \mathcal{S}_{n+2}(\phi) \cap L^{p}(\Omega) = 0.\end{aligned}$$ Unbounded activation functions {#unbounded} ============================== Traditionally, activation functions employed in neural networks were bounded functions such as sigmoids and rbf. Recently, unbounded activation functions, particularly relu [@relu], have become very popular. In this section, for $p \in [2, \infty)$ and for many popular unbounded activation functions, we shall prove that the corresponding shallow networks are universal approximators in $L^p(\RR \times [0,1]^n)$. Let $n \in \NN^{+}$ and let $\phi: \RR \to \RR$ be eventually $n+1$ times differentiable. If $\phi^{(n)}$ is eventually monotone then $\Delta_1^n[\phi]$ is eventually monotone. It suffices to check $(\Delta_1^n[\phi])'$ near infinity. By the fact that $\phi$ is eventually $n+1$ times differentiable, there exists $X_1>0$ such that if $|x|>X_1$ then $$\begin{aligned} \label{difference} (\Delta_1^{n}[\phi])'(x) = \Delta_1^{n}[\phi'](x) = \phi^{(n+1)}(\overline{x})\end{aligned}$$ for some $\overline{x} \in [x, x+n].$ By the fact that $\phi^{(n)}$ is eventually monotone and eventually $n+1$ times differentiable, there exists $X_2>0$ such that $\phi^{(n+1)}|_{(X_2, \infty)}$ and $\phi^{(n+1)}|_{(-\infty, -X_2)}$ are non-positive (or non-negative). Set $X = \max{\{X_1, X_2+n\}}$. By (\[difference\]), $(\Delta_1^{n}[\phi])'|_{(X, \infty)}$ and $(\Delta_1^{n}[\phi])'|_{(-\infty, -X)}$ are non-positive (or non-negative). This implies that $\Delta_1^n[\phi]$ is eventually monotone. \[relu\] Let $p \in [1, +\infty)$ and $\phi$ be one of the following activation functions: 1. relu: $\phi(x) = x \cdot I_{(0, +\infty)},$ 2. elu: $\phi(x) = \alpha (e^{x}-1) \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ 3. softplus: $\phi(x) = \log(e^{x}+1),$ 4.
leaky relu: $\phi(x) = \alpha x \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ where in elu $\alpha \in \RR$ and in leaky relu $\alpha \neq 1$, then $$\begin{aligned} \overline{\mathcal{S}_{1}(\phi) \cap L^{p}(\RR)} = L^{p}(\RR).\end{aligned}$$ If $\phi$ falls into any of the above cases, then $\phi'$ is eventually monotone. According to the previous lemma, $\Delta_1^1[\phi]$ is also eventually monotone. Moreover in each case the limit of $\Delta_1^1[\phi]$ at infinity satisfies 1. relu: $\lim \limits_{x\to +\infty}\Delta_1^1[\phi](x) = 1, \lim \limits_{x\to -\infty}\Delta_1^1[\phi](x) = 0,$ 2. elu: $\lim \limits_{x\to +\infty}\Delta_1^1[\phi](x) = 1, \lim \limits_{x\to -\infty}\Delta_1^1[\phi](x) = 0,$ 3. softplus: $\lim \limits_{x\to +\infty}\Delta_1^1[\phi](x) = 1, \lim \limits_{x\to -\infty}\Delta_1^1[\phi](x) = 0,$ 4. leaky relu: $\lim \limits_{x\to +\infty}\Delta_1^1[\phi](x) = 1, \lim \limits_{x\to -\infty}\Delta_1^1[\phi](x) = \alpha.$ Therefore $\Delta_1^1[\phi]$ is bounded with different limits at $\pm \infty$. Applying Lemma \[sigmoid\], we obtain that $\overline{\mathcal{S}_{1}(\Delta_1^1[\phi]) \cap L^p(\RR)} = L^p(\RR)$. As $\mathcal{S}_{1}(\Delta_1^1[\phi]) \subset \mathcal{S}_{1}(\phi)$, this leads to $\overline{\mathcal{S}_{1}(\phi) \cap L^p(\RR)} = L^p(\RR)$ as desired. This lemma together with Theorem \[maintheorem\] leads to \[theorembounded\] Let $p \in [2, +\infty), \Omega = \RR \times [0, 1]^n$ and $\phi$ be one of the following activation functions: 1. relu: $\phi(x) = x \cdot I_{(0, +\infty)},$ 2. elu: $\phi(x) = \alpha (e^{x}-1) \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ 3. softplus: $\phi(x) = \log(e^{x}+1),$ 4.
leaky relu: $\phi(x) = \alpha x \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ where in elu $\alpha \in \RR$ and in leaky relu $\alpha \neq 1$, then $$\begin{aligned} \overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).\end{aligned}$$ To investigate representability of neural networks with unbounded activations on $L^p(\RR \times \RR^+)$, we need the following lemmas. If a continuous function $\phi: \RR \to \RR$ is eventually uniformly continuous, then $\phi$ is uniformly continuous. There exists $X>0$ such that on the intervals $J_1=(-\infty, -X), J_2=(-X-1, X+1), J_3=(X, \infty)$, $\phi|_{J_j}(1 \leq j \leq 3)$ are all uniformly continuous. Suppose the lemma is not true. There exist some positive $\epsilon$ and $x_i, y_i \in \RR$ $(i \in \NN_{\ge 1})$ such that $|x_i-y_i|<1/i$ and $|f(x_i)-f(y_i)|>\epsilon$. Each pair $\{x_i, y_i\}$ is contained in at least one of $J_j(1 \leq j \leq 3)$, hence there exists $J \in \{J_1, J_2, J_3\}$ and a subsequence $i_k(k \in \NN_{\ge 1})$ of $\NN_{\ge 1}$ such that $\{x_{i_k}, y_{i_k}\} \subset J$ for all $k$. This contradicts the fact that $\phi$ is uniformly continuous on $J$. \[Lipschitz\] If a continuous function $\phi: \RR \to \RR$ is eventually Lipschitz, then $\phi$ is uniformly continuous. Being eventually Lipschitz implies being eventually uniformly continuous. This lemma then follows from the previous one. With the above lemmas we have \[theorembounded2\] Let $p \in [1, +\infty), \Omega = \RR \times \RR^+$ and $\phi$ be one of the following activation functions: 1. relu: $\phi(x) = x \cdot I_{(0, +\infty)},$ 2. elu: $\phi(x) = \alpha (e^{x}-1) \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ 3. softplus: $\phi(x) = \log(e^{x}+1),$ 4.
leaky relu: $\phi(x) = \alpha x \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ where in elu $\alpha \in \RR$ and in leaky relu $\alpha \neq 1$, then $$\begin{aligned} \mathcal{S}_{2}(\phi) \cap L^{p}(\Omega)=0.\end{aligned}$$ By estimating derivatives at $\pm \infty$, relu, elu, softplus and leaky relu are all eventually Lipschitz. By Lemma \[Lipschitz\], they are all uniformly continuous. Suppose there are $k \in \NN, y_i \in \RR^2, t_i \in \RR, \varrho_{i} \in \RR$ such that $F=\sum_{i=1}^{k} (t_i\phi^{\tau_{\varrho_i}})^{\delta_{y_i}} \in L^{p}(\Omega).$ As a linear combination of compositions of the uniformly continuous $\phi$ with linear functions (which are also uniformly continuous), $F$ is uniformly continuous. Moreover, in case $\phi$ is one of the above activation functions, for all $s_i, \beta_i \in \RR$ there exist $C \in \RR$ and $G:\RR \to \RR$ such that $$\begin{aligned} \sum_{i=1}^k s_i t_i\phi^{\tau_{\varrho_i}}(\beta_i x) & = Cx+G(x) \\ \lim_{x \to \infty} G(x) & = 0.\end{aligned}$$ This implies that the condition (\[star\]) of Proposition \[prop\] is satisfied. Applying the proposition we have $F=0.$ To sum up, Corollary \[theorembounded\], Corollary \[theorembounded2\] and Fubini’s theorem give \[theorem2\] Let $\phi$ be one of the following activation functions: 1. relu: $\phi(x) = x \cdot I_{(0, +\infty)},$ 2. elu: $\phi(x) = \alpha (e^{x}-1) \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ 3. softplus: $\phi(x) = \log(e^{x}+1),$ 4. leaky relu: $\phi(x) = \alpha x \cdot I_{(-\infty, 0]}+x \cdot I_{(0, +\infty)},$ where in elu $\alpha \in \RR$ and in leaky relu $\alpha \neq 1$. Let $K \subset \RR^n$ be compact and let $\Omega = \RR \times K$. If $p \in [2, +\infty)$ then $$\begin{aligned} \overline{\mathcal{S}_{n+1}(\phi) \cap L^{p}(\Omega)} = L^{p}(\Omega).\end{aligned}$$ Let $U \subset \RR^n$ be measurable and let $\Omega = \RR^2 \times U$ or $ \Omega = \RR \times \RR^+ \times U$.
If $p \in [1, +\infty)$ then $$\begin{aligned} \mathcal{S}_{n+2}(\phi) \cap L^{p}(\Omega) = 0.\end{aligned}$$ It is currently a hot topic to justify the benefit of deep networks (see for instance [@depth]) over shallow ones. Classical universal approximation theorems tell us that shallow networks carrying almost any popular activation function are already universal approximators in popular function spaces on bounded domains. Indeed, most authors have explored the benefit of depth in a quantitative way. On unbounded domains there might be more opportunities, in qualitative ways, to explore the benefit of depth. For instance, a result of this paper and results of [@width] or [@deeprelu] give the following example. Let $n \ge 2$ and $p \in[1, \infty)$. The shallow relu network expresses no non-zero function in $L^p(\RR^n)$, while the deep relu network is a universal approximator in $L^p(\RR^n)$. [99]{} R. Arora, A. Basu, P. Mianjy and A. Mukherjee, “Understanding deep neural networks with rectified linear units,” ICLR, 2018. T. Chen, H. Chen and R.W. Liu, “Approximation Capability in $C(\overline{R}^n)$ by Multilayer Feedforward Networks and Related Problems,” IEEE Transactions on Neural Networks, Vol. 6, pp. 25-30, 1991. D.-A. Clevert, T. Unterthiner and S. Hochreiter, “Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs),” Proceedings of the International Conference on Learning Representations (ICLR), 2016. G. Cybenko, “Approximations by superpositions of sigmoidal functions,” Mathematics of Control, Signals, and Systems, Vol. 2, pp. 303-314, 1989. K. Funahashi, “On the approximate realization of continuous mappings by neural networks,” Neural Networks, Vol. 2, pp. 183-191, 1989. K. Hornik, M. Stinchcombe, H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, Vol. 2, pp. 359-366, 1989. K. Hornik, “Approximation Capabilities of Multilayer Feedforward Networks,” Neural Networks, Vol. 4, pp. 251-257, 1991. Y.
Ito, “Approximation of continuous functions on $\RR^d$ by linear combinations of shifted rotations of a sigmoid function with and without scaling,” Neural Networks, Vol. 5, pp. 105-115, 1992. W. Light, “Ridge Functions, Sigmoidal Functions and Neural Networks,” Approximation Theory VII, pp. 163-206, Academic Press, 1993. Z. Lu, H. Pu, F. Wang, Z. Hu, and L. Wang, “The expressive power of neural networks: A view from the width,” Advances in Neural Information Processing Systems, pp. 6232-6240, 2017. V. Nair and G.E. Hinton, “Rectified linear units improve restricted Boltzmann machines,” Proceedings of the 27th International Conference on Machine Learning (ICML10), pp. 807-814, 2010. J. Park and I.W. Sandberg, “Universal Approximation Using Radial-Basis-Function Networks,” Neural Computation, Vol. 3, pp. 246-257, 1991. M. Reed, B. Simon, Methods of Modern Mathematical Physics, I: Functional Analysis, Academic Press, 1980. M. Stinchcombe, H. White, “Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions,” International 1989 Joint Conference on Neural Networks, pp. 613-617, 1989. M. Telgarsky, “Benefits of depth in neural networks,” JMLR, Vol. 49, pp. 1-23, 2016. V. Bogachev, Measure Theory, Vol. I, Berlin, Heidelberg, New York: Springer-Verlag, 2007. Y. Qu and M.X. Wang, “The option pricing model based on time values: an application of the universal approximation theory on unbounded domains,” arXiv:1910.01490. [^1]: Corresponding author
--- author: - Sarah Klein - 'Cécile Appert-Rolland' - Ludger Santen bibliography: - 'biblio.bib' title: 'Fluctuation effects in bidirectional cargo transport.' --- Introduction ============ [Many cellular functions depend on active transport processes, which are driven by molecular motors. Molecular motors are proteins which are able to perform directed motion along the intracellular network of biopolymers, i.e. the cytoskeleton. The cytoskeleton consists of three different kinds of filaments - microtubules, actin filaments and intermediate filaments. Transport on large length scales occurs mainly along microtubules from the cell center to the membrane and *vice versa* [@alberts2002]. Microtubules are long, polarized biopolymers with a well-defined plus- and minus-end. Molecular motors, in the case of microtubules the families of kinesins and dyneins, can perform steps preferentially to the plus-end (kinesin) or to the minus-end (dynein), under the consumption of adenosine triphosphate (ATP). In recent years much knowledge has been accumulated in particular with respect to the properties of single molecular motors [@Carter; @vale2000]. Despite this progress many rather fundamental questions of motor-driven transport are still not answered. This is particularly true for systems of interacting motor proteins, as for example a cargo that is driven by teams of molecular motors. Depending on the configuration of the attached motors, cargos can be transported uni- or bidirectionally [@welte1998; @soppina2009]. Actually,]{} for several kinds of cargos - for example for endosomes [@soppina2009], mitochondria [@hollenbeck_s2005] or lipid droplets [@welte1998] - it was observed that they move in a saltatory manner: the trajectories show pieces with persistent motion in one direction and then sudden turns in the other direction.
[These properties of the cargo trajectories evidence that both types of motors can apply forces on the cargo.]{} Besides, the motor-cargo complex shows interesting statistical properties which have been characterized as anomalous diffusive behavior. The main point of dispute is whether a coordination mechanism is needed to control the interplay between the two teams of motors [@welte2004] or if stochastic fluctuations are sufficient to produce this kind of cargo motion [@mueller_k_l2008]. We explore in this work whether the second scenario can explain the observed characteristics of cargo trajectories. In experiments the cargo motion is often characterized via the mean square displacement (MSD) $ \langle (X(t + \Delta t) - X(t))^2\rangle$ of cargo trajectories $X(t)$. The brackets here indicate the average over $t$. For ballistic motion the MSD is proportional to $\Delta t^2$, while it is linear in $\Delta t$ for the purely diffusive case without bias. In several [*in vivo* ]{}experiments [@kulic2008; @caspi_g_e2002; @salman2002] it was detected that the cargo’s MSD shows a time-dependence $\Delta t^\gamma$ with exponents $\gamma<1$ and $1<\gamma<2$, depending on the time scale ([anomalous diffusion]{}). In fact, the [time dependence of the MSD is difficult to interpret for finite times. Indeed, apparent superdiffusion ($\gamma >1$) may originate either from a biased but uncorrelated motion of the cargo or indicate positive temporal correlations of the cargo’s displacements.
In the latter case it is, compared to normal diffusion, more likely that the cargo continues its motion in the same direction.]{} [The difference between these two kinds of particle motion can easily be identified by analyzing the variance $$\begin{aligned} \label{vari} {\mathbb V\mathrm{ar}}[X(\Delta t)] = \langle (X(t + \Delta t) - X(t))^2 \rangle - \langle(X(t + \Delta t) - X(t))\rangle^2\end{aligned}$$ instead of the MSD.]{} We showed in a preceding publication [@EPL14] that a cargo transported by two teams of motors with different single-motor properties most of the time exhibits biased motion, so that variance and MSD are not equal. We shall focus on the variance throughout this paper. It was found in [*in vivo* ]{}experiments [@kulic2008; @salman2002] that for small time lags the cargo moves mainly subdiffusively and crosses over to superdiffusive motion for intermediate time lags. On long time scales, much longer than the average turning time, the motion becomes (sub)diffusive again [@caspi_g_e2002]. It was suggested that the cargo [exhibits anomalous diffusive behavior]{} because of the inner cellular structure that presents several obstacles which can impede the cargo’s motion [@weiss2004; @caspi_g_e2002]. Here we want to study the statistics of cargo trajectories in the absence of such additional effects due to the inner cell structure. Another explanation of the observed anomalous diffusion could be the motion of microtubules which bend and then relax again and thus add a velocity component to the cargo motion [@kulic2008]. Also the underlying network can influence the MSD or variance. On a branched network a purely ballistic motion also shows ${\mathbb V\mathrm{ar}}[X] \sim \Delta t^\gamma$ with $1<\gamma<2$ depending on the turning angle distribution [@reza2014]. In this work, since we want to exclude all network effects, we propagate the cargo along a static one-dimensional track.
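The distinction between MSD and variance drawn above can be illustrated with a toy model. The sketch below is not the motor-cargo model of this paper, just a biased random walk with arbitrary illustrative parameters; it shows that a constant drift alone inflates the MSD toward $\sim \Delta t^2$ while the variance stays linear in $\Delta t$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 20000 independent biased 1D random walks,
# step +1 with probability 0.6 and -1 with probability 0.4.
n_walkers, n_steps = 20000, 400
steps = rng.choice([1, -1], p=[0.6, 0.4], size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)                      # X(t), with X(0) = 0

msd = np.mean(x.astype(float) ** 2, axis=0)       # <X(t)^2>
var = msd - np.mean(x, axis=0) ** 2               # Var[X(t)] = <X^2> - <X>^2

# Effective exponents between t = 100 and t = 400 steps:
gamma_msd = np.log(msd[-1] / msd[99]) / np.log(n_steps / 100)
gamma_var = np.log(var[-1] / var[99]) / np.log(n_steps / 100)
```

For this walk ${\mathbb V\mathrm{ar}}[X(t)] = 0.96\,t$ exactly while the MSD is $0.96\,t + 0.04\,t^2$, so `gamma_var` stays close to 1 and `gamma_msd` is pushed toward 2 purely by the bias, without any temporal correlation of the displacements.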
The model introduced in this article describes bidirectional cargo transport mediated by two different teams of molecular motors. We will show that subdiffusive (${\mathbb V\mathrm{ar}}[x_C(t)] \sim \Delta t^\gamma$ with $\gamma<1$) motion occurs at small time scales if the thermal fluctuations of the cellular environment are taken into account. Moreover, superdiffusive ($1<\gamma<2$) motion occurs at longer time scales, and no further interaction with the environment is needed to observe this anomalous diffusion. In a second part, we analyze how the motion is influenced by a viscous barrier in the system representing highly crowded areas. [Having two teams of motors attached to the cargo provides in this scenario an efficient mechanism to pass highly crowded areas.]{} Model ===== In this work we introduce a model to describe bidirectional cargo transport by teams of molecular motors. Each motor consists of a head which is bound to the filament at position $x_i$ and a tail, the so-called neck linker, connecting the head and the cargo. To calculate the force $F_i$ produced by these motors and applied on the cargo we take the position of each single motor head into account as shown in Fig. \[skizze\]. [This kind of model for motor-cargo complexes has already been ]{}introduced in [@korn2009; @kunwar2008] and [@kunwar2011] with one and two teams of motors, respectively. Since we want to compare our simulation results with [*in vivo* ]{}experiments, we model the differences between the two kinds of motors in detail. We analyze the motion of a cargo at position $x_C(t)$ at time $t$ pulled by two teams of molecular motors, each consisting of $N=5$ motors. We assume that the neck linker of a motor can be modeled as a linear spring with spring constant $\alpha$ and an untensioned length $L_0$ such that the motor exerts no force on the cargo as long as $|x_C(t) - x_i|<L_0$.
The motors are tightly bound to the cargo but can detach with a force-dependent rate $k_d^\pm(F_i)$ from the filament, where the superscript $\pm$ gives the rate for $+$ and $-$ motors, respectively. Once detached from the filament, the motors can [reattach to the filament with a constant rate $k_a$. The motors attach within a region $x_C(t)\pm L_0$ and therefore apply no force.]{} We introduce this untensioned length because a motor that binds to the filament will not directly apply a force, since the motor’s neck is stretched not spontaneously but by the motion of the motor’s head along the filament. The total force applied on the cargo by the $n_+ / n_-$ pulling $+/-$-motors then reads $$\begin{aligned} F(x_C(t),\{x_i\}) &= \sum_{i=1}^{n_+ + n_-} F_i\left(x_i-x_C(t)\right) \\ F_i\left(x_i-x_C(t)\right) &= \begin{cases} \alpha\left(x_i-x_C(t)+L_0\right), & x_i-x_C(t)<-L_0 \\ 0, & |x_i-x_C(t)|<L_0 \\ \alpha\left(x_i-x_C(t)-L_0\right), & x_i-x_C(t)>L_0 \end{cases}\end{aligned}$$ As long as the motor is bound to the filament it can perform steps with the force-dependent rate $s^\pm(F_i)$. The model’s dynamics are schematically represented in Fig. \[skizze\]. As already mentioned in the introduction[ the motor properties of the two teams are different if realistic biological parameters are used]{}. That is why we use expressions for the stepping [@schnitzer_v_b2000; @toba2006] and detachment [@kunwar2011] rates that are based on experimental results, and also take the influence of the ATP concentration into account. The detachment rate for plus motors (kinesin) is given by $$\begin{aligned} \label{kinesin} k_d^+(F_i) = \begin{cases} k_d^0 \exp\left(\frac{|F_i|}{2.5f}\right) &F_i<F_S \\ k_d^0 \left(0.186\frac{|F_i|}{f} + 1.535\right) &F_i \geq F_S \end{cases}\end{aligned}$$ and for minus motors (dynein) by $$\label{dynein} k_d^-(F_i) = \begin{cases} k_d^0 \exp\left(\frac{|F_i|}{2.5f}\right) \! &F_i>-F_S \\ k_d^0 \left[1.5\left(1-\exp\left(\frac{-|F_i|}{1.97f}\right)\right)\right]^{-1} \!
&F_i\leq -F_S \end{cases}.$$ with the force-free detachment rate $k_d^0$ [@kunwar2011][^1]. In eqs. (\[kinesin\]) and (\[dynein\]) we introduce two force scales: the stall force $F_S$, which is the maximum force under which the stepping rate in the preferred motor direction does not vanish, and a normalization force $f = 1$ pN to obtain the right units. For the stepping rate $s^\pm(|F_i|,[ATP])$ in the region of forces smaller than the stall force we use a two-state Michaelis-Menten equation as suggested in [@schnitzer_v_b2000] $$\label{MMe} s (|F_i|,[ATP])= \frac{k_\text{cat}(|F_i|)[ATP]}{[ATP] + {k_\text{cat}(|F_i|)}{k_\text{b}(|F_i|)}^{-1}},$$ with the catalytic-turnover rate constant $k_\text{cat}(|F_i|)$ and a second-order rate constant for ATP binding $k_\text{b}(|F_i|)$. Schnitzer *et al.* [@schnitzer_v_b2000] also introduce a Boltzmann-type force relation for the rate constants $$\label{delta} k_j(F_i) = \frac{k_j^0}{p_j + q_j \exp(\beta F_i\Delta)} \ \ \ \ \ \ j=\{\text{cat, b} \}$$ with constants $k_j^0$, $p_j + q_j = 1$, $\beta = (k_B T)^{-1}$ and $\Delta$ (see [@schnitzer_v_b2000] for more details). It was measured for kinesin [@schnitzer_v_b2000] and dynein [@toba2006] that the stepping rate, depending on \[ATP\] and the load force $F_i$, can be described by eq. (\[MMe\]). If the force on a motor is larger than the stall force, the motor steps backwards with a constant rate $s_b^\pm=v_b/d$. The stall force for minus motors is taken to vary linearly from 0.3 pN at vanishing ATP concentration up to 1.2 pN for saturating ATP levels [@mallik_g2004], while we keep kinesin’s stall force constant at 2.6 pN [@kunwar2011]. This determines $\Delta$ as defined in eq. (\[delta\]) so as to ensure that the stepping rate is zero at stall. Knowing the motor kinetics, we further have to define how a cargo with mass $m$ and radius $R$ reacts to these forces.
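The neck-linker force and the Michaelis-Menten stepping rate of eqs. (\[MMe\]) and (\[delta\]) translate directly into a few lines of code. The sketch below uses placeholder constants throughout (`alpha`, `L0`, `k_cat0`, `k_b0`, `p_cat`, `Delta_cat`, … are illustrative, not the values of Table \[paraset\]), so it shows only the structure of the model.

```python
import numpy as np

def neck_linker_force(dx, alpha=0.1, L0=110.0):
    """Force a motor spring exerts for head-cargo separation dx = x_i - x_C:
    slack (zero force) for |dx| < L0, Hookean beyond.
    alpha (pN/nm) and L0 (nm) are placeholder values."""
    dx = np.asarray(dx, dtype=float)
    return np.where(dx > L0, alpha * (dx - L0),
                    np.where(dx < -L0, alpha * (dx + L0), 0.0))

def rate_const(F, k0, p, q, Delta, beta=1.0 / 4.1):
    """Boltzmann-type force dependence, eq. (delta); p + q = 1,
    beta = 1/(k_B T) in 1/(pN nm) at room temperature."""
    return k0 / (p + q * np.exp(beta * F * Delta))

def stepping_rate(F, ATP, k_cat0=105.0, k_b0=1.3, p_cat=0.5, p_b=0.5,
                  Delta_cat=4.0, Delta_b=4.0):
    """Two-state Michaelis-Menten stepping rate, eq. (MMe)."""
    k_cat = rate_const(F, k_cat0, p_cat, 1.0 - p_cat, Delta_cat)
    k_b = rate_const(F, k_b0, p_b, 1.0 - p_b, Delta_b)
    return k_cat * ATP / (ATP + k_cat / k_b)
```

At zero load and saturating ATP the stepping rate approaches `k_cat0`, and it decreases monotonically with load, as eqs. (\[MMe\])–(\[delta\]) require.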
We describe the motion of the cargo via a Langevin equation $$\begin{aligned} m \frac{\partial^2 x_C(t) }{\partial t^2 } = -\beta \frac{\partial x_C(t)}{\partial t} + F(x_C(t),\{x_i\}) + F_{therm}(t) \label{eqofm}\end{aligned}$$ with Stokes’ friction $\beta = 6 \pi \eta R$ due to the cytosol’s viscosity $\eta$ and the stochastic force $F_{therm}(t) = \sqrt{2k_B T \beta}\xi(t)$ due to thermal noise inside the cell. Here $k_B$ is the Boltzmann constant, $T$ the temperature and $\xi(t)$ a normalized white-noise process, hence $$\langle \xi(t) \rangle = 0 \ \ \ \text{ and } \ \ \ \langle \xi(t)\xi(t') \rangle = \delta(t-t') \ \ \ \forall \ t,t'.$$ The cargo has to be propagated in continuous time according to eq. (\[eqofm\]) between two motor events (stepping, detachment or attachment). In contrast to our previous work [@EPL14], this evolution equation contains a thermal noise term, which requires specific treatment. In [@gillespie1996; @norrelykke2011], a stochastic procedure was proposed to generate fluctuations in the cargo trajectory at discrete times, with the same amplitude as would result from the integration of the Langevin equation. We follow this procedure and define the moments at which these thermal fluctuations are included as “shot events”. At each shot event $E_i$ at time $t_{E_i}$, two independent random numbers $\varphi_{i}$ and $\zeta_i$ are chosen from a zero-mean, unit-variance Gaussian distribution.
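For contrast, the specific treatment just described replaces what would otherwise be a naive fixed-step Euler-Maruyama integration of eq. (\[eqofm\]). The sketch below shows the structure of that naive scheme under illustrative assumptions (the friction coefficient is written `gamma` here to avoid clashing with the other uses of $\beta$ in the text); it is not the exact shot-event propagation actually used in this paper.

```python
import numpy as np

def euler_maruyama(F, x0, v0, m, gamma, kBT, dt, n_steps, rng):
    """Naive fixed-step integration of m x'' = -gamma x' + F(x) + F_therm,
    with F_therm = sqrt(2 kBT gamma) * white noise.
    Illustration only; the paper propagates the exact solution instead."""
    x, v = float(x0), float(v0)
    kick = np.sqrt(2.0 * kBT * gamma * dt) / m   # velocity noise amplitude
    for _ in range(n_steps):
        v += ((-gamma * v + F(x)) / m) * dt + kick * rng.standard_normal()
        x += v * dt
    return x, v
```

With the noise switched off (`kBT = 0`) and a single Hookean spring, the scheme reduces to a damped oscillator that relaxes to the potential minimum, which gives a quick sanity check of the update rule.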
Then, contributions of thermal noise to the position and velocity of the cargo are built up until the next shot event, according to the expressions [@gillespie1996; @norrelykke2011] $$\begin{aligned} \label{sig} x_C^{therm}(t-t_{E_i}) = \sigma_{xx}(t-t_{E_i}) \varphi_i \end{aligned}$$ $$\begin{aligned} \label{sig2} v_C^{therm}(t-t_{E_i}) = \frac{\sigma_{xv}(t-t_{E_i})^2}{\sigma_{xx}(t-t_{E_i})}\varphi_i+ \sqrt{\sigma_{vv}(t-t_{E_i})^2 - \frac{\sigma_{xv}(t-t_{E_i})^4}{\sigma_{xx}(t-t_{E_i})^2}} \zeta_{i}.\end{aligned}$$ The expressions for the time-dependent $\sigma_{xx}(t-t_{E_i}),\ \sigma_{xv}(t-t_{E_i})$ and $\sigma_{vv}(t-t_{E_i})$ are given in Appendix \[app\] and verify $x_C^{therm}(0) = v_C^{therm}(0) = 0$. At the next shot event $E_{i+1}$ (we assume here that no motor event occurred in the meantime), the built-up thermal fluctuations are added to the cargo components before new random numbers are drawn: $$\begin{aligned} x_C(t_{E_{i+1}}) & = & x_C^d(t_{E_{i+1}}) + x_C^{therm}(t_{E_{i+1}}-t_{E_i}) \label{shot1}\\ v_C(t_{E_{i+1}}) & = & v_C^d(t_{E_{i+1}}) + v_C^{therm}(t_{E_{i+1}}-t_{E_i}) \label{shot2}\end{aligned}$$ Here $x_C^d$ and $v_C^d$ are the deterministic cargo position and velocity calculated by solving the equation of motion without the stochastic force $F_{therm}(t)$, as described in [@TGF13], with the initial condition of being at position $x_C(t_{E_i})$ with velocity $v_C(t_{E_i})$ at time $t_{E_i}$. The history of the system is punctuated by two types of events, motor events and shot events. Knowledge of the cargo position at every time $t$ is necessary to obtain the force on the cargo at every arbitrary time, so that we can use Gillespie’s method for time-dependent rates [@gillespie1978] in order to predict which event will occur next, and when.
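Event sampling with time-dependent rates can be realized, for instance, by the standard thinning (rejection) method for an inhomogeneous Poisson process. This is only a sketch of one common approach, under the assumption that an upper bound `rate_max` on the instantaneous rate is available; it is not necessarily the variant of [@gillespie1978] used in the simulations.

```python
import numpy as np

def next_event_time(rate, t0, rate_max, rng):
    """Next event time of an inhomogeneous Poisson process by thinning:
    propose exponential waits at the bounding rate rate_max and accept
    with probability rate(t)/rate_max (requires rate(t) <= rate_max)."""
    t = t0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if rng.random() < rate(t) / rate_max:
            return t
```

In the present model the `rate(t)` callback would evaluate the force-dependent stepping and detachment rates at the interpolated cargo position at time `t`.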
In order to know the cargo position not only at the discrete times of eqs. (\[shot1\])–(\[shot2\]) but in continuous time, we interpolate between the shot events by generalizing expression (\[shot1\]) to $$x_C(t) = x_C^d(t) + x_C^{therm}(t-t_{E_i}) \label{interpol}$$ for all times $t_{E_i} \leq t < t_{E_j}$ between the two successive shot events $E_i$ and $E_j$. While the thermal contributions added at discrete times have the same statistics as those that would be obtained from the direct solution of the Langevin equation, as proved in [@gillespie1996; @norrelykke2011], the interpolation in (\[interpol\]) is an approximation, which should be accurate if the shot frequency is high enough. Still, as we want to analyze the long-time behavior, the frequency cannot be too high due to computational limitations. We chose to let the thermal shots occur with a constant rate $k_s$ which is at least a hundred times larger than the average single-motor event rate, thus of the order of $k_s=10^5$ s$^{-1}$. We checked that our choice of $k_s$ is sufficient to avoid discretization effects. For completeness all simulation parameters are given in Table \[paraset\]. Results and Discussion ====================== In this section we first concentrate on the time-dependent variance of the cargo motion and analyze how it evolves with time. Secondly, we introduce a viscous barrier in the system and investigate how this change in the effective viscosity influences the cargo’s transport efficiency. Cargo’s displacement variance ----------------------------- In this first subsection we calculate the variance of the cargo trajectories as given in eq. (\[vari\]). Using the set of parameters given in Table \[paraset\] we observe subdiffusive motion ${\mathbb V\mathrm{ar}}[x_C(t)] \sim \Delta t^\gamma$ with an exponent $\gamma=0.6$ for times smaller than $10$ ms and superdiffusion for larger times with an exponent $\gamma =1.3$ as shown in Fig. \[msd\].
[The subdiffusive cargo motion can be attributed to thermal fluctuations in the external potential]{} of the motor springs, whereas the superdiffusion is observed due to the correlation of the motor stepping events. For our chosen set of parameters this is observable on time scales of several hundred motor steps. We have already shown in [@EPL14] that without thermal noise the superdiffusive behavior is observed while subdiffusion is not. [Recently, the MSD has been calculated from particle trajectories in Drosophila S2 cells [@kulic2008]. In this work a crossover from sub- to superdiffusive motion has been reported at $\Delta t \approx 30$ms. In the subdiffusive regime, $\gamma = 0.59 \pm 0.28$ has been obtained, with exponents varying from 0.2 to 1.2 for different trajectories. At times $\Delta t > 30$ms, a mean value $\gamma = 1.62 \pm 0.29$ has been established, where the results for single trajectories are in the range of 1.2 to 2. We notice a remarkable agreement between experimental and model results in the subdiffusive regime (for which we checked that variance and MSD do not differ). The results for $\gamma$ in the superdiffusive regime strongly depend on the bias of the cargo if the MSD is considered and not the variance of the particle position. Taking this into account, our results at least do not contradict the experimental findings. However, in order to test the actual agreement between model and experimental results the variance should be taken into account, as it characterizes more directly the correlation of the motors’ dynamics. ]{} ![Variance of the cargo pulled by $N_+=N_-=5$ motors through a viscous medium. At short times the cargo shows subdiffusive behavior $ \sim \Delta t^\gamma$ with an exponent $\gamma=0.6$.
For times $t>10$ ms the cargo moves superdiffusively with $\gamma=1.3$.[]{data-label="msd"}](MSD_thermal_noise_ATP500_SHOT1e5_n.pdf){width="50.00000%"} Viscous barrier --------------- In a crowded compartment of a cell the effective viscosity can be enhanced by a factor of up to 1000 in comparison to the viscosity of pure water [@luby1999]. In a previous publication we have shown that the above described model exhibits a non-monotonic dependence of the bias on the viscosity [@EPL14]. This motivated us to analyze the influence of spatial confinement in a crowded area of the cell on the cargo dynamics. Crowded areas are considered as regions of high effective viscosity. In order to assess the mobility of the motor-cargo complex we compare its motion to pure diffusion in the same environment. We introduce two viscous barriers with increased effective viscosity $\eta^{*}$ and with a given length $L_B$ at positions $\pm x_B$, which represent a highly crowded area (see Fig. \[barr\_draw\]). In Fig. \[barrier\] the mean first passage time (MFPT) to cross the barrier at position $\pm x_B \pm L_B$, starting at position $x_0 =0$, is shown. To draw conclusions about the time needed to cross the barrier we compare it on the one hand to the purely diffusive case (green curve in Fig. \[barrier\]) and on the other hand to the barrier-free case (Fig. \[barrier\]**(a)**). ![Schematic drawing of the arrangement of the barriers. The blue rectangles represent the area of increased viscosity $\eta^*$.[]{data-label="barr_draw"}](barrier.pdf){width="50.00000%"} [Interestingly, in the case of low barrier viscosities $\eta^*$ and unbiased cargo dynamics (Fig. \[barrier\]**(a)**) pure diffusion outperforms the active transport of the cargo. Irrespective of the chosen barrier length, the MFPT is much longer for actively transported cargos than for pure diffusion, for which the exact result for the MFPT ]{} $$\frac{\beta}{2k_BT}(x_B+L_B)^2$$ is known [@redner2001], with $\beta = 6\pi\eta R$.
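The quadratic dependence of the diffusive MFPT quoted above is easy to evaluate. The helper below makes the scaling explicit; the unit conventions and the numerical $k_BT$ value are illustrative assumptions, not fixed by the text.

```python
import math

def mfpt_diffusion(x_B, L_B, eta, R, kBT=4.114):
    """Exact MFPT for unbiased diffusion started at x0 = 0 between absorbing
    boundaries at +/-(x_B + L_B):  T = beta (x_B + L_B)^2 / (2 kBT).
    Units assumed here: lengths in nm, eta in pN s/nm^2, R in nm,
    kBT in pN nm (~4.114 pN nm at room temperature)."""
    beta = 6.0 * math.pi * eta * R        # Stokes friction
    return beta * (x_B + L_B) ** 2 / (2.0 * kBT)
```

Doubling the half-interval $x_B + L_B$ quadruples the MFPT; this quadratic growth is exactly what disappears for the actively pulled cargo in Fig. \[barrier\]**(c)**.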
With slightly increased viscosity in the barrier ($\eta^* = 10\eta$) diffusion is still faster, especially for small barriers, but one can already see that for $L_B=90$ nm the MFPTs for diffusion and active transport agree within the errors. If the effective viscosity inside the barrier is increased further ($\eta^* = 100\eta$), active transport is significantly faster (Fig. \[barrier\]**(c)**) and the MFPT no longer shows the quadratic dependence on the interval length $2(x_B +L_B)$. ![MFPT for different barrier viscosities (**(a)** $\eta^*=\eta$, **(b)** $\eta^* = 10 \eta$, **(c)** $\eta^* = 100\eta$) and different barrier lengths $L_B$. We show the cases with active transport (blue) and pure diffusion (green). In some regimes it is more efficient to diffuse through the cell, especially for small distances and low viscosities. If the effective viscosity is considerably higher it is more productive to actively transport the cargo. The red line in **(a)** shows the analytic solution for the MFPT for diffusion on an interval. The errorbars give the Gaussian error.[]{data-label="barrier"}](MFPT_1e-11.pdf "fig:"){width="60.00000%"}\ **(a)** ![MFPT for different barrier viscosities (**(a)** $\eta^*=\eta$, **(b)** $\eta^* = 10 \eta$, **(c)** $\eta^* = 100\eta$) and different barrier lengths $L_B$. We show the cases with active transport (blue) and pure diffusion (green). In some regimes it is more efficient to diffuse through the cell, especially for small distances and low viscosities. If the effective viscosity is considerably higher it is more productive to actively transport the cargo. The red line in **(a)** shows the analytic solution for the MFPT for diffusion on an interval.
The errorbars give the Gaussian error.[]{data-label="barrier"}](MFPT_1e-10.pdf "fig:"){width="60.00000%"}\ **(b)** \ ![MFPT for different barrier viscosities (**(a)** $\eta^*=\eta$, **(b)** $\eta^* = 10 \eta$, **(c)** $\eta^* = 100\eta$) and different barrier lengths $L_B$. We show the cases with active transport (blue) and pure diffusion (green). In some regimes it is more efficient to diffuse through the cell, especially for small distances and low viscosities. If the effective viscosity is considerably higher it is more productive to actively transport the cargo. The red line in **(a)** shows the analytic solution for the MFPT for diffusion on an interval. The errorbars give the Gaussian error.[]{data-label="barrier"}](MFPT_1e-09.pdf "fig:"){width="70.00000%"} **(c)** Conclusion ========== [In this article we analyzed a model for bidirectional cargo transport driven by teams of molecular motors. We studied the impact of thermal noise on the cargo’s motion, and also of crowded areas of the cell, which we represented by a spatially structured effective viscosity.]{} First, we analyzed the change in the cargo displacement variance when thermal noise is taken into account. For the parameter combination given in this paper we have shown that for times smaller than 10 ms the trajectories exhibit subdiffusive behavior with ${\mathbb V\mathrm{ar}}[x_C(t)] \sim \Delta t^{0.6}$, which crosses over into superdiffusive behavior at longer times. [The subdiffusive motion is a result of the thermal fluctuations of the cargo’s position, which is trapped in the external potential of the motor springs. If this potential is stronger (for example stiffer springs, lower detachment rate, higher attachment rate) we expect the subdiffusive exponent to decrease. We checked that this is indeed the case for stiffer springs. In contrast, superdiffusion can be obtained because the cargo motion has a finite correlation time and tends to continue moving in the same direction.
]{} In [@kulic2008] it was concluded that the observed anomalous diffusion of cell organelles cannot be described solely in terms of cargo movement along stationary microtubule tracks, but instead includes a strong contribution from the movement of the tracks. Our results question this interpretation, since in our model the motor-cargo complex moves on a single, static and infinite track, i.e. the structure and dynamics of the MT network have not been taken into account at all, and still we find the same anomalous diffusion. Additionally, we characterized the influence of a change in the effective viscosity representing differently crowded areas. For low viscosities or small areas of increased viscosity, pure diffusion of the cargo is faster than active transport, since in the latter case the cargo can be trapped in the potential of the motor springs. In crowded areas of the cell, however, the situation is inverted: while the diffusively moving cargo slows down drastically, the actively transported one keeps its motility to a large extent. This illustrates that the cell can use the asymmetry between the motors as a driving force. In this paper, the effect of the environment was modeled as an effective viscosity. Within our modeling approach it is also possible to treat external forces explicitly. It would be interesting to build experiments that would allow a direct comparison with the model. Very recently it became possible to use dynein in motility assays [@mckenney2014]. Due to this achievement it will also be possible to study bidirectionally transported motor-cargo complexes [*in vitro* ]{}. Such experiments could be used to validate the modeling approach. Then both experiment and model could be modified to account for more complex situations, for example the influence of well-defined network structures.
Thermal noise propagation {#app} ========================= In order to give the expressions for $\sigma_{xx}(\epsilon_i)$, $\sigma_{xv}(\epsilon_i)$ and $\sigma_{vv}(\epsilon_i)$ used in eqs. (\[sig\]) and (\[sig2\]), we define the diffusion coefficient $D = {k_BT}/{\beta}$, the cyclic frequencies of the damped and the undamped harmonic potential $\omega = ({\beta^2}/{4m^2}-{\alpha}/{m})^{\frac{1}{2}}$ and $\omega_0 = ({\alpha}/{m})^{\frac{1}{2}}$, respectively, and the characteristic time in the presence of frictional forces $\tau = {m}/{\beta}$. The $\sigma_{i,j}$ (here $i,j = \{x,v\}$) depend explicitly on the time $\epsilon_i \in [t_{E_i},t]$. We have $$\begin{aligned} \label{sigma_xx} \sigma_{xx}(\epsilon_i)^2=&\frac{D}{4\omega^2\omega_0^2 \tau^3}\cdot \biggl(4\omega^2\tau^2 - \frac{1}{2}\left [\exp\left(-\epsilon_i \tau^{-1} + 2\omega \epsilon_i\right)(1 + 2\omega\tau) \right. \\ & \left.+ \exp\left(-\epsilon_i \tau^{-1} - 2\omega\epsilon_i\right)(1 - 2\omega\tau)\right]+( 1 - 4\omega^2\tau^2)\exp(-\epsilon_i \tau^{-1}) \biggl) \nonumber\end{aligned}$$ $$\begin{aligned} \sigma_{xv}(\epsilon_i)^2 = &\frac{D}{4\omega^2 \tau^3}\cdot \biggl(4\omega^2\tau^2 - \frac{1}{2}\left [\exp\left(-\epsilon_i \tau^{-1} + 2\omega \epsilon_i\right)(1 - 2\omega\tau) \right. \nonumber \\ & \left.+ \exp\left(-\epsilon_i \tau^{-1} - 2\omega\epsilon_i\right)(1 + 2\omega\tau)\right]+( 1 - 4\omega^2\tau^2)\exp\left(-\epsilon_i \tau^{-1}\right) \biggl) \end{aligned}$$ and $$\begin{aligned} \sigma_{vv}(\epsilon_i)^2 = \frac{D}{\omega^2\tau^2}\biggl(\exp\left(-\epsilon_i\tau^{-1} + 2\omega\epsilon_i\right)+\exp\left(-\epsilon_i\tau^{-1} - 2\omega\epsilon_i\right )- 2\exp\left(-\epsilon_i \tau^{-1}\right)\biggl).\end{aligned}$$ Here, $\sigma_{xx}$, $\sigma_{xv}$ and $\sigma_{vv}$ are the elements of the variance-covariance matrix which fully characterizes a Gaussian distribution [@gillespie1996; @norrelykke2011].
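As a consistency check, the expressions above can be evaluated numerically. The sketch below (arbitrary but overdamped parameters, so that $\omega$ is real) verifies the stated initial condition $\sigma(0)=0$ and the expected long-time saturation of the positional variance at the equipartition value $k_BT/\alpha$.

```python
import numpy as np

def sigmas_sq(eps, m=1.0, beta=10.0, alpha=1.0, kBT=1.0):
    """sigma_xx^2, sigma_xv^2, sigma_vv^2 of the appendix, for the
    overdamped case beta^2/(4 m^2) > alpha/m (omega real)."""
    D = kBT / beta
    tau = m / beta
    w = np.sqrt(beta**2 / (4 * m**2) - alpha / m)    # omega
    w0 = np.sqrt(alpha / m)                          # omega_0
    ep = np.exp((-1.0 / tau + 2 * w) * eps)
    em = np.exp((-1.0 / tau - 2 * w) * eps)
    e0 = np.exp(-eps / tau)
    core_x = (4 * w**2 * tau**2
              - 0.5 * (ep * (1 + 2 * w * tau) + em * (1 - 2 * w * tau))
              + (1 - 4 * w**2 * tau**2) * e0)
    core_v = (4 * w**2 * tau**2
              - 0.5 * (ep * (1 - 2 * w * tau) + em * (1 + 2 * w * tau))
              + (1 - 4 * w**2 * tau**2) * e0)
    s_xx = D / (4 * w**2 * w0**2 * tau**3) * core_x
    s_xv = D / (4 * w**2 * tau**3) * core_v
    s_vv = D / (w**2 * tau**2) * (ep + em - 2 * e0)
    return s_xx, s_xv, s_vv
```

At $\epsilon_i = 0$ all three variances vanish, so the thermal contributions (\[sig\])–(\[sig2\]) start from zero after each shot event; for $\epsilon_i \to \infty$, $\sigma_{xx}^2$ tends to $k_BT/\alpha$, the equilibrium positional variance in a harmonic potential of stiffness $\alpha$.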
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) within the collaborative research center SFB 1027 and the research training group GRK 1276. [|c|c|c|c|]{} & **kinesin** & **dynein** & Ref.\ $d$ & & [@Carter; @toba2006]\ $N_\pm$ & & [@welte1998]\ $L_0$& & [@kunwar2011]\ $v_{f}$ & & [@Carter; @toba2006]\ $v_{b}$ & & [@Carter; @Gennerich]$^*$\ $\alpha$ & & [@kunwar2011]$^*$\ $k_a$ & & [@mueller_k_l2008; @leduc2004]\ $k_d^0$ & & [@kunwar2011]$^*$ \ $f$ & &\ $F_S$ & 2.6 pN & 0.3-1.2 pN &[@Mallik2004; @Shubeita]\ $k_\text{cat}^0$ & & [@schnitzer_v_b2000]\ $k_\text{b}^0$ & & [@schnitzer_v_b2000]\ $q_\text{cat}$ & & [@schnitzer_v_b2000]\ $ q_\text{b}$ & & [@schnitzer_v_b2000]\ $\Delta$ &4267.3 nm & $\max\left(\frac{2.9\cdot 10^5}{[ATP]^{0.3}}-28111.1,8534.6\right)$ nm & eq. (\[delta\])\ \ $k_s$ & &  \ $\eta$ & & [@kulic2008]$^*$\ \[ATP\] & &\ $T$ & &\ $x_B$ & &\ \ $R$ & & [@thiam2013]$^*$\ $m$ & &\ [^1]: Since the [*in vivo* ]{}and [*in vitro* ]{}behavior differs significantly in [@kunwar2011] some model-parameters in the detachment rate expressions have been adjusted in order to avoid dominance of one motor species. Here we chose slightly different parameter values to achieve the same goal.
--- abstract: 'We carried out narrowband NB973 (bandwidth of 200Å centered at 9755Å) imaging of the Subaru Deep Field (SDF) and found two $z=7$ Ly$\alpha$ emitter (LAE) candidates down to NB973 $=24.9$. Carrying out deep follow-up spectroscopy, we identified one of them as a real $z=6.96$ LAE. This has established a new redshift record, showing that galaxy formation was in progress just 750 Myr after the Big Bang. Meanwhile, the Ly$\alpha$ line luminosity function of LAEs is known to decline from $z=5.7$ to 6.6 in the SDF. $L^*$ at $z=6.6$ is 40–60% of that at $z=5.7$. Comparing the latest SDF LAE samples, we also confirm that the number density of $z=7$ LAEs is only 17% of that at $z=6.6$. This series of significant decreases in LAE density with increasing redshift can be the result of galaxy evolution during these epochs. However, using the UV continuum luminosity functions of LAEs, those of Lyman break galaxies and a LAE evolution model based on hierarchical clustering, we find that galaxy evolution alone cannot explain all of the decrease in density. This extra density deficit might reflect the attenuation of the Ly$\alpha$ photons from LAEs by the neutral hydrogen possibly left at the last stage of the cosmic reionization at $z \sim 6$–7.' author: - 'Kazuaki Ota, Masanori Iye, Nobunari Kashikawa, Kazuhiro Shimasaku, Masakazu Kobayashi, Tomonori Totani, Masahiro Nagashima, Tomoki Morokuma, Hisanori Furusawa, Takashi Hattori, Yuichi Matsuda, Tetsuya Hashimoto, Masami Ouchi' title: 'Reionization and Galaxy Evolution probed by $z=7$ Ly$\alpha$ Emitters' --- Introduction ============ Investigating high redshift galaxies as well as other distant objects in the early Universe, especially within the first 1 Gyr after the Big Bang, has been the key to understanding how galaxies have formed and evolved, probing their star formation histories, and constraining the epoch of cosmic reionization.
The latest measurements of the polarization of the cosmic microwave background (CMB) by [*Wilkinson Microwave Anisotropy Probe*]{} ([*WMAP*]{}) constrain the optical depth to electron scattering during reionization and suggest that the average redshift of reionization was $z=10.9^{+2.7}_{-2.3}$ [@Spergel07; @Page07]. Also, Gunn-Peterson (GP) troughs [@GP65] in $z \sim 6$ quasar spectra imply that reionization ended at $z \sim 6$, with an estimated fraction of intergalactic medium (IGM) neutral hydrogen of $x_{\rm HI}^{z\sim6.2}\sim 0.01$–0.04 [@Fan06]. Moreover, a spectral modeling analysis of a $z\sim6.3$ gamma ray burst (GRB) shows that the Universe seems to have been largely reionized at $z\sim6.3$, with a best-fit $x_{\rm HI}^{z\sim6.3} = 0$ and an upper limit of $x_{\rm HI}^{z\sim6.3} < 0.17$–0.6, assuming only a reasonable amount of neutral gas in the GRB host galaxy [@Totani06]. Another probe of reionization is Ly$\alpha$ emitters (LAEs), young galaxies in the distant universe showing in their spectra redshifted Ly$\alpha$ emission from their interstellar gas illuminated by massive stars. The observed Ly$\alpha$ line luminosity function (Ly$\alpha$ LF) is expected to decline beyond the redshift $z \sim 6$ where reionization is thought to have been completed, as the increasing fraction of neutral IGM hydrogen absorbs or scatters the Ly$\alpha$ photons from young galaxies [@HiSpa99; @RM01; @Hu02]. Nevertheless, recent LAE surveys show that the Ly$\alpha$ LF seems not to change from $z = 3$ to 5.7 [@Ouch03; @Ajiki03; @Tran04; @vBrk05; @Ouch07]. For the earlier epoch, @MR04 suggest that the Ly$\alpha$ LF does not evolve between $z=5.7$ and 6.6. However, their sample could be somewhat biased, since it consists of several LAE subsamples taken from various surveys that differ in selection criteria, analysis methods, sky areas, survey volumes, and depths, compiled in order to obtain as large a sample as possible.
On the other hand, the Subaru Deep Field [@kashik04 SDF] surveys have kept all these factors as consistent as possible among different redshifts, surveyed an exceptionally large volume, and produced large LAE samples at $z=4.8$, 5.7 and 6.6. Their latest survey has for the first time confirmed that the Ly$\alpha$ LF declines as $L^*_{z=6.6} \sim L^*_{z=5.7} \times$(0.4–0.6) from $z=5.7$ to 6.6, even after correcting for cosmic variance [@kashik06]. From this decline of the LF, they estimated the upper limit of the neutral fraction at $z=6.6$ to be $0 \leq x_{\rm HI}^{z=6.6}\leq 0.45$. If the neutral IGM remains at the $\sim 50$% level at $z=6.6$, this constraint supports late reionization and contradicts the [*WMAP*]{} result. Alternatively, the decline of the Ly$\alpha$ LF at $z=5.7$–6.6 can be ascribed to the evolution of the LAE population itself. Meanwhile, the ionized fraction $x_i < 1$ and the morphology of H$_{\rm II}$ regions during patchy reionization would modulate the observed distribution of LAEs and enhance their observed clustering [@Furlanetto06; @McQuinn07]. @McQuinn07 investigated the angular correlation function of the SDF photometric sample of $z=6.6$ LAEs obtained by @kashik06 and suggest that the Universe is fully ionized at $z=6.6$ with a mean volume ionized fraction of $\bar{x_i} \sim 1$. @McQuinn07 also pointed out the difficulty in distinguishing the effect of the evolution of the LAE population on the Ly$\alpha$ LF from that of reionization. LFs of high-$z$ galaxies also tell us about galaxy evolution itself, in terms of how many galaxies existed at each luminosity and epoch in the history of the Universe and how their number has changed with cosmic time. To obtain this kind of information at high redshifts, LFs of Lyman break galaxies (LBGs) and LAEs have been mainly observed.
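Since the quoted LF evolution is parameterized through $L^*$, the impact of the measured $L^*$ decline on number counts can be gauged with a short numerical integration of a Schechter function. The sketch below uses hypothetical $\phi^*$, faint-end slope and luminosity limit, chosen only for illustration: halving $L^*$ at fixed $\phi^*$ suppresses the counts above a bright threshold by far more than a factor of two.

```python
import numpy as np

def n_above(L_min, L_star, phi_star=1e-3, alpha=-1.5):
    """Cumulative Schechter number density n(>L_min) by direct quadrature:
    phi(L) dL = phi_star (L/L*)^alpha exp(-L/L*) dL / L*."""
    L = np.logspace(np.log10(L_min), np.log10(L_min) + 3, 4000)
    phi = phi_star * (L / L_star) ** alpha * np.exp(-L / L_star) / L_star
    # trapezoidal rule on the log-spaced grid
    return float(np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(L)))

# Halving L* at a fixed bright limit (here L_min equals the old L*,
# arbitrary units) strongly suppresses the counts:
suppression = n_above(1.0, 0.5) / n_above(1.0, 1.0)
```

Because the exponential cutoff dominates at the bright end, even a modest drop in $L^*$ translates into a much larger deficit of luminous LAEs, which is why the bright end of the LF is a sensitive diagnostic of both evolution and reionization.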
Ultraviolet continuum luminosity functions (UVLFs) of LBGs have been investigated from $z\sim 3$ to $z\sim 7$ and found to decline as redshift increases [@LB03; @Ouch04; @Bouw06; @Yoshida06; @06BI]. Since the UV continuum redward of 1216Å is not attenuated by neutral IGM hydrogen, the decline of the UVLF reflects the evolution of the galaxies themselves, provided dust extinction is precisely corrected. One recently observed example of this is a large decline of the UVLF of dropout galaxies at $6 < z \lesssim 7$–8, which is considered a clear sign of galaxy evolution over these redshifts [@06BI]. They conclude that very luminous galaxies are quite rare at $z=7$–8. On the other hand, the UVLF of LAEs was confirmed not to evolve at $z\sim3$–5 [@Ouch03]. In addition, studying LAEs in an even wider sky region, $\sim 1.0$ deg$^2$ of the Subaru/[*XMM-Newton*]{} Deep Survey (SXDS) field, @Ouch07 found that the UVLF of LAEs increases from $z\sim3$–4 to 5.7 while their Ly$\alpha$ LF remains unchanged over these redshifts, suggesting that the fraction of UV-bright LAEs increases at $z=5.7$. Furthermore, no evolution of the LAE UVLF from $z=5.7$ to 6.6 was observed, while the Ly$\alpha$ LF of LAEs evolves between these epochs in the latest SDF survey [@kashik06]. This implies that LAEs themselves do not significantly evolve from $z=6.6$ to 5.7 and that the decline of the Ly$\alpha$ LF might reflect the effect of reionization. However, we do not know if this trend of the Ly$\alpha$ LF and UVLF of LAEs continues to even earlier epochs. In other words, it is not clear whether the LAE population evolves at $z>6.6$ as the LBG population does, or whether the neutral IGM fraction increases to suppress the Ly$\alpha$ LF more severely beyond $z=6.6$. Moreover, the existence of galaxies at $z>6.6$ has not yet been confirmed spectroscopically, though several photometric candidates have been found. These questions can be addressed by observing LAEs and their LFs at $z>6.6$.
By investigating their change over a longer cosmic time interval, we can constrain galaxy evolution and reionization more tightly. One possible method of detecting $z>6.6$ LAEs is narrowband imaging in the infrared. However, beyond the redward limit of CCD sensitivity, the large-format mosaicing advantages of infrared arrays are not yet available, and observations of high redshift LAEs are limited to small survey volumes. Though recent infrared detectors have achieved extremely high sensitivities, surveys with them cannot avoid large uncertainty due to cosmic variance. Therefore, we carried out a narrowband survey of $z=7$ LAEs using the final window of OH-airglow at the very edge of the optical regime, still accessible with the CCDs of the Subaru Prime Focus Camera [@Miya02 Suprime-Cam], which has a superb wide field of view, $34' \times 27'$. We chose the wavelength region 9655–9855Å, open for the highest-redshift optical narrowband survey. Although this might not be considered quite an adequate window, since there are several OH lines in the region, the estimated fraction of the sky counts coming from OH lines in the window is not prohibitively large (only $\sim 4.3$ photons s$^{-1}$ Å$^{-1}$ arcsec$^{-2}$ m$^{-2}$), and we succeeded in making a narrowband filter, named NB973, covering this wavelength region. This range corresponds to a redshift of $6.9 \leq z \leq 7.1$ for LAEs. To discover such extremely high redshift LAEs and make a sample of them as consistent as possible with those of $z=5.7$ and 6.6 LAEs obtained by @Shima06 and @kashik06, we targeted the same field, the SDF, using the 8.2-m Subaru Telescope/Suprime-Cam. Our brief and preliminary result was recently reported in @06IOK. In this survey, we successfully confirmed a $z=6.96$ LAE spectroscopically and observed that the number density of $z=7$ LAEs further declines from $z=6.6$ by a factor of 0.18–0.36, suggesting that the neutral hydrogen fraction might increase between these epochs.
However, we do not know whether there was any evolution of the LAE population itself from $z=7$ to 6.6, and the density deficit might come from such galaxy evolution. In this paper, we present the methods and results of our photometric and spectroscopic surveys for $z=7$ LAEs, which were not fully covered in @06IOK, and try to draw out as much useful information as possible about the epoch of reionization and LAE galaxy evolution from our results, combined with the most recent high redshift galaxy surveys and a LAE evolution model based on hierarchical clustering. We first describe the properties of our narrowband filter NB973 and the imaging observation in § 2. Then, selection criteria for $z=7$ LAE candidates based on narrow and broadband images are derived in detail and their photometric properties are analyzed in § 3. In § 4, we explain the results of our follow-up spectroscopy of the selected candidates and their spectroscopic properties. In § 5, we compare the Ly$\alpha$ and UV LFs of $z=7$ LAEs with those of $z=5.7$ and 6.6 LAEs derived from the latest samples obtained by the SDF LAE surveys [@Shima06; @kashik06] and discuss the implications of the result for cosmic reionization and galaxy evolution. Possible LAE galaxy evolution at $z=5.7$–7 is also inspected with observational and theoretical approaches. In the last section, we conclude and summarize our results. Throughout we adopt a concordance cosmology with ($\Omega_m$, $\Omega_{\Lambda}$, $h$) $= (0.3, 0.7, 0.7)$, and AB magnitudes with $2''$ diameter aperture unless otherwise specified.

Imaging Observation
===================

We developed a narrowband filter, NB973, designed to cover the last optical window of OH-airglow, centered at 9755Å with $\Delta \lambda_{\rm FWHM} \sim 200$Å, corresponding to Ly$\alpha$ emission at $6.9 \leq z \leq 7.1$ [@06IOK].
The design and fabrication of such a narrowband filter was not a simple issue for Suprime-Cam, which uses a fast converging F/1.83 beam whose incident angle varies with position in the field of view. The mixture of light with different incident angles severely degrades the resultant transmission characteristics of a narrowband filter relative to the target design. To obtain the desired performance complying with the filter specification for our scientific requirements, we employed a combination of the following three filters glued together: a color cut glass filter RG780 with anti-reflection coating, a narrow bandpass multi-layer coating filter, and another multi-layer coating filter for red leak prevention. One year before starting the manufacturing of the NB973 filter for Suprime-Cam, we made another filter, NB980 (bandwidth of $\sim100$Å centered at $\sim9800$Å), for use in the parallel beam section of the Faint Object Camera And Spectrograph [@kashik02 FOCAS] on Subaru to demonstrate the feasibility of narrowband imaging in this last OH window. During this prefabrication of NB980, the manufacturing errors in controlling the thickness of the thin film layers were evaluated. The multilayer thin film coating design for NB973 was then optimized so that the resulting transmission properties are relatively robust against the inevitable manufacturing errors in the thickness of each thin layer. The measured transmission curve of the final NB973 filter actually used in the present survey, as well as of the other filters used for the color selection of $z=7$ LAE candidates, is shown in Figure \[MS\_BVRizNBfilters\_z7LAE\]. Our target sky region is the SDF [@kashik04 13$^{\rm h}$24$^{\rm m}$21.$^{\rm s}$4, -27$^o29'23''$(J2000), $\sim$876 arcmin$^2$], a blank field in which $z=5.7$ and 6.6 LAE surveys had also been carried out [@Shima06; @kashik06].
Deep broadband $BVRi'z'$ and narrowband NB816 ($\lambda_c=8160$Å, $\Delta\lambda_{\rm FWHM}=$120Å) and NB921 ($\lambda_c=9196$Å, $\Delta\lambda_{\rm FWHM}=$132Å) filter images were taken by the SDF project. All the images were convolved to have a common seeing size of $0.''98$. Limiting magnitudes in a $2''$ aperture at $3\sigma$ are $(B, V, R, i', z', {\rm NB816, NB921})=(28.45, 27.74, 27.80, 27.43, 26.62, 26.63, 26.54)$. Transmissions of these filters, including the CCD quantum efficiency, the reflectivity of the telescope primary mirror, the correction for the prime focus optics and the transmission of the atmosphere (airmass $\sec z=1.2$), are also shown in Figure \[MS\_BVRizNBfilters\_z7LAE\]. Our NB973 image of the SDF was taken with Suprime-Cam mounted on the Subaru Telescope on 16 and 17 March 2005. These two nights were photometric with good seeing of $\sim 0.''5$–$0.''8$. The total integration time is 15 hours. We reduced the NB973 image frames using the software SDFRED [@Ouch04; @Yagi02] in the same manner as in @kashik04. The NB973 frames were dithered when taken, in a similar way as the SDF project did for the other wavebands. Combining them removed the slight fringing caused by OH-airglow that appeared in some frames. The loss of survey area due to this dithering is only $\sim$5%. The seeing size of the combined image was $0.''78$; it was convolved to $0.''98$, the common seeing size of the images of the other wavebands, for the purpose of photometry. The spectrophotometric standard stars Feige34 and Hz44 [@Oke90] were imaged during the observation to calibrate the photometric zeropoint of the stacked image, which is NB973$=32.03$. The limiting magnitude reached NB973$\leq24.9$ at $5\sigma$ with the 15-hour integration.

Photometric Analysis
====================

Photometry
----------

After obtaining the stacked NB973 image, we conducted photometric analysis, making an object catalog.
Source detection and photometry were carried out with the SExtractor software version 2.2.2 [@BA96]. The pixel size of the Suprime-Cam CCDs is $0.''202$ pixel$^{-1}$. We considered an area of more than 5 contiguous pixels with a flux \[mag arcsec$^{-2}$\] greater than $2\sigma$ to be an object. Object detection was first made in the NB973 image, and then photometry was done in the images of the other wavebands using the double-image mode. $2''$ diameter aperture magnitudes of detected objects were measured with the MAG$_{-}$APER parameter, and total magnitudes with MAG$_{-}$AUTO. Low quality regions of the CCDs, bright stellar halos, saturated CCD blooming, and pixels with spiky, abnormally high or low flux counts were masked in the SDF images of all wavebands, using the official program code provided by the SDF team [@kashik04]. The final effective area of the SDF image is 876 arcmin$^2$. The comoving distance along the line-of-sight corresponding to the redshift range $6.94 \leq z \leq 7.11$ for LAEs covered by the NB973 filter is 58 Mpc. Therefore, we have surveyed a total volume of $3.2 \times 10^5$ Mpc$^3$ using the NB973 image. Then, the final object catalog was constructed, detecting 41,533 objects down to NB973$\leq24.9$ ($5\sigma$).

The Detection Completeness
--------------------------

To understand how reliable our source detections are down to the limiting magnitude of NB973 $\leq 24.9$, we measured the detection completeness of our photometry with the NB973 image. First, all the objects that satisfy our source detection criterion were removed from the NB973 image using SExtractor. Then, the [starlist]{} task in the [artdata]{} package of IRAF was used to create a sample starlist of about 20,000 artificial objects with random but uniform spatial and luminosity distributions ranging from NB973 $=20$ to 25 mag.
Next, using the [mkobject]{} task of IRAF, these artificial objects were spread over the NB973 image, avoiding the masked regions of the SDF and locations closer to the previously removed real objects than 3/2 of their FWHMs. After this, SExtractor was run for source detection in exactly the same way as in our actual photometry. Finally, we calculated the ratio of the number of detected artificial objects to that of created ones to obtain the detection completeness. We repeated this procedure five times and averaged the obtained completeness. The result is shown in Figure \[MS\_AVE\_NB973\_Completeness\]. The completeness at our detection limit of NB973 $=24.9$ is $\sim76$%. The completeness was corrected for when the number and luminosity densities of $z=7$ LAEs were calculated in § \[Re-and-GalEv\]. We evaluated the completeness in the same way as @Shima06 and @kashik06 did for the detection completeness of $z=5.7$ and $z=6.6$ LAEs, to keep consistency. However, in reality some $z=7$ galaxies will lie behind brighter sources at lower redshifts, so this completeness correction is artificially small. The fraction that do can be included in the completeness estimate by simply adding artificial sources to the original image, without masking anything out and without any exclusion zones in the placement of the artificial sources. We also calculated our detection completeness in this way to see how much it differs from the original completeness evaluation. As expected, the completeness calculated this way is slightly smaller. However, the difference is a factor of only 1.1–1.3 over NB973 $=20$–25, and it does not change the evaluation of the LAE number and luminosity densities much. Hence, for our subsequent analyses, we use the original completeness, calculated in the same way as done for the $z=5.7$ and $z=6.6$ LAEs, to keep consistency.
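The injection-recovery procedure above can be sketched numerically. The toy model below injects point sources of known magnitude into Gaussian noise and counts the fraction recovered above a detection threshold; the noise level, threshold, and trial count are hypothetical illustrations, not the actual SDF/NB973 pipeline values (only the zeropoint, NB973 $=32.03$, is taken from the text).

```python
import random

# Toy Monte Carlo of a detection-completeness measurement: inject artificial
# sources of known magnitude into a noisy "image", run a threshold detection,
# and take the recovered fraction per magnitude. Noise level and threshold
# are hypothetical, NOT the SDF/NB973 values.
random.seed(0)

ZP = 32.03           # photometric zeropoint (value quoted in the text)
SKY_SIGMA = 40.0     # per-source noise in counts (hypothetical)

def counts(mag):
    """Convert a magnitude to detector counts using the zeropoint."""
    return 10.0 ** (-0.4 * (mag - ZP))

def completeness(mag, n_trials=2000, threshold=5.0):
    """Fraction of injected sources recovered above threshold*SKY_SIGMA."""
    detected = 0
    for _ in range(n_trials):
        measured = counts(mag) + random.gauss(0.0, SKY_SIGMA)
        if measured > threshold * SKY_SIGMA:
            detected += 1
    return detected / n_trials

# Completeness falls from ~1 at bright magnitudes toward 0 past the limit.
bright, faint = completeness(22.0), completeness(27.0)
```

In the real measurement the detection step is SExtractor itself, run with the survey's actual parameters, which is what makes the recovered fraction meaningful for the catalog.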
Colors and Selection Criteria of $z=7$ LAEs \[COLOR\_CRITERIA\]
---------------------------------------------------------------

To isolate $z=7$ LAEs from other objects, we investigated their expected colors and derived candidate selection criteria. We generated model spectra of LAEs at redshifts ranging from $z=5$ to 7, with rest frame Ly$\alpha$ line equivalent width $EW_0(\rm Ly\alpha)$ varying from 0 to 300Å, as follows. First, we created a spectral energy distribution (SED) of a starburst galaxy using a stellar population synthesis model, GALAXEV [@BC03], with a metallicity of $Z=Z_{\odot}=0.02$, an age of $t=1$ Gyr, a Salpeter initial mass function with lower and upper mass cutoffs of $m_L=0.1$ $M_{\odot}$ and $m_U=100$ $M_{\odot}$, and an exponentially decaying star formation history with $\tau=1$ Gyr. These parameters were chosen to be the same as those used to generate model $z=6.6$ LAEs in @Tani05 to keep consistency. Although recent observational studies show that LAEs seem to be much younger than the 1 Gyr age and/or 1 Gyr star formation decay time, and $z=4.5$ LAEs seem to have dust extinction [@Gawiser06; @Pirzkal07; @Finkelstein07], we did not consider the effects of dust on the SED, since the two issues have opposite effects on the broadband colors of the LAEs and this does not have a major effect on the LAE selection criteria. Then, the SED was redshifted to each of $z=5.0$, 5.5, 5.7, ..., and 7.0, and Ly$\alpha$ absorption by the IGM was applied to it, using the prescription of @madau95. Finally, the flux of a Ly$\alpha$ emission line with one of $EW_0(\rm Ly\alpha)=0$, 10, 20, 50, 100, 150, 200, 250 or 300Å was added to the SED at $(1+z)$1216Å. We did not assume any specific line profile or velocity dispersion for the Ly$\alpha$ emission. Instead, we simply added 1/2 of the total line flux value, assuming that the blue half of the Ly$\alpha$ line is absorbed by the IGM. An example of a model spectrum of a $z=7$ LAE is shown in Figure \[MS\_BVRizNBfilters\_z7LAE\].
Colors of these model LAEs were calculated using their SEDs and the transmission curves of the Suprime-Cam broadband and NB973 filters, and plotted in the two-color diagram of $z'-$NB973 vs. $i'-z'$ shown in Figure \[MS\_2Color\_Diagram\]. As clearly seen in the diagram, a $z=7$ LAE is expected to produce a significant flux excess in NB973 against $z'$. However, it should also be noted that the NB973 bandpass overlaps with the wavelength range at the longward edge of the $z'$ band. This allows LAEs and LBGs at even lower redshifts, $z=6.2$–6.8, to cause an NB973 flux excess with respect to the $z'$ band if such galaxies have bright and steep UV continua. Actually, such objects were detected in our photometry. Their images and photometric properties are shown and described in Figure \[MS\_Poststamp\_IOK1-5\] and § \[NB973\_EXCESS\_OBS\]. Out of these lower redshift galaxies, $z=6.5$–6.6 LAEs can be removed by requiring no detection in the narrowband filter NB921 image, whose bandpass corresponds to the Ly$\alpha$ emission at this redshift range [@Koda03; @Tani05; @kashik06]. Hence, we classified NB973-excess objects with NB973 $\leq 24.9$ ($5\sigma$, $2''$ aperture), which include our target $z=7$ LAEs, into the following two categories based on $z'-$ NB973 color.

- $z=6.9$–7.1 LAEs : $B, V, R, i'$, NB816, $z'$, NB921 $<3\sigma$

- $z=6.7$–7.1 LBGs : $B, V, R$, NB816, NB921 $<3\sigma$, $i'-z'>1.3$, $z'-$ NB973 $>1.0$

where $B, V, R, i'$, NB816, $z'$ and NB921 fluxes were measured in total magnitudes while $i'-z'$ and $z'-$NB973 colors in $2''$ aperture magnitudes. All the $i'$ and $z'$ aperture magnitudes fainter than 27.87 and 27.06 ($2\sigma$ limits), respectively, were replaced by these values in the application of criterion (2). Since the flux of a LAE shortward of its Ly$\alpha$ emission should be absorbed by the IGM, no detections ($<3\sigma$) in $B, V, R$, NB816 and NB921, with either a red $i'-z'>1.3$ color or no detections in $i'$ and $z'$, were imposed as part of the criteria.
This can help eliminate interlopers such as L/M/T type dwarf stars and lower redshift galaxies with other types of emission lines (e.g., H$\beta$, \[OIII\], \[OII\], H$\alpha$, \[SII\] and so on). Also, criterion (1) implies that robust $z=7$ LAE candidates should show a significant excess in NB973 over $z'$ and NB921: $z' -$ NB973 $>1.72$ and NB921 $-$ NB973 $>1.64$. Note that the color selection criteria (1) and (2) are slightly different from those in @06IOK in that this time we include non-detections in NB816 and NB921, whose bandpasses correspond to Ly$\alpha$ emission at $z=5.65$–5.75 and 6.5–6.6, respectively, to make the criteria more reliable and secure. In fact, the object IOK-3 detected by @06IOK satisfied criteria (1) and (2) simultaneously except for NB921 $< 3\sigma$, and was spectroscopically identified as a $z=6.6$ LAE by @kashik06. This time, we found only one object satisfying criterion (1) (hereafter referred to as IOK-1 as in @06IOK), and none met criterion (2). In order not to miss faint and diffuse $z=7$ LAEs, such as Ly$\alpha$ blobs having extended shapes with fairly bright cores but NB973 $> 24.9$ ($2''$ aperture mag.), we also loosened our detection limit cutoff, adopting NB973 $\leq 24.9$ (total mag.) as another limiting magnitude. This increased the number of objects satisfying criterion (1) by 17, while still no object fell into criterion (2). However, this sample might be contaminated by spurious objects, such as sky residuals and noise due to fringing, that might not have been removed perfectly at the time of image reduction. Hence, we visually inspected all the broadband and narrowband images of each color-selected object and kept only those appearing to have condensed and relatively bright cores, excluding those having only diffuse faint shapes with no cores.
More specifically, we removed objects that appear artificial, such as connected bad pixels, tails of saturated pixels from bright stars, and noise with discrete, dismembered shapes or pieces of disconnected pixels with fairly large fluxes. As a result, we were left with one object (hereafter called IOK-2 as in @06IOK). The images of IOK-1 and -2 and their photometric properties are shown in Figure \[MS\_Poststamp\_IOK1-5\] and Table \[Photo-property\], respectively. The color-magnitude diagram ($z'-$ NB973 vs. NB973) of IOK-1 and -2, as well as all the objects detected down to NB973 $=24.5$ (total mag.), is plotted in Figure \[MS\_CMD\]. Their two-color diagram is also shown in Figure \[MS\_2Color\_Diagram\].

Possibility of Objects with Weak NB973-excess Being $z=7$ LAEs\[NB973\_EXCESS\_OBS\]
------------------------------------------------------------------------------------

As @Tani05 did in selecting their candidate $z=6.6$ LAEs in order not to miss faint targets, we also investigated the possibility of objects with a weak excess of $1.0 > z'-$ NB973 $> 3\sigma$ being $z=7$ LAEs, even though such objects do not have the expected colors of $z=7$ LAEs predicted by the stellar population synthesis model in § \[COLOR\_CRITERIA\] and Figure \[MS\_2Color\_Diagram\]. We define the color criterion of such weak NB973-excess objects as:

- $B, V, R<3\sigma$, $i'-z'>1.3$, $1.0>z'-$ NB973 $>3\sigma$

As mentioned in § \[COLOR\_CRITERIA\], NB973 is located at the red edge of the $z'$ band. This could cause criterion (3) to pick up interlopers such as $z=6.2$–6.8 LAEs/LBGs (as Figure \[MS\_2Color\_Diagram\] predicts), $z=1$–3 extremely red objects (EROs), whose continua have rest frame 4000Å Balmer breaks that result in an NB973-excess against $z'$, or M/L/T type red cool dwarf stars, whose SEDs can have steep slopes around the NB973 bandpass. Such objects should be distinguished from $z=7$ LAEs if they are detected in NB973 as well as $z'$.
From the photometry alone, it is difficult to tell if the criterion (3) objects are EROs or dwarfs. However, it is possible to say whether the objects are galaxies at $z=7$ or not, which is more important for our study. According to the predicted colors of model galaxies in Figure \[MS\_2Color\_Diagram\], criterion (3) should select $z=6.2$–6.8 LAEs/LBGs, not $z=7$ ones. However, as some (though not many) of the $z=5.7$ LAEs spectroscopically identified by @Shima06 do not satisfy their color selection criteria computed using SED models, objects satisfying our criterion (3), which reside near the border of criteria (1) and (2), could be $z=7$ LAEs. We found two objects that fall into criterion (3) (hereafter referred to as Obj-4, the brighter of the two in $2''$ aperture NB973 magnitude, and Obj-5). Their colors, images and photometric properties are shown in Figures \[MS\_2Color\_Diagram\], \[MS\_Poststamp\_IOK1-5\], \[MS\_CMD\] and Table \[Photo-property\]. If they are LAEs, their redshifts can be further constrained by using the NB816 and NB921 images. As seen in Figure \[MS\_Poststamp\_IOK1-5\], Obj-4 is detected in $i'$, NB816, $z'$, and NB921 as well as NB973, but does not show any significant excess in NB816 against $i'$ or in NB921 with respect to $z'$, although it displays an NB973-excess greater than $3\sigma$ against $z'$. Therefore, it is neither a $z=5.65$–5.75 LAE nor a $z=6.5$–6.6 one. Since it is clearly detected in NB816, a waveband well shortward of $z=6.7$–7 Ly$\alpha$ emission, Obj-4 could be a $z=6.2$–6.4 LAE or LBG. On the other hand, Obj-5 is detected in $z'$ and NB921 as well as NB973, but does not show a significant excess in NB921 with respect to $z'$ and thus is not a $z=6.5$–6.6 LAE. Also, the detection in NB921 rules out the possibility of a $z=6.7$–7 LAE, since the flux shortward of Ly$\alpha$ should be close to zero.
Though it displays an excess of $z'-$ NB973 $>3\sigma$, its $i'-z'$ color is very similar to that of a $z\sim 5.7$ LAE, which is predicted not to produce any NB973-excess. However, it is not detected in the NB816 image and thus is not a $z\sim 5.7$ LAE. Hence, Obj-5 could be a LAE or LBG at $z=6.2$–6.4. As mentioned earlier, Obj-4 and -5 can be EROs or dwarfs. However, we have confirmed that all the objects with weak ($>3\sigma$) NB973-excesses (i.e., Obj-4 and -5) cannot be $z=7$ LAEs, and thus we need not worry about missing any faint $z=7$ LAE candidates.

Possibility of IOK-1 and IOK-2 Being Variable Objects \[variable\]
------------------------------------------------------------------

As the selection criterion (1) derived in § \[COLOR\_CRITERIA\] shows, the most probable $z=7$ LAE candidates are imaged only in the NB973 waveband and are not detected in any of the other filters. Since the NB973 image was taken 1–2 years after the $BVRi'z'$ images of the SDF had been obtained, the sources bright only in NB973 could be variable objects, such as supernovae and active galactic nuclei (AGNs), that happened to brighten during our NB973 imaging observation. Therefore, we investigated how many objects could be such variables. In other words, this corresponds to the number of objects that were fainter than our detection limit NB973 $=24.9$ $(5\sigma)$ at some epoch but that could become brighter than it at another epoch. Since there are not enough data in the $z'$ and NB973 bands for statistics of variables, we instead used $i'$-band images taken at several separate epochs [@Kuma07] for the best possible (if somewhat rough) estimate we can make. First, we calculated the mean color of $i'-$ NB973 over the range NB973 $=$ 22–25, which is $<i'-$ NB973$> = 0.33$, for the purpose of a rough conversion of NB973 into $i'$ magnitude. Using it, NB973 $=24.9$ corresponds to $i'=0.33+24.9=25.23$.
Since the detection limit of the SDF $i'$-band image ($i'=$ 26.85 at $5\sigma$) is considerably deeper, the number counts of objects fainter than our NB973 detection limit, corresponding to $i'=25.23$, can be securely obtained down to $i'=$ 26.85. The number counts per 0.5 mag bin, as well as the magnitude increments needed to exceed NB973 $=24.9$ in brightness and be detected in NB973, are shown in Table \[N\_vs\_i\]. Since we were extrapolating the object number counts in NB973 down to NB973 $=26.5$ using the $i'$-band object number counts down to $i'= 26.85$ $(5\sigma)$ and $<i'-$ NB973$> = 0.33$, we also checked how similar the number counts in NB973 and in $i'-0.33$ are to each other, as shown in Figure \[MS\_i\_Ncount\] and Table \[N\_vs\_NB973\_or\_i\]. Since the $i'-0.33$ number counts are slightly larger (by a factor of $\times 1.1$–1.2 per bin), our calculation of the number of variables can only be a slight overestimate. Note that the detection completeness of $i'$ and NB973 is not corrected for in their number counts. This can be the cause of the smaller counts in NB973 than in $i'$ toward our detection limit NB973 $=24.9$. We use in our calculation four $i'$ images of a part of the SDF ($\sim 71$% of the total area) taken at four separate epochs: 4 March 2005, 30 April 2003, 11 April 2002 and 24 April 2001 [@Kuma07]. The numbers of variable objects $N_v(\Delta i')$ that increased their $i'$ magnitudes by $\Delta i'$ over the periods 2003–2005 and 2001–2002 were counted in each magnitude $\Delta i'$ bin (matched to the $\Delta m$ bins in Table \[N\_vs\_i\]), as shown in Table \[N\_Variables\]. In the 2003 and 2005 images, $\sim 70,000$ and 80,000 objects were detected down to their limiting magnitudes of $i'=26.3$ and 26.6 ($5\sigma$, $2''$ aperture), respectively. Similarly, in the 2001 and 2002 images, $\sim 50,000$ and 70,000 objects were detected down to $i'=25.9$ and 26.2 (also $5\sigma$, $2''$ aperture), respectively.
Thus, taking the averages, we roughly assumed that $N_{obs}=75,000$ and 60,000 objects were detected in 2003–2005 and 2001–2002, respectively, and divided the numbers of variables $N_v(\Delta i')$ by these numbers $N_{obs}$ to obtain the probabilities $P(\Delta i')$ of finding variables with a brightness increase of $\Delta m = \Delta i'$ in the SDF down to our detection limit. Finally, multiplying the probabilities by the number counts of $i'$-detected objects $N(\Delta m)$ in Table \[N\_vs\_i\] and summing them all up, the number of variables that became brighter than NB973 $=24.9$ came out to be $\sim 9$–10. Note that since a magnitude increase of $0 < \Delta m\leq 0.1$ is a very small change that cannot be distinguished from the photometric errors of the NB973 and $i'$ magnitude measurements, which are also of order up to $\sim 0.1$, we ignored the number of variables in the $\Delta m=0$–0.1 bin in the summation. So far, we have considered only the variables that increased their magnitudes over the two epochs and have not treated those that decreased. If we roughly assume that their numbers are approximately the same, the number of possible variables could be about one half of that estimated above, i.e. $\sim 4.5$–5. Again, this number might be a slight overestimate, by a factor of $1.1$–1.2, since for our extrapolation we used the $i'$-band number counts instead of the smaller NB973 ones, as seen in Figure \[MS\_i\_Ncount\] and Table \[N\_vs\_NB973\_or\_i\]. Correcting for this factor, we estimate that the number of variables would be $\sim3.8$–4.5. This number is slightly different from that reported in @06IOK since more elaborate calculations were used here. The estimated number of variables indicates that we cannot completely reject the possibility that the narrowband excesses of IOK-1 and IOK-2 are due to object variability. To securely reveal their identities, follow-up spectroscopy is required.
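The bookkeeping behind this estimate is a weighted sum over brightening bins, $N_{\rm var}=\sum_{\Delta m} P(\Delta m)\,N(\Delta m)$ with $P(\Delta m)=N_v(\Delta m)/N_{obs}$. A minimal sketch follows; the bin values are hypothetical placeholders, not the actual entries of Tables \[N\_vs\_i\] and \[N\_Variables\].

```python
# Sketch of the variable-object estimate: N_var = sum over bins of
# P(dm) * N(dm), where P(dm) = N_v(dm) / N_obs is the per-object probability
# of brightening by dm between epochs, and N(dm) is the count of objects
# that would cross the NB973 = 24.9 limit given that brightening.
# All bin values below are hypothetical placeholders, NOT the paper's data.
n_obs = 75000                # objects detected in an epoch pair (from the text)

# (dm_low, dm_high, N_v(dm) variables seen, N(dm) objects needing that dm)
bins = [
    (0.1, 0.5, 30, 4000),
    (0.5, 1.0, 10, 6000),
    (1.0, 1.5, 3, 9000),
]

n_var = sum((n_v / n_obs) * n_src for _, _, n_v, n_src in bins)
# Bins with dm <= 0.1 are excluded, as in the text, because such changes
# are indistinguishable from ~0.1 mag photometric errors.
```

With the placeholder numbers this yields $N_{\rm var} \approx 2.8$; the paper's tables give $\sim 9$–10 before halving for the unmodeled dimming variables.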
Spectroscopy
============

To confirm the reality of our candidate LAEs, IOK-1 and IOK-2, selected by the color selection criteria in § \[COLOR\_CRITERIA\], we carried out optical spectroscopy of them during 2005–2006 using the Faint Object Camera And Spectrograph [@kashik02 FOCAS] on Subaru. The observation status is summarized in Table \[Spec-Status\]. An Echelle grism (175 lines mm$^{-1}$, resolution $\simeq 1600$) with the $z'$ filter and a $0.''8$ slit was used to obtain spectra of 30-min exposure each, dithered along the slit by $\pm1''$. The spectrum of a spectrophotometric standard, one of Feige 34, Feige 110 or BD+28$^{\circ}$4211 [@Oke90; @Ham94], was also obtained each night and used for flux calibration. The observation data reduction and analysis were all performed in the same manner as in @06IOK.

IOK-1, a $z=6.96$ Ly$\alpha$ emitter \[IOK-1Spec\]
--------------------------------------------------

We identified IOK-1, the brighter of the two $z=7$ LAE candidates, as a $z=6.96$ LAE. The details of the spectroscopic analysis of this object were reported in @06IOK. We measured the skewness and weighted skewness of the Ly$\alpha$ emission line in the IOK-1 spectrum and obtained $S=0.558\pm 0.023$ and $S_w=9.46\pm 0.39$ Å, respectively. See @Shima06 and @kashik06 for the definitions of $S$ and $S_w$. These values show that the line is quite asymmetric and support its identification as Ly$\alpha$ emission. Indeed, our $S_w$ value for IOK-1 is comparable to the average weighted skewness of $z=5.7$ and 6.6 LAEs (calculated from the data in @Shima06 and @kashik06), $<S_w^{z=5.7}>=7.43\pm 1.47$Å and $<S_w^{z=6.6}>=7.31\pm 1.51$Å, respectively. The Ly$\alpha$ line flux, $F(\rm Ly\alpha)$, the Ly$\alpha$ line luminosity, $L(\rm Ly\alpha)$, the corresponding star formation rate, $SFR(\rm Ly\alpha)$, as well as other spectroscopic properties of the Ly$\alpha$ emission line of IOK-1 are summarized in Table \[Spec-Property\].
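The skewness $S$ used here is the standard dimensionless third standardized moment of the flux-weighted wavelength distribution (the weighted skewness $S_w$ additionally scales $S$ by a line-width measure; see @Shima06 and @kashik06 for the exact definitions). A sketch on a synthetic, red-skewed profile — the profile is illustrative only, not IOK-1's measured spectrum:

```python
import math

# Dimensionless skewness of a line profile: third standardized moment of
# the flux-weighted wavelength distribution. A sharp blue cutoff with an
# extended red wing, as expected for IGM-absorbed Ly-alpha, gives S > 0.
def skewness(wavelengths, fluxes):
    total = sum(fluxes)
    mean = sum(w * f for w, f in zip(wavelengths, fluxes)) / total
    var = sum(f * (w - mean) ** 2 for w, f in zip(wavelengths, fluxes)) / total
    third = sum(f * (w - mean) ** 3 for w, f in zip(wavelengths, fluxes)) / total
    return third / var ** 1.5

# Synthetic Ly-alpha-like profile (illustrative): zero flux on the blue
# side, exponentially decaying red wing.
waves = [9750.0 + 0.5 * i for i in range(40)]
profile = [0.0] * 8 + [math.exp(-0.25 * i) for i in range(32)]
s = skewness(waves, profile)   # positive: red-skewed, as for Ly-alpha
```

A symmetric (e.g. Gaussian) profile would give $S \approx 0$, which is why a clearly positive $S$ helps discriminate Ly$\alpha$ from symmetric low-redshift interloper lines.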
To estimate $SFR(\rm Ly\alpha)$, we use the following relation, derived from Kennicutt’s equation [@Kenicutt98] with case B recombination theory [@Brockle71]. $$SFR({\rm Ly\alpha}) = 9.1 \times 10^{-43} L({\rm Ly\alpha}) M_{\odot} {\rm yr}^{-1} \label{L-to-SFR_conversion}$$ In addition, we estimate the UV continuum flux, $F(\rm UV)$, by simply subtracting the Ly$\alpha$ emission line flux, $F(\rm Ly\alpha)$, measured in the spectrum from the NB973 total flux, $F_{\rm NB973}$, obtained by SExtractor photometry (MAG$_{-}$AUTO). $$F({\rm UV}) =F_{\rm NB973}-F(\rm Ly\alpha) \label{F_UV}$$ Then, this UV continuum flux can be converted into the UV continuum luminosity, $L_{\nu}(\rm UV)$, and the corresponding star formation rate, $SFR(\rm UV)$. To estimate $SFR(\rm UV)$, we use the following relation [@Kenicutt98; @madau98]. $$SFR({\rm UV}) = 1.4 \times 10^{-28} L_{\nu}({\rm UV}) M_{\odot} {\rm yr}^{-1} \label{L-to-SFR_conversion_UV}$$ The spectroscopic properties of the UV continuum of IOK-1 are listed in Table \[Spec-Property\_UV\].

IOK-2\[IOK-2-Spec\]
-------------------

As reported in @06IOK, although there appears to be an extremely weak emission-like flux at around 9750Å ($z=7.02$ if this is a Ly$\alpha$ line) within a small gap between OH sky lines, the 3 hours of integration of the IOK-2 spectroscopy (obtained on 4 May 2005 and 24 April 2006) were not deep enough to confirm whether it is real or spurious, since we had only S/N $\sim 2$ even when measured within the gap. This did not allow us to draw any firm conclusion about IOK-2. To reveal the true identity of this object, we carried out an additional 8 hours of follow-up spectroscopy with Subaru/FOCAS on 10 April 2007 (see Table \[Spec-Status\]). The seeing during this observing run was $0.''4$–$1''$ with clear sky. We combined the spectra taken on this night with those obtained in 2006 and 2005 to achieve a total of 11 hours of integration.
However, the sky-subtracted stacked spectrum shows neither the emission-like flux at 9750Å nor any other spectral features. We also combined only the spectra taken in 2007 and again could not find any emission lines. There are no signals that follow the dithering shifts among the 30-min spectral frames. This result indicates that the extremely weak emission-like flux at 9750Å seen in the 3-hour stacked spectrum made from the 2005 and 2006 frames is spurious. To see whether our 11-hour spectroscopy has reached the depth required to detect a Ly$\alpha$ emission, we compare the sky background RMS of the stacked spectrum with the Ly$\alpha$ flux calculated from the NB973 magnitude of IOK-2. If we assume that all of the flux in NB973 comes from the Ly$\alpha$ line at $z=7$ and adopt the total magnitude of NB973 $=24.74$ rather than the $2''$ aperture one, we obtain a line flux of $F^{\rm phot}({\rm Ly\alpha})=2.9\times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$. On the other hand, binning of 4 pixels (corresponding to 0.017Mpc at $z=7$) in the spatial direction is used to extract the one-dimensional spectrum. The sky RMS (in terms of flux density) is measured in this spectrum by calculating the variance in unbinned pixels along the dispersion direction within the wavelength range corresponding to the NB973 passband 9655–9855Å, and it is $3.0 \times 10^{-19}$ erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$. The FWHM of the Ly$\alpha$ line of, for example, $z=6.6$ LAEs varies from 5.5 to 14.6Å [@kashik06; @Tani05]. If we assume that the FWHM distribution of $z=7$ LAEs is similar, then we obtain a Ly$\alpha$ line flux of $F^{\rm spec}({\rm Ly\alpha})=(1.7$–$4.4)\times 10^{-18}$ erg s$^{-1}$ cm$^{-2}$. This is 6.6–$17 \times$ fainter than $F^{\rm phot}({\rm Ly\alpha})$, indicating that we have reached sufficient depth to detect the Ly$\alpha$ line if IOK-2 is a real LAE at $z=7$.
Likewise, even if we use the $2''$ aperture magnitude of NB973 $=25.51$, we obtain $F^{\rm phot}({\rm Ly\alpha})=1.4\times 10^{-17}$ erg s$^{-1}$ cm$^{-2}$, and $F^{\rm spec}({\rm Ly\alpha})$ is 3.2–$8.2 \times$ fainter than this. Furthermore, even if we assume that $\sim$ 68% of the NB973 flux comes from the Ly$\alpha$ line as @06IOK did, $F^{\rm spec}({\rm Ly\alpha})$ is still 4.5–12 (2.2–$5.6)\times$ fainter than $F^{\rm phot}({\rm Ly\alpha})$ if we use the NB973 total ($2''$ aperture) magnitude to calculate $F^{\rm phot}({\rm Ly\alpha})$. In all cases we have considered, our spectroscopy reached sufficient depth to detect a Ly$\alpha$ line. Hence, IOK-2 might not be a LAE. However, we should note that the residuals of the subtracted OH skylines around 9790Å in the 11-hour stacked spectrum are still locally strong ($\sim 13$% of the NB973 passband is contaminated), and a Ly$\alpha$ line could be masked if it is weak and redshifted there. If IOK-2 is not a LAE, the possible origin of the NB973 flux excess is an LBG at $z\sim7$, a low-$z$ ERO, a late-type star, a variable object, or a noise spike. In the first three cases, spectroscopy could show no signals in the spectrum if their continuum light is very faint. If IOK-2 is a variable object, which is quite possible as discussed in section \[variable\], it could have been fainter than our detection limit at the time of the follow-up spectroscopy. The possibility that IOK-2 is a noise spike in the NB973 image cannot be ruled out, though it is very low as described in @06IOK. Additional NB973 imaging of the SDF will be helpful to determine whether IOK-2 is a variable object or a noise spike. For the statistical study in the following sections, we hereafter consider IOK-1 to be the only $z=7$ LAE we have successfully identified, and IOK-2 not to be one.
Implications for the Reionization and Galaxy Evolution \[Re-and-GalEv\] ======================================================================= From $z\sim 6$ quasar Gunn-Peterson (GP) diagnostics, the neutral IGM fraction at this redshift was estimated to be $x_{\rm HI}^{z\sim 6.2}\sim 0.01$–0.04, and thus reionization is believed to have been completed by around this epoch [@Fan06]. This result is also supported by the spectral modeling analysis of the currently most distant GRB at $z\sim 6.3$ conducted by @Totani06, placing the constraint $0 \leq x_{\rm HI}^{z \sim 6.3} < 0.17$–0.6. On the other hand, the observed Ly$\alpha$ LFs of LAEs at $z\sim6$ and higher redshifts can be used to probe the epoch of reionization. The Ly$\alpha$ LF is expected to decline beyond $z\sim6$ due to a rapid change of neutral IGM and ionization states before and after the completion of the reionization [@HiSpa99; @RM01; @Hu02]. While the Ly$\alpha$ LF has been observed not to evolve at $z=3$–5.7 [@Ouch03; @Ajiki03; @Tran04; @vBrk05], it was recently found to decline as $L^*_{z=6.6} \sim L^*_{z=5.7} \times (0.4$–0.6) from $z=5.7$ to 6.6 in the SDF, suggesting that $x_{\rm HI}^{z=6.6}\leq 0.45$ [@kashik06]. Furthermore, we also found that the number density of $z=7$ LAEs is only 18–36% of the density at $z=6.6$ [@06IOK]. This series of decrements in densities might reflect the completion of reionization at around $z\sim6$, beyond which the fraction of neutral IGM hydrogen could possibly increase and attenuate the Ly$\alpha$ photons from LAEs. However, this interpretation was based on the assumption that there had been no evolution of the LAE population from $z=5.7$ to 7. The recent photometric study of $z\sim 6$ $i$-dropouts and $z\sim 7$–8 $z$-dropouts in the Hubble Ultra Deep Field (UDF) demonstrated that the galaxy number density decreases by a factor of $\sim 0.1$–0.2, suggesting rapid evolution of luminous galaxies between these epochs [@06BI].
In the following discussion, we re-evaluate the comparison of our LAE number and Ly$\alpha$ luminosity densities at $z=7$ with those at $z=5.7$ and 6.6, using the most up-to-date SDF data from @Shima06 and @kashik06. We also investigate the possibility of LAE galaxy evolution between $z=5.7$ and 7 and the degree to which it contributed to the number density deficit between these epochs. The Evolution of Ly$\alpha$ LF at $z \gtrsim 6$ \[Reionization\] ---------------------------------------------------------------- Figure \[MS\_Kobayashi-LyaLFs\] compares Ly$\alpha$ line LFs at $z=5.7$, 6.6 and 7 derived from the latest SDF data (that is, @Shima06, @kashik06, and @06IOK). In addition, Figure \[MS\_Madau\_PLOT\] shows the LAE number densities, $n_{\rm Ly\alpha}$, Ly$\alpha$ line luminosity densities, $\rho_{\rm Ly\alpha}$, and corresponding star formation rate densities, $SFRD_{\rm Ly\alpha}$, at $2.3 < z \leq 7$ down to our detection limit $L_{\rm limit}(\rm Ly\alpha) = 1.0 \times 10^{43}$ erg s$^{-1}$ (converted from NB973 $\leq 24.9$ ($5\sigma$) as @06IOK did). $\rho_{\rm Ly\alpha}$ and $SFRD_{\rm Ly\alpha}$ at $z=7$ are calculated using Ly$\alpha$ line luminosity estimated from the spectrum of IOK-1 and equation \[L-to-SFR\_conversion\]. The number and luminosity densities at $z < 7$ are obtained by integrating the best-fit Ly$\alpha$ Schechter LFs [@Schechter76] of LAEs down to our detection limit $L_{\rm limit}(\rm Ly\alpha)$ as follows. 
$$\phi(L)dL=\phi^*\left(\frac{L}{L^*}\right)^{\alpha}\exp\left(\frac{-L}{L^*}\right)d\left(\frac{L}{L^*}\right) \label{Scheter-LF}$$ $$n_{\rm Ly\alpha}=\int_{L_{\rm limit}}^{\infty}\phi(L)dL \label{integ-Scheter-LF}$$ $$\rho_{\rm Ly\alpha}=\int_{L_{\rm limit}}^{\infty}\phi(L)LdL \label{integ-L*Scheter-LF}$$ We adopt $(\log(\phi^* [$Mpc$^{-3}]), \log(L^* [$erg s$^{-1}]), \alpha)=(-3.44^{+0.20}_{-0.16}, 43.04^{+0.12}_{-0.14}, -1.5)$ and $(-2.88^{+0.24}_{-0.26}$, $42.60^{+0.12}_{-0.10}, -1.5)$ for the Ly$\alpha$ LFs of LAEs at $z=5.7$ and 6.6 in the SDF (taken from Table 3 in @kashik06), respectively. $\phi ^* = (22.0 \pm 12.0, 1.7 \pm 0.2) \times 10^{-4} {\rm Mpc}^{-3}$ and $L^*= (5.4 \pm 1.7, 10.9 \pm 3.3) \times 10^{42}$ erg s$^{-1}$ with $\alpha=-1.6$ are quoted for the $2.3<z<4.5$ [@vBrk05] and $z\sim4.5$ [@Dawson07] Ly$\alpha$ LFs, respectively. We also use $\phi ^* = (9.2^{+2.5}_{-2.1}, 3.4^{+1.0}_{-0.9}, 7.7^{+7.4}_{-3.9}) \times 10^{-4} {\rm Mpc}^{-3}$ and $L^*= (5.8^{+0.9}_{-0.7}, 10.2^{+1.8}_{-1.5}, 6.8^{+3.0}_{-2.1}) \times 10^{42}$ erg s$^{-1}$ with $\alpha=-1.5$ for the Ly$\alpha$ LFs of LAEs at $z=3.1$, 3.7 and 5.7 in $\sim 1.0$ deg$^2$ of the SXDS field [@Ouch07]. $\rho_{\rm Ly\alpha}$ is converted to $SFRD_{\rm Ly\alpha}$ by using equation \[L-to-SFR\_conversion\]. The uncertainties in the number and luminosity densities of $z=5.7$–7 LAEs in the SDF in Figures \[MS\_Kobayashi-LyaLFs\] and \[MS\_Madau\_PLOT\] (and likewise Figures \[MS\_UVLF\] and \[MS\_LyaLF\]) include cosmic variance and the Poissonian errors associated with small-number statistics. To estimate the cosmic variance $\sigma_v$ at $z=5.7$–7, we adopt a bias parameter $b=3.4\pm 1.8$ derived from the sample of 515 $z=5.7$ LAEs detected in $\sim1.0$ deg$^2$ of the SXDS field [@Seki04; @Ouch05], which is $\sim 5 \times$ wider than the SDF.
Then, applying the dark matter halo variances ($z, \sigma_{\rm DM}) = (5.7, 0.063), (6.6, 0.053)$ and (7.0, 0.044), obtained using an analytic cold dark matter model [@Sheth99; @Some04] and our SDF survey volumes, to $b=\sigma_{v}/\sigma_{\rm DM}$, we calculate the geometric mean of the cosmic variance at $z=5.7$–7, which is 8.4–27%. The maximum cosmic variance of $\sigma_v=27$% is included in the errors in Figures \[MS\_Kobayashi-LyaLFs\]–\[MS\_LyaLF\]. Similarly, the cosmic variance at $z=2.3$–4.5 is also calculated and included in Figure 8. The Poissonian errors for small-number statistics are estimated using Tables 1 and 2 in @Geh86. When the densities and errors are calculated for $z=5.7$, 6.6 and 7 LAEs in the SDF, the detection completenesses in the NB816, NB921 and NB973 images are also corrected (see Figure \[MS\_AVE\_NB973\_Completeness\] for the NB973 completeness). While it remains unchanged at $2.3 < z < 5.7$, the LAE number density decreases by a factor of $n_{\rm Ly\alpha}^{z=6.6}/n_{\rm Ly\alpha}^{z=5.7} \simeq 0.24$ from $z=5.7$ to 6.6 and $n_{\rm Ly\alpha}^{z=7}/n_{\rm Ly\alpha}^{z=6.6}\simeq 0.17$ from $z=6.6$ to 7. Similarly, the LAE Ly$\alpha$ luminosity density declines by factors of $\rho_{\rm Ly\alpha}^{z=6.6}/\rho_{\rm Ly\alpha}^{z=5.7} \simeq 0.21$ and $\rho_{\rm Ly\alpha}^{z=7}/\rho_{\rm Ly\alpha}^{z=6.6} \simeq 0.15$. If we assume that the LAE population does not evolve from $z=7$ to 5.7, this density deficit might reflect an increase in neutral IGM hydrogen with redshift. However, the density decline might also possibly be ascribed to the evolution of the LAE population. If the number of LAEs having luminosities fainter than our SDF detection limits drastically increases from $z=5.7$ to 7, this can certainly affect our estimations of $n_{\rm Ly\alpha}$ and $\rho_{\rm Ly\alpha}$. Hence, the Ly$\alpha$ LF alone cannot resolve this degeneracy between the reionization and galaxy evolution effects.
To address this issue, the rest-frame UV continuum luminosity function (UVLF) of LAEs can be used to extract the galaxy evolution effect alone, since it is not suppressed by neutral hydrogen. @kashik06 have compared the UVLFs (rest frame $\sim1255$Å at $z=6.6$ and $\sim1350$Å at $z=5.7$) of LAEs in the SDF and another field also imaged by Suprime-Cam and found that the UVLF does not significantly change from $z=5.7$ to 6.6. This suggests that the density deficit between $z=5.7$ and 6.6 is not mainly caused by galaxy evolution. Thus, @kashik06 concluded that the reionization might have ended at around $5.7<z<6.6$, which supports the results from $z\sim6$ quasars and the GRB [@Fan06; @Totani06]. If this is also the case for $z=6.6$–7 LAEs, the further decline of the LAE density implies an increase in neutral hydrogen that attenuates Ly$\alpha$ photons and supports @kashik06’s result. Can the Ly$\alpha$ LF evolution be explained only by galaxy evolution? \[GalEvol\] ---------------------------------------------------------------------------------- We do not know whether the LAEs themselves evolve at $z=6.6$–7. If galaxy evolution occurs at $z=6.6$–7, the further decline of the LAE density at these epochs reflects it in addition to reionization. Hence, in this section, we investigate the possibility of LAE evolution from $z=6.6$ to 7 using three independent methods: (1) comparison of the UVLFs of $z=5.7$ and 6.6 LAEs with that of $z=7$ LAEs derived from our spectroscopic data of IOK-1, (2) estimation from the UVLF evolution of LBGs, and (3) application of an LAE evolution model constructed by @ktn07, based on a hierarchical clustering galaxy formation model [@ny04], to predict the expected change of the Ly$\alpha$ LF from $z=7$ to 5.7 due to galaxy evolution alone. ### Implications from UVLF of $z=7$ LAEs \[subsubSec\_UVLF\] First, we roughly estimate the UVLF of $z=7$ LAEs to see if there is any possible galaxy evolution from $z=6.6$.
We calculate the absolute UV magnitude $M_{\rm UV,1230}$ at rest frame 1230Å for IOK-1 from the UV continuum flux $F({\rm UV})$ obtained in \[IOK-1Spec\] using equation \[F\_UV\] and $F(\rm Ly\alpha)$ measured in the spectrum of IOK-1. That is, $$\begin{aligned} M_{\rm UV,1230} &=& m_{\rm UV,1230}-DM+2.5\log(1+z) \nonumber \\ &=& -2.5\log\left[\frac{\lambda^2}{c\Delta \lambda}F({\rm UV})\right]-48.6-DM+2.5\log(1+z) \label{AbsMag_UV_SPEC}\end{aligned}$$ where $m_{\rm UV,1230}$ is the UV apparent magnitude, $\lambda=1230(1+z)$ Å, and $\Delta \lambda$ is the wavelength range in which the UV continuum is covered by the NB973 passband, $\Delta \lambda=9855{\rm \AA}-(1+z)1216{\rm\AA}$. Also, $DM$ and $c$ are the distance modulus and the speed of light, respectively. Figure \[MS\_UVLF\] shows the UVLF of the $z=7$ LAE derived here together with those of $z=5.7$ and 6.6 LAEs. We ignore the subtle difference in the rest-frame UV wavelengths (rest frame $\sim1230$Å at $z=7.0$, $\sim1255$Å at $z=6.6$ and $\sim1350$Å at $z=5.7$), assuming that the LAEs have flat UV continua. Also, the detection completeness of the NB973 image is corrected using Figure \[MS\_AVE\_NB973\_Completeness\]. The UVLF implies that there is no galaxy evolution from $z=7$ to 6.6, and the density deficits of $n_{\rm Ly\alpha}$ and $\rho_{\rm Ly\alpha}$ between these epochs might be attributed mainly to the reionization. ### Estimation from the UVLF Evolution of LBGs \[UVLF-Yoshida\] Even though the $z=7$ UVLF derived from the SDF data suggests that LAEs do not evolve from $z=7$ to 6.6, it suffers from small statistics due to the relatively shallow detection limit in NB973 (equivalent to $L(\rm Ly\alpha) \geq 1.0\times 10^{43}$ erg s$^{-1}$). Therefore, we discuss the possibility of LAE galaxy evolution at $z=5.7$–7 using inferences from other independent methods, by which we try to obtain some helpful insights.
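For reference, the magnitude conversion of equation \[AbsMag\_UV\_SPEC\] above can be written as a short routine; this is only a sketch under our own naming, and the distance modulus $DM$ must be supplied from the adopted cosmology:

```python
import math

C_ANGSTROM = 2.998e18  # speed of light in Angstrom/s

def abs_mag_uv_1230(F_uv, z, DM):
    """Absolute UV magnitude at rest frame 1230 A (eq. AbsMag_UV_SPEC).

    F_uv : UV continuum flux in erg/s/cm^2 (eq. F_UV)
    z    : redshift
    DM   : distance modulus (cosmology-dependent input)
    """
    lam = 1230.0 * (1.0 + z)                      # observed wavelength [A]
    dlam = 9855.0 - (1.0 + z) * 1216.0            # NB973 coverage redward of Lya [A]
    f_nu = lam ** 2 / (C_ANGSTROM * dlam) * F_uv  # mean flux density [erg/s/cm^2/Hz]
    m_uv = -2.5 * math.log10(f_nu) - 48.6         # apparent AB magnitude
    return m_uv - DM + 2.5 * math.log10(1.0 + z)
```

By construction, doubling $F({\rm UV})$ brightens $M_{\rm UV,1230}$ by $2.5\log 2 \simeq 0.75$ mag.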
One possible way to estimate the LAE galaxy evolution at $z=5.7$–7 is inference from the evolution of the UVLF of high-$z$ LBGs, assuming LAEs and LBGs share a similar evolutionary history. We use the UVLF data from the recent observational studies of $z\sim 4$–8 LBGs conducted by @Yoshida06, @Bouw06 and @06BI. Their surveys, when combined together, form the deepest and widest imaging data, with the largest samples of all LBG surveys. Interestingly, @Yoshida06 combined their data of $z\sim4$ and 5 LBGs with those from lower-$z$ LBG surveys and the $z\sim6$ LBG ($i$-dropout) study by @Bouw06 and found clear evolution of the UVLF from $z\sim 6$ to 0, in which only the characteristic magnitude, $M_{\rm UV}^*$, changes significantly and almost linearly with redshift, while the normalization factor, $\phi^*$, and the faint-end slope, $\alpha$, tend to remain constant, as seen in Figure 22 of @Yoshida06. This trend of $M_{\rm UV}^*$, $\phi^*$ and $\alpha$ vs. $z$ continues up to $z\sim 7.4$ when we add $M_{\rm UV}^*=-19.5\pm0.6$ mag, $\phi=0.00202^{+0.00086}_{-0.00076}$ Mpc$^{-3}$ and $\alpha=-1.73$ (or $M_{\rm UV}^*=-18.75\pm0.6$ mag, $\phi=0.00218$ Mpc$^{-3}$ and $\alpha=-1.73$) of the first (or second) LBG UVLF at $z\sim7.4$ derived by @06BI. We estimate the change in $M_{\rm UV}^*$ between $z=5.7$ and $z=7$ from the $z$-dependence of $M_{\rm UV}^*$ at $z\sim 4$, 5, 6 and 7.4, assuming the correlation is linear with a slope of $\Delta M^*_{\rm UV}/\Delta z\simeq 0.47$ mag. As a result, $M^*_{\rm UV}$ is expected to become fainter by 0.6 mag, which corresponds to a luminosity of $L^{*,\rm expect}_{z=7} \simeq L^*_{z=5.7}\times 10^{-0.4\times0.6}\simeq L^*_{z=5.7}\times 0.58$.
Here, the relation between equation \[Scheter-LF\] and the Schechter LF in absolute magnitude form, $$\phi (M)dM=\frac{2}{5}\phi^*(\ln10)\left[10^{\frac{2}{5}(M^*-M)}\right]^{\alpha +1}\exp\left[-10^{\frac{2}{5}(M^*-M)}\right]dM \label{AbsMag_SchechterLF}$$ is used to convert $M^*_{\rm UV}$ to $L^*$. To infer the deficit by which the Ly$\alpha$ LF of LAEs decreases from $z=5.7$ to 7 due to their evolution alone, we now roughly assume that this UVLF evolution of the LBGs can also be applied to that of LAEs at $z=5.7$–7, and that the Ly$\alpha$ line luminosities of LAEs are simply proportional to their UV continuum luminosities, as Figure 15 in @Tani05 suggests. Based on this idea, we change $L^*_{z=5.7}$ of our best-fit Schechter Ly$\alpha$ LF at $z=5.7$ in exactly the same way (i.e., $\log(\phi^* [$Mpc$^{-3}])=-3.44^{+0.20}_{-0.16}$ and $\alpha=-1.5$ as in \[Reionization\], but $\log(L^*_{z=5.7} [$erg s$^{-1}])=43.04^{+0.12}_{-0.14}+\log0.58$ this time) to obtain the $z=7$ Ly$\alpha$ LF. This result is compared in Figure \[MS\_LyaLF\] with the actual observational data of IOK-1. The inferred Ly$\alpha$ LF at $z=7$ does not really agree with the one calculated from the spectrum of IOK-1. Our density deficit between $z=5.7$ and 7 LAEs cannot be explained by the galaxy evolution factor estimated here alone. Integrations of the inferred Ly$\alpha$ LF using equations \[integ-Scheter-LF\] and \[integ-L\*Scheter-LF\] down to $\log L(\rm Ly\alpha)=43.05$, which is IOK-1’s Ly$\alpha$ line luminosity, yield $n^{{\rm expect},z=7}_{\rm Ly\alpha} \simeq 1.5\times 10^{-5}$ Mpc$^{-3}$ and $\rho^{{\rm expect},z=7}_{\rm Ly\alpha} \simeq 2.3\times 10^{38}$ erg s$^{-1}$Mpc$^{-3}$, respectively. Our LAE number and Ly$\alpha$ line luminosity densities at $z=7$ are $n^{z=7}_{\rm Ly\alpha} \simeq (3.6_{-2.8}^{+7.3})\times 10^{-6}$ Mpc$^{-3}$ and $\rho^{z=7}_{\rm Ly\alpha} \simeq (4.1_{-3.1}^{+8.2})\times 10^{37}$ erg s$^{-1}$Mpc$^{-3}$, based on the IOK-1 data alone, respectively.
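The integrations in equations \[integ-Scheter-LF\] and \[integ-L\*Scheter-LF\] are elementary to check numerically. A sketch (simple midpoint rule; function names are ours), using the $z=5.7$ SDF best-fit parameters with $L^*$ dimmed by the factor 0.58 inferred above, yields densities of the same order as the quoted expected values:

```python
import math

def schechter_densities(phi_star, L_star, alpha, L_limit,
                        n_steps=200000, x_max=60.0):
    """Integrate the Schechter LF upward from L_limit (midpoint rule).

    Returns (n, rho): number density [Mpc^-3] and Lya luminosity
    density [erg s^-1 Mpc^-3]; the exponential cutoff makes the
    tail beyond x = L/L* = x_max negligible."""
    x_lo = L_limit / L_star
    dx = (x_max - x_lo) / n_steps
    n = rho = 0.0
    for i in range(n_steps):
        x = x_lo + (i + 0.5) * dx
        w = phi_star * x ** alpha * math.exp(-x) * dx
        n += w                 # eq. (integ-Scheter-LF)
        rho += w * x * L_star  # eq. (integ-L*Scheter-LF)
    return n, rho

# z=5.7 SDF parameters with L* scaled by 0.58, integrated down to
# log L(Lya) = 43.05 (IOK-1's line luminosity):
n_exp, rho_exp = schechter_densities(10.0 ** (-3.44),
                                     0.58 * 10.0 ** 43.04,
                                     -1.5, 10.0 ** 43.05)
```

The crude quadrature and the rounding of the input parameters account for the residual difference from the quoted $n^{{\rm expect},z=7}_{\rm Ly\alpha}$ and $\rho^{{\rm expect},z=7}_{\rm Ly\alpha}$.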
Therefore, the density deficits of $n^{z=7}_{\rm Ly\alpha}/n^{{\rm expect},z=7}_{\rm Ly\alpha} \simeq 0.24_{-0.19}^{+0.49}$ and $\rho^{z=7}_{\rm Ly\alpha}/\rho^{{\rm expect},z=7}_{\rm Ly\alpha} \simeq 0.18_{-0.13}^{+0.36}$ might be due to the attenuation of Ly$\alpha$ photons by the neutral IGM that existed during reionization. In order for the inferred Ly$\alpha$ LF at $z=7$ to have the same number density as the observed Ly$\alpha$ LF at $z=7$ (i.e., $n^{{\rm expect},z=7}_{{\rm Ly\alpha}}=n^{z=7}_{\rm Ly\alpha}$), we have to change the characteristic luminosity $L^{*,\rm expect}_{z=7}$ by a factor of $0.65_{-0.18}^{+0.24}$. This factor might reflect the deficit by which the Ly$\alpha$ LF of LAEs decreases from $z=5.7$ to 7 due to the attenuation of the Ly$\alpha$ lines of LAEs by the increasing neutral IGM during the reionization beyond $z\sim6$. We can refer to such a deficit factor due to the neutral IGM attenuation as the IGM transmission to Ly$\alpha$ photons, $T_{\rm Ly\alpha}^{\rm IGM}$. In the case of our discussion so far, this can be regarded as the ratio of the Ly$\alpha$ line luminosities of LAEs in environments with some neutral IGM fraction $x_{\rm HI}$ still remaining and with no neutral IGM (i.e., $x_{\rm HI}=0$), $T_{\rm Ly\alpha}^{\rm IGM} = L^{x_{\rm HI}}({\rm Ly\alpha})/L^{x_{\rm HI}=0}({\rm Ly\alpha})$. Once we know $T_{\rm Ly\alpha}^{\rm IGM}$, the neutral IGM fraction at $z=7$, $x_{\rm HI}^{z=7}$, can be estimated. However, the calculation of $x_{\rm HI}$ from $T_{\rm Ly\alpha}^{\rm IGM}$ is not a simple issue and is dependent on theoretical models. We will discuss it in \[neutral-IGM\]. Similarly, the $z$-dependence of $M_{\rm UV}^*$, $\Delta M^*_{\rm UV}/\Delta z\simeq 0.47$ mag, predicts $\Delta M^*_{\rm UV}\simeq 0.42$ for $\Delta z=6.6-5.7$ and thus $L^{*,{\rm expect}}_{z=6.6} \simeq L^*_{z=5.7}\times 10^{-0.4\times0.42}\simeq L^*_{z=5.7}\times 0.68$ due to LAE galaxy evolution from $z=6.6$ to 5.7.
However, @kashik06 found that the Ly$\alpha$ LF declines in such a way that $L^*_{z=6.6} \sim L^*_{z=5.7}\times (0.4$–0.6) from $z=5.7$ to 6.6, regarding their photometric and spectroscopic LFs as the upper and lower limits of the $z=6.6$ Ly$\alpha$ LF, respectively. Hence, the attenuation of Ly$\alpha$ photons by the neutral IGM at $z=6.6$ is $T_{\rm Ly\alpha}^{\rm IGM}=L^*_{z=6.6}/L^{*,{\rm expect}}_{z=6.6} \simeq 0.59$–0.88. The decrease in the Ly$\alpha$ LF from $z=5.7$ to 6.6 and 7 cannot be explained only by the evolution of LAEs inferred from that of LBGs. This result implies that the remaining deficits could come from the attenuation of Ly$\alpha$ lines by neutral IGM. If this is the case, the Ly$\alpha$ line tends to be more attenuated at higher redshift, as $T_{\rm Ly\alpha}^{\rm IGM}\simeq 0.59$–0.88 at $z=6.6$ and $0.65_{-0.18}^{+0.24}$ at $z=7$, implying that the neutral IGM fraction $x_{\rm HI}$ increases with redshift beyond $z\sim6$, as derived in \[neutral-IGM\]. However, note that this result is based on the assumption that LAEs evolve in the same way as LBGs do. This might not necessarily be true. Although the LAEs are believed to be closely related to LBGs, and many candidate LBGs at high redshift have been identified as LAEs by spectroscopy, the link between these two populations has not been clearly understood yet, and they might have followed different evolutionary histories. ### Application of A Galaxy Evolution Model \[Model\_GalEvol\] In the previous sections, we tried to estimate the intrinsic evolution of the Ly$\alpha$ LF of LAEs from the UVLF evolution of LAEs and LBGs, with an implicit assumption that the evolutions of the Ly$\alpha$ and UV luminosities are similar to each other. However, this assumption may not be true in reality, and hence our argument will be strengthened if we can show that these are indeed similar in a realistic theoretical model of LAEs. For this purpose, we use a recent model for LAE evolution constructed by @ktn07 [hereafter K07].
This model is an extension of one of the latest hierarchical clustering models of galaxy formation [@ny04], in which the merger histories of dark matter haloes are modeled based on structure formation theory and the star formation processes in dark haloes are calculated to predict the photometric properties of galaxies. This model can reproduce most of the observed photometric, kinematic, structural, and chemical properties of local galaxies, as well as high-$z$ LBGs [@kashikawa06]. K07 extended this model without changing the original model parameters, introducing new modeling only for the escape fraction of Ly$\alpha$ photons ($f_{\rm esc}^{\rm Ly \alpha}$) from galaxies based on physical considerations. Specifically, the dust extinction of Ly$\alpha$ photons and the effect of galaxy-scale outflows are newly taken into account. This is the first model for LAEs based on a hierarchical galaxy formation model in which $f_{\rm esc}^{\rm Ly \alpha}$ is not a universal constant but depends on the physical conditions of galaxies. This model can reproduce the observed Ly$\alpha$ LFs of LAEs at $z \sim $ 3–6, and predicts that galaxies under strong galaxy-scale outflows with $f_{\rm esc}^{\rm Ly \alpha} \sim 1$ dominate the bright end of the Ly$\alpha$ LFs, which is also consistent with observations. It should be noted here that $f_{\rm esc}^{\rm Ly\alpha}$ in the K07 model can vary from galaxy to galaxy and may evolve within a galaxy, and hence even if the Ly$\alpha$ photon production rate is proportional to the star formation rate, the evolutions of the Ly$\alpha$ and UV LFs could differ from each other. The K07 predictions of the Ly$\alpha$ and UV LFs of LAEs at $z=5.7,~6.6$ and $7$, assuming $T_{\rm Ly\alpha}^{\rm IGM} = 1$, are presented in Figures \[MS\_Kobayashi-LyaLFs\] and \[MS\_UVLF\], respectively.
The evolutions of the number density and Ly$\alpha$ luminosity density of LAEs above a threshold Ly$\alpha$ luminosity predicted by this model are shown in Figure \[MS\_Madau\_PLOT\]. As demonstrated in K07, the deficit of the observed LAEs compared with the model prediction of the Ly$\alpha$ LF is clear at $z \gtrsim 6$, as seen in Figure \[MS\_Kobayashi-LyaLFs\], while this model precisely reproduces the observed evolution at $z \sim$3–6. On the other hand, the degree of evolution of the UVLF of LAEs predicted by the model is similar to that observed in the same redshift range. The fact that the model prediction is consistent with the UVLF evolution but not with the Ly$\alpha$ LF evolution then implies that the evolution of the observed Ly$\alpha$ LF at $z \gtrsim 6$ could be caused by the IGM absorption. The discrepancy can be resolved if we adopt a simple prescription of luminosity-independent IGM transmission: $T_\mathrm{Ly\alpha}^\mathrm{IGM} = 0.62$–0.78 at $z=6.6$ and $T_\mathrm{Ly\alpha}^\mathrm{IGM} = 0.40$–0.64 at $z=7$. Implications for Reionization \[neutral-IGM\] --------------------------------------------- In the previous section we have shown that the evolution of the Ly$\alpha$ LF at $z \gtrsim 6$ could likely be a result of Ly$\alpha$ photon absorption by neutral IGM, implying a significant evolution of the IGM neutral fraction beyond $z \gtrsim 6$. In order to obtain some quantitative implications for reionization, however, we must translate the estimates of $T_\mathrm{Ly\alpha}^\mathrm{IGM}$ obtained in the previous section into the IGM neutral fraction, $x_{\rm HI}$. This procedure is not straightforward because the translation is generally model dependent (e.g., Santos 2004; Dijkstra et al. 2007). Here, we apply the dynamic model with a reasonable velocity shift of the Ly$\alpha$ line by $360~\mathrm{km~s^{-1}}$ redward of the systemic velocity [@San04].
The attenuation factor of the Ly$\alpha$ luminosity is given as a function of $x_{\rm HI}$, and the reason for the choice of this model is that it predicts no attenuation when $x_{\rm HI} = 0$. Note that some other models of Santos (2004) predict a significant attenuation even in the case of $x_{\rm HI}=0$, due to the neutral gas associated with the host haloes of LAEs. Choosing this particular model then means that we ascribe the evolution of the Ly$\alpha$ LF at $z \gtrsim 6$ only to the absorption by pure IGM. We consider this a reasonable assumption, since observations indicate that the escape fraction of Ly$\alpha$ photons is about unity at least for LAEs at $z \sim 3$ [@Gawiser06]. If LAEs at $z \sim 7$ are a population similar to the low-$z$ LAEs, we do not expect significant absorption by neutral gas physically associated with LAEs. On the other hand, it should also be kept in mind that if $z\sim 7$ LAEs are surrounded by a significant amount of nearby neutral gas that is not present for low-$z$ LAEs, the estimate of $x_{\rm HI}$ as an average over the IGM in the universe could become lower than those derived here. In section \[UVLF-Yoshida\], we obtained $T_\mathrm{Ly\alpha}^\mathrm{IGM}=0.59$–0.88 and $0.65_{-0.18}^{+0.24}$ at $z=6.6$ and 7.0, respectively. Application of the @San04 model yields neutral fractions of $x_{\rm HI}^{z=6.6}\sim 0.12$–0.42 and $x_{\rm HI}^{z=7}\sim 0.12$–0.54. If we use the K07 model in section \[Model\_GalEvol\] to estimate $x_{\rm HI}$, we find $x^{z=6.6}_\mathrm{HI} \sim 0.24$–0.36 from $T_\mathrm{Ly\alpha}^\mathrm{IGM}=0.62$–0.78 at $z=6.6$, and $x^{z=7}_\mathrm{HI} \sim 0.32$–0.64 from $T_\mathrm{Ly\alpha}^\mathrm{IGM} =0.40$–0.64 at $z =7$. The neutral fractions $x_{\rm HI}$ at $z=6.6$ and 7 estimated from the two independent methods are consistent with each other and tend to increase with redshift at $z>6$.
This series of $x_{\rm HI}$ values at $z=6.6$ and 7, combined with the $x_{\rm HI}^{z\sim6.2} \sim 0.01$–0.04 and $x_{\rm HI}^{z\sim6.3} < 0.17$–0.6 derived from quasar GP tests and GRB spectral analysis [@Fan06; @Totani06], supports the picture that the reionization was completed at $z\sim6$, beyond which it was still in progress with a larger neutral fraction of IGM hydrogen that evolved with redshift. The neutral IGM fractions obtained by independent methods are summarized in Table \[Neutral\_Frction\]. However, our constraint suggests that the neutral IGM persists at the $\sim50$% level as late as $z=7$ and would contradict the [*WMAP*]{} conclusion that the reionization epoch was $z = 10.9^{+2.7}_{-2.3}$ at the $>95$% confidence level. Our results could be reconciled with [*WMAP*]{} only if there is a statistical fluke (one time in 20, a 95% confidence range is wrong) or the reionization happened twice (e.g., Cen 2003), so that much of the observed electron scattering happens at $z \gg 7$ and the universe then becomes partially neutral again, allowing us to observe neutral gas at $z=7$. Finally, we again emphasize that these quantitative results are model-dependent and should be interpreted with caution. However, the decrease in the Ly$\alpha$ LF of LAEs beyond $z \sim 6$ is more significant than expected from UVLF evolution or a theoretical model, and hence the physical status of the IGM might be changing at $z \gtrsim 6$. Summary and Conclusion ====================== We have conducted a narrowband NB973 survey of $z=7$ LAEs, established color criteria to select $z=7$ LAEs, and found two candidates down to $L(\rm Ly\alpha) \geq 1.0 \times 10^{43}$ erg s$^{-1}$ (5$\sigma$). By follow-up spectroscopy, the brighter of the two was identified as a $z=6.96$ LAE, while we could confirm neither Ly$\alpha$ emission nor any other features in the spectrum of the other candidate despite the sufficiently long integration time.
The number and Ly$\alpha$ luminosity densities at $z=7$ obtained by this study were compared to those at $z=5.7$ and 6.6 derived from the latest samples obtained by the SDF surveys [@Shima06; @kashik06] down to our detection limit, and a clear evolution of density deficits with increasing redshift was observed, such that $n_{\rm Ly\alpha}^{z=6.6}/n_{\rm Ly\alpha}^{z=5.7} \simeq 0.24$ and $n_{\rm Ly\alpha}^{z=7}/n_{\rm Ly\alpha}^{z=6.6}\simeq 0.17 $; $\rho_{\rm Ly\alpha}^{z=6.6}/\rho_{\rm Ly\alpha}^{z=5.7} \simeq 0.21$ and $\rho_{\rm Ly\alpha}^{z=7}/\rho_{\rm Ly\alpha}^{z=6.6} \simeq 0.15$. If we assume that the LAE population does not evolve from $z=7$ to 5.7, this series of density deficits could reflect an increase in neutral IGM hydrogen with redshift beyond $z\sim 6$. To see if LAEs evolve from $z=7$ to 6.6, we also compared the UVLF of the $z=7$ LAE with those of $z=5.7$ and 6.6 LAEs derived from the SDF LAE surveys. No decrease in the number density of the UVLF was observed from $z=5.7$ through 6.6 to 7. Since the UV photons are not attenuated by neutral IGM and the UVLF is only sensitive to galaxy evolution, our result suggests that the deficits in $n_{\rm Ly\alpha}$ and $\rho_{\rm Ly\alpha}$ might reflect the cosmic reionization and that the LAE population does not significantly evolve at $z=5.7$–7. However, the UVLF at $z=7$ suffers from small statistics at this time, and the interpretation is not robust. Hence, the amount by which the LAE evolution affects the density deficits among $z=7$, 6.6 and 5.7 was investigated by inference from the UVLFs of $z<8$ LBGs [@Yoshida06; @06BI; @Bouw06], based on the assumptions that LAEs would have evolved in the same way as LBGs and that the Ly$\alpha$ line luminosities of LAEs are proportional to their UV continuum luminosities. Even after the galaxy evolution was taken into account, there still remained some density deficits among these epochs.
If we attribute the deficits to the attenuation of Ly$\alpha$ photons by the neutral IGM, the neutral fractions of the Universe at $z=6.6$ and 7 are estimated to be 0.12–0.42 and 0.12–0.54, respectively. This result, combined with the neutral fractions derived from $z\sim 6$ quasars and a $z\sim6.3$ GRB, supports the completion of the reionization at $z\sim 6$ and the possible evolution of neutral IGM beyond this redshift. Again, this result is based on the assumption that LAEs would have evolved in the same way as LBGs, which might not necessarily be true. Therefore, we furthermore used an LAE evolution model (the K07 model) constructed from a hierarchical clustering scenario to reproduce the Ly$\alpha$ LFs at $z=5.7$, 6.6 and 7 in the case of a transparent IGM ($x_{\rm HI}=0$) and compared them with the Ly$\alpha$ LFs obtained by the latest SDF surveys [@Shima06; @kashik06; @06IOK]. The observed data at $z=6.6$ and 7 showed smaller number and luminosity densities than those predicted by the model, suggesting that there still remains the possibility of incomplete reionization at those epochs. The neutral fractions at $z=6.6$ and 7, estimated from the decline of the LFs by the reionization factors alone after the galaxy evolution effects had been corrected, are $x_{\rm HI}^{z=6.6}\sim 0.24$–0.36 and $x_{\rm HI}^{z=7}\sim 0.32$–0.64, respectively, also consistent with the quasar and GRB results. The results regarding the $z=7$ LAE presented here are based on the relatively shallow depth of the NB973 imaging, small sample statistics, and only the SDF and optical imaging data. From these data alone, we cannot infer the trend of the density deficit between $z=5.7$ and 7 in fainter LAE populations and in other regions of the Universe, changes in the physical properties of LAEs associated with their evolution between these epochs, or typical spectroscopic properties of $z=7$ LAEs such as the direct detection of the attenuation of the Ly$\alpha$ line.
Also, without infrared data, the estimate of the UV continuum flux, and thus of the UVLF, is relatively rough. Deeper NB973 imaging of the SDF and of other fields for which infrared images are available, as well as follow-up spectroscopy of newly detected LAE candidates, will provide the answers and more precise results in future studies. We greatly appreciate the engineers of Asahi Spectra Co., Ltd. for developing the NB973 filter that led us to the discovery of the $z=6.96$ LAE. We are deeply grateful to the staff of the Subaru Telescope for their kind support in making our observations successful. We express our gratitude to the SDF team for obtaining and providing us with invaluable imaging data. K.O. acknowledges the fellowship support from the Japan Society for the Promotion of Science and the Special Postdoctoral Researchers Program at RIKEN. Ajiki, M., et al. 2003, , 126, 2091 Bertin, E., & Arnouts, S. 1996, , 117, 393 Bouwens, R.J., & Illingworth, G.D. 2006, , 443, 189 Bouwens, R.J., Illingworth, G.D., Blakeslee, J.P., & Franx, M. 2006, , 653, 53 Brocklehurst, M. 1971, , 153, 471 Bruzual, A.G., & Charlot, S. 2003, , 344, 1000 Cen, R. 2003, , 591, 12 Dawson, S., Rhoads, J.E., Malhotra, S., Stern, D., Wang, J., Dey, A., Spinrad, H., & Jannuzi, B.T. 2007, preprint (arXiv:0707.4182) Dijkstra, M., Lidz, A., & Wyithe, J.S.B. 2007, , 377, 1175 Fan, X., et al. 2006, , 132, 117 Finkelstein, S.L., Rhoads, J.E., Malhotra, S., Grogin, N., & Wang, J. 2007, preprint (arXiv:0708.4226) Furlanetto, S.R., Zaldarriaga, M., & Hernquist, L. , 365, 1012 Gawiser, E., et al. 2006, , 642, L13 Gehrels, N. 1986, , 303, 336 Gunn, J.E., & Peterson, B.A. 1965, , 142, 1633 Haiman, Z., & Spaans, M. 1999, , 518, 138 Hamuy, M., Suntzeff, N.B., Heathcote, S.R., Walker, A.R., Gigoux, P., & Phillips, M.M. 1994, , 106, 566 Hu, E.M., Cowie, L.L., McMahon, R.G., Capak, P., Iwamuro, F., Kneib, J.-P., Maihara, T., & Motohara, K.
2002, , 568, L75 Hu, E.M., Cowie, L.L., Capak, P., McMahon, R.G., Hayashino, T., & Komiyama, Y. 2004, , 127, 563 Iye, M. et al. 2006, , 443, 186 Kashikawa, N. et al. 2002, , 54, 819 Kashikawa, N. et al. 2004, , 56, 1011 Kashikawa, N. et al. 2006, , 637, 631 Kashikawa, N. et al. 2006, , 648, 7 Kennicutt, R.C., Jr. 1998, , 36, 189 Kobayashi, M.A.R., Totani, T., & Nagashima, M. 2007, preprint (arXiv:0705.4349) Kodaira, K. et al. 2003, , 55, L17 Kodama, T., & Arimoto, N. 1997, , 320, 41 Lehnert, M.D., & Bremer, M. 2003, , 593, 630 Madau, P. 1995, , 441, 18 Madau, P., Pozzetti, L., & Dickinson, M. 1998, , 498, 106 Malhotra, S., & Rhoads, J.E. 2004, , 617, L5 McQuinn, M., Hernquist, L., Zaldarriaga, M., & Dutta, S. , 381, 75 Miyazaki, S. et al. 2002, , 54, 833 Morokuma, T. et al. 2007, in preparation Nagashima, M., & Yoshii, Y. 2004, , 610, 23 Oke, J.B. 1990, , 99, 1621 Ouchi, M. et al. 2003, , 582, 60 Ouchi, M. et al. 2004, , 611, 660 Ouchi, M. et al. 2005, , 620, L1 Ouchi, M. et al. 2007, preprint (arXiv:0707.3161) Page, L. et al. 2007, , 170, 335 Pirzkal, N., Malhotra, S., Rhoads, J.E., & Xu, C. 2007, , 667, 49 Rhoads, J.E., & Malhotra, S. 2001, , 563, L5 Santos, M.R. 2004, , 349, 1137 Schechter, P. 1976, , 203, 297 Sekiguchi, K. et al. 2004, Ap&SS, 301, 169 Sheth, R.K., & Tormen, G. 1999, , 308, 119 Shimasaku, K. et al. 2006, , 58, 313 Somerville, R.S., Lee, K., Ferguson, H.C., Gardner, J.P., Moustakas, L.A., & Giavalisco, M. 2004, , 600, L171 Spergel, D.N. et al. 2007, , 170, 377 Taniguchi, Y. et al. 2006, , 57, 165 Totani, T., Kawai, N., Kosugi, G., Aoki, K., Yamada, T., Iye, M., Ohta, K., & Hattori, T. 2006, , 58, 485 Tran, K.-V.H., et al. 2004, , 612, L89 van Breukelen, C., Jarvis, M.J., & Venemans, B.P. 2005, , 359, 895 Yagi, M., Kashikawa, N., Sekiguchi, M., Doi, M., Yasuda, N., Shimasaku, K., & Okamura, S. 2002, , 123, 66 Yoshida, M. et al.
2006, , 653, 988

  Object   Coordinate               $i'$       NB816      $z'$       NB921      NB973   NB973 (total)   Criteria
  -------- ------------------------ ---------- ---------- ---------- ---------- ------- --------------- ----------
  IOK-1    13:23:59.8 +27:24:55.8   $>$27.84   $>$27.04   $>$27.04   $>$26.96   24.60   24.40           (1)
  IOK-2    13:25:32.9 +27:31:44.7   $>$27.84   $>$27.04   $>$27.04   $>$26.96   25.51   24.74           (1)
  IOK-3    13:24:10.8 +27:19:28.1   $>$27.84   $>$27.04   26.26      25.08      24.87   24.57           —
  Obj-4    13:25:09.1 +27:32:16.8   27.14      26.77      25.75      25.51      24.97   24.85           (3)
  Obj-5    13:23:45.8 +27:32:51.4   $>$27.84   $>$27.04   25.76      25.44      25.10   24.74           (3)

  $i'^a$        NB973$^a$          $\Delta m^b$   $N(i')=N(\Delta m)$$^c$
  ------------- ------------------ -------------- -------------------------
  25.23         24.9 $(5\sigma)$   0.0            —
  25.23–25.33   24.9–25.0          0.0–0.1        2297
  25.33–25.83   25.0–25.5          0.1–0.6        13589
  25.83–26.33   25.5–26.0          0.6–1.1        16012
  26.33–26.83   26.0–26.5          1.1–1.6        17367

  NB973 $=i'-0.33$$^a$   $i'$$^b$      $N($NB973)$^c$   $N(i')$$^d$
  ---------------------- ------------- ---------------- -------------
  22.5–23.0              22.83–23.33   2599             2588
  23.0–23.5              23.33–23.83   3808             3953
  23.5–24.0              23.83–24.33   5230             5697
  24.0–24.5              24.33–24.83   7093             8121
  24.5–25.0              24.83–25.33   8800             10660
  25.0–25.5              25.33–25.83   —                13589
  25.5–26.0              25.83–26.33   —                16012
  26.0–26.5              26.33–26.83   —                17367

  $\Delta i'^a$ (AB mag)   2003–2005   2001–2002   2003–2005            2001–2002            2003–2005   2001–2002
  ------------------------ ----------- ----------- -------------------- -------------------- ----------- -----------
  0.0                      —           —           —                    —                    —           —
  0.0–0.1                  250         409         $3.3\times10^{-3}$   $6.8\times10^{-3}$   7.6$^c$     15.6$^c$
  0.1–0.6                  52          37          $6.9\times10^{-4}$   $6.2\times10^{-4}$   9.4         8.4
  0.6–1.1                  1           2           $1.3\times10^{-5}$   $3.3\times10^{-5}$   0.21        0.53
  1.1–1.6                  1           1           $1.3\times10^{-5}$   $1.7\times10^{-5}$   0.23        0.30
                                                                                             9.8$^d$     9.2$^d$

  Object      date (HST)         seeing ($''$)      exposure$^a$ (seconds)   FOCAS Mask
  ----------- ------------------ ------------------ ------------------------ ------------
  IOK-1$^b$   14, 15 May 2005    0.5–0.7, 0.9–1.0   10800                    MOS-1
              1 June 2005        0.6–0.8            3600
              24 April 2006      0.9–1.5            16200
  IOK-2$^c$   14, 15 May 2005    0.5–0.7, 0.9–1.0   3600                     MOS-4
              24 April 2006      0.9–1.1            7200
              19, 21 June 2006   1.0–2.0, 1.0–2.0   28430$^d$
              10 April 2007      0.4–1.0            28800
  IOK-3       14, 15 May 2005    0.5–0.7, 0.9–1.0   3600                     MOS-2
              1 June 2005        0.6–0.8            5400
  Obj-4$^e$   14, 15 May 2005    0.5–0.7, 0.9–1.0   1800                     MOS-5
  Obj-5$^e$   14, 15 May 2005    0.5–0.7, 0.9–1.0   3600                     MOS-3

  Object   $z$    $F(\rm Ly\alpha)$ ($10^{-17}$ erg s$^{-1}$ cm$^{-2}$)   $L(\rm Ly\alpha)$ ($10^{43}$ erg s$^{-1}$)   $SFR(\rm Ly\alpha)$ (M$_{\odot}$ yr$^{-1}$)   FWHM (Å) (km s$^{-1}$)   Sw (Å)          S/N
  -------- ------ ------------------------------------------------------ -------------------------------------------- --------------------------------------------- ------------------------ --------------- -----
  IOK-1    6.96   2.00                                                   1.13                                         10.24                                         13 (403)                 $9.46\pm0.39$   5.5

  Object   $z$    $L_{\nu}(\rm UV)$ ($10^{29}$ erg s$^{-1}$ Hz$^{-1}$)   $SFR(\rm UV)$ (M$_{\odot}$ yr$^{-1}$)
  -------- ------ ------------------------------------------------------ ---------------------------------------
  IOK-1    6.96   2.58                                                   36.1

  Method                                      $z\sim 6$   $z\sim6.3$     $z=6.6$           $z=7.0$
  ------------------------------------------- ----------- -------------- ----------------- -----------------
  (1) Quasar GP test$^a$                      0.01–0.04   —              —                 —
  (2) Gamma Ray Burst$^b$                     —           $<0.17$–0.60   —                 —
  (3) Ly$\alpha$ LF$^c$                       —           —              $<0.45$           —
  (4) Ly$\alpha$ LF and LBG UVLF$^d$          —           —              $\sim0.12$–0.42   $\sim0.12$–0.54
  (5) Model and observed Ly$\alpha$ LFs$^f$   —           —              $\sim0.24$–0.36   $\sim0.32$–0.64
--- author: - 'Tom Levy$^1$,' - 'Yaron Oz$^1$,' - 'Avia Raviv-Moshe$^1$' title: '$\mathcal{N}=2$ Liouville SCFT in Four Dimensions' --- Introduction ============ Liouville field theory in two dimensions [@Polyakov:1981rd] has been extensively studied for almost three decades and is recognized as a basic building block in quantum field and string theories. Higher dimensional Liouville field theories appeared recently as a basic ingredient of the inertial range field theory of fluid turbulence proposed in [@Oz:2017ihc],[@Oz:2018mdq], with the Liouville field being a Nambu-Goldstone boson, the Liouville potential being the local fluid energy dissipation and the Liouville background charge $Q$ related to the fluid intermittency. The higher even-dimensional bosonic Liouville field theories were studied in [@Levy:2018bdc] and their four-dimensional $\mathcal{N}=1$ superconformal version in [@Levy:2018xpu]. The aim of this work is to construct and study $\mathcal{N}=2$ Liouville superconformal field theory (SCFT) in four dimensions. The Liouville superfield is an $\mathcal{N}=2$ chiral superfield with sixteen bosonic and sixteen fermionic component fields. Its lowest component is a log-correlated complex scalar field whose real part carries a background charge. The resulting theory is non-unitary with a continuous spectrum of scaling dimensions. It localizes semi-classically on solutions that describe curved superspaces with a constant complex supersymmetric $\mathcal{Q}$-curvature. We study its quantum dynamics on the supersymmetric 4-sphere and show that the classical background charge is not corrected quantum mechanically. This is similar to the non-renormalization of the background charge in the four-dimensional $\mathcal{N}=1$ superconformal case [@Levy:2018xpu], and in two-dimensional $\mathcal{N}=2$ Liouville SCFT [@Distler:1989nt; @Mussardo:1988av]. 
We calculate the super-Weyl anomaly coefficients and find that $c$ vanishes, while $a$ is negative and depends on the background charge. We derive an integral expression for the correlation functions of superfield vertex operators in $\mathcal{N}=2$ superspace and analyze them in the semiclassical approximation by using a quaternionic formalism for the $\mathcal{N}=2$ superconformal algebra. In this formalism one can express all superconformal transformations as quaternionic super-Möbius transformations, i.e. as quaternionic linear fractional transformations. We will use this in order to derive selection rules for the correlation function of vertex operators in the semiclassical limit. The paper is organized as follows. In section \[sec:Class\] we will analyze the classical $\mathcal{N}=2$ Liouville field theory. In subsection \[subsec:TheModel\] we will construct the action of the theory, verify its super-Weyl invariance and derive the classical field equations. In subsection \[subsec:LiouvilleOnSphere\] we will study the theory on the $\mathcal{N}=2$ supermanifold extension of the 4-sphere where one sees the background charge in the boundary conditions of the Liouville superfield. We will write the action in components, check its R-symmetries and construct a solution to the classical field equations. In section \[sec:QuantumSection\] we will study the quantum aspects of the theory. We will show that the background charge is not corrected quantum mechanically and we will calculate the super-Weyl anomaly coefficients. In section \[sec:CorrelationFunctionCG\] we will study the correlation function of superfield vertex operators and derive an integral expression for them by considering the relation to free fields and four-dimensional Coulomb gas integrals. 
In section \[sec:SemiClassicalLimit\] we will consider the correlation functions of vertex operators in the semiclassical limit and introduce the quaternionic formalism for $\mathcal{N}=2$ superconformal transformations. We will conclude with a summary and outlook in section \[sec:SummaryAndOutlook\]. A summary of notations and details of various calculations are given in the appendices. $\mathcal{N}=2$ Liouville SCFT in Four Dimensions {#sec:Class} ================================================= In this section we will construct and study the classical aspects of $\mathcal{N}=2$ Liouville superconformal field theory in four dimensions in an $\mathrm{SU}(2)$ superspace. We will mainly use the notations of [@Kuzenko:2013gva] and [@Kuzenko:2008ep] for the curved supergravity geometrical quantities and the conventions of [@Butter:2013lta] for flat-space notation and the reduction to component fields. These are briefly summarized in appendix \[app:Notations\]. The supergravity notations are in Lorentzian signature $(-,+,+,+)$. We use the analytic continuation in [@Festuccia:2011ws] for the Euclidean signature of the supersymmetric 4-sphere. The Classical Theory {#subsec:TheModel} -------------------- ### The Action Four-dimensional $\mathcal{N}=2$ Liouville SCFT is given by the action: $$\label{eq:ActionMain} S_L(\Phi, \bar{\Phi}) = \frac{1}{64\pi^2}\int d^4x\, d^4\Theta\, \mathcal{E} \left(\Phi \bar{\Delta}\bar{\Phi}+ 2Q\hat{\mathcal{Q}}\Phi + 64\pi^2\mu e^{2b\Phi} \right)+\text{h.c.} \ ,$$ where $\mathcal{E}$ is the chiral density [@deRoo:1980mm]. The Liouville superfield $\Phi$ is an $\mathcal{N}=2$ chiral superfield [@deRoo:1980mm],[@deWit:1980lyi], which satisfies the conditions: $$\label{eq:ChiralCond} \bar{\mathcal{D}}^{\dot{\alpha}}_i\Phi =0, \qquad i=\{1,2\} \ ,$$ where $\mathcal{D}_A = (\mathcal{D}_a, \mathcal{D}_{\alpha}^{i}, \bar{\mathcal{D}}_{i}^{\dot{\alpha}})$ is the covariant superderivative [@Kuzenko:2013gva].
The chiral multiplet $\Phi$ consists of 16 + 16 bosonic and fermionic component fields: a complex scalar field $A$, a chiral spinor doublet $\Psi_i$, a complex symmetric scalar $B_{ij}=B_{ji}$, an antisymmetric tensor $F_{ab}=-F_{ba}$, a chiral spinor doublet $\Lambda_i$ and a complex scalar field $C$. The dimensionless parameters in the action are the background charge $Q$, the cosmological constant $\mu$ (which we take to be complex) and $b$. We will denote $S_{C.G.} = \left. S_L \right|_{\mu = 0}$, which describes a free SCFT that we call four-dimensional $\mathcal{N}=2$ Coulomb gas SCFT. In the action, $\bar{\Delta}$ denotes the chiral projection operator [@Kuzenko:2008ry], [@Muller:1989uhj]: $$\bar{\Delta} = \frac{1}{96}\left( \left(\bar{\mathcal{D}}^{ij}+16\bar{S}^{ij}\right)\bar{\mathcal{D}}_{ij} - \left(\mathcal{\bar{D}}^{\dot{\alpha}\dot{\beta}}-16\bar{Y}^{\dot{\alpha}\dot{\beta}}\right)\bar{\mathcal{D}}_{\dot{\alpha}\dot{\beta}}\right) \ ,$$ where $\bar{\mathcal{D}}_{ij} \equiv \bar{\mathcal{D}}_{\dot{\alpha}(i}\bar{\mathcal{D}}^{\dot{\alpha}}_{j)}$ and $\bar{\mathcal{D}}^{\dot{\alpha}\dot{\beta}} = \bar{\mathcal{D}}^{(\dot{\alpha}}_{i}\bar{\mathcal{D}}^{\dot{\beta})i}$. The superfields $S_{ij}=S_{ji}$ and $Y_{\alpha\beta}=Y_{\beta\alpha}$ are complex symmetric torsion tensors (see e.g. [@Kuzenko:2013gva] and [@Kuzenko:2008ep]). Their complex conjugated partners are denoted by $\bar{S}_{ij}\equiv \overline{S^{ij}}$ and $\bar{Y}^{\dot{\alpha}\dot{\beta}}\equiv \overline{Y^{\alpha\beta}}$. Acting with the chiral projection operator on any scalar superfield $U$ transforms it into a chiral superfield, and we have: $$\label{eq:ChiralProjOp} \bar{\mathcal{D}}^{\dot{\alpha}}_{i} \bar{\Delta}U = 0, \quad \int d^4x \,d^4\theta\, d^4\bar{\theta}\, E \, U = \int d^4x\,d^4\Theta\,\mathcal{E}\, \bar{\Delta}U \ ,$$ where $E = {\mathrm{sdet}}\left(E_{A}^{\;\;\;M}\right)$ is the curved superspace integration measure.
Using this identity, we can write the kinetic part of the action simply as $\int d^4x \,d^4\theta\, d^4\bar{\theta}\, E \, \Phi\bar{\Phi}$. The chiral projection operator provides an $\mathcal{N}=2$ supersymmetric extension of the conformally covariant fourth-order Paneitz differential operator [@Pan]. In the Liouville action, $\hat{\mathcal{Q}}$ is an $\mathcal{N}=2$ supersymmetric extension of the conformally covariant $\mathcal{Q}$-curvature [@Q] given by [@Butter:2013lta]: $$\hat{\mathcal{Q}} \equiv \frac{1}{2}\bar{Y}_{\dot{\alpha}\dot{\beta}}\bar{Y}^{\dot{\alpha}\dot{\beta}}+\frac{1}{2}\bar{S}^{ij}\bar{S}_{ij}+\frac{1}{12}\bar{\mathcal{D}}^{ij}\bar{S}_{ij}.$$ This supersymmetric extension is chiral and satisfies $\bar{\mathcal{D}}_{i}^{\dot{\alpha}} \hat{\mathcal{Q}} = 0$. ### Super-Weyl Invariance The objects $\bar{\Delta}, \hat{\mathcal{Q}}$ transform covariantly under $\mathcal{N}=2$ super-Weyl transformations [@Kuzenko:2008ep], [@Butter:2013lta], [@Kuzenko:2013gva], which are parameterized by a chiral superfield $\sigma$: $$\label{eq:DeltaQSuperWeyl} \delta_{\sigma} \bar{\Delta} = -2\sigma \bar{\Delta}, \quad \delta_{\sigma} \hat{\mathcal{Q}} = -2\sigma \hat{\mathcal{Q}} + \bar{\Delta}\bar{\sigma} \ .$$ This transformation law is the super-Weyl generalization of the Weyl transformations of the Paneitz operator and the $\mathcal{Q}$-curvature.
Under super-Weyl transformations the Liouville superfield transforms according to $$\label{eq:PhiSWeylTrans} \Phi \to \Phi-Q\sigma, \qquad \bar{\Phi} \to \bar{\Phi}-Q\bar{\sigma},$$ which, together with , ensures that the action is classically invariant under super-Weyl transformations, $$S_L(\Phi,\bar{\Phi}) \to S_L(\Phi,\bar{\Phi}) - S_{C.G.}(Q\sigma, Q\bar{\sigma}),$$ under the condition that the background charge takes its classical value: $$\label{eq:QchargeClass} Q = \frac{1}{b}.$$ Similar to the $\mathcal{N}=1$ four-dimensional Liouville SCFT studied in [@Levy:2018xpu], the supersymmetric $\mathcal{Q}$-curvature is related to a topological functional given by [@Butter:2013lta]: $$\label{eq:GuassBonnetFunc} \int d^4x \,d^4 \Theta \, \mathcal{E} \, \left( 2\hat{\mathcal{Q}}-W^{\alpha\beta}W_{\alpha\beta}\right) = 64\pi^2 (\chi + i p) \ ,$$ where the complex symmetric torsion tensor $W_{\alpha\beta} = W_{\beta\alpha}$ is the supersymmetric extension of the Weyl tensor. It transforms homogeneously under super-Weyl transformations [@Kuzenko:2008ep]: $$\delta_{\sigma}W_{\alpha\beta} = -\sigma W_{\alpha\beta}.$$ The integral is a superconformal invariant and, when setting to zero the gravitino and the auxiliary fields of the supergravity multiplet, it is a topological invariant of the resulting curved space, where $\chi$ and $p$ are the Euler characteristic and the first Pontryagin invariant, respectively. In [@Kuzenko:2013gva] a Wess-Zumino action for spontaneously broken $\mathcal{N}=2$ superconformal symmetry, whose variation reproduces the $\mathcal{N}=2$ super-Weyl anomaly, was introduced. The Goldstone supermultiplet in [@Kuzenko:2013gva] was identified with a reduced chiral superfield [@deRoo:1980mm] (i.e. the chiral field strength of an $\mathcal{N}=2$ vector multiplet) containing the dilaton and the axion among its components.
This Wess-Zumino action is related to our action by replacing the Liouville chiral superfield $\Phi$ with $\frac{1}{b}\log \mathcal{Z}$, where $\mathcal{Z}$ is a reduced chiral superfield satisfying, in addition to , the constraints [@deRoo:1980mm] $ \left(\mathcal{D}^{ij}+4S^{ij}\right)\mathcal{Z} = \left(\bar{\mathcal{D}}^{ij}+4\bar{S}^{ij}\right)\bar{\mathcal{Z}} $. There are two significant differences between the two actions: first, the Liouville interaction and the kinetic term 'switch' roles between the two models. This change is technically convenient for the rest of this paper. Second, the number of degrees of freedom carried by the multiplets is different: the reduced chiral superfield carries $8+8$ (bosonic and fermionic) degrees of freedom, while the chiral superfield used in our action carries $16+16$. Yet, these two models yield the same super-Weyl variation. ### Field Equations The field equations derived from the action read: $$\bar{\Delta}\bar{\Phi}+Q\hat{\mathcal{Q}} = -64\pi^2\mu be^{2b\bar{\Phi}} \ , \qquad \Delta\Phi+Q\hat{\bar{\mathcal{Q}}}=-64\pi^2\bar{\mu}be^{2b\Phi}.$$ Using the finite form of the transformation (see [@Kuzenko:2008qw] for the finite super-Weyl transformation of the relevant curved superspace geometrical quantities) one sees that solutions to the field equations describe super-Weyl parameters $\sigma = b\Phi$ and $\bar{\sigma}=b\bar{\Phi}$ that transform the background to a supermanifold with a constant complex super-$\mathcal{Q}$-curvature: $$\hat{\mathcal{Q}} = -64\pi^2\mu b^2, \qquad \bar{\hat{\mathcal{Q}}}=-64\pi^2\bar{\mu} b^2.$$ This result is similar to the one found in [@Levy:2018xpu] for the $\mathcal{N}=1$ case.
Note, however, that the field equations in the non-supersymmetric four-dimensional Liouville field theory studied in [@Levy:2018bdc] have a real positive cosmological constant parameter $\mu$, and their solutions can be viewed as Weyl factors that transform the background curved space into one of constant negative $\mathcal{Q}$-curvature. Liouville $\mathcal{N}=2$ SCFT on $S^4$ {#subsec:LiouvilleOnSphere} --------------------------------------- ### Background Charge We define Liouville SCFT on the $\mathcal{N}=2$ supermanifold extension of $S^4$ [@Butter:2015tra]. Using the fact that this supermanifold is superconformally flat, i.e. $W_{\alpha\beta} =0$, and the topological invariant , where for the supersymmetric $S^4$ we have $\chi=2, p=0$, we get that for a constant shift by $\phi_0$ of the Liouville superfield: $$\label{eq:ActionShiftBC} S_L(\Phi+\phi_0,\bar{\Phi}+\bar{\phi_0}) = S_L(\Phi,\bar{\Phi}) + 4Q\mathrm{Re}(\phi_0) \ .$$ Performing a singular super-Weyl transformation from supersymmetric $S^4$ to flat superspace and using the transformation law we find the following boundary conditions for the Liouville superfield: $$\label{eq:Boundary1Phi} \Phi = -2Q\log(|x|) + O(1), \quad |x|\to \infty \ .$$ Writing in component fields, one finds: $$\label{eq:BoundaryA1} \mathrm{Re}(A)=-2Q\log(|x|)+O(1), \quad |x| \to \infty,$$ while all other component fields approach a finite limit. Thus the real part of the lowest component $A$ of the Liouville superfield $\Phi$, i.e. $\mathrm{Re}(A)$, acquires a background charge $Q$.
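The $4Q\,\mathrm{Re}(\phi_0)$ term in the constant-shift relation can be traced to the background-charge part of the action. The following is a sketch of that contribution only, using $W_{\alpha\beta}=0$ and $(\chi,p)=(2,0)$ on the supersymmetric $S^4$; the shift also rescales the cosmological constant, $\mu\to\mu e^{2b\phi_0}$, which is left implicit here:

```latex
% Linear shift of the background-charge term under \Phi \to \Phi + \phi_0 (\phi_0 constant):
\delta S_L \supset \frac{2Q\phi_0}{64\pi^2}\int d^4x\, d^4\Theta\, \mathcal{E}\, \hat{\mathcal{Q}} + \text{h.c.}
% With W_{\alpha\beta} = 0 the topological functional gives
% \int d^4x\, d^4\Theta\, \mathcal{E}\, 2\hat{\mathcal{Q}} = 64\pi^2(\chi + ip), and \chi = 2, p = 0:
\delta S_L \supset Q\phi_0(\chi + ip) + \text{h.c.} = 2Q\phi_0 + 2Q\bar{\phi}_0 = 4Q\,\mathrm{Re}(\phi_0)\,.
```

The remaining $\phi_0$-dependent pieces are not tracked in this sketch; the quoted relation asserts that they do not contribute.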
### Action in Components, R-Symmetry and a Classical Solution The kinetic part of the Lagrangian in flat space, written in terms of component fields, is: $$\begin{aligned} \label{eq:KineticComponents} &\frac{1}{32\pi^2} \int d^4\theta d^4\bar{\theta} \Phi\bar{\Phi} = \\ &\frac{1}{32\pi^2}\left( 4\square A \square A^\dagger + CC^\dagger-\bar{B}_{ij}\square B^{ij} -8\partial_aF^{-ab}\partial^cF^+_{cb}+2\bar{\Psi}_i\square\slashed{\partial}\Psi^i -2\bar{\slashed{\partial}\Lambda_i}\Lambda^i\right) . \end{aligned}$$ Its bosonic parts can be found in [@deWit:2010za]. The interaction term in the action can be derived in terms of component fields by constructing the chiral superfield $e^{\Phi}$ in terms of the components of $\Phi$. For simplicity and for later use, we list here only the bosonic parts of the Liouville interaction, which yield: $$\begin{aligned} & \mu\int d^4xd^4\theta\mathcal{E}e^{2b\Phi}+h.c.=\\ &-\mu b\left(e^{2bA}C-\frac{b}{2}e^{2bA}B_{ij}B^{ij}+bF^{-ab}F^-_{ab}e^{4bA}\right)+h.c. \end{aligned}$$ Four-dimensional Liouville $\mathcal{N}=2$ SCFT is invariant under the $\mathrm{SU}(2)_R\times \mathrm{U}(1)_R$ R-symmetry group. As in the $\mathcal{N}=1$ case [@Levy:2018bdc], the $\mathrm{U}(1)_R$ symmetry is not a standard one (the interaction term, as well as the boundary condition, is not invariant under a standard $\mathrm{U}(1)$ transformation).
Rather, the Liouville superfield transforms in the affine representation under a non-compact $\mathrm{U}(1)_R$ symmetry: $$\label{eq:SuperspaceRSymm} \Phi'\left(e^{i\alpha/2}\theta,x_+\right) = \Phi(\theta,x_+)+\frac{i}{b}\alpha, \quad \; \; \bar{\Phi}'\left(e^{-i\alpha/2}\bar{\theta},x_-\right) = \bar{\Phi}\left(\bar{\theta},x_-\right)-\frac{i}{b}\alpha \ .$$ In terms of bosonic component fields, the transformation reads: $$A \to A +\frac{i}{b}\alpha,\quad B_{ij}\to e^{-i\alpha}B_{ij},\quad F^-_{ab} \to e^{-i\alpha}F^-_{ab},\quad C \to e^{-2i\alpha}C.$$ Under the $\mathrm{SU}(2)_R$ symmetry the Liouville chiral superfield transforms trivially. The fermions $\Psi^i$ and $\Lambda^i$ transform as doublets under the $\mathrm{SU}(2)_R$ transformation, while the symmetric complex field $B^{ij}$ transforms in the triplet representation of $\mathrm{SU}(2)_R$. All other component fields transform as singlets under $\mathrm{SU}(2)_R$. The classical equations of motion with vanishing fermions are: $$\label{eq:FieldEOM} \square^2A+8\pi^2\bar{\mu}b^3e^{2b\bar{A}}\bar{B}_{ij}\bar{B}^{ij}=0,$$ $$-\square B_{ij}+32\pi^2\bar{\mu}b^2 e^{2b\bar{A}}\bar{B}_{ij}=0,$$ $$C=32\pi^2\bar{\mu} b e^{2b\bar{A}},$$ $$\partial^c\partial_a F^{-ab}+8\pi^2\bar{\mu}bF^{+cb}e^{4b\bar{A}}=0.$$ Therefore, a possible solution to the classical equations of motion that respects the boundary condition is given by: $$\label{eq:SolToClassicalEOMRegularNotation} \begin{aligned} &A = -\frac{1}{b}\log{\left({4\pi^2|\mu|b^2}|x|^2+1\right)}, \qquad C= \frac{32\pi^2\bar{\mu} b}{\left({4\pi^2|\mu|b^2}|x|^2+1\right)^2},\\ &B_{ij} = \frac{4\pi i\sqrt{6\bar{\mu}}}{\left(4\pi^2{|\mu|b^2}|x|^2+1\right)}\delta_{ij}, \qquad \bar{B}_{ij} = -\frac{4\pi i\sqrt{6\mu}}{\left(4\pi^2{|\mu|b^2}|x|^2+1\right)}\delta_{ij}. \end{aligned}$$ All other fields vanish.
Quantum $\mathcal{N}=2$ Liouville SCFT in Four Dimensions {#sec:QuantumSection} ========================================================= In this section we study quantum aspects of the Liouville $\mathcal{N}=2$ SCFT in four dimensions. We show that the background charge is not corrected quantum mechanically and calculate the supersymmetric Weyl anomaly coefficients. We find that the $c$ anomaly coefficient vanishes, while the $a$ anomaly coefficient is negative and depends on the value of the background charge $Q$. Primary Vertex Operators and Background Charge Non-Renormalization ------------------------------------------------------------------ The free-field two-point function of the lowest component field reads: $$\left\langle A(x)\bar{A}(x') \right\rangle_{C.G.}=-\log(|x-x'|) \ ,$$ thus it is a log-correlated complex field. Consider the vertex operator: $$\label{eq:BosonicVertexOp} V_{\alpha\tilde{\alpha}} = e^{2\alpha A+2\tilde{\alpha}\bar{A}} = e^{2(\alpha+\tilde{\alpha})\mathrm{Re}(A) + 2i(\alpha-\tilde{\alpha})\mathrm{Im}(A)},$$ where $\alpha, \tilde{\alpha}$ are two independent complex numbers. When $\alpha = \tilde{\alpha}$ we denote $V_{\alpha}=V_{\alpha\tilde{\alpha}}$. The vertex operators have scaling dimensions: $$\label{eq:DimBosonicVertexOp} \Delta_{\alpha\tilde{\alpha}} = -4\alpha\tilde{\alpha}+2Q(\alpha+\tilde{\alpha}) \ .$$ Requiring the Liouville interaction to be a marginal operator implies: $$\Delta(e^{2bA}) = \Delta_{b,0} = 2 \ ,$$ and using the dimension formula we get the quantum value of the background charge: $$\label{eq:NonRenormQ} Q = \frac{1}{b} \ .$$ We see that the classical value of the background charge is not corrected quantum mechanically. Note that $\mathcal{N}=1$ Liouville SCFT in four dimensions [@Levy:2018xpu] and $\mathcal{N}=2$ Liouville SCFT in two dimensions [@Distler:1989nt; @Mussardo:1988av] exhibit a similar non-renormalization of the background charge.
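Explicitly, the marginality condition that fixes $Q$ follows by inserting $(\alpha,\tilde{\alpha})=(b,0)$ into the dimension formula for the vertex operators:

```latex
\Delta_{b,0} \;=\; -4\,b\cdot 0 \;+\; 2Q\,(b+0) \;=\; 2Qb \;\overset{!}{=}\; 2
\quad\Longrightarrow\quad Q = \frac{1}{b}\,.
```

The quartic term drops out because the chiral vertex operator has $\tilde{\alpha}=0$, which is why the classical value of $Q$ receives no quantum correction.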
We can also consider vertex operators in superspace: $$\label{eq:GeneralVertexInSuperspace} \mathcal{V}_{\alpha\tilde{\alpha}} = e^{2\alpha \Phi +2\tilde{\alpha}\bar{\Phi}}.$$ Its dimension is given by and its $U(1)_R$ charge reads: $$\label{eq:U1RGenVertexInSuperspace} w_{\alpha\tilde{\alpha}} = \frac{2}{b}(\alpha-\tilde{\alpha}).$$ The following operator is a chiral primary operator: $$\label{eq:ChiralVertexOp} \mathcal{V}_\alpha \equiv \mathcal{V}_{\alpha,0}= e^{2\alpha\Phi}.$$ Its dimension is equal to its $U(1)_R$ charge, and both are given by $\Delta_{\mathcal{V}_\alpha} = w_{_{\mathcal{V}_{\alpha}}} = \frac{2}{b}\alpha$. The free-field superspace propagators are given by: $$\label{eq:PropSuper01} \left<\Phi(x,\theta^i,\bar{\theta}_j)\Phi(x',\theta^{'i},\bar{\theta}^{'}_j)\right> = 0 \ ,$$ $$\label{eq:PropSuper02} \left<\bar{\Phi}(x,\theta^i,\bar{\theta}_j)\bar{\Phi}(x',\theta^{'i},\bar{\theta}^{'}_j)\right> = 0 \ ,$$ $$\label{eq:PropSuperfields} \left<\Phi(x,\theta^i,\bar{\theta}_j)\bar{\Phi}(x',\theta^{'i},\bar{\theta}^{'}_j)\right> = e^{\left(\theta\sigma^a\bar{\theta}+\theta'\sigma^a\bar{\theta}'-2\theta\sigma^a\bar{\theta}'\right)\partial_a^x}\left\langle A(x)\bar{A}(x') \right\rangle,$$ where $\theta\sigma^a\bar{\theta} = \theta^i\sigma^a\bar{\theta}_i= \theta^1\sigma^a\bar{\theta}_1+\theta^2\sigma^a\bar{\theta}_2$. For a detailed derivation of these propagators, see appendix \[app:SuperSpacePropagators\].
For future reference, in supersymmetric Coulomb gas theory the following identity holds: $$\label{eq:SuVerOpWickCon} \left<\mathcal{V}_{\alpha_1\tilde{\alpha}_1}(z_1)\cdots \mathcal{V}_{\alpha_N\tilde{\alpha}_N}(z_N)\right>_{C.G.} = \prod_{l\neq m}|z_l-\bar{z}_m|^{-4\alpha_l\tilde{\alpha}_m} \ ,$$ where $|z_l - \bar{z}_m|$ is the distance in superspace between a chiral and an anti-chiral coordinate: $$|z_l -\bar{z}_m|^2 = \left(x_l-x_m +i\theta_l\sigma\bar{\theta}_l + i\theta_m\sigma\bar{\theta}_m -2\theta_l\sigma\bar{\theta}_m\right)^2 =\left( x_{l+}-x_{m-} - 2\theta_l\sigma\bar{\theta}_m\right)^2 \ .$$ This correlation function vanishes unless $\sum\alpha_l = \sum\tilde{\alpha}_l = \frac{1}{b}$. $\mathcal{N}=2$ Super-Weyl Anomalies ------------------------------------ The super-Weyl anomaly can be related to the ordinary Weyl anomaly by setting the gravitino and the auxiliary fields of the supergravity multiplet to zero. The anomaly coefficients $a$ and $c$ correspond to the A-type and B-type Weyl anomalies [@Deser:1993yx]. The Weyl anomaly coefficients of the Liouville SCFT do not depend on the interaction terms and are the same as those of the supersymmetric Coulomb gas theory. The latter is a free SCFT, which consists of an ordinary four-dimensional Coulomb gas CFT $\mathrm{Re}(A)$ with a background charge $Q$, a conformal four-derivative real scalar $\mathrm{Im}(A)$, a conformal three-derivative chiral spinor doublet $\Psi_i$, a conformal two-derivative complex symmetric scalar $B_{ij}$, a conformal antisymmetric tensor $F_{ab}$, a conformal one-derivative chiral spinor doublet $\Lambda_i$ and an auxiliary complex scalar field $C$. 
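The anomaly coefficients of these component fields are tabulated below; as a cross-check, their column sums can be verified with exact rational arithmetic. A minimal sketch (coefficient values as in the table; the $-Q^2$ background-charge contribution to $\mathrm{Re}(A)$ is set aside, since the full result is $a=-\tfrac{1}{2}-Q^2$):

```python
from fractions import Fraction as F

# (a, c) anomaly coefficients of the component fields, as tabulated;
# Re(A) is listed without its -Q^2 background-charge piece.
fields = {
    "Re(A)":    (F(-7, 90),  F(-1, 15)),
    "Im(A)":    (F(-7, 90),  F(-1, 15)),
    "Psi_i":    (F(-3, 40),  F(-1, 60)),
    "B_ij":     (F(1, 60),   F(1, 20)),
    "F_ab":     (F(-19, 60), F(1, 20)),
    "Lambda_i": (F(11, 360), F(1, 20)),
    "C":        (F(0),       F(0)),
}
a_sum = sum(a for a, _ in fields.values())  # -1/2, so a = -1/2 - Q^2
c_sum = sum(c for _, c in fields.values())  # 0
print(a_sum, c_sum)  # → -1/2 0
```

Exact fractions avoid any floating-point ambiguity in checking that the B-type anomaly $c$ cancels between the bosonic and fermionic members of the multiplet.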
Thus, to obtain the super-Weyl anomaly of Liouville SCFT, we should sum the anomalies of the component fields:

  Field              $a$                   $c$
  ------------------ --------------------- -----------------
  $\mathrm{Re}(A)$   $-\frac{7}{90}-Q^2$   $-\frac{1}{15}$
  $\mathrm{Im}(A)$   $-\frac{7}{90}$       $-\frac{1}{15}$
  $\Psi_i$           $-\frac{3}{40}$       $-\frac{1}{60}$
  $B_{ij}$           $\frac{1}{60}$        $\frac{1}{20}$
  $F_{ab}$           $-\frac{19}{60}$      $\frac{1}{20}$
  $\Lambda_i$        $\frac{11}{360}$      $\frac{1}{20}$
  $C$                $0$                   $0$

Summing up the coefficients we obtain: $$a = -\frac{1}{2}-Q^2, \quad c = 0 \ .$$ Correlation Functions and Supersymmetric Coulomb Gas Integrals {#sec:CorrelationFunctionCG} ============================================================== In this section we study the correlation functions of superfield vertex operators in four-dimensional $\mathcal{N}=2$ Liouville SCFT by using their relation to the free supersymmetric Coulomb gas theory, and derive an integral expression for them. We consider general correlation functions of these operators: $$\label{eq:DefCorrMain} \mathcal{G}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (z_1,\dots,z_N) = \left<\mathcal{V}_{\alpha_1\tilde{\alpha}_1}(z_1)\cdots \mathcal{V}_{\alpha_N\tilde{\alpha}_N}(z_N)\right>,$$ where the expectation value is defined as: $$\label{eq:VertexCorrelationFunc} \left<\mathcal{V}_{\alpha_1\tilde{\alpha}_1}(z_1)\cdots \mathcal{V}_{\alpha_N\tilde{\alpha}_N}(z_N)\right> \equiv \int D\Phi\, e^{-S_L}\prod_{l=1}^{N} e^{2\alpha_{l}\Phi(z_l)+2\tilde{\alpha}_l\bar{\Phi}(z_l)} \ .$$ Considering the shift $A \to A-\frac{\log \mu}{2b}$, we obtain a KPZ scaling relation: $$\label{eq:KPZScale} \mathcal{G}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (x_1,\dots,x_N) \propto \mu^{s}\bar{\mu}^{\tilde{s}},~~~~s = \frac{1/b-\sum_{l} \alpha_l}{b},~~~~\tilde{s}= \frac{1/b-\sum_{l} \tilde{\alpha}_l}{b} \ .$$ As a result of the KPZ scaling relation one can see that the correlation functions are not analytic functions of the cosmological
constant $\mu,\bar{\mu}$. This implies that we cannot perform a naive perturbation-theory calculation in powers of $\mu$. We write the interaction terms of the action using the chiral and anti-chiral integrals: $$\label{eq:ChiralActions} S_+ = \int d^4x\, d^4\theta\, e^{2b\Phi}, \quad S_- = \int d^4x\, d^4\bar{\theta}\, e^{2b\bar{\Phi}} \ ,$$ so that $S_L = S_{C.G.}+\mu S_++\bar{\mu}S_-$. We can separate the real zero mode $\Phi_0\in\mathbb{R}$ of the path integral over the real part of $A(x)$ from the non-zero mode $\widehat{A}(x)$ and write the corresponding superfield decomposition $\Phi(z)=\Phi_0+\widehat{\Phi}(z)$ (see the two-dimensional bosonic case in e.g. [@Teschner:2001rv]). Integrating over the zero mode, we get the following expression for the correlation functions: $$\label{eq:CorrFuncGammaInt} \begin{aligned} &\mathcal{G}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (x_1,\dots,x_N) \\ &= \int d\Phi_0\, D\widehat{\Phi}\, e^{-S_L\left(\Phi_0+\widehat{\Phi}\right) }\prod_{l=1}^{N} e^{2\alpha_{l}\left(\Phi_0+\widehat{\Phi}(z_l)\right)+2\tilde{\alpha}_l\left(\Phi_0+\widehat{\bar{\Phi}}(z_l)\right)}\\ &= \frac{\Gamma(-s-\tilde{s})}{2b}\left<\prod_{l=1}^{N}e^{2\alpha_{l}\widehat{\Phi}(z_l)+2\tilde{\alpha}_l\widehat{\bar{\Phi}}(z_l)}(\mu S_++\bar{\mu} S_-)^{s+\tilde{s}}\right>_{C.G.} \ , \end{aligned}$$ where we denote by $\left<\dots\right>_{C.G.}$ expectation values in the free Coulomb gas SCFT. The gamma function appearing in this expression produces poles in the correlation functions when $s+\tilde{s}$ is a non-negative integer, i.e. $s+\tilde{s} = k\in\mathbb{N}\cup\{0\}$.
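The gamma function arises from the zero-mode integral alone; the following sketch isolates that step, assuming the zero mode enters the action only through the background-charge term $4Q\Phi_0$ (the constant-shift relation with $\chi=2$) and the exponential interaction, and substituting $t=e^{2b\Phi_0}$ (we write $\hat{S}_\pm$ for the interaction terms evaluated on $\widehat{\Phi}$):

```latex
\int_{-\infty}^{\infty} d\Phi_0\;
e^{2\sum_l(\alpha_l+\tilde{\alpha}_l)\Phi_0}\,
e^{-4Q\Phi_0}\,
e^{-(\mu \hat{S}_+ + \bar{\mu}\hat{S}_-)\,e^{2b\Phi_0}}
= \int_0^{\infty} \frac{dt}{2bt}\;
t^{\frac{1}{b}\left(\sum_l(\alpha_l+\tilde{\alpha}_l)-\frac{2}{b}\right)}\,
e^{-(\mu \hat{S}_+ + \bar{\mu}\hat{S}_-)\,t}
= \frac{\Gamma(-s-\tilde{s})}{2b}\,
\left(\mu \hat{S}_+ + \bar{\mu}\hat{S}_-\right)^{s+\tilde{s}},
```

since, with $Q=1/b$, the exponent of $t$ is exactly $-(s+\tilde{s})-1$; the Euler integral then reproduces the pole structure at $s+\tilde{s}\in\mathbb{N}\cup\{0\}$.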
As a consequence of $\eqref{eq:CorrFuncGammaInt}$, the residues of the correlation function at these poles, when considered as a function of the variable $2\sum(\alpha_l+\tilde{\alpha}_l)$, are given exactly by the result of the naive perturbation theory in $\mu, \bar{\mu}$: $$\mathcal{G}^{(k)}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (z_1,\dots,z_N) = \;\underset{2\sum(\alpha_l+\tilde{\alpha}_l)=\frac{4}{b}-2kb}{\mathrm{Res}}\;\mathcal{G}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (z_1,\dots,z_N) \ ,$$ where the LHS denotes the contribution of $k$th-order naive perturbation theory to the correlation functions: $$\label{eq:GResidueDecomp} \begin{aligned} \mathcal{G}^{(k)}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (z_1,\dots,z_N) &=\frac{1}{k!}\left<\prod_{l=1}^{N}\mathcal{V}_{\alpha_l\tilde{\alpha}_l}(z_l)(-\mu S_+-\bar{\mu} S_-)^k\right>_{C.G.} \\ &=\sum_{n+\tilde{n} = k} \frac{(-\mu)^n(-\bar{\mu})^{\tilde{n}}}{n!\tilde{n}!}\left<\prod_{l=1}^{N}\mathcal{V}_{\alpha_l\tilde{\alpha}_l}(z_l)(S_+)^{n}(S_-)^{\tilde{n}}\right>_{C.G.}\\ &\equiv \sum_{n+\tilde{n}=k}(-\mu)^n(-\bar{\mu})^{\tilde{n}}\mathcal{G}^{(n,\tilde{n})}_{\alpha_1\tilde{\alpha}_1,\dots,\alpha_N\tilde{\alpha}_N} (z_1,\dots,z_N) . \end{aligned}$$ The KPZ scaling relation shows that actually only a single term in the sum is non-vanishing. Therefore, the correlation functions have a pole only when both $s$ and $\tilde{s}$ are non-negative integers, which we denote by $s=n,\;\tilde{s}=\tilde{n}$. We now focus on the three-point function of vertex operators, i.e. $N=3$.
We can write explicit integral expressions for the residues of the three-point function by using the free field correlation functions: $$\label{eq:GResiduePsiFDecomp} \begin{aligned} &\mathcal{G}^{(n,\tilde{n})}_{\alpha_1\tilde{\alpha}_1,\alpha_2\tilde{\alpha}_2,\alpha_3\tilde{\alpha}_3} (z_1,z_2,z_3) = \frac{1}{n!\tilde{n}!} \prod_{l\neq m}|z_l-\bar{z}_m|^{-4\alpha_l\tilde{\alpha}_m} \int d^{8n}z'\,d^{8\tilde{n}}\bar{z}' \\ & \times \prod_{p,\tilde{p}}|z'_p-\bar{z}'_{\tilde{p}}|^{-4b^2}\prod_{l=1}^{3}\prod_{\tilde{p}=1}^{\tilde{n}}|z_l-\bar{z}'_{\tilde{p}}|^{-4b\alpha_l}\prod_{p=1}^{n}|z'_p-\bar{z}_l|^{-4b\tilde{\alpha}_l} \ , \end{aligned}$$ where the measures are $d^{8n}z' = \prod_{p=1}^n d^{4}x'_{p+}d^{4}\theta_p'$, $\;d^{8\tilde{n}}\bar{z}' = \prod_{\tilde{p}=1}^{\tilde{n}} d^{4}x_{\tilde{p}-}'d^{4}\bar{\theta}_{\tilde{p}}'$. One case for which the integral can be explicitly computed is the first pole $n=0, \tilde{n} = 1$ of the three-point function. According to , at the pole $n=0$ we have the relation $\sum_l 2b\alpha_l =2$. The integral can then be written as: $$\mathcal{G}^{(0,1)}_{\alpha_1\tilde{\alpha}_1,\alpha_2\tilde{\alpha}_2,\alpha_3\tilde{\alpha}_3} (z_1,z_2,z_3) = \prod_{l\neq m}|z_l-\bar{z}_m|^{-4\alpha_l\tilde{\alpha}_m}\int d^4x_- d^4\bar{\theta} \prod_{l=1}^{3} |z_l-\bar{z}|^{-4b\alpha_l} \ .$$ This integral is an $\mathcal{N}=2$ generalization of the conformal and $\mathcal{N}=1$ superconformal integrals considered in [@Osborn:1998qu].
We define $z_l^2 \equiv |z_l-\bar{z}|^2 = (x_{l+}-x_{-}-2i\theta_l^i\sigma\bar{\theta}_{li})^2$ and the integral we need to calculate is: $$I_{\alpha_1\alpha_2\alpha_3} = \int d^4x_{-} d^4\bar{\theta} \prod_{l=1}^{3} \frac{1}{(z_l^2)^{2b\alpha_l}} \ .$$ Using the identity $(x^2+i\epsilon)^{-\alpha} =\frac{e^{i\pi\alpha/2}}{\Gamma(\alpha)} \int_0^{\infty}d\lambda\, \lambda^{\alpha-1} e^{i\lambda x^2}$ we have: $$\begin{aligned} I_{\alpha_1\alpha_2\alpha_3}&=\frac{1}{\prod_{l=1}^{3}\Gamma(2b\alpha_l)}\int_0^{\infty} \prod_{l=1}^{3}d\lambda_l\, \lambda_l^{2b\alpha_l-1}\int d^4x_{-}d^4\bar{\theta}\,e^{i\sum_{l=1}^{3}\lambda_l z_l^2} \\ &= \frac{\pi^2}{\prod_{l=1}^{3}\Gamma(2b\alpha_l)}\int_0^{\infty} \prod_{l=1}^{3}d\lambda_l\, \lambda_l^{2b\alpha_l-1}\int d^4\bar{\theta}\,\exp\left(-\frac{1}{2\Lambda}\sum_{l,m}\lambda_l\lambda_m z_{lm}^2\right) \ , \\ \end{aligned}$$ where we have defined $z_{lm} = x_{l+}-x_{m+} - 2i(\theta_l^i-\theta_m^i)\sigma\bar{\theta}_{i}$, $\Lambda = \sum_l \lambda_l$ and performed the Gaussian integration over $x_-$. Performing the integration over $\bar{\theta}$ we have: $$\begin{aligned} I_{\alpha_1\alpha_2\alpha_3} &= \frac{\pi^2}{\prod_{l=1}^{3}\Gamma(2b\alpha_l)}\int_0^{\infty} \prod_{l=1}^{3}d\lambda_l\, \lambda_l^{2b\alpha_l-1}\frac{1}{\Lambda^2}\cdot\\ & \sum_{l,m,p,q}\theta^1_l\sigma\cdot\partial_l \bar{\sigma}\cdot\partial_m\theta^1_m\theta^2_p\sigma\cdot\partial_p \bar{\sigma}\cdot\partial_q\theta^2_q e^{-\frac{1}{2\Lambda}\sum_{l,m}\lambda_l\lambda_m z_{lm}^2} \ . \end{aligned}$$ The result of this integral is proportional to the pole of the correlation function of vertex operators . As such, the integral should be proportional to an $\mathcal{N}=2$ chiral superconformal invariant built out of the supercoordinates, which we denote $\mathcal{I}_{\alpha_1\alpha_2\alpha_3}(z_1,z_2,z_3)$. We are interested in the dependence of the proportionality constant $C(\alpha_1,\alpha_2,\alpha_3)$ on $\alpha_l$.
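The Euclidean (Wick-rotated) version of the Schwinger-parameterization identity used here, $(x^2)^{-\alpha} = \frac{1}{\Gamma(\alpha)}\int_0^{\infty}d\lambda\,\lambda^{\alpha-1}e^{-\lambda x^2}$, can be checked numerically; a sketch with arbitrary sample values (ours, not the paper's):

```python
import math

def schwinger(alpha, x2, tlo=-30.0, thi=12.0, n=150000):
    """(1/Gamma(alpha)) * int_0^inf dlam lam^(alpha-1) exp(-lam*x2),
    evaluated by trapezoidal rule in t = log(lam)."""
    h = (thi - tlo) / n
    s = 0.0
    for i in range(n + 1):
        t = tlo + i * h
        v = math.exp(alpha * t - x2 * math.exp(t))
        s += v if 0 < i < n else 0.5 * v
    return s * h / math.gamma(alpha)

alpha, x2 = 1.7, 2.3          # sample values
numeric = schwinger(alpha, x2)
exact = x2 ** (-alpha)
print(numeric, exact)
```

The change of variable $t=\log\lambda$ turns the power-law measure into a smooth, rapidly decaying integrand, which is why the simple quadrature converges.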
To obtain the dependence, we use the result [@Symanzik:1972wj]: $$\int_0^{\infty} \prod_{l=1}^{3}d\lambda_l\, \lambda_l^{\delta_l-1}\frac{1}{\Lambda^P} e^{-\frac{1}{2\Lambda}\sum_{l,m}\lambda_l\lambda_m u_{lm}^2} = \frac{\prod_{l=1}^{3}\Gamma(P-\delta_l)}{u_{12}^{P-\delta_3}u_{13}^{P-\delta_2}u_{23}^{P-\delta_1}} \ ,$$ for $u_{lm}=u_{ml}$ and $\sum\delta_l = 2P$. Applying this result to our integral we get the proportionality constant: $$C(\alpha_1,\alpha_2,\alpha_3) = \pi^2\frac{\prod_{l=1}^{3}\Gamma(4-2b\alpha_l)}{\prod_{l=1}^{3}\Gamma(2b\alpha_l)} \ ,$$ which is a particular example of the DOZZ three-point function formula [@Zamolodchikov:1995aa],[@Dorn:1992at] and its four-dimensional analogue [@Levy:2018bdc],[@Furlan:2018jlv]. We can thus write the first pole of the correlation function as: $$\mathcal{G}^{(0,1)}_{\alpha_1\tilde{\alpha}_1,\alpha_2\tilde{\alpha}_2,\alpha_3\tilde{\alpha}_3} (z_1,z_2,z_3) = \pi^2\frac{\prod_{l=1}^{3}\Gamma(4-2b\alpha_l)}{\prod_{l=1}^{3}\Gamma(2b\alpha_l)}\prod_{l\neq m}|z_l-\bar{z}_m|^{-4\alpha_l\tilde{\alpha}_m}\mathcal{I}_{\alpha_1\alpha_2\alpha_3}(z_1,z_2,z_3) \ .$$

Correlation Functions in the Semiclassical Limit {#sec:SemiClassicalLimit}
================================================

In this section we consider the correlation functions of vertex operators in the semiclassical limit $b\to 0$. In this limit we define the rescaled Liouville superfield $\Phi_c = b\Phi$, in terms of which the correlations function that we wish to evaluate are: $$\label{eq:SemiclassCorrFunc} \left<\mathcal{V}_{\alpha_1\tilde{\alpha}_1}(z_1)\cdots \mathcal{V}_{\alpha_N\tilde{\alpha}_N}(z_N)\right> \equiv \int D\Phi_c\, e^{-S_L}\prod_{l=1}^{N} \exp\left({2\frac{\alpha_{l}}{b}\Phi_c(z_l)+2\frac{\tilde{\alpha}_l}{b}\bar{\Phi}_c(z_l)}\right) \ .$$ Noting that written in terms of $\Phi_c$ the action scales as $S_L\sim b^{-2}$, we can use the saddle point approximation to evaluate the integral in the semiclassical limit $b\to 0$.
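The Symanzik-type integral quoted above can be spot-checked numerically at the symmetric point $u_{lm}^2=1$, $\delta_1=\delta_2=\delta_3=2$, $P=3$ (so $\sum_l\delta_l=2P$ and the right-hand side reduces to $\Gamma(1)^3=1$). A sketch using the midpoint rule in $t_l=\log\lambda_l$; the parameter choices are ours, not the paper's:

```python
import math

def symanzik_lhs(n=80, tlo=-12.0, thi=8.0):
    """Midpoint-rule estimate of
    int d^3lam (lam1*lam2*lam3) / Lam^3 * exp(-(lam1*lam2+lam1*lam3+lam2*lam3)/Lam),
    Lam = lam1+lam2+lam3, i.e. delta_l = 2, P = 3, u_{lm}^2 = 1."""
    h = (thi - tlo) / n
    lam = [math.exp(tlo + (i + 0.5) * h) for i in range(n)]
    lam2 = [l * l for l in lam]     # lam^delta with delta = 2 (measure included)
    total = 0.0
    for i1 in range(n):
        l1, w1 = lam[i1], lam2[i1]
        for i2 in range(n):
            l2 = lam[i2]
            w12 = w1 * lam2[i2]
            s12 = l1 + l2
            p12 = l1 * l2
            for i3 in range(n):
                l3 = lam[i3]
                L = s12 + l3
                q = (p12 + s12 * l3) / L
                total += w12 * lam2[i3] / (L * L * L) * math.exp(-q)
    return total * h ** 3

lhs = symanzik_lhs()
rhs = 1.0       # = Gamma(P - delta_1) * Gamma(P - delta_2) * Gamma(P - delta_3)
print(lhs, rhs)
```

Note that $\frac{1}{2}\sum_{l,m}\lambda_l\lambda_m u_{lm}^2$ (with $u_{ll}=0$) equals $\sum_{l<m}\lambda_l\lambda_m u_{lm}^2$, which is the form implemented above.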
If the inserted vertex operators $\mathcal{V}_{\alpha\tilde{\alpha}}$ obey the scaling $\alpha,\tilde{\alpha}\sim b$ then the insertions do not affect the saddle point of the integral, which is then determined solely by minimizing the action. Such operators are called light operators, and for those we write $\alpha = b\sigma,\; \tilde{\alpha}=b\tilde{\sigma}$, where $\sigma, \tilde{\sigma}$ are kept fixed in the limit $b\to 0$ [@Zamolodchikov:1995aa],[@Harlow:2011ny]. Considering the correlation functions of light vertex operators, the leading exponential asymptotic behaviour in the limit $b\to 0$ is given by the semiclassical expression: $$\label{eq:SemiclassCorrExp} \left<\mathcal{V}_{b\sigma_1 b\tilde{\sigma}_1}(z_1)\cdots \mathcal{V}_{b\sigma_N b\tilde{\sigma}_N}(z_N)\right> \sim e^{-S_L(\Phi_c,\Phi_c)} \int_{\mathcal{M}} d\mu(g)\,\prod_{l=1}^{N} e^{2\sigma_l\Phi_{g}(z_l)+2\tilde{\sigma}_l\bar{\Phi}_{g}(z_l)} \ ,$$ where we have assumed that there is a continuum of saddle points and therefore the saddle point approximation must include an integral over them. In , $\mathcal{M}$ is the moduli space of solutions to the field equations equipped with some coordinates $\lbrace g \rbrace$, $d\mu(g)$ is a measure over this space and $\Phi_g(z), \bar{\Phi}_g(z)$ are the Liouville superfield saddle points as functions of the moduli space coordinates. In addition, $S_L(\Phi_c,\Phi_c)$ is the minimal value of the action, which is the value of the action evaluated at any saddle point. Using the four-dimensional $\mathcal{N}=2$ superconformal invariance of the theory, given a solution to the field equations , one can produce further solutions by applying superconformal transformations to the original solution. The moduli space of solutions to the field equations is given by the orbit of the solution under the action of the $\mathcal{N}=2$ superconformal group. 
We can analyse and integrate over this moduli space by using the four-dimensional $\mathcal{N}=2$ super-Möbius representation of the superconformal transformations.

Four-Dimensional $\mathcal{N}=2$ Super-Möbius Transformations {#subsec:ForDNequals2TransQuanternions}
-------------------------------------------------------------

In [@Lukierski:1983jg] it was shown that the $\mathcal{N}=2$ superconformal group in Euclidean four dimensions can be identified with a quaternionic supergroup $\mathrm{SL}(2|1;\mathbb{H})$. This supergroup has the quaternionic supermatrix representation: $$\label{eq:SLquaternion} \mathrm{SL}(2|1;\mathbb{H}) = \left\lbrace \left. M = \begin{pmatrix} a & b & \gamma \\ c & d & \delta \\ \alpha & \beta & e \end{pmatrix} \; \right| \; a,b,c,d,e \in \mathbb{H},\;\, \alpha,\beta,\gamma, \delta \in \mathbb{H}_a, \;\, {\mathrm{sdet}}(M) =1 \right\rbrace \ ,$$ where $\mathbb{H}$ denotes standard quaternions and $\mathbb{H}_a$ denotes Grassmann quaternions, i.e. quaternions whose coefficients of the basis elements are anticommuting (Grassmann) numbers. The quaternionic superdeterminant, ${\mathrm{sdet}}(M)$, is defined by first identifying each quaternion or Grassmann quaternion with a $2\times 2$ complex or Grassmann complex matrix respectively, $$q = q_0 + q_1 i +q_2 j + q_3 k \cong \begin{pmatrix} q_0 + q_1 i & \; q_2 + q_3 i \\ -q_2 +q_3 i & \; q_0 - q_1 i \\ \end{pmatrix} \ ,$$ which results in a $(4|2)\times(4|2)$ complex supermatrix, and then taking the standard superdeterminant of the resulting supermatrix. The quaternionic Lie superalgebra $\mathfrak{sl}(2|1,\mathbb{H})$ is isomorphic to the four-dimensional $\mathcal{N}=2$ Euclidean superconformal algebra.
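The $2\times 2$ complex identification quoted above is multiplicative, and its determinant reproduces the quaternion norm, $\det M(q) = |q|^2$; both properties are easy to check mechanically (sample quaternions are arbitrary):

```python
def qmul(p, q):
    """Hamilton product of quaternions represented as (q0, q1, q2, q3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def M(q):
    """The 2x2 complex matrix identified with the quaternion q."""
    q0, q1, q2, q3 = q
    return [[q0 + q1*1j,  q2 + q3*1j],
            [-q2 + q3*1j, q0 - q1*1j]]

def matmul2(A, B):
    return [[A[i][0]*B[0][j] + A[i][1]*B[1][j] for j in range(2)] for i in range(2)]

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

p, q = (1.0, 2.0, -0.5, 3.0), (0.5, -1.0, 2.5, 1.5)   # sample quaternions
norm2_p = sum(c * c for c in p)
lhs = M(qmul(p, q))          # map of the product ...
rhs = matmul2(M(p), M(q))    # ... equals product of the maps
```

The homomorphism property is what makes the superdeterminant of the resulting complex supermatrix well defined on $\mathrm{SL}(2|1;\mathbb{H})$.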
This was proved in [@Lukierski:1983jg] by explicitly writing the quaternionic supermatrix generators of $\mathrm{SL}(2|1;\mathbb{H})$, computing their superalgebra and showing its equivalence to the $\mathcal{N}=2$ superconformal algebra (the calculation is summarized in appendix \[app:QuaternionicAlgebra\]). Based on the isomorphism between the two supergroups, we are able to express all superconformal transformations as quaternionic super-Möbius transformations, i.e. as quaternionic linear fractional transformations. This provides a realization of the isomorphism on transformations of the supercoordinates. To do so, we identify the supercoordinates $(x^a, \theta^{\alpha}_i, \bar{\theta}^i_{\dot{\alpha}})$ with the standard quaternion $x$ and the Grassmann quaternions $\theta^+, \theta^-$. We also define the chiral and antichiral coordinates: $$\label{eq:QuaternionicChiralCoord} x^+ = x+\frac{1}{2}\bar{\theta}^- \theta^+, \quad x^- = \bar{x}+\frac{1}{2}\bar{\theta}^+ \theta^- \ ,$$ which satisfy $x = (x^++\bar{x}^-)/2$. It is important to note that for a product of anticommuting quaternions $\alpha,\beta \in \mathbb{H}_a$ we get a minus sign under quaternion conjugation, i.e. $\overline{\alpha \beta} = -\bar{\beta}\bar{\alpha}$. 
The $\mathcal{N}=2$ superconformal transformations in Euclidean four dimensions can be written in terms of the quaternion supercoordinates $(x,\theta^+,\theta^-)$ in the following way:

- Supertranslations - translations $P(a)$, $a\in \mathbb{H}$, and supercharges $Q_{\pm}\left(\xi^{\pm}\right)$, $\xi^\pm \in \mathbb{H}_a$: $$\label{eq:QuatSupertrans} x^+ \to x^+ + a + \bar{\xi}^-\theta^+, \quad x^- \to x^- + \bar{a} + \bar{\xi}^+\theta^-, \quad \theta^+ \to \theta^+ + \xi^+, \quad \theta^- \to \theta^- + \xi^- \ .$$

- Dilatations $\lambda D$, $\lambda \in \mathbb{R}$: $$\label{eq:QuatDilat} x^{\pm} \to \lambda x^{\pm},\quad \theta ^{\pm} \to \lambda ^{\frac{1}{2}} \theta^{\pm} \ .$$

- $\mathrm{O}(4)$ rotations $M(\omega^+,\omega^-)$, $\;\overline{\omega}^{\pm} = -\omega^{\pm} \in \mathbb{H}$: $$\label{eq:QuatO(4)} x^+ \to e^{\omega^-}x^+ e^{-\omega^+}, \quad x^- \to e^{\omega^+}x^- e^{-\omega^-}, \quad \theta^+ \to \theta^+ e^{-\omega^+}, \quad \theta^{-} \to \theta^{-} e^{-\omega^-} \ .$$

- R-symmetry - $\mathrm{SU}(2)_R$ symmetry $G(q)$, $\; \bar{q} = -q\in \mathbb{H}$, $\mathrm{O}(1,1)_R$ symmetry $\varphi A$, $\; \varphi \in \mathbb{R}$: $$\label{eq:QuatRsymm} x^{\pm} \to x^{\pm}, \quad \theta^+ \to e^{q+\frac{1}{2}\varphi}\theta^+, \quad \theta^- \to e^{q-\frac{1}{2}\varphi}\theta^- \ .$$

- Special conformal transformations $K(b)$, $b\in \mathbb{H}$: $$\label{eq:QuatSuperconf} \begin{aligned} &x^+ \to x^+(1+\bar{b}x^+)^{-1}, \quad x^- \to x^-(1+bx^-)^{-1} \ , \\ &\theta^+ \to \theta^+(1+\bar{b}x^+)^{-1}, \quad \, \theta^- \to \theta^-(1+bx^-)^{-1} \ , \end{aligned}$$ and special superconformal supercharges $S_{\pm}\left(\eta^{\pm}\right)$, $\eta^\pm \in \mathbb{H}_a$: $$\begin{aligned} &x^{\pm} \to x^{\pm}\left(1+\eta^{\pm}\theta^{\pm}\right)^{-1}, \quad x^{\mp} \to x^{\mp} , \\ &\theta^{\pm} \to \theta^{\pm}\left(1+\eta^{\pm}\theta^{\pm}\right)^{-1}, \quad \theta^{\mp} \to \theta^{\mp} \mp \bar{\eta}^{\pm}x^{\mp} \ .
\end{aligned}$$ Note that in the $\mathcal{N}=2$ Euclidean superconformal group the abelian R-symmetry is given by the non-compact group $\mathrm{O}(1,1)_R$ instead of the compact $\mathrm{U}(1)_R$ present in the Lorentzian case. The generators of these transformations satisfy the required commutation relations listed in appendix \[app:QuaternionicAlgebra\]. We define the superderivatives $D_{\pm}(\zeta^{\pm})$ by the requirements: $$\begin{aligned} [D_{+}(\zeta^+),D_{+}(\zeta'^+)] = [D_{-}(\zeta^-),D_{-}(\zeta'^-)] &= 0, \quad [D_{+}(\zeta^+),D_{-}(\zeta^-)] = P(\bar{\zeta}^-\zeta^+) \ , \\ [D_{\pm}(\zeta^{\pm}),Q_{\pm}(\xi^{\pm})] = [D_{\pm}(\zeta^{\pm}),Q_{\mp}(\xi^{\mp})] &=0 \ , \end{aligned}$$ which are satisfied by the generators of the transformation: $$x^{+} \to x^{+} + \bar{\theta}^- \zeta^+, \quad x^- \to x^- + \bar{\theta}^+\zeta^-, \quad \theta^+ \to \theta ^+ +\zeta^+, \quad \theta^- \to \theta ^- +\zeta^- \ .$$ As a result, the chiral and antichiral coordinates satisfy the chirality conditions: $$\label{eq:ChiralCoordCond} D_- x^+ = D_+ x^- = 0 \ .$$ A general superconformal transformation of the quaternion supercoordinates is a combination of the transformations - and therefore is given by a quaternionic super-Möbius transformation (quaternionic linear fractional transformation). In accordance with the isomorphism of the superconformal group with $\mathrm{SL}(2|1;\mathbb{H})$, quaternionic super-Möbius transformations are parameterized by two supermatrices $M^+, M^- \in \mathrm{SL}(2|1;\mathbb{H})$ of the form . The general quaternionic super-Möbius transformation is then given by: $$\label{eq:SuperMobiusTrans} \begin{aligned} &x^{\pm} \to x'^{\pm}=\left(a^{\pm}x^{\pm}+b^{\pm}+\gamma^{\pm}\theta^{\pm}\right)\left(c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}\right)^{-1} \ ,\\ &\theta^{\pm} \to \theta'^{\pm}=\left(\alpha^{\pm}x^{\pm}+\beta^{\pm}+e^{\pm}\theta^{\pm}\right)\left(c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}\right)^{-1} \ .
\end{aligned}$$ The two supermatrices $M^+, M^-$ are not independent: one can be determined from the other. This is done by noting that the reality condition which follows from the definition of the chiral coordinates , $$\label{eq:RealCond} x'^+ - \bar{x}'^- = \bar{\theta}'^- \theta'^+ \ ,$$ provides a relation between the supermatrices of the two chiralities. The reality condition results in equations relating the two sets of parameters: $$\label{eq:RealCondSupermatrix} \begin{pmatrix} a^- & b^- & \gamma^- \\ c^- & d^- & \delta^- \\ \alpha^- & \beta^- & e^- \end{pmatrix} = \begin{pmatrix} \bar{d}^+ & -\bar{b}^+ & -\bar{\beta}^+ \\ -\bar{c}^+ & \bar{a}^+ & \bar{\alpha}^+ \\ \bar{\delta}^+ & -\bar{\gamma}^+ & \bar{e}^+ \end{pmatrix}^{-1} \ .$$

Moduli Space in the Super-Möbius Formalism
------------------------------------------

A chiral superfield $\Phi^+$ and an antichiral superfield $\Phi^-$ in the quaternionic formalism are defined by the chirality conditions: $$\label{eq:QuatChiralSuperfieldCond} D_- \Phi ^+(x^+,\theta^+) = 0, \quad D_+ \Phi ^- (x^-, \theta^-) = 0 \ ,$$ where as a result of the superfields depend only on the corresponding chiral coordinates. We are interested in describing the moduli space of chiral superfield solutions to the field equations using the super-Möbius transformations. We first construct the chiral superfield corresponding to the solution of the field equations.
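For the bosonic truncation of the super-Möbius maps above (all Grassmann parameters set to zero), $x \to (ax+b)(cx+d)^{-1}$ with quaternionic entries, composition of two maps corresponds to multiplying the associated $2\times 2$ quaternionic matrices. A quick check with arbitrary sample entries (ours, not from the paper):

```python
def qmul(p, q):
    """Hamilton product of quaternions (q0, q1, q2, q3)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0)

def qadd(p, q):
    return tuple(a + b for a, b in zip(p, q))

def qinv(q):
    n2 = sum(c * c for c in q)       # inverse: conjugate / norm^2
    return (q[0] / n2, -q[1] / n2, -q[2] / n2, -q[3] / n2)

def mobius(M, x):
    """x -> (a x + b)(c x + d)^{-1} for M = [[a, b], [c, d]] quaternionic."""
    (a, b), (c, d) = M
    return qmul(qadd(qmul(a, x), b), qinv(qadd(qmul(c, x), d)))

def mmul(M1, M2):
    """Product of 2x2 quaternionic matrices."""
    return [[qadd(qmul(M1[i][0], M2[0][j]), qmul(M1[i][1], M2[1][j]))
             for j in range(2)] for i in range(2)]

M1 = [[(1, 0, 1, 0), (0, 2, 0, 0)], [(0, 0, 0, 1), (1, 1, 0, 0)]]
M2 = [[(2, 0, 0, 1), (1, 0, 0, 0)], [(0, 1, 1, 0), (0, 0, 2, 0)]]
x = (0.3, -1.2, 0.7, 2.0)
lhs = mobius(M2, mobius(M1, x))   # act with M1, then with M2 ...
rhs = mobius(mmul(M2, M1), x)     # ... same as acting once with M2*M1
```

The check goes through even though quaternions do not commute, because the common factor $(cx+d)^{-1}$ cancels between numerator and denominator in the composed map.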
In quaternionic notation this solution takes the form: $$\label{eq:QuaternChiralSuperfield} \begin{aligned} \Phi^+(x^+,\theta^+) &= A^+(x^+) +\bar{\theta}^+ B^+(x^+)\theta^++\theta^+\bar{\theta}^+\theta^+\bar{\theta}^+C^+(x^+) \ , \\ \Phi^-(x^-,\theta^-) &= A^-(x^-) +\bar{\theta}^- B^-(x^-)\theta^- +\theta^-\bar{\theta}^-\theta^-\bar{\theta}^-C^-(x^-)\ , \end{aligned}$$ where $A^{\pm}(x^{\pm}),C^{\pm}(x^{\pm})\in \mathbb{R}$ are real fields and $\bar{B}^{\pm} (x^{\pm}) = - B^{\pm}(x^{\pm})\in \mathbb{H}$ are imaginary quaternionic fields, which ensures that the chiral and antichiral superfields are both real, i.e. $\bar{\Phi}^{\pm} = \Phi^{\pm}$. The real degrees of freedom $A^+, A^-$ and $C^+, C^-$ correspond to the complex fields $A$ and $C$ respectively, and the imaginary quaternionic degrees of freedom $B^+, B^-$ correspond to the complex $\mathrm{SU}(2)_R$ triplet field $B_{ij}$. The fields $A^{\pm}, B^{\pm}, C^{\pm}$ are $\mathrm{O}(4)$ scalars, while under $\mathrm{SU}(2)_R$ symmetry , $A^{\pm},C^{\pm}$ are singlets and $B^{\pm}$ transform in the adjoint representation $B^{\pm} \to e^q B^{\pm} e^{-q}$. The super-Liouville chiral field transforms in the affine representation of the $O(1,1)_R$ symmetry, and therefore under the transformation we have $A^{\pm}(x^{\pm}) \to A^{\pm}(x^{\pm}) \pm \frac{1}{b} \varphi$, $B^{\pm}(x^{\pm}) \to e^{\mp \varphi} B^{\pm}(x^{\pm})$ and $C^{\pm}(x^{\pm}) \to e^{\mp 2\varphi} C^{\pm}(x^{\pm})$. The classical solution $\Phi_c=b\Phi$ to the field equations can be written explicitly in terms of the quaternionic supercoordinates as: $$\label{eq:ABForQuat} A_c^{\pm} = -\log\left( \frac{|x^{\pm}|^2+r^2}{2r^2}\right), \quad B_c^{\pm} = \frac{r B^{\pm}_0}{|x^{\pm}|^2+r^2} , \quad C_c^{\pm} = \frac{32r^2}{\left(|x^{\pm}|^2+r^2\right)^2} \ ,$$ where $\overline{B}^{\pm}_0 = -B^{\pm}_0\in \mathbb{H}$ is a constant imaginary quaternion, which can be set to an arbitrary value using the $\mathrm{O}(1,1)_R \times \mathrm{SU}(2)_R$ symmetry. 
The general solution to the field equations is given by the super-Möbius transformations of the basic superfield solution . The superconformal transformation introduces a super-Weyl transformation, parameterized by the (anti)chiral superfield $\sigma^{\pm}= \log|c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}|^2$. Using the super-Möbius transformations and the transformation law of the Liouville superfield under super-Weyl transformations we find the general solution: $$\label{eq:GenSolutionChiralSuperifledAfterTrans} \begin{aligned} \Phi_c'^{\pm} &= A^{\pm}_c(x'^{\pm})+\bar{\theta}'^{\pm}B_c^{\pm}(x'^{\pm})\theta'^{\pm}+\theta'^{\pm}\bar{\theta}'^{\pm}\theta'^{\pm}\bar{\theta}'^{\pm}C_c^{\pm}(x'^{\pm})-\sigma^{\pm}\\ &= \log\left(2r^2\right) -\log\left( |a^{\pm}x^{\pm}+b^{\pm}+\gamma^{\pm}\theta^{\pm}|^2+|c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}|^2\right)\\ & \quad +\frac{(\bar{x}^{\pm}\bar{\alpha}^{\pm}+\bar{\beta}^{\pm}+\bar{\theta}^{\pm}\bar{e}^{\pm})B_0^{\pm}(\alpha^{\pm}x^{\pm}+\beta^{\pm}+e^{\pm}\theta^{\pm})}{|a^{\pm}x^{\pm}+b^{\pm}+\gamma^{\pm}\theta^{\pm} |^2+|c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}|^2} \\ & \quad + \frac{32\left|\alpha^{\pm}x^{\pm}+\beta^{\pm}+e^{\pm}\theta^{\pm}\right|^4}{\left(|a^{\pm}x^{\pm}+b^{\pm}+\gamma^{\pm}\theta^{\pm} |^2+|c^{\pm}x^{\pm}+d^{\pm}+\delta^{\pm}\theta^{\pm}|^2\right)^2}\ . \end{aligned}$$

Correlation Functions of Light Operators
----------------------------------------

In the semiclassical limit $b\to 0$, the correlation function of light operators is evaluated using the saddle point approximation by integrating over the moduli space of solutions to the field equations . Looking at the general solution written using the quaternionic formalism we see that the moduli space is $\mathcal{M}=\mathrm{SL}(2|1;\mathbb{H})$. In order to get the leading exponential asymptotics of the correlation function of light operators we need to include two corrections to .
We need to multiply by the functional superdeterminant ${\mathrm{sdet}}\left(\frac{\delta^2 S(\Phi_c)}{\delta \Phi^2}\right)$, and we also need to multiply by the superdeterminant of the Jacobian for changing the integration variable $\Phi_c$ to the coordinates of the moduli space. We can include both those effects by multiplying by a $b$-dependent factor $\hat{\mathcal{A}}(b)$ whose logarithm is at most $O(\log b)$ [@Harlow:2011ny]. We therefore have the following semiclassical expression for the correlation function of light operators: $$\label{eq:SemiclassCorrExpMobius} \left<\mathcal{V}_{b\sigma_1 b\tilde{\sigma}_1}(z_1)\cdots \mathcal{V}_{b\sigma_N b\tilde{\sigma}_N}(z_N)\right> \approx \hat{\mathcal{A}}(b) e^{-S_L(\Phi_c,\Phi_c)} \int_{\mathcal{M}} d\mu(M)\,\prod_{l=1}^{N} e^{2\sigma_l\Phi^+_M(z_l)+2\tilde{\sigma}_l\Phi^-_{M}(z_l)} \ ,$$ where $\Phi^{\pm}_M$ denotes the saddle point $\eqref{eq:GenSolutionChiralSuperifledAfterTrans}$ corresponding to $M^{\pm}$ and the integration is accompanied by the invariant Haar measure of the quaternionic supergroup $\mathrm{SL}(2|1;\mathbb{H})$: $$\label{eq:SLHaarMeasure} d\mu(M^+) = d^4a^+\, d^4b^+\, d^4c^+\, d^4d^+\, d^4\alpha^+\, d^4\beta^+\, d^4\gamma^+\, d^4\delta^+\, d^4e^+\; \delta\!\left({\mathrm{sdet}}\left(M^+\right)-1\right) \ .$$ Here we wrote the Haar measure in terms of elements of the supermatrix $M^+$ but it takes the same form in terms of $M^-$. In order for a correlation function to be non-zero, it needs to have a well-defined R-charge (see e.g. [@Osborn:1998qu] for the $\mathcal{N}=1$ case). The $\mathrm{O}(1,1)_R$ charge of the vertex operator is the Euclidean version of the Lorentzian non-compact $\mathrm{U}(1)_R$ charge, and is given by the same formula.
The sum over all $\mathrm{O}(1,1)_R$-charges of the vertex operators which appear in a given correlation function of the form should agree with a series expansion in superspace coordinates which contains only integer powers of the Grassmann external coordinates. Thus the $N$-point function of vertex operators vanishes unless it obeys the $\mathrm{O}(1,1)_R$ selection rule: $$\frac{2}{b}\sum_{l=1}^{N} (\alpha_l - \tilde{\alpha}_l) \in \mathbb{Z} \ .$$ Our strategy is to start with the classical solution for the chiral superfield given by - and conformally transform it according to . Then, one must integrate over the parameters of the $\mathcal{N}=2$ superconformal transformation to find the semiclassical limit of the correlation function of vertex operators. We are interested in understanding how the $\mathrm{O}(1,1)_R$ selection rule arises from the integration over the $\mathrm{SL}(2|1;\mathbb{H})$ moduli space of saddle points. In terms of the supermatrix $M^+$, an $\mathrm{O}(1,1)_R$ transformation is given by multiplying $M^+$ by an $\mathrm{SL}(2|1;\mathbb{H})$ element $$\label{eq:SupermatrixO(1,1)R} M^+ \to M^+ \begin{pmatrix} e^{\varphi} & 0 & 0 \\ 0 & e^{\varphi} & 0 \\ 0 & 0 & e^{2\varphi} \end{pmatrix} \ ,$$ which, of course, leaves the Haar measure invariant. On the other hand, for the vertex operators evaluated at the solution we have: $$\exp\left(2\sigma\Phi^++ 2\tilde{\sigma} \Phi^-\right) \to e^{-2(\sigma-\tilde{\sigma})\varphi} \exp\left(2\sigma\Phi^+(x^+,e^{\varphi}\theta^+)+ 2\tilde{\sigma} \Phi^-(x^-,e^{-\varphi}\theta^-)\right) \ .$$ By expanding the vertex operators appearing in a semiclassical calculation of an $N$-point function in powers of the $\theta^{\pm}_l, \; l=1,\dots, N$, we find that under this transformation each order of the power series is multiplied by $\exp\left(-2\sum_l (\sigma_l - \tilde{\sigma}_l)\varphi+k\varphi\right)$ for some $k\in \mathbb{Z}$.
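That this $\mathrm{O}(1,1)_R$ factor is indeed an $\mathrm{SL}(2|1;\mathbb{H})$ element can be seen from its superdeterminant: under the quaternion-to-$2\times 2$-matrix identification, the diagonal supermatrix becomes a $(4|2)\times(4|2)$ supermatrix with bosonic block $e^{\varphi}\mathbf{1}_4$ and fermionic block $e^{2\varphi}\mathbf{1}_2$, and for a block-diagonal supermatrix $\mathrm{sdet}=\det A/\det D$. A one-line check (the value of $\varphi$ is arbitrary):

```python
import math

phi = 0.37                       # arbitrary O(1,1)_R parameter
detA = math.exp(phi) ** 4        # bosonic block: e^phi times the 4x4 identity
detD = math.exp(2 * phi) ** 2    # fermionic block: e^(2 phi) times the 2x2 identity
sdet = detA / detD               # block-diagonal supermatrix: sdet = det(A)/det(D)
print(sdet)                      # equals 1 up to rounding, for every phi
```
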
However, is simply a change of integration variables and thus cannot change the result of the integral. Therefore, we have shown that the result of the integration over the moduli space will have a well-defined $\mathrm{O}(1,1)_R$ charge and demonstrated the selection rule $2\sum_l (\sigma_l - \tilde{\sigma}_l) \in \mathbb{Z}$. In addition to the reproduction of the $\mathrm{O}(1,1)_R$ selection rule, by explicitly inserting the classical solution in the moduli space integral , we find some non-trivial selection rules which exist in the semiclassical limit. Specifically, by examining the lowest order component in all supercoordinates of , i.e. $\theta_1=\bar{\theta}_1 = \dots = \theta_N = \bar{\theta}_N=0$, one can see that the 2-point and 3-point functions of the chiral and antichiral vertex operators $V_{\alpha} = e^{2\alpha A}, V_{\tilde{\alpha}} = e^{2\tilde{\alpha} \bar{A}}$ vanish in the semiclassical computation. This vanishing is a result of the number of Grassmann integrations appearing in the measure . By performing a series of changes to the integration variables for these 2-point and 3-point functions one can show that there is not a sufficient number of Grassmann variables appearing in the integrand for it to survive the integrations.

Discussion and Outlook {#sec:SummaryAndOutlook}
======================

We constructed and studied classical and quantum aspects of $\mathcal{N}=2$ Liouville SCFT in four dimensions. There are many directions one can follow. As in the $\mathcal{N}=1$ case, solving analytically the integrals for the three-point function of vertex operators should reveal a four-dimensional DOZZ-like formula, which in turn can lead to a complete bootstrap solution of the theory. Solving the integrals over the quaternionic variables, which we largely left as an open problem, will result in the explicit expression of correlation functions of the vertex operators in the semiclassical regime.
In fact, the quaternionic formalism presented in subsection \[subsec:ForDNequals2TransQuanternions\] for general $\mathcal{N}=2$ superconformal transformations, which is based on the isomorphism of the superconformal group with $\mathrm{SL}(2|1;\mathbb{H})$ that was proven in [@Lukierski:1983jg], can be used for other calculations in a general framework of $\mathcal{N}=2$ SCFTs, as it leads to a simple, quaternionic linear fractional form of transformations. In [@Levy:2018xpu], it was shown that by using the super-Möbius group one can easily evaluate correlation functions of vertex operators in the semiclassical limit. In this work, a quaternionic super-Möbius group was shown to shed some light on these correlators; however, the complete calculation faced some technical difficulties. It will be interesting to develop new mathematical frameworks and other generalizations of super-Möbius groups for evaluating correlation functions of vertex operators in the semiclassical limit in other Liouville field theories in various dimensions. In [@Beem:2013sza], the authors have established a correspondence between four-dimensional $\mathcal{N}=2$ SCFTs and two-dimensional chiral algebras. By classifying the Schur operators in the four-dimensional theory, it was argued that in cases where the four-dimensional $\mathcal{N}=2$ theory is unitary, a component of the $\mathrm{SU}(2)_R$ current yields the stress-tensor of the corresponding two-dimensional theory. A Schur operator [@Gadde:2011uv] is an operator satisfying: $$\label{eq:SchurCon1} \frac{1}{2}\left(\Delta-(j_1+j_2)\right)-R=0,$$ $$\label{eq:SchurCon2} w+(j_1-j_2)=0.$$ It was shown in [@Beem:2013sza] that when the four-dimensional theory is unitary, the second condition necessarily follows from the first one. However, in non-unitary $\mathcal{N}=2$ four-dimensional SCFTs, such as the theory that was studied in this paper, the two conditions are independent.
Due to the non-unitarity of the theory it is possible to find operators which are Schur operators but still transform trivially under the $\mathrm{SU}(2)_R$ symmetry (see table \[Table1\] for a classification of the dimensions and charges of the various fields). For example, $\partial_{++}A$ and $F_{++}$[^1] are two operators which satisfy the Schur conditions and . When reducing their two-point functions in flat space according to the prescription given in [@Beem:2013sza], one finds non-vanishing free field correlators in two dimensions. However, both of these operators transform trivially under the $\mathrm{SU}(2)_R$ group, and therefore do not appear in the $\mathrm{SU}(2)_R$ current. It will be interesting to study how the loss of unitarity in the four-dimensional theory affects the correspondence with the two-dimensional chiral algebra.

This work is supported in part by the I-CORE program of Planning and Budgeting Committee (grant number 1937/12), the US-Israel Binational Science Foundation, GIF and the ISF Center of Excellence. T.L. gratefully acknowledges the support of the Alexander Zaks Scholarship. A.R.M. gratefully acknowledges the support of the Adams Fellowship Program of the Israel Academy of Sciences and Humanities.

Notations and Conventions {#app:Notations}
=========================

General Notations and Conventions
---------------------------------

The flat space notations and component reduction are inherited mainly from [@Butter:2013lta] in Lorentzian signature. These were adapted to Euclidean signature using appendix A in [@Festuccia:2011ws]. The curved supergravity notation follows that of [@Kuzenko:2013gva] and [@Kuzenko:2008ep] in Lorentzian signature. Spacetime indices are denoted by $a,b,\cdots$, while $SU(2)_R$ indices are denoted by $i,j,\cdots$ and spinor indices are denoted by $\alpha,\beta,\cdots$. $SU(2)_R$ indices are raised and lowered by complex conjugation, in accordance with [@Butter:2013lta].
The invariant $SU(2)_R$ tensors $\epsilon^{ij}$ and $\epsilon_{ij}$ are defined by $\epsilon^{ij}\epsilon_{kj}=\delta^i_k$, with $\epsilon^{12}=\epsilon_{12}=1$. Spinor indices $\alpha,\dot{\alpha}=\lbrace1,2\rbrace$ are raised and lowered using the antisymmetric $\epsilon$ symbol: $$\psi^\alpha=\epsilon^{\alpha\beta}\psi_\beta, \quad \psi_\alpha=\epsilon_{\alpha\beta}\psi^\beta, \quad \bar{\psi}^{\dot{\alpha}}=\epsilon^{\dot{\alpha}\dot{\beta}}\bar{\psi}_{\dot{\beta}}, \quad \bar{\psi}_{\dot{\alpha}}=\epsilon_{\dot{\alpha}\dot{\beta}}\bar{\psi}^{\dot{\beta}},$$ where $\epsilon^{12}=\epsilon_{21}=1$. A four-component Dirac fermion $\Psi$ consists of two Weyl spinors $\psi_\alpha$ and $\bar{\chi}^{\dot{\alpha}}$ which are left-handed and right-handed spinors, respectively. The Dirac conjugate $\bar{\Psi}$ follows the notation in [@Butter:2013lta] and carries components $\chi^\alpha=(\bar{\chi}^{\dot{\alpha}})^*$ and $\bar{\psi}_{\dot{\alpha}}=(\psi_\alpha)^*$. We denote: $$V_{\alpha\dot{\alpha}} = (\sigma^a)_{\alpha\dot{\alpha}}V_{a}, \qquad V_a=-2\bar{\sigma}_a^{\dot{\alpha}\alpha}V_{\alpha\dot{\alpha}},$$ and: $$F^-_{ab}=(\sigma^{ab})_{\alpha}^\beta F_\beta^\alpha,\qquad F^+_{ab}=(\bar{\sigma}_{ab})^{\dot{\alpha}}_{\dot{\beta}}F^{\dot{\beta}}_{\dot{\alpha}},$$ $$F^{\pm}_{ab} = \frac{1}{2}\left(F_{ab}\pm\tilde{F}_{ab} \right), \quad \tilde{F}_{ab}=\frac{1}{2}\epsilon_{abcd}F^{cd}, \quad \tilde{F}^{\pm}_{ab}=\pm F^{\pm}_{ab}.$$ Supercoordinates are denoted by $z^A=(x^a,\theta^{\alpha i},\bar{\theta}_{\dot{\alpha}i})$.
In Lorentzian signature, the superderivatives in flat space are given by: $$D_{\alpha i} = \frac{\partial}{\partial\theta^{\alpha i}}+i(\sigma^a)_{\alpha\dot{\alpha}}\bar{\theta}^{\dot{\alpha}}_i\frac{\partial}{\partial x^a}, \qquad \bar{D}^{\dot{\alpha}i} = \frac{\partial}{\partial\bar{\theta}_{\dot{\alpha}i}}+i(\bar{\sigma}^a)^{\alpha\dot{\alpha}}\theta_\alpha^i\frac{\partial}{\partial x^a}.$$ An $\mathcal{N}=2$ chiral superfield satisfies: $$\bar{D}^{\dot{\alpha}i}\Phi=0.$$ In terms of component fields, it decomposes into: $$\begin{aligned} &\qquad A \equiv \Phi\vert_{\theta=0}, \qquad \Psi_{\alpha i} \equiv D_{\alpha i} \Phi\vert_{\theta=0}, \qquad B_{ij}\equiv -\frac{1}{2}D_{ij}\Phi\vert_{\theta=0},\\ & F_{ab}^-\equiv -\frac{1}{4}(\sigma_{ab})_\alpha^\beta D_\beta^\alpha\Phi\vert_{\theta=0},\qquad \Lambda_{\alpha i} \equiv \frac{1}{6}\epsilon^{jk}D_{\alpha k}D_{ji}\Phi\vert_{\theta=0},\qquad C=-2D^4\Phi\vert_{\theta=0}, \end{aligned}$$ where $$D_{ij}\equiv -D_{\alpha (i}D^\alpha_{j)}, \qquad D_{\alpha\beta}\equiv -\epsilon^{ij}D_{(\alpha i}D_{\beta)j}.$$ Note that in we have used the normalization found in [@deWit:1980lyi] for the kinetic term, which differs from the one given in [@Butter:2013lta] by a factor of $4$. Notation for Grassmann Quaternions and Useful Identities {#app:QuaternionicNotation} -------------------------------------------------------- Consider a Grassmann quaternion $\alpha = \alpha_0 {\bf 1}+\alpha_1{\bf i}+\alpha_2{\bf j}+\alpha_3{\bf k}$, where $\alpha_0,\alpha_1,\alpha_2,\alpha_3$ are real Grassmann numbers.
We define: $$|\alpha|^2 \equiv \alpha\bar{\alpha}.$$ Therefore: $$|\alpha|^4 = -24\alpha_0\alpha_1\alpha_2\alpha_3.$$ Note that the following identity holds for Grassmann quaternions: $$|\beta|^2|\alpha+\beta|^4|\alpha|^2 = |\alpha|^4|\beta|^4.$$ The latter is easily shown by defining $\gamma=\alpha+\beta$: $$|\beta|^2|\alpha+\beta|^4|\alpha|^2 = |\gamma-\alpha|^2|\gamma|^4|\alpha|^2=|\alpha|^2|\gamma|^4|\alpha|^2 = |\alpha|^2|\beta|^4|\alpha|^2=|\alpha|^4|\beta|^4$$ where we have used the fact that for any Grassmann quaternion $\alpha$: $|\alpha|^n=0$ for $n>4$. Superspace Propagators {#app:SuperSpacePropagators} ====================== In the presence of chiral sources $J$, $\bar{J}$ the free action can be written as: $$\begin{aligned} S &= \frac{1}{32\pi^2}\int d^4xd^4\theta d^4\bar{\theta}\Phi\bar{\Phi}+\left(\int d^4xd^4\theta\Phi J +h.c.\right)\\ &=\frac{1}{32\pi^2} \frac{1}{2}\begin{pmatrix} \Phi, & \bar{\Phi} \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \Phi \\ \bar{\Phi} \end{pmatrix} + \begin{pmatrix} \Phi, & \bar{\Phi} \end{pmatrix}\begin{pmatrix} \frac{D^4}{\square^2} & 0 \\ 0 & \frac{\bar{D}^4}{\square^2} \end{pmatrix} \begin{pmatrix} J \\ \bar{J} \end{pmatrix} \end{aligned}$$ where we have used $\int d^4 \bar{\theta} = \bar{D}^4$, and the identity: $$\frac{\bar{D}^4 D^4}{\square^2}\Phi = \Phi ,$$ which holds for any $\mathcal{N}=2$ chiral superfield $\Phi$.
The variation is given by: $$\frac{\delta\Phi(y,\theta^i)}{\delta\Phi(y',\theta'_i)}= \delta(y-y')\delta(\theta^i-\theta'^i), \qquad \frac{\delta\Phi(x,\theta^i,\bar{\theta}_i)}{\delta\Phi(x',\theta'^i,\bar{\theta}'_i)}=\bar{D}^4\delta(z-z'),$$ where $$\delta(z-z') \equiv \delta(x-x')\delta(\theta^i-\theta'^i)\delta(\bar{\theta}_i-\bar{\theta}'_i).$$ We thus find: $$\frac{1}{32\pi^2}\begin{pmatrix} \bar{D}^4 & 0 \\ 0 & D^4 \end{pmatrix}\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \begin{pmatrix} \Phi \\ \bar{\Phi}\end{pmatrix} = - \begin{pmatrix} J \\ \bar{J} \end{pmatrix}.$$ The Green’s function satisfies: $$\frac{1}{32\pi^2}\begin{pmatrix} \bar{D}^4 & 0 \\ 0 & D^4 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \vartriangle = \begin{pmatrix} \bar{D}^4 & 0 \\ 0 & D^4 \end{pmatrix} \delta(z-z').$$ A solution to this equation is given by: $$\vartriangle = 32\pi^2 \begin{pmatrix} 0 & \frac{\bar{D}^4D^4}{\square^2}\\ \frac{D^4\bar{D}^4}{\square^2} & 0 \end{pmatrix}\delta(z-z'),$$ therefore: $$\vartriangle \equiv \begin{pmatrix} \left<\Phi(z)\Phi(z')\right> & \left<\Phi(z)\bar{\Phi}(z')\right>\\ \left<\bar{\Phi}(z)\Phi(z')\right> & \left<\bar{\Phi}(z)\bar{\Phi}(z')\right> \end{pmatrix}= \begin{pmatrix} 0 & \bar{D}^4D^4 \\ D^4\bar{D}^4 & 0 \end{pmatrix}\hat{\vartriangle}(x-x'),$$ where $\hat{\vartriangle}(x-x')\equiv -\log{(x-x')}$. Therefore one recovers the results given in equations , , .
Quaternionic Formalism for $\mathcal{N}=2$ Superconformal Algebra in $4d$ {#app:QuaternionicAlgebra} ========================================================================= By acting with the $\mathcal{N}=2$ transformations on each of the quaternionic supercoordinates (see subsection \[subsec:ForDNequals2TransQuanternions\]), one recovers the algebra of [@Lukierski:1983jg]: $$\begin{aligned} & \left[ D,Q_2(\zeta) \right] = -\frac{1}{2}Q_2(\zeta), \qquad \left[D,\pi_1(\zeta)\right] = -\frac{1}{2}\pi_1(\zeta), \\ & \left[ D, Q_1(\zeta)\right]=\frac{1}{2}Q_1(\zeta), \qquad \left[D,\pi_2(\zeta)\right]=\frac{1}{2}\pi_2(\zeta), \\ & \left[ M(\omega_1, \omega_2), Q_2(\zeta)\right] =Q_2(\omega_2\zeta), \qquad \left[M(\omega_1,\omega_2),\pi_1(\zeta)\right] = \pi_1(\omega_1\zeta), \\ & \left[ M(\omega_1,\omega_2), Q_1(\zeta)\right] = Q_1(\omega_1\zeta), \qquad \left[M(\omega_1,\omega_2),\pi_2(\zeta)\right] = \pi_2(\omega_2\zeta), \\ & \left[ K(b),Q_2(\zeta)\right] = Q_1(\bar{b}\zeta), \qquad \left[ K(b),\pi_1(\zeta)\right] = -\pi_2(b\zeta), \\ & \left[ P(a),\pi_2(\zeta) \right] = -\pi_1(\bar{a}\zeta), \qquad \left[P(a),Q_1(\zeta) \right] = -Q_2(a\zeta),\\ & \left[ M(\omega_1,\omega_2), M(\omega_1',\omega_2')\right] = M(\left[\omega_1,\omega_1'\right],\left[\omega_2,\omega_2'\right]),\\ & \left[M(\omega_1,\omega_2),P(a)\right] = P(\omega_2a-a\omega_1), \\ & \left[ P(a),D\right] = P(a), \\ &\left[K(b),D\right] = -K(b), \\ &\left[ M(\omega_1,\omega_2),K(b)\right] = K(\omega_2b-b\omega_1), \\ & \left[ P(a),K(b)\right] = -(\bar{a}b+\bar{b}a)D+\frac{1}{2}M(\bar{a}b-\bar{b}a,a\bar{b}-b\bar{a}),\end{aligned}$$ $$\begin{aligned} &\left[G(q),Q_1(\zeta)\right]=-Q_1(\zeta q), \qquad \left[G(q),Q_2(\zeta) \right] = -Q_2(\zeta q), \\ & \left[ G(q), \pi_1(\eta) \right] = -\pi_1(\eta q), \qquad \left[ G(q), \pi_2(\eta) \right] = -\pi_2(\eta q),\end{aligned}$$ $$\begin{aligned} & \left[ A, Q_1(\zeta) \right] = -Q_1(\zeta), \qquad \left[ A, Q_2(\zeta) \right]=-Q_2(\zeta),\\ & \left[ A, 
\pi_1(\eta)\right] = \pi_1(\eta), \qquad \left[ A, \pi_2(\eta)\right] = \pi_2(\eta),\end{aligned}$$ $$\begin{aligned} & \lbrace Q_2(\zeta), \pi_1(\eta) \rbrace = P(\zeta\bar{\eta}), \\ & \lbrace Q_1(\zeta), \pi_2(\eta) \rbrace = K(\eta\bar{\zeta}), \\ & \lbrace Q_1(\zeta),\pi_1(\eta) \rbrace = -\frac{1}{2}G(\bar{\eta}\zeta-\bar{\zeta}\eta)-\frac{3}{4}\left(\zeta\bar{\eta}+\eta\bar{\zeta}\right)A\\ & \qquad \qquad \qquad \qquad +\frac{1}{2}\left(\zeta\bar{\eta}+\eta\bar{\zeta}\right)D+\frac{1}{2}M(\zeta\bar{\eta}-\eta\bar{\zeta},0),\nonumber \\ & \lbrace Q_2(\zeta), \pi_2(\eta) \rbrace = -\frac{1}{2}G(\bar{\eta}\zeta-\bar{\zeta}\eta)-\frac{3}{4}\left(\bar{\eta}\zeta+\bar{\zeta}\eta\right)A\\ & \qquad \qquad \qquad \qquad -\frac{1}{2}\left(\bar{\eta}\zeta+\bar{\zeta}\eta\right)D+\frac{1}{2}M(0,\zeta\bar{\eta}-\eta\bar{\zeta}).\nonumber \end{aligned}$$ Dimensions and Charges ====================== In this appendix we list the dimensions and charges of the fields in . $\Delta$ represents the dimension, $j_1$ and $j_2$ are the corresponding $\mathrm{O}(4)\cong \mathrm{SU}(2)_1\times\mathrm{SU}(2)_2$ charges, $R$ is the $\mathrm{SU}(2)_R$ charge and $w$ corresponds to the $\mathrm{U}(1)_R$ charge. The list is given in table \[Table1\].
|  | $\Delta$ | $j_1$ | $j_2$ | $R$ | $w$ |
|---|---|---|---|---|---|
| $F_{\alpha\beta}$ | $1$ | $\pm 1,0$ | $0$ | $0$ | $-1$ |
| $\bar{F}_{\dot{\alpha}\dot{\beta}}$ | $1$ | $0$ | $\pm 1,0$ | $0$ | $1$ |
| $B^{ij}$ | $1$ | $0$ | $0$ | $\pm 1,0$ | $-1$ |
| $\bar{B}_{ij}$ | $1$ | $0$ | $0$ | $\pm 1,0$ | $1$ |
| $\psi^i_{\alpha}$ | $\frac{1}{2}$ | $\pm\frac{1}{2}$ | $0$ | $\pm \frac{1}{2}$ | $-\frac{1}{2}$ |
| $\bar{\psi}_i^{\dot{\alpha}}$ | $\frac{1}{2}$ | $0$ | $\pm \frac{1}{2}$ | $\pm\frac{1}{2}$ | $\frac{1}{2}$ |
| $\Lambda^i_{\alpha}$ | $\frac{3}{2}$ | $\pm \frac{1}{2}$ | $0$ | $\pm\frac{1}{2}$ | $-\frac{3}{2}$ |
| $\bar{\Lambda}_i^{\dot{\alpha}}$ | $\frac{3}{2}$ | $0$ | $\pm\frac{1}{2}$ | $\pm\frac{1}{2}$ | $\frac{3}{2}$ |
| $e^{2\alpha A+2\tilde{\alpha}{A^*}}$ | $-4\alpha\tilde{\alpha}+\frac{2}{b}(\alpha+\tilde{\alpha})$ | $0$ | $0$ | $0$ | $\frac{2}{b}(\alpha-\tilde{\alpha})$ |
| $\partial_{\alpha\dot{\alpha}}A\, (Q=0)$ | $1$ | $\frac{1}{2}$ | $\frac{1}{2}$ | $0$ | $0$ |
| $\partial^{\alpha\dot{\alpha}}\bar{A}\, (Q=0)$ | $1$ | $\frac{1}{2}$ | $\frac{1}{2}$ | $0$ | $0$ |

: The dimensions and charges of the fields which correspond to the action in . The last two lines represent primary operators only for the Coulomb gas case ($Q=0$).[]{data-label="Table1"}

[999]{} A. M. Polyakov, “Quantum Geometry of Bosonic Strings”, Phys. Lett. B [**103**]{}, 207 (1981) \[Phys. Lett.  [**103B**]{}, 207 (1981)\]. doi:10.1016/0370-2693(81)90743-7 Y. Oz, “Spontaneous Symmetry Breaking, Conformal Anomaly and Incompressible Fluid Turbulence,” JHEP [**1711**]{}, 040 (2017) doi:10.1007/JHEP11(2017)040 \[arXiv:1707.07855 \[hep-th\]\]. Y. 
Oz, “Turbulence and Random Geometry,” arXiv:1809.10003 \[hep-th\], to appear in the Memorial Volume for Jacob Bekenstein. T. Levy and Y. Oz, “Liouville Conformal Field Theories in Higher Dimensions,” JHEP [**1806**]{}, 119 (2018) doi:10.1007/JHEP06(2018)119 \[arXiv:1804.02283 \[hep-th\]\]. T. Levy, Y. Oz and A. Raviv-Moshe, “$\mathcal{N}=1$ Liouville SCFT in Four Dimensions,” JHEP [**1812**]{} (2018) 122 doi:10.1007/JHEP12(2018)122 \[arXiv:1810.02746 \[hep-th\]\]. G. Mussardo, G. Sotkov and M. Stanishkov, “N=2 Superconformal Minimal Models”, Int. J. Mod. Phys. A [**4**]{}, 1135 (1989). doi:10.1142/S0217751X89000522 J. Distler, Z. Hlousek and H. Kawai, “Superliouville Theory as a Two-Dimensional, Superconformal Supergravity Theory”, Int. J. Mod. Phys. A [**5**]{}, 391 (1990). doi:10.1142/S0217751X90000180 S. M. Kuzenko, “Super-Weyl anomalies in N=2 supergravity and (non)local effective actions”, JHEP [**1310**]{}, 151 (2013) doi:10.1007/JHEP10(2013)151 \[arXiv:1307.7586 \[hep-th\]\]. S. M. Kuzenko, U. Lindstrom, M. Rocek and G. Tartaglino-Mazzucchelli, “4D N = 2 Supergravity and Projective Superspace”, JHEP [**0809**]{}, 051 (2008) doi:10.1088/1126-6708/2008/09/051 \[arXiv:0805.4683 \[hep-th\]\]. D. Butter, B. de Wit, S. M. Kuzenko and I. Lodato, “New higher-derivative invariants in N=2 supergravity and the Gauss-Bonnet term”, JHEP [**1312**]{}, 062 (2013) doi:10.1007/JHEP12(2013)062 \[arXiv:1307.6546 \[hep-th\]\]. G. Festuccia and N. Seiberg, “Rigid Supersymmetric Theories in Curved Superspace”, JHEP [**1106**]{}, 114 (2011) doi:10.1007/JHEP06(2011)114 \[arXiv:1105.0689 \[hep-th\]\]. M. de Roo, J. W. van Holten, B. de Wit and A. Van Proeyen, “Chiral Superfields in $N=2$ Supergravity”, Nucl. Phys. B [**173**]{}, 175 (1980). doi:10.1016/0550-3213(80)90449-6 B. de Wit, J. W. van Holten and A. Van Proeyen, “Structure of N=2 Supergravity”, Nucl. Phys. B [**184**]{}, 77 (1981) Erratum: \[Nucl. Phys. B [**222**]{}, 516 (1983)\]. 
doi:10.1016/0550-3213(83)90548-5, 10.1016/0550-3213(81)90211-X S. M. Kuzenko and G. Tartaglino-Mazzucchelli, “Different representations for the action principle in 4D N = 2 supergravity”, JHEP [**0904**]{}, 007 (2009) doi:10.1088/1126-6708/2009/04/007 \[arXiv:0812.3464 \[hep-th\]\]. M. Müller, “Consistent Classical Supergravity Theories”, Lect. Notes Phys.  [**336**]{} (1989). doi:10.1007/3-540-51427-9 S. M. Paneitz, “A Quartic Conformally Covariant Differential Operator for Arbitrary Pseudo-Riemannian Manifolds (Summary)", arXiv:0803.4331 \[math.DG\]. T. Branson, “Differential operators canonically associated to a conformal structure", Math. Scand. [**57**]{} (1985) 293. S. M. Kuzenko and G. Tartaglino-Mazzucchelli, “Field theory in 4D N=2 conformally flat superspace”, JHEP [**0810**]{}, 001 (2008) doi:10.1088/1126-6708/2008/10/001 \[arXiv:0807.3368 \[hep-th\]\]. D. Butter, G. Inverso and I. Lodato, “Rigid 4D $ \mathcal{N}=2 $ supersymmetric backgrounds and actions”, JHEP [**1509**]{}, 088 (2015) doi:10.1007/JHEP09(2015)088 \[arXiv:1505.03500 \[hep-th\]\]. B. de Wit, S. Katmadas and M. van Zalk, “New supersymmetric higher-derivative couplings: Full N=2 superspace does not count!”, JHEP [**1101**]{}, 007 (2011) doi:10.1007/JHEP01(2011)007 \[arXiv:1010.2150 \[hep-th\]\]. S. Deser and A. Schwimmer, “Geometric classification of conformal anomalies in arbitrary dimensions,” Phys. Lett. B [**309**]{} (1993) 279 doi:10.1016/0370-2693(93)90934-A \[hep-th/9302047\]. J. Teschner, “Liouville theory revisited,” Class. Quant. Grav.  [**18**]{}, R153 (2001) doi:10.1088/0264-9381/18/23/201 \[hep-th/0104158\]. H. Osborn, “N=1 superconformal symmetry in four-dimensional quantum field theory”, Annals Phys.  [**272**]{}, 243 (1999) doi:10.1006/aphy.1998.5893 \[hep-th/9808041\]. K. Symanzik, “On Calculations in conformal invariant field theories”, Lett. Nuovo Cim.  [**3**]{}, 734 (1972). doi:10.1007/BF02824349 A. B. Zamolodchikov and A. B. 
Zamolodchikov, “Structure constants and conformal bootstrap in Liouville field theory”, Nucl. Phys. B [**477**]{}, 577 (1996) doi:10.1016/0550-3213(96)00351-3 \[hep-th/9506136\]. H. Dorn and H. J. Otto, “On correlation functions for noncritical strings with $c \leq 1$, $d \geq 1$”, Phys. Lett. B [**291**]{}, 39 (1992) doi:10.1016/0370-2693(92)90116-L \[hep-th/9206053\]. P. Furlan and V. B. Petkova, “On some Coulomb gas integrals in higher dimensions”, arXiv:1806.03270 \[hep-th\]. D. Harlow, J. Maltz and E. Witten, “Analytic Continuation of Liouville Theory,” JHEP [**1112**]{} (2011) 071 doi:10.1007/JHEP12(2011)071 \[arXiv:1108.4417 \[hep-th\]\]. J. Lukierski and A. Nowicki, “Euclidean Superconformal Symmetry and its Relation with Minkowski Supersymmetries”, Phys. Lett.  [**127B**]{}, 40 (1983). doi:10.1016/0370-2693(83)91626-X C. Beem, M. Lemos, P. Liendo, W. Peelaers, L. Rastelli and B. C. van Rees, “Infinite Chiral Symmetry in Four Dimensions”, Commun. Math. Phys.  [**336**]{}, no. 3, 1359 (2015) doi:10.1007/s00220-014-2272-x \[arXiv:1312.5344 \[hep-th\]\]. A. Gadde, L. Rastelli, S. S. Razamat and W. Yan, “Gauge Theories and Macdonald Polynomials”, Commun. Math. Phys.  [**319**]{}, 147 (2013) doi:10.1007/s00220-012-1607-8 \[arXiv:1110.3740 \[hep-th\]\]. [^1]: We denote by $F_{++}$ and $\partial_{++}A$ the operators which are highest weight states of $\mathrm{SU}(2)_1$ (see table \[Table1\]).
--- abstract: 'A basic statistical mechanics analysis of many-body systems with non-reciprocal pair interactions is presented. Different non-reciprocity classes in two- and three-dimensional binary systems (relevant to real experimental situations) are investigated, where the action-reaction symmetry is broken for the interaction between different species. The asymmetry is characterized by a non-reciprocity parameter $\Delta$, which is the ratio of the non-reciprocal to reciprocal pair forces. It is shown that for the “constant” non-reciprocity (when $\Delta$ is independent of the interparticle distance $r$) one can construct a pseudo-Hamiltonian and such systems, being intrinsically non-equilibrium, can nevertheless be described in terms of equilibrium statistical mechanics and exhibit detailed balance with distinct temperatures for the different species. For a general case (when $\Delta$ is a function of $r$) the temperatures grow with time, approaching a universal power-law scaling, while their ratio is determined by an [*effective*]{} constant non-reciprocity which is uniquely defined for a given interaction.' author: - 'A. V. Ivlev' - 'J. Bartnick' - 'M. Heinen' - 'H. Löwen' title: 'Statistical mechanics for non-reciprocal forces' --- One of the fundamental postulates of classical mechanics is Newton’s third law [*actio*]{}=[*reactio*]{}, which states that the pair interactions between particles are reciprocal. Newton’s third law holds not only for the fundamental microscopic forces, but also for [*equilibrium*]{} effective forces on classical particles, obtained by integrating out microscopic degrees of freedom [@Israelachvili; @Dijkstra00; @Bolhuis01; @Praprotnik08; @Mognetti09].
However, the action-reaction symmetry for particles can be broken when their interaction is mediated by some [*non-equilibrium environment*]{}: This occurs, for instance, when the environment moves with respect to the particles, or when a system of particles includes different species and their interaction with the environment is out of equilibrium (of course, Newton’s third law holds for the complete “particles plus environment” system). Examples of non-reciprocal interactions on the mesoscopic length-scale include forces induced by non-equilibrium fluctuations [@Hayashi06; @Buenzli08], optical [@Dholakia10] and diffusiophoretic [@Sabass10; @Soto14] forces, effective interactions between colloidal particles under solvent or depletant flow [@Dzubiella03; @Khair07; @Mejia11; @Sriram12], shadow [@Tsytovich97; @Khrapak01; @Chaudhuri11] and wake-mediated [@Melzer99; @Morfill09; @Couedel10] interactions between microparticles in a flowing plasma, etc. A very different example of non-reciprocal interactions is the “social forces” [@Helbing95; @Helbing00] governing, e.g., pedestrian dynamics. Non-reciprocal forces are in principle non-Hamiltonian, so the standard Boltzmann description of classical equilibrium statistical mechanics breaks down. Hence, it is a priori unclear whether concepts like temperature and thermodynamic phases can be used to describe them. To the best of our knowledge, the classical statistical mechanics of systems with non-reciprocal interactions – despite its fundamental importance – has remained unexplored so far. In this Letter we present the statistical foundations of systems with non-reciprocal interparticle interactions. We consider a binary system of particles, where the action-reaction symmetry is broken for the pair interaction between different species. The asymmetry is characterized by the non-reciprocity parameter $\Delta$, which is the ratio of the non-reciprocal to reciprocal forces.
We show that for the “constant” non-reciprocity, when $\Delta$ is independent of the interparticle distance $r$, one can construct a (pseudo) Hamiltonian with renormalized masses and interactions. Hence, being intrinsically non-equilibrium, such systems can nevertheless be described in terms of equilibrium statistical mechanics and exhibit detailed balance with distinct temperatures for different species (the temperature ratio is determined by $\Delta$). For a general case, when $\Delta$ is a function of $r$, the system is no longer conservative – it follows a universal asymptotic behavior with the temperatures growing with time as $\propto t^{2/3}$. The temperature ratio in this case is determined by an [*effective*]{} constant non-reciprocity which is uniquely defined for a given interaction. In the presence of frictional dissipation the temperatures reach a steady state, while their ratio remains practically unchanged. Let us consider a binary mixture of particles of the sort $A$ and $B$. The spatial dependence of the pair interaction is described by the function $\varphi(r)$. The interaction is reciprocal for the $AA$ and $BB$ pairs, while between the species $A$ and $B$ the action-reaction symmetry is broken. The measure of the asymmetry is the [*non-reciprocity parameter*]{} $\Delta(\geq0)$ which we first assume to be independent of the interparticle distance (“constant”). We present the force ${\bf F}_{ij}$ exerted by the particle $i$ on the particle $j$ as follows: $$\label{force_const} {\bf F}_{ij} = -\frac{\partial\varphi(r_{ij})}{\partial{\bf r}_{j}}\times\left\{ \begin{array}{cl} 1-\Delta & \text{ for $ij \in AB$};\\ 1+\Delta & \text{ for $ij \in BA$};\\ 1 & \text{ for $ij \in AA$ or $BB$}, \end{array} \right.$$ where $r_{ij}=|{\bf r}_i-{\bf r}_j|$ and each particle can be of the sort $A$ or $B$; note that $\varphi(r)$ may be different for different pairs [@note0].
By writing the Newtonian equations of motion of individual particles interacting via the force (\[force\_const\]), we notice that the interaction symmetry is restored if the particle masses and interactions are renormalized as follows: $$\begin{aligned} \label{mass_ren} \tilde m_{i} &=& m_i\times\left\{ \begin{array}{rl} (1+\Delta)^{-1} & \text{ for $i\in A$};\\ (1-\Delta)^{-1} & \text{ for $i\in B$}, \end{array} \right.\\[.3cm]\label{phi_ren} \tilde \varphi(r_{ij}) &=& \varphi(r_{ij})\times\left\{ \begin{array}{cl} (1+\Delta)^{-1} & \text{ for $ij\in AA$};\\ (1-\Delta)^{-1} & \text{ for $ij\in BB$};\\ 1 & \text{ for $ij\in AB$ or $BA$}. \end{array} \right.\end{aligned}$$ Hence, a binary system of $N$ particles with non-reciprocal interactions of the form of Eq. (\[force\_const\]) is described by a [*pseudo-Hamiltonian*]{} with the masses (\[mass\_ren\]) and interactions (\[phi\_ren\]). In particular, this implies the pseudo-momentum and energy conservation, $$\begin{aligned} \sum\limits_{i}^N\tilde m_i{\bf v}_i &=& {\rm const}, \\ \sum\limits_{i}^N\frac12\tilde m_iv_i^2+\sum\limits_{i<j}^N\tilde\varphi(r_{ij}) &=& {\rm const},\end{aligned}$$ and allows us to employ the methods of equilibrium statistical mechanics to describe such systems. For instance, from equipartition, $\frac12\tilde m_A\langle v_A^2\rangle=\frac12\tilde m_B\langle v_B^2\rangle\equiv\frac12Dk_{\rm B}\tilde T$ (where $\tilde T$ is the pseudo-temperature and $D$ is the dimensionality), it immediately follows that in detailed balance $T_A=(1+\Delta)\tilde T$ and $T_B=(1-\Delta)\tilde T$, i.e., $$\label{equilibrium} \frac{T_A}{T_B}=\frac{1+\Delta}{1-\Delta}.$$ We conclude that a mixture of particles with non-reciprocal interactions can be in a remarkable state of equilibrium, where the species have different temperatures $T_A$ and $T_B$. Such an equilibrium is only possible for $\Delta<1$, otherwise the forces ${\bf F}_{AB}$ and ${\bf F}_{BA}$ are pointed in the [*same*]{} direction \[see Eq.
(\[force\_const\])\] and the system cannot be stable. Now we shall study a general case, when the interaction between the species $A$ and $B$ is determined by the force whose reciprocal, ${\bf F}_{\rm r}(r)$, and non-reciprocal, ${\bf F}_{\rm n}(r)$, parts are arbitrary functions of the interparticle distance $r$. Both forces can be presented as ${\bf F}_{\rm r,n}(r)=({\bf r}/r)F_{\rm r,n}(r)$, where $F_{\rm r,n}=-d\varphi_{\rm r,n}/dr$. It is instructive to write the equations of motion for a pair of interacting particles $A$ and $B$ in terms of the relative coordinate ${\bf r}= {\bf r}_{A}-{\bf r}_{B}$ and the center-of-mass coordinate ${\bf R}= (m_A{\bf r}_{A}+m_B{\bf r}_{B})/M$, $$\begin{aligned} M\ddot{\bf R} &=& 2{\bf F}_{\rm n}(r),\label{motion_C}\\ \mu\ddot{\bf r} &=& {\bf F}_{\rm r}(r)+\frac{m_B-m_A}{m_A+m_B}{\bf F}_{\rm n}(r),\label{motion_R}\end{aligned}$$ where $\mu= m_Am_B/(m_A+m_B)$ and $M= m_A+m_B$ are the reduced and total masses, respectively. We define the relative velocity, ${\bf v}= \dot{\bf r}$, the center-of-mass velocity, ${\bf V}= \dot{\bf R}$, and their values after a collision, ${\bf v}'={\bf v}+\delta{\bf v}$ and ${\bf V}'={\bf V}+\delta{\bf V}$. From Eq. (\[motion\_R\]) we conclude that the relative motion is conservative, i.e., the absolute value of the relative velocity remains unchanged after a collision, $|{\bf v}+\delta{\bf v}|=|{\bf v}|$. Equation (\[motion\_C\]) governs the variation of the center-of-mass velocity, $\delta{\bf V}$, which is determined by the relative motion via ${\bf F}_{\rm n}(r)$. 
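These two-body statements can be checked directly. The following is a minimal numerical sketch, with illustrative assumptions not taken from the text: equal unit masses, a constant non-reciprocity $\Delta=0.5$, and a model Gaussian potential $\varphi(r)=e^{-r^2}$. Integrating a single $A$–$B$ encounter with the forces of Eq. (\[force\_const\]) shows that the relative speed is unchanged by the collision, and that the pseudo-momentum and pseudo-energy built from the renormalized masses (\[mass\_ren\]) are conserved to integrator accuracy, while the true total momentum is not.

```python
import numpy as np

Delta = 0.5                                     # constant non-reciprocity (illustrative)
mA = mB = 1.0                                   # true masses
mtA, mtB = mA/(1 + Delta), mB/(1 - Delta)       # renormalized pseudo-masses, Eq. (mass_ren)

def recip_force_on_A(rA, rB):
    """Reciprocal part -d phi(r)/d rA for the model potential phi(r) = exp(-r^2)."""
    d = rA - rB
    return 2.0*np.exp(-d @ d)*d                 # repulsive

def rhs(state):
    rA, vA, rB, vB = state
    F = recip_force_on_A(rA, rB)
    return [vA, (1 + Delta)*F/mA,               # A feels (1+Delta) x reciprocal force
            vB, -(1 - Delta)*F/mB]              # B feels (1-Delta) x opposite force

def rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs([s + 0.5*dt*k for s, k in zip(state, k1)])
    k3 = rhs([s + 0.5*dt*k for s, k in zip(state, k2)])
    k4 = rhs([s + dt*k for s, k in zip(state, k3)])
    return [s + dt/6.0*(a + 2*b + 2*c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def pseudo_energy(state):
    rA, vA, rB, vB = state
    d = rA - rB
    return 0.5*mtA*vA @ vA + 0.5*mtB*vB @ vB + np.exp(-d @ d)

state = [np.array([-5.0, 0.3]), np.array([1.0, 0.0]),
         np.array([5.0, -0.3]), np.array([-1.0, 0.0])]
P0  = mA*state[1] + mB*state[3]                 # true total momentum
Pt0 = mtA*state[1] + mtB*state[3]               # pseudo-momentum
E0  = pseudo_energy(state)
v_rel0 = np.linalg.norm(state[1] - state[3])

for _ in range(20000):                          # integrate through one encounter
    state = rk4(state, 1e-3)

print(np.linalg.norm(mtA*state[1] + mtB*state[3] - Pt0))  # conserved (tiny)
print(abs(pseudo_energy(state) - E0))                     # conserved (tiny)
print(abs(np.linalg.norm(state[1] - state[3]) - v_rel0))  # |v| unchanged (tiny)
print(np.linalg.norm(mA*state[1] + mB*state[3] - P0))     # finite: action-reaction broken
```

Note that the pseudo-momentum is a linear first integral and is preserved by the Runge–Kutta step to rounding error, so the only finite drift is in the true momentum, as expected for $\Delta\neq0$.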
By employing the relation ${\bf v}_{A,B}={\bf V}\pm(\mu/m_{A,B}){\bf v}$, we obtain the variation of the kinetic energy $E_{A,B}$ after a collision: $$\begin{aligned} \delta E_{A,B}=m_{A,B}\left[{\bf V}\cdot\delta{\bf V}+\frac12(\delta{\bf V})^2\right]\hspace{2.cm}\nonumber\\ \pm\mu\left({\bf V}\cdot\delta{\bf v}+{\bf v}\cdot\delta{\bf V}+\delta{\bf V}\cdot\delta{\bf v}\right).\label{delta_E}\end{aligned}$$ Let us introduce the angle $\theta$ between ${\bf V}$ and ${\bf v}$, and the scattering angle $\chi$ between ${\bf v}'$ and ${\bf v}$. Since the relative motion is conservative, from Eq. (\[motion\_C\]) we conclude that $\delta{\bf V}$ is parallel to $\delta{\bf v}$. Hence, $\delta{\bf V}\cdot\delta{\bf v}=\delta V\delta v$, and for two-dimensional (2D) systems we have ${\bf V}\cdot\delta{\bf V}=V\delta V\sin(\theta-\frac12\chi)$, ${\bf V}\cdot\delta{\bf v}=V\delta v\sin(\theta-\frac12\chi)$, and ${\bf v}\cdot\delta{\bf V}=-v\delta V\sin\frac12\chi$ [@note1; @LandauMechanics; @LandauKinetics]. In order to calculate the magnitudes of the velocity variations and the scattering angle, we consider the [*small-angle*]{} scattering, $\chi\ll1$ [@LandauMechanics]: Such an approximation significantly simplifies the general analysis and is valid for sufficiently high temperatures (provided the pair interaction is not of the hard-sphere-like type). Using Eqs. (\[motion\_C\]) and (\[motion\_R\]), for a given impact parameter $\rho$ we get $\delta V(\rho)=(4/M v)f_{\rm n}(\rho)$ and $\chi(\rho)=\delta v/v=(2/\mu v^2)[f_{\rm r}(\rho)+ \frac{m_B-m_A}{m_A+m_B}f_{\rm n}(\rho)]$, expressed via the scattering functions ($\alpha=$r,n): $$f_{\alpha}(\rho)=\rho\int_{\rho}^{\infty}dr\frac{F_{\alpha}(r)}{\sqrt{r^2-\rho^2}}.$$ The equations describing evolution of the mean kinetic energy of the species $A$ and $B$ can be obtained by multiplying $\delta E_{A,B}$ with the collision frequency between the species and averaging it over the velocity distributions [@LandauKinetics].
The collision cross section is represented by the integral over the impact parameter [@LandauMechanics], $\int d\rho$ for 2D systems or $\int d\rho\:2\pi\rho$ for 3D systems. To obtain a closed-form solution, we shall assume that the elastic momentum/energy exchange in collisions provides efficient Maxwellization of the distribution functions (which can be verified by molecular dynamics simulations, see below). Then, one can perform the velocity averaging over the Maxwellian distributions with the temperatures $T_{A,B}$ (note that after the integration over $\theta$ all terms in Eq. (\[delta\_E\]) yield contributions $\sim\chi^2$). After some algebra we derive the following equations for 2D systems: $$\label{kinetics} \dot T_{A,B}=\pm\frac{1\pm\Delta_{\rm eff}}{1+\epsilon}\frac{\sqrt{2\pi}n_{B,A} I_{\rm rr}}{m_Am_B\left(\frac{T_A}{m_A} +\frac{T_B}{m_B}\right)^{3/2}}\left[(1+\Delta_{\rm eff})T_B-(1-\Delta_{\rm eff})T_A+\frac{\epsilon}{1\pm\Delta_{\rm eff}} (T_B-T_A)\right],$$ where $n_{\alpha}$ is the areal number density (for simplicity, below we assume $n_A=n_B=n$). The equations depend on the [*effective non-reciprocity*]{} $\Delta_{\rm eff}$ and the [*interaction disparity*]{} $\epsilon$, $$\label{parameters} \begin{array}{l} \Delta_{\rm eff} = I_{\rm nn}/I_{\rm rn},\\[.2cm] \epsilon = I_{\rm rr}I_{\rm nn}/I_{\rm rn}^2-1, \end{array}$$ expressed via the integrals $I_{\alpha\beta}=\int_0^{\infty}d\rho\: f_{\alpha}f_{\beta}$ (naturally, it is assumed that the integrals converge). We point out that $\Delta_{\rm eff}$ and $\epsilon$ are numbers uniquely defined for given functions $\varphi_{\rm r,n}(r)$; from the Cauchy inequality it follows that $\epsilon\geq0$. Note that for 3D systems the r.h.s. of Eq. (\[kinetics\]) should be multiplied by the additional factor 8/3, and the integrals become $I_{\alpha\beta}=\int_0^{\infty}d\rho\: \rho f_{\alpha}f_{\beta}$. 
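Equation (\[kinetics\]) can also be integrated numerically. A sketch follows, with assumptions made purely for illustration: equal unit masses, units in which the prefactor $\sqrt{2\pi}\,n I_{\rm rr}/(m_Am_B)$ equals one, and representative values of $\Delta_{\rm eff}$ and $\epsilon$. The two temperatures are seen to lock into a constant ratio while growing as a power of time.

```python
import numpy as np
from scipy.integrate import solve_ivp

Deff, eps = 0.57, 0.082          # illustrative effective non-reciprocity and disparity

def rhs(t, T):
    # 2D temperature equations with the dimensional prefactor set to 1
    TA, TB = T
    G = (TA + TB)**-1.5
    BA = (1 + Deff)*TB - (1 - Deff)*TA + eps/(1 + Deff)*(TB - TA)
    BB = (1 + Deff)*TB - (1 - Deff)*TA + eps/(1 - Deff)*(TB - TA)
    return [(1 + Deff)/(1 + eps)*G*BA,
            -(1 - Deff)/(1 + eps)*G*BB]

sol = solve_ivp(rhs, (0.0, 1e6), [1.0, 1.0], rtol=1e-8, atol=1e-10,
                t_eval=[1.25e5, 1e6])
(TA1, TA2), (TB1, TB2) = sol.y
tau = np.sqrt(((1 + Deff)**2 + eps)/((1 - Deff)**2 + eps))
print(TA2/TB2, tau)              # the temperature ratio settles at a constant
print(TA2/TA1)                   # close to 8**(2/3) = 4, i.e. T grows as t**(2/3)
```

The late-time ratio agrees with the constant $\tau$ defined below, and the factor-of-eight span in time gives close to a factor of four in temperature, consistent with the $t^{2/3}$ growth.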
For a reciprocal Coulomb interaction, $I_{\rm rr}$ is proportional to the so-called Coulomb logarithm (see e.g., [@LandauKinetics; @Spitzer]) and $\Delta_{\rm eff}=0$. In this case Eq. (\[kinetics\]) is reduced to the classical equation for the temperature relaxation in a plasma [@LandauKinetics]. For the “constant” non-reciprocity, $F_{\rm n}(r)/F_{\rm r}(r)=\Delta$, we get $\Delta_{\rm eff}=\Delta$ and $\epsilon=0$. In this case Eq. (\[kinetics\]) yields the equilibrium $\dot T_{A,B}=0$ for the temperature ratio given by Eq. (\[equilibrium\]). Otherwise we have $\epsilon>0$ and the temperatures grow with time, approaching the asymptotic solution, $$\label{asymp1} t\to\infty:\quad T_A(t)=\tau T_B(t)=ct^{2/3},$$ where $c\propto(nI_{\rm rr})^{2/3}$ and $$\label{asymp2} \tau=\sqrt{\frac{(1+\Delta_{\rm eff})^2+\epsilon}{(1-\Delta_{\rm eff})^2+\epsilon}}.$$ Thus, the asymptotic temperature ratio is a constant which tends to the equilibrium value \[Eq. (\[equilibrium\])\] for $\epsilon\to0$. To verify the analytical results, we carried out a molecular dynamics simulation of a 2D binary, equimolar mixture of soft spheres. We implemented the velocity Verlet algorithm [@Swope82] with an adaptive time step. The simulation box with periodic boundary conditions contained $2\times20,000$ particles with equal masses. We used a model Hertzian potential [@note2; @Pamies09; @Berthier10] whose reciprocal and non-reciprocal parts are given by $\varphi_{\rm r}(r)=\frac12\varphi_0(\max\{0, 1-r/r_0\})^2$ and $\varphi_{\rm n}(r)=\frac13\varphi_0(\max\{0, 1-r/r_0\})^3$, respectively, where $\varphi_0$ is the interaction energy scale and $r_0$ is the interaction range. At $t=0$ the particles were arranged into two interpenetrating square lattices with the initial temperature $T_A = T_B = T_0$ (therefore, at early simulation time a certain fraction of $T_0$ was converted into the interaction energy). The numerical results are illustrated in Fig. 
\[Fig.1\], where we plot the dependencies $T_{A,B}(t)$ for different $T_0$. For the Hertzian interactions, from Eq. (\[parameters\]) we obtain $\Delta_{\rm eff}=0.57$ and $\epsilon=0.082$, and Eq. (\[asymp2\]) yields the asymptotic temperature ratio $\tau=3.1$. One can see that for all $T_0$ the numerical curves approach the expected universal asymptotes described by Eqs. (\[asymp1\]) and (\[asymp2\]). Note that the early development at sufficiently low temperatures exhibits a remarkably sharp dependence on $T_0$ – we observe the formation of a plateau which broadens dramatically with decreasing $T_0$. On the other hand, for $T_0\gtrsim1$ the numerical results are very well reproduced by the solution of Eq. (\[kinetics\]), as expected. A small ($<10\%$) deviation observed in this case is due to the fact that weak collisions are no longer providing efficient Maxwellization of the velocity distribution for the “hotter” species $A$ (see the lower panel of Fig. \[Fig.1\]). In Fig. \[Fig.2\] we show how the temperature evolution depends on the density $n$. Here, the total kinetic energy $T_A(t)+T_B(t)$ calculated for different values of the areal fraction $\phi=\pi r_0^2n$ is plotted. In contrast to the sharp dependence on $T_0$ seen in Fig. \[Fig.1\], the increase of $n$ is accompanied by an approximately proportional shortening of the plateau [@note3]. The inset demonstrates the predicted $\propto n^{2/3}$ scaling for the asymptotic temperature growth. In order to explain the observed behavior at low temperatures, we point out that the approximation of small-angle scattering is not applicable in this regime and, hence, Eq. (\[kinetics\]) is no longer valid. Strong correlations make the analysis rather complicated in this case, but one can implement a simple phenomenological model to understand the essential features. 
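The quoted values $\Delta_{\rm eff}=0.57$ and $\epsilon=0.082$ for the Hertzian pair can be reproduced by direct quadrature of the integrals in Eq. (\[parameters\]). A sketch (in units $r_0=\varphi_0=1$; the substitution $r=\rho\cosh t$ removes the integrable endpoint singularity of the scattering functions):

```python
import numpy as np
from scipy.integrate import quad

# Hertzian forces F = -dphi/dr in units r0 = phi0 = 1
def Fr(r):                       # from phi_r = (1/2)*(1 - r)^2
    return 1.0 - r if r < 1.0 else 0.0

def Fn(r):                       # from phi_n = (1/3)*(1 - r)^3
    return (1.0 - r)**2 if r < 1.0 else 0.0

def f(F, rho):
    """f(rho) = rho * int_rho^1 dr F(r)/sqrt(r^2 - rho^2), via r = rho*cosh(t)."""
    tmax = np.arccosh(1.0/rho)
    val, _ = quad(lambda t: F(rho*np.cosh(t)), 0.0, tmax)
    return rho*val

def I(Fa, Fb):
    """2D integral I_ab = int_0^1 drho f_a(rho) f_b(rho)."""
    val, _ = quad(lambda rho: f(Fa, rho)*f(Fb, rho), 0.0, 1.0)
    return val

Irr, Irn, Inn = I(Fr, Fr), I(Fr, Fn), I(Fn, Fn)
Deff = Inn/Irn
eps = Irr*Inn/Irn**2 - 1.0
tau = np.sqrt(((1 + Deff)**2 + eps)/((1 - Deff)**2 + eps))
print(Deff, eps, tau)            # approx. 0.57, 0.082, 3.1
```

The resulting $\tau\approx3.1$ is the asymptotic temperature ratio quoted above for the Hertzian interactions.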
We postulate that at sufficiently low temperatures the energy growth caused by non-reciprocal interactions can be balanced by nonlinearity, forming a “dynamical potential well” where the system can reside for a long time. Qualitatively, one can then expect the development around the initial temperature to be governed by the activation processes, and introduce the effective “Arrhenius rate” characterizing these processes. Assuming the dimensionless temperature $T$ (normalized by the effective depth of the well) to be small, we employ the following model equation: $$\label{initial} \dot T=C\exp(-T^{-\gamma}),$$ where $C$ is a constant (possible power-law factors can be neglected for $T\ll1$) and $\gamma$ is an exponent determined by the particular form of the potential well. Substituting $T^{-\gamma}\simeq T_0^{-\gamma}-\gamma T_0^{-\gamma-1}(T-T_0)$ in Eq. (\[initial\]) yields the explosive solution, $$\label{explosion} T(t)=T_0-\frac{T_0^{\gamma+1}}{\gamma}\ln\left[1-\frac{C\gamma}{T_0^{\gamma+1}}\exp(-T_0^{-\gamma})t\right],$$ with the explosion time $t_{\rm ex}=(T_0^{\gamma+1}/C\gamma)\exp(T_0^{-\gamma})$. In Fig. \[Fig.1\] we show that the explosive solution provides quite a reasonable fit to the numerical results at low temperatures for $C=4\times10^{-5}$ and $\gamma=0.305$. Let us briefly discuss the effect of dissipation due to friction against the surrounding medium. To take this into account, one has to add the dissipation term $-2\nu_{A,B}(T_{A,B}-T_{\rm b})$ to the r.h.s. of Eq. (\[kinetics\]), where $\nu_{\alpha}$ is the respective damping rate in the friction force $-m_{\alpha}\nu_{\alpha}{\bf v}_{\alpha}$ and $T_{\rm b}$ is the background temperature (determined by the interaction of individual particles with the medium) [@Morfill09; @vanKampen]. In this case the temperatures $T_{A,B}$ always reach a steady state, since the growth term in Eq. (\[kinetics\]) decreases with temperature.
The resulting steady-state temperature ratio, $\tau_{\nu}$, can be easily derived from Eq. (\[kinetics\]), assuming that the steady-state temperatures are much larger than $T_{\rm b}$. For similar particles, this requires the condition $\nu\ll nI_{\rm rr}/\sqrt{mT_{\rm b}^3}$ to be satisfied [@note4]. Then we obtain the following equation for $\tau_{\nu}$: $$\tilde\nu[(1-\Delta_{\rm eff})^2+\epsilon]\tau_{\nu}^2-(\tilde\nu-1)(1-\Delta_{\rm eff}^2+\epsilon)\tau_{\nu} =(1+\Delta_{\rm eff})^2+\epsilon,$$ where $\tilde\nu=\nu_A/\nu_B$. For $\tilde\nu=1$ we get $\tau_\nu=\tau$, i.e., the steady-state temperature ratio is not affected by friction. Generally, $\tau_{\nu}$ exhibits a very weak dependence on $\tilde\nu$: e.g., for the Hertzian interactions the deviation between $\tau_{\nu}$ and $\tau$ is within $\simeq1\%$ in the range $0.8\leq\tilde\nu\leq1.3$ (expected for experiments with binary complex plasmas [@Morfill09; @Comm]). Note that at low temperatures the system can be dynamically “arrested” due to friction and never reach the asymptotic stage described by Eqs. (\[asymp1\]) and (\[asymp2\]). A simple analysis of Eq. (\[initial\]) with the dissipation term shows that the arrest occurs when $\nu t_{\rm ex}\gtrsim1$. In conclusion, the presented results provide a basic classification of many-body systems with non-reciprocal interactions. We investigated different non-reciprocity classes in 2D and 3D systems which are relevant to a plethora of real situations: For instance, the shadow interactions [@Tsytovich97; @Chaudhuri11] in binary complex plasmas have a constant non-reciprocity and can dominate the kinetics of 3D systems, while the wake-mediated interactions [@Morfill09; @Couedel10] governing the action-reaction symmetry breaking in bilayer complex plasmas are generally characterized by a variable non-reciprocity. 
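The numbers quoted above can be cross-checked by solving the quadratic for $\tau_{\nu}$ directly. The sketch below uses the Hertzian parameters $\Delta_{\rm eff}=0.57$ and $\epsilon=0.082$ from earlier in the text and confirms both $\tau_{\nu}=\tau\approx3.1$ at $\tilde\nu=1$ and the $\simeq1\%$ variation over $0.8\leq\tilde\nu\leq1.3$:

```python
import math

# Hertzian-interaction parameters quoted in the text
d_eff, eps = 0.57, 0.082

def tau_nu(nu_ratio):
    """Positive root of the steady-state quadratic for tau_nu."""
    a = nu_ratio * ((1 - d_eff)**2 + eps)
    b = (nu_ratio - 1) * (1 - d_eff**2 + eps)
    c = (1 + d_eff)**2 + eps
    return (b + math.sqrt(b**2 + 4 * a * c)) / (2 * a)

tau = tau_nu(1.0)            # friction drops out of the ratio at nu_A = nu_B
assert abs(tau - 3.1) < 0.05
for r in (0.8, 0.9, 1.1, 1.3):
    assert abs(tau_nu(r) - tau) / tau < 0.015   # "very weak dependence" claim
```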
We expect that our predictions can be verified in complex plasma experiments, e.g., by measuring the kinetic temperatures in 2D binary mixtures or in 3D clouds under microgravity conditions. Furthermore, the analysis of dynamical correlations in the strong-damping regime should help to understand the effect of non-reciprocal effective interactions operating in colloidal suspensions. The authors acknowledge support from the European Research Council under the European Union’s Seventh Framework Programme, Grant Agreement No. 267499. [99]{} J. N. Israelachvili, [*Intermolecular and Surface Forces*]{} (Elsevier, Amsterdam, 1992). M. Dijkstra, R. van Roij, and R. Evans, J. Chem. Phys. [**113**]{}, 4799 (2000). P. G. Bolhuis, A. A. Louis, J. P. Hansen, and E. J. Meijer, J. Chem. Phys. [**114**]{}, 4296 (2001). M. Praprotnik, L. Delle Site, and K. Kremer, Annu. Rev. Phys. Chem. [**59**]{}, 545 (2008). B. M. Mognetti, P. Virnau, L. Yelash, W. Paul, K. Binder, M. Muller, and L. G. MacDowell, J. Chem. Phys. [**130**]{}, 044101 (2009). K. Hayashi and S. Sasa, J. Phys. Cond. Matter [**18**]{}, 2825 (2006). P. R. Buenzli and R. Soto, Phys. Rev. E [**78**]{}, 020102 (2008). K. Dholakia and P. Zemanek, Rev. Mod. Phys. [**82**]{}, 1767 (2010). B. Sabass and U. Seifert, Phys. Rev. Lett. [**105**]{}, 218103 (2010). R. Soto and R. Golestanian, Phys. Rev. Lett. [**112**]{}, 068301 (2014). J. Dzubiella, H. Löwen, and C. N. Likos, Phys. Rev. Lett. [**91**]{}, 248301 (2003). A. S. Khair and J. F. Brady, Proc. R. Soc. A [**463**]{}, 223 (2007). C. Mejia-Monasterio and G. Oshanin, Soft Matter [**7**]{}, 993 (2011). I. Sriram and E. M. Furst, Soft Matter [**8**]{}, 3335 (2012). V. N. Tsytovich, Phys. Usp. [**40**]{}, 53 (1997). S. A. Khrapak, A. V. Ivlev, and G. E. Morfill, Phys. Rev. E [**64**]{}, 046403 (2001). M. Chaudhuri, A. V. Ivlev, S. A. Khrapak, H. M. Thomas, and G. E. Morfill, Soft Matter [**7**]{}, 1229 (2011). A. Melzer, V. A. Schweigert, and A. Piel, Phys. Rev. Lett. 
[**83**]{}, 3194 (1999). G. E. Morfill and A. V. Ivlev, Rev. Mod. Phys. [**81**]{}, 1353 (2009). L. Couëdel, V. Nosenko, A. V. Ivlev, S. K. Zhdanov, H. M. Thomas, and G. E. Morfill, Phys. Rev. Lett. [**104**]{}, 195001 (2010). D. Helbing and P. Molnar, Phys. Rev. E [**51**]{}, 4282 (1995). D. Helbing, I. Farkas, and T. Vicsek, Nature [**407**]{}, 487 (2000). For the $AB$ and $BA$ pairs $\varphi(r)$ is the same, to distinguish the effect of non-reciprocity. For 3D systems the corresponding expressions are easily derived using the cosine rule of spherical trigonometry. L. D. Landau and E. M. Lifshitz, [*Mechanics*]{} (Pergamon, Oxford, 1976). E. M. Lifshitz and L. P. Pitaevskii, [*Physical Kinetics*]{} (Pergamon, Oxford, 1981). L. Spitzer, [*Physics of Fully Ionized Gases*]{} (Dover, New York, 2006). W. C. Swope, H. C. Andersen, P. H. Berens, and K. R. Wilson, J. Chem. Phys. [**76**]{}, 637 (1982). We chose the Hertzian forces for the illustration, in order to ensure precise numerical calculations at low and high temperatures. J. C. Pamies, A. Cacciuto, and D. Frenkel, J. Chem. Phys. [**131**]{}, 044514 (2009). L. Berthier, A. J. Moreno, and G. Szamel, Phys. Rev. E [**82**]{}, 060501 (2010). A small dip in the early development (Fig. \[Fig.2\]) is due to partial conversion of the initial kinetic energy into the interaction energy. N. G. van Kampen, [*Stochastic Processes in Physics and Chemistry*]{} (Elsevier, Amsterdam, 1981). This strong inequality always holds for typical experiments with 2D complex plasmas, where the interparticle interaction can be approximated by the Yukawa potential, $\varphi_{\rm r}(r)=(Q^2/r)e^{-r/\lambda}$, so that $I_{\rm rr}\sim Q^4/\lambda$. C. Du, private communication (2013).
--- abstract: 'This paper establishes a general theory of energy-constrained quantum and private capacities of quantum channels. We begin by defining various energy-constrained communication tasks, including quantum communication with a uniform energy constraint, entanglement transmission with an average energy constraint, private communication with a uniform energy constraint, and secret key transmission with an average energy constraint. We develop several code conversions, which allow us to conclude non-trivial relations between the capacities corresponding to the above tasks. We then show how the regularized, energy-constrained coherent information is an achievable rate for all of the tasks, whenever the energy observable satisfies the Gibbs condition of having a well defined thermal state for all temperatures and the channel satisfies a finite output-entropy condition. For degradable channels satisfying these conditions, we find that the single-letter energy-constrained coherent information is equal to all of the capacities. We finally apply our results to degradable quantum Gaussian channels and recover several results already established in the literature (in some cases, we prove new results in this domain). Contrary to what may appear from some statements made in the literature recently, proofs of these results do not require the solution of any kind of minimum output entropy conjecture or entropy photon-number inequality.' author: - 'Mark M. Wilde' - Haoyu Qi bibliography: - 'Ref.bib' title: 'Energy-constrained private and quantum capacities of quantum channels' --- Introduction ============ The capacity of a quantum channel to transmit quantum or private information is a fundamental characteristic of the channel that guides the design of practical communication protocols (see, e.g., [@W16] for a review). 
The quantum capacity $Q(\mathcal{N})$ of a quantum channel $\mathcal{N}$ is defined as the maximum rate at which qubits can be transmitted faithfully over many independent uses of $\mathcal{N}$, where the fidelity of transmission tends to one in the limit as the number of channel uses tends to infinity [@PhysRevA.55.1613; @capacity2002shor; @ieee2005dev]. Relatedly, the private capacity $P(\mathcal{N})$ of $\mathcal{N}$ is defined to be the maximum rate at which classical bits can be transmitted over many independent uses of $\mathcal{N}$ such that 1) the receiver can decode the classical bits faithfully and 2) the environment of the channel cannot learn anything about the classical bits being transmitted [@ieee2005dev; @1050633]. The quantum capacity is essential for understanding how fast we will be able to perform distributed quantum computations between remote locations, and the private capacity is connected to the ability to generate secret key between remote locations, as in quantum key distribution (see, e.g., [@SBCDLP09] for a review). In general, there are connections between private capacity and quantum capacity [@ieee2005dev] (see also [@PhysRevLett.80.5695]), but the results of [@HHHO05; @HHHO09; @PhysRevLett.100.110502; @HHHLO08] demonstrated that these two notions, and the corresponding capacities, can be very different. In fact, the most striking examples are channels whose quantum capacity is equal to zero but whose private capacity is strictly greater than zero [@PhysRevLett.100.110502; @HHHLO08]. Bosonic Gaussian channels are some of the most important channels to consider, as they model practical communication links in which the mediators of information are photons (see, e.g., [@EW07; @WPGCRSL12] for reviews). Recent years have seen advances in the quantum information theory of bosonic channels.
For example, we now know the capacity for sending classical information over all single-mode phase-insensitive quantum Gaussian channels [@GHG15; @GPCH13] (and even the strong converse capacity [@BPWW14]). The result of this theoretical development is that coherent states [@GK04] of the light field suffice to achieve the classical capacity of phase-insensitive bosonic Gaussian channels. We have also seen advances related to quantum capacity of bosonic channels. Important statements, discussions, and critical steps concerning quantum capacity of single-mode quantum-limited attenuator and amplifier channels were reported in [@HW01; @WPG07]. In particular, these papers stated a formula for the quantum capacity of these channels, whenever infinite energy is available at the transmitter. These formulas have been supported with a proof in [@PhysRevA.86.062306 Theorem 8] and [@PLOB15 Eq. (21)] (see Remark \[rem:WPG07\] of the present paper for further discussion of this point). However, in practice, no transmitter could ever use infinite energy to transmit quantum information, and so the results from [@HW01; @WPG07] have limited applicability to realistic scenarios. Given that the notion of quantum capacity itself is already somewhat removed from practice, as argued in [@TBR15], it seems that supplying a sender and receiver with infinite energy in addition to perfect quantum computers and an infinite number of channel uses only serves to push this notion much farther away from practice. One of the main aims of the present paper is to continue the effort of bringing this notion closer to practice, by developing a general theory of energy-constrained quantum and private communication. Considering quantum and private capacity with a limited number of channel uses, as was done in [@TBR15; @WTB16], in addition to energy constraints, is left for future developments.
In light of the above discussion, we are thus motivated to understand both quantum and private communication over quantum channels with realistic energy constraints. Refs. [@GLMS03; @GSE08] were some of the earlier works to discuss quantum and private communication with energy constraints, in addition to other kinds of communication tasks. The more recent efforts in [@WHG11; @PhysRevA.86.062306; @QW16] have considered energy-constrained communication in more general trade-off scenarios, but as special cases, they also furnished proofs for energy-constrained quantum and private capacities of quantum-limited attenuator and amplifier channels (see [@PhysRevA.86.062306] and [@QW16]). In more detail, let $Q(\mathcal{N},N_{S})$ and $P(\mathcal{N},N_{S})$ denote the respective quantum and private capacities of a quantum channel $\mathcal{N}$, such that the mean input photon number for each channel use cannot exceed $N_{S}\in\lbrack0,\infty)$. Ref. [@PhysRevA.86.062306 Theorem 8] established that the quantum capacity of a pure-loss channel $\mathcal{L}_{\eta}$ with transmissivity parameter $\eta\in\left[ 0,1\right] $ is equal to$$Q(\mathcal{L}_{\eta},N_{S})=\max\left[ g(\eta N_{S})-g((1-\eta)N_{S}),0\right] ,\label{eq:pure-loss-capacities}$$ where $g(x)$ is the entropy of a thermal state with mean photon number $x$, defined as$$g(x)\equiv(x+1)\log_{2}(x+1)-x\log_{2}x.$$ The present paper (see ) establishes the private capacity formula for $\mathcal{L}_{\eta}$:$$P(\mathcal{L}_{\eta},N_{S})=\max\left[ g(\eta N_{S})-g((1-\eta)N_{S}),0\right] .\label{eq:priv-cap-loss-new}$$ A special case of the results of [@QW16] established that the quantum and private capacities of a quantum-limited amplifier channel $\mathcal{A}_{\kappa}$ with gain parameter $\kappa\in\lbrack1,\infty)$ are equal to$$\begin{aligned} Q(\mathcal{A}_{\kappa},N_{S}) & =P(\mathcal{A}_{\kappa},N_{S})\\ &=g(\kappa N_{S}+\kappa-1)-g([\kappa-1][N_{S}+1])\label{eq:amp-capacities}.\end{aligned}$$ Taking the limit as
$N_{S}\rightarrow\infty$, these formulas respectively converge to$$\begin{aligned} & \max\left[ \log_{2}(\eta/\left[ 1-\eta\right] ),0\right] ,\label{eq:unconstrained-q-cap-loss}\\ & \log_{2}\left( \kappa/\left[ \kappa-1\right] \right) ,\end{aligned}$$ which were stated in [@HW01; @WPG07] in the context of quantum capacity, with the latter proved in [@PLOB15 Eq. (21)] for both quantum and private capacities. Figure \[fig:cap-compare\] plots the gap between the unconstrained and constrained quantum capacity formulas in and , respectively. The main purpose of the present paper is to go beyond bosonic channels and establish a general theory of energy-constrained quantum and private communication over quantum channels, in a spirit similar to that developed in [@H03; @H04; @HS12; @H12] for other communication tasks. We first recall some preliminary background on quantum information in infinite-dimensional, separable Hilbert spaces in Section \[sec:prelims\]. We then begin our development in Section \[sec:energy-constrained-caps\] by defining several energy-constrained communication tasks, including quantum communication with a uniform energy constraint, entanglement transmission with an average energy constraint, private communication with a uniform energy constraint, and secret key transmission with an average energy constraint. In Section \[sec:code-conversions\], we develop several code conversions between these various communication tasks, which allow us to conclude non-trivial relations between the capacities corresponding to them. Section \[sec:coh-info-ach\] proves that the regularized, energy-constrained coherent information is an achievable rate for all of the tasks, whenever the energy observable satisfies the Gibbs condition of having a well defined thermal state for all temperatures (Definition \[def:Gibbs-obs\]) and the channel satisfies a finite output-entropy condition (Condition \[cond:finite-out-entropy\]).
For degradable channels satisfying these conditions, we find in Section \[sec:degradable-channels\] that the single-letter energy-constrained coherent information is equal to all of the capacities. We finally apply our results to quantum Gaussian channels in Section \[sec:Gaussian-results\] and recover several results already established in the literature on Gaussian quantum information. In some cases, we establish new results, like the formula for private capacity in . We conclude in Section \[sec:conclusion\] with a summary and some open questions. We would like to suggest that our contribution on this topic is timely. At the least, we think it should be a useful resource for the community of researchers working on related topics to have such a formalism and associated results written down explicitly, even though a skeptic might argue that they have been part of the folklore of quantum information theory for many years now. To support our viewpoint, we note that there have been several papers released in the past few years which suggest that energy-constrained quantum and private capacities have not been sufficiently clarified in the existing literature. For example, in [@WSCW12], one of the main results contributed was a non-tight upper bound on the private capacity of a pure-loss bosonic channel, in spite of the fact that this formula was already part of the folklore of quantum information theory. In [@PMLG15], it is stated that the entropy photon-number inequality turns out to be crucial in determining the classical capacity regions of the quantum bosonic broadcast and wiretap channels, in spite of the fact that no such argument is needed to establish the quantum or private capacity of the pure-loss channel. Similarly, it is stated in [@ADO16] that the entropy photon-number inequality conjecture is of particular significance in quantum information theory since if it were true then it would allow one to evaluate classical capacities of various bosonic channels, e.g.
the bosonic broadcast channel and the wiretap channel. Thus, it seems timely and legitimate to confirm that no such entropy photon-number inequality or minimum output-entropy conjecture is necessary in order to establish the results regarding quantum or private capacity of the pure-loss channel—the existing literature (specifically, [@PhysRevA.86.062306 Theorem 8] and now the previously folklore ) has established these capacities. The same is the case for the quantum-limited amplifier channel due to the results of [@QW16]. The entropy photon-number inequality indeed implies formulas for quantum and private capacities of the quantum-limited attenuator and amplifier channels, but it appears to be much stronger than what is actually necessary to accomplish this goal. The different proof of these formulas that we give in the present paper (see Section \[sec:Gaussian-results\]) is based on the monotonicity of quantum relative entropy, concavity of coherent information of degradable channels with respect to the input density operator, and covariance of Gaussian channels with respect to displacement operators. Quantum information preliminaries\[sec:prelims\] ================================================ Quantum states and channels --------------------------- Background on quantum information in infinite-dimensional systems is available in [@H12] (see also [@H04; @SH08; @HS10; @HZ12; @S15; @S15squashed]). We review some aspects here. We use $\mathcal{H}$ throughout the paper to denote a separable Hilbert space, unless specified otherwise. Let $I_{\mathcal{H}}$ denote the identity operator acting on $\mathcal{H}$. Let $\mathcal{B}(\mathcal{H})$ denote the set of bounded linear operators acting on $\mathcal{H}$, and let $\mathcal{P}(\mathcal{H})$ denote the subset of $\mathcal{B}(\mathcal{H})$ that consists of positive semi-definite operators. 
Let $\mathcal{T}(\mathcal{H})$ denote the set of trace-class operators, those operators $A$ for which the trace norm is finite:$\ \left\Vert A\right\Vert _{1}\equiv\operatorname{Tr}\{\left\vert A\right\vert \}<\infty$, where $\left\vert A\right\vert \equiv\sqrt{A^{\dag}A}$. The Hilbert-Schmidt norm of $A$ is defined as $\left\Vert A\right\Vert _{2}\equiv\sqrt{\operatorname{Tr}\{A^{\dag}A\}}$. Let $\mathcal{D}(\mathcal{H})$ denote the set of density operators (states), which consists of the positive semi-definite, trace-class operators with trace equal to one. A state $\rho\in\mathcal{D}(\mathcal{H})$ is pure if there exists a unit vector $|\psi\rangle\in\mathcal{H}$ such that $\rho=|\psi\rangle\langle\psi|$. Every density operator $\rho\in \mathcal{D}(\mathcal{H})$ has a spectral decomposition in terms of some countable, orthonormal basis $\{|\phi_{k}\rangle\}_{k}$ as$$\rho=\sum_{k}p(k)|\phi_{k}\rangle\langle\phi_{k}|,$$ where $p(k)$ is a probability distribution. The tensor product of two Hilbert spaces $\mathcal{H}_{A}$ and $\mathcal{H}_{B}$ is denoted by $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$ or $\mathcal{H}_{AB}$. Given a multipartite density operator $\rho_{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes \mathcal{H}_{B})$, we unambiguously write $\rho_{A}=\operatorname{Tr}_{\mathcal{H}_{B}}\left\{ \rho_{AB}\right\} $ for the reduced density operator on system $A$. Every density operator $\rho$ has a purification $|\phi^{\rho}\rangle\in\mathcal{H}^{\prime}\otimes\mathcal{H}$, for an auxiliary Hilbert space $\mathcal{H}^{\prime}$, where $\left\Vert |\phi^{\rho }\rangle\right\Vert _{2}=1$ and $\operatorname{Tr}_{\mathcal{H}^{\prime}}\{|\phi^{\rho}\rangle\langle\phi^{\rho}|\}=\rho$. All purifications are related by an isometry acting on the purifying system. A state $\rho_{RA}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{A})$ extends $\rho_{A}\in\mathcal{D}(\mathcal{H}_{A})$ if $\operatorname{Tr}_{\mathcal{H}_{R}}\{\rho_{RA}\}=\rho_{A}$. 
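As an elementary finite-dimensional illustration of the purification statement (a numerical sketch, not part of the infinite-dimensional development above), one can build $|\phi^{\rho}\rangle$ from a spectral decomposition and verify that tracing out the purifying system recovers $\rho$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 density operator
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real

# Purification |phi> = sum_k sqrt(p_k) |k>_H' (x) |phi_k>_H built from the
# spectral decomposition rho = sum_k p_k |phi_k><phi_k|
p, V = np.linalg.eigh(rho)
phi = np.sqrt(np.clip(p, 0, None))[:, None] * V.T   # phi[k, i] = sqrt(p_k) <i|phi_k>

# |phi> is a unit vector, and Tr_H' |phi><phi| = rho
assert abs(np.vdot(phi, phi).real - 1) < 1e-12
rho_back = phi.T @ phi.conj()
assert np.allclose(rho_back, rho, atol=1e-12)
```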
We also say that $\rho_{RA}$ is an extension of $\rho_{A}$. In what follows, we abbreviate notation like $\operatorname{Tr}_{\mathcal{H}_{R}}$ as $\operatorname{Tr}_{R}$. For finite-dimensional Hilbert spaces $\mathcal{H}_{R}$ and $\mathcal{H}_{S}$ such that $\dim(\mathcal{H}_{R})=\dim(\mathcal{H}_{S})\equiv M$, we define the maximally entangled state $\Phi_{RS}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{S})$ of Schmidt rank $M$ as$$\Phi_{RS}\equiv\frac{1}{M}\sum_{m,m^{\prime}}|m\rangle\langle m^{\prime}|_{R}\otimes|m\rangle\langle m^{\prime}|_{S},$$ where $\{|m\rangle\}_{m}$ is an orthonormal basis for $\mathcal{H}_{R}$ and $\mathcal{H}_{S}$. We define the maximally correlated state $\overline{\Phi }_{RS}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{S})$ as$$\overline{\Phi}_{RS}\equiv\frac{1}{M}\sum_{m}|m\rangle\langle m|_{R}\otimes|m\rangle\langle m|_{S},$$ which can be understood as arising by applying a completely dephasing channel $\sum_{m}|m\rangle\langle m|(\cdot)|m\rangle\langle m|$ to either system $R$ or $S$ of the maximally entangled state $\Phi_{RS}$. We define the maximally mixed state of system $S$ as $\pi_{S}\equiv I_{S}/M$. A quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow \mathcal{T}(\mathcal{H}_{B})$ is a completely positive, trace-preserving linear map. The Stinespring dilation theorem [@S55] implies that there exists another Hilbert space $\mathcal{H}_{E}$ and a linear isometry $U:\mathcal{H}_{A}\rightarrow\mathcal{H}_{B}\otimes\mathcal{H}_{E}$ such that for all $\tau\in\mathcal{T}(\mathcal{H}_{A})$$$\mathcal{N}(\tau)=\operatorname{Tr}_{E}\{U\tau U^{\dag}\}.$$ The Stinespring representation theorem also implies that every quantum channel has a Kraus representation with a countable set $\{K_{l}\}_{l}$ of bounded Kraus operators:$$\mathcal{N}(\tau)=\sum_{l}K_{l}\tau K_{l}^{\dag},$$ where $\sum_{l}K_{l}^{\dag}K_{l}=I_{\mathcal{H}_{A}}$. 
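To make the Kraus and Stinespring representations concrete in finite dimensions, here is a sketch using the qubit amplitude-damping channel (an illustrative example, not a channel analyzed in this paper). It checks the completeness relation $\sum_{l}K_{l}^{\dag}K_{l}=I$ and recovers the channel output from the dilation by a partial trace over $\mathcal{H}_{E}$:

```python
import numpy as np

# Qubit amplitude-damping channel with decay probability p (illustrative)
p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]])
K1 = np.array([[0, np.sqrt(p)], [0, 0]])

# Completeness relation: sum_l K_l^dag K_l = I
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

def channel(rho):
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
assert abs(np.trace(channel(rho)) - 1) < 1e-12    # trace preservation

# Stinespring isometry U|psi> = sum_l (K_l |psi>) (x) |l>_E
e = np.eye(2)
U = sum(np.kron(K, e[:, [l]]) for l, K in enumerate((K0, K1)))
assert np.allclose(U.conj().T @ U, np.eye(2))     # U is an isometry
# N(rho) = Tr_E { U rho U^dag }
out = (U @ rho @ U.conj().T).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(out, channel(rho))
```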
The Kraus operators are defined by the relation$$\langle\varphi|K_{l}|\psi\rangle=\langle\varphi|\otimes\langle l|U|\psi \rangle,$$ for $|\varphi\rangle\in\mathcal{H}_{B}$, $|\psi\rangle\in\mathcal{H}_{A}$, and $\{|l\rangle\}_{l}$ some orthonormal basis for $\mathcal{H}_{E}$ [@S13]. A complementary channel $\mathcal{\hat{N}}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{E})$ of $\mathcal{N}$ is defined for all $\tau\in\mathcal{T}(\mathcal{H}_{A})$ as$$\mathcal{\hat{N}}(\tau)=\operatorname{Tr}_{B}\{U\tau U^{\dag}\}.$$ Complementary channels are unique up to partial isometries acting on the Hilbert space $\mathcal{H}_{E}$. A quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow \mathcal{T}(\mathcal{H}_{B})$ is degradable [@cmp2005dev] if there exists a quantum channel $\mathcal{D}:\mathcal{T}(\mathcal{H}_{B})\rightarrow \mathcal{T}(\mathcal{H}_{E})$, called a degrading channel, such that for some complementary channel $\mathcal{\hat{N}}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{E})$ and all $\tau\in\mathcal{T}(\mathcal{H}_{A})$:$$\mathcal{\hat{N}}(\tau)=(\mathcal{D}\circ\mathcal{N})(\tau).$$ Quantum fidelity and trace distance ----------------------------------- The fidelity of two quantum states $\rho,\sigma\in\mathcal{D}(\mathcal{H})$ is defined as [@U76]$$F(\rho,\sigma)\equiv\left\Vert \sqrt{\rho}\sqrt{\sigma}\right\Vert _{1}^{2}.$$ Uhlmann’s theorem is the statement that the fidelity has the following alternate expression as a probability overlap [@U76]:$$F(\rho,\sigma)=\sup_{U}\left\vert \langle\phi^{\rho}|U\otimes I_{\mathcal{H}}|\phi^{\sigma}\rangle\right\vert ^{2}, \label{eq:uhlmann-fidelity}$$ where $|\phi^{\rho}\rangle\in\mathcal{H}^{\prime}\otimes\mathcal{H}$ and $|\phi^{\sigma}\rangle\in\mathcal{H}^{\prime\prime}\otimes\mathcal{H}$ are fixed purifications of $\rho$ and $\sigma$, respectively, and the optimization is with respect to all partial isometries $U:\mathcal{H}^{\prime\prime 
}\rightarrow\mathcal{H}^{\prime}$. The fidelity is non-decreasing with respect to a quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow \mathcal{T}(\mathcal{H}_{B})$, in the sense that for all $\rho,\sigma \in\mathcal{D}(\mathcal{H}_{A})$:$$F(\mathcal{N}(\rho),\mathcal{N}(\sigma))\geq F(\rho,\sigma). \label{eq:fid-mono}$$ A simple modification of Uhlmann’s theorem, found by combining  with the monotonicity property in , implies that for a given extension $\rho_{AB}$ of $\rho_{A}$, there exists an extension $\sigma_{AB}$ of $\sigma_{A}$ such that$$F(\rho_{AB},\sigma_{AB})=F(\rho_{A},\sigma_{A}). \label{eq:Uhlmann-4-exts}$$ The trace distance between states $\rho$ and $\sigma$ is defined as $\left\Vert \rho-\sigma\right\Vert _{1}$. One can normalize the trace distance by multiplying it by $1/2$ so that the resulting quantity lies in the interval $\left[ 0,1\right] $. The trace distance obeys a direct-sum property: for an orthonormal basis $\{|x\rangle\}_{x}$ for an auxiliary Hilbert space $\mathcal{H}_{X}$, probability distributions $p(x)$ and $q(x)$, and sets $\left\{ \rho^{x}\right\} _{x}$ and $\left\{ \sigma^{x}\right\} _{x}$ of states in $\mathcal{D}(\mathcal{H}_{B})$, which realize classical–quantum states$$\begin{aligned} \rho_{XB} & \equiv\sum_{x}p(x)|x\rangle\langle x|_{X}\otimes\rho_{B}^{x},\label{eq:cq-rho}\\ \sigma_{XB} & \equiv\sum_{x}q(x)|x\rangle\langle x|_{X}\otimes\sigma_{B}^{x}, \label{eq:cq-sigma}\end{aligned}$$ the following holds$$\left\Vert \rho_{XB}-\sigma_{XB}\right\Vert _{1}=\sum_{x}\left\Vert p(x)\rho_{B}^{x}-q(x)\sigma_{B}^{x}\right\Vert _{1}.
\label{eq:direct-sum-TD}$$ The trace distance is monotone non-increasing with respect to a quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$, in the sense that for all $\rho,\sigma\in\mathcal{D}(\mathcal{H}_{A})$:$$\left\Vert \mathcal{N}(\rho)-\mathcal{N}(\sigma)\right\Vert _{1}\leq\left\Vert \rho-\sigma\right\Vert _{1}.$$ The following equality holds for any two pure states $\phi,\psi\in \mathcal{D}(\mathcal{H})$:$$\frac{1}{2}\left\Vert \phi-\psi\right\Vert _{1}=\sqrt{1-F(\phi,\psi)}. \label{eq:Td-fid-pure}$$ For any two arbitrary states $\rho,\sigma\in\mathcal{D}(\mathcal{H})$, the following inequalities hold$$1-\sqrt{F(\rho,\sigma)}\leq\frac{1}{2}\left\Vert \rho-\sigma\right\Vert _{1}\leq\sqrt{1-F(\rho,\sigma)}. \label{eq:F-vd-G}$$ The inequality on the left is a consequence of the Powers-Stormer inequality [@powers1970 Lemma 4.1], which states that $\left\Vert P-Q\right\Vert _{1}\geq\left\Vert P^{1/2}-Q^{1/2}\right\Vert _{2}^{2}$ for $P,Q\in \mathcal{P}(\mathcal{H})$. The inequality on the right follows from the monotonicity of trace distance with respect to quantum channels, the identity in , and Uhlmann’s theorem in . These inequalities are called Fuchs-van-de-Graaf inequalities, as they were established in [@FG98] for finite-dimensional states. Quantum entropies and information --------------------------------- The quantum entropy of a state $\rho\in\mathcal{D}(\mathcal{H})$ is defined as$$H(\rho)\equiv\operatorname{Tr}\{\eta(\rho)\},$$ where $\eta(x)=-x\log_{2}x$ if $x>0$ and $\eta(0)=0$. It is a non-negative, concave, lower semicontinuous function on $\mathcal{D}(\mathcal{H})$ [@W76]. It is also not necessarily finite (see, e.g., [@BV13]). When $\rho_{A}$ is assigned to a system $A$, we write $H(A)_{\rho}\equiv H(\rho _{A})$. 
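Returning briefly to the fidelity and trace distance, the pure-state identity and the Fuchs-van-de-Graaf inequalities above admit a quick numerical sanity check in finite dimensions (an illustration with random pure states, not part of the proof apparatus):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pure(d):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def trace_norm(A):
    # valid for Hermitian A: sum of absolute values of the eigenvalues
    return float(np.sum(np.abs(np.linalg.eigvalsh(A))))

for d in (2, 3, 5):
    phi, psi = random_pure(d), random_pure(d)
    P, Q = np.outer(phi, phi.conj()), np.outer(psi, psi.conj())
    F = abs(np.vdot(phi, psi))**2       # fidelity of pure states
    td = 0.5 * trace_norm(P - Q)        # normalized trace distance
    # Pure-state identity: (1/2)||phi - psi||_1 = sqrt(1 - F)
    assert abs(td - np.sqrt(1 - F)) < 1e-12
    # Fuchs-van-de-Graaf: 1 - sqrt(F) <= td <= sqrt(1 - F)
    assert 1 - np.sqrt(F) <= td + 1e-12
    assert td <= np.sqrt(1 - F) + 1e-12
```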
The quantum relative entropy $D(\rho\Vert\sigma)$ of $\rho,\sigma \in\mathcal{D}(\mathcal{H})$ is defined as [@Lindblad1973]$$D(\rho\Vert\sigma)\equiv\sum_{i}\langle i|\rho\log_{2}\rho-\rho\log_{2}\sigma|i\rangle,$$ where $\{|i\rangle\}_{i=1}^{\infty}$ is an orthonormal basis of eigenvectors of the state $\rho$, if $\operatorname{supp}(\rho)\subseteq\operatorname{supp}(\sigma)$ and $D(\rho\Vert\sigma)=\infty$ otherwise. The quantum relative entropy $D(\rho\Vert\sigma)$ is non-negative for $\rho,\sigma\in \mathcal{D}(\mathcal{H})$ and is monotone with respect to a quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ [@Lindblad1975]:$$D(\rho\Vert\sigma)\geq D(\mathcal{N}(\rho)\Vert\mathcal{N}(\sigma)). \label{eq:mono-rel-ent}$$ The quantum mutual information $I(A;B)_{\rho}$ of a bipartite state $\rho _{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$ is defined as [@Lindblad1973]$$I(A;B)_{\rho}=D(\rho_{AB}\Vert\rho_{A}\otimes\rho_{B}),$$ and obeys the bound [@Lindblad1973]$$I(A;B)_{\rho}\leq2\min\{H(A)_{\rho},H(B)_{\rho}\}.$$ The coherent information $I(A\rangle B)_{\rho}$ of $\rho_{AB}$ is defined as [@HS10; @K11]$$I(A\rangle B)_{\rho}\equiv I(A;B)_{\rho}-H(A)_{\rho}, \label{eq:coh-info-def}$$ when $H(A)_{\rho}<\infty$. This expression reduces to$$I(A\rangle B)_{\rho}=H(B)_{\rho}-H(AB)_{\rho}$$ if $H(B)_{\rho}<\infty$ [@HS10; @K11]. The mutual information of a quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ with respect to a state $\rho\in\mathcal{D}(\mathcal{H}_{A})$ is defined as [@HS10]$$I(\rho,\mathcal{N})\equiv I(R;B)_{\omega},$$ where $\omega_{RB}\equiv(\operatorname{id}_{R}\otimes\mathcal{N}_{A\rightarrow B})(\psi_{RA}^{\rho})$ and $\psi_{RA}^{\rho}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{A})$ is a purification of $\rho$, with $\mathcal{H}_{R}\simeq\mathcal{H}_{A}$. 
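The dimension bound $I(A;B)_{\rho}\leq2\min\{H(A)_{\rho},H(B)_{\rho}\}$ can be probed numerically once one restricts to finite dimensions, where $I(A;B)=H(A)+H(B)-H(AB)$. A sketch for a random two-qubit state (illustrative only; the bound itself is the statement cited above):

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def partial_trace(rho_ab, d, keep):
    """Trace out one of two d-dimensional factors; keep=0 keeps A, keep=1 keeps B."""
    r = rho_ab.reshape(d, d, d, d)   # axes: A, B, A', B'
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

d = 2
X = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
rho_ab = X @ X.conj().T
rho_ab /= np.trace(rho_ab).real

HA = entropy(partial_trace(rho_ab, d, 0))
HB = entropy(partial_trace(rho_ab, d, 1))
HAB = entropy(rho_ab)
mutual_info = HA + HB - HAB        # = D(rho_AB || rho_A (x) rho_B)
assert -1e-9 <= mutual_info <= 2 * min(HA, HB) + 1e-9
```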
The coherent information of a quantum channel $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ with respect to a state $\rho\in\mathcal{D}(\mathcal{H}_{A})$ is defined as [@HS10]$$I_{c}(\rho,\mathcal{N})\equiv I(R\rangle B)_{\omega}, \label{eq:coh-info-ch-def}$$ with $\omega_{RB}$ defined as above. These quantities obey a data processing inequality, which is that for a quantum channel $\mathcal{M}:\mathcal{T}(\mathcal{H}_{B})\rightarrow\mathcal{T}(\mathcal{H}_{C})$ and $\rho$ and $\mathcal{N}$ as before, the following holds [@HS10]$$\begin{aligned} I(\rho,\mathcal{N}) & \geq I(\rho,\mathcal{M}\circ\mathcal{N}),\\ I_{c}(\rho,\mathcal{N}) & \geq I_{c}(\rho,\mathcal{M}\circ\mathcal{N}).\end{aligned}$$ We require the following proposition for some of the developments in this paper: \[prop:concave-degrad\]Let $\mathcal{N}$ be a degradable quantum channel and $\mathcal{\hat{N}}$ a complementary channel for it. Let $\rho_{0}$ and $\rho_{1}$ be states and let $\rho_{\lambda}=\lambda\rho_{0}+(1-\lambda )\rho_{1}$ for $\lambda\in\left[ 0,1\right] $. Suppose that the entropies $H(\rho_{\lambda})$, $H(\mathcal{N}(\rho_{\lambda}))$ and $H(\mathcal{\hat{N}}(\rho_{\lambda}))$ are finite for all $\lambda\in\left[ 0,1\right] $. Then the coherent information of $\mathcal{N}$ is concave with respect to these inputs, in the sense that$$\lambda I_{c}(\rho_{0},\mathcal{N})+(1-\lambda)I_{c}(\rho_{1},\mathcal{N})\leq I_{c}(\rho_{\lambda},\mathcal{N}).$$ This was established for the finite-dimensional case in [@YHD05MQAC]. We follow the proof given in [@W16 Theorem 13.5.2]. Set $\overline{\lambda }\equiv1-\lambda$. 
Consider that$$\begin{gathered} I_{c}(\rho_{\lambda},\mathcal{N})-\lambda I_{c}(\rho_{0},\mathcal{N})-\overline{\lambda}I_{c}(\rho_{1},\mathcal{N})\\ =H(\mathcal{N}(\rho_{\lambda}))-H(\mathcal{\hat{N}}(\rho_{\lambda}))-\lambda H(\mathcal{N}(\rho_{0}))\\ +\lambda H(\mathcal{\hat{N}}(\rho_{0}))-\overline{\lambda}H(\mathcal{N}(\rho_{1}))+\overline{\lambda}H(\mathcal{\hat{N}}(\rho_{1})).\end{gathered}$$ Defining the states$$\begin{aligned} \rho_{UB} & =\lambda|0\rangle\langle0|_{U}\otimes\mathcal{N}(\rho _{0})+\overline{\lambda}|1\rangle\langle1|_{U}\otimes\mathcal{N}(\rho_{1}),\\ \sigma_{UE} & =\lambda|0\rangle\langle0|_{U}\otimes\mathcal{\hat{N}}(\rho_{0})+\overline{\lambda}|1\rangle\langle1|_{U}\otimes\mathcal{\hat{N}}(\rho_{1}),\end{aligned}$$ we can then rewrite the last line above as$$I(U;B)_{\rho}-I(U;E)_{\sigma}.$$ This quantity is non-negative from data processing of mutual information because we can apply the degrading channel $\mathcal{D}_{B\rightarrow E}$ to system $B$ of $\rho_{UB}$ and recover $\sigma_{UE}$:$$\sigma_{UE}=\mathcal{D}_{B\rightarrow E}(\rho_{UB}).$$ This concludes the proof. The conditional quantum mutual information (CQMI) of a finite-dimensional tripartite state $\rho_{ABC}$ is defined as$$I(A;B|C)_{\rho}\equiv H(AC)_{\rho}+H(BC)_{\rho}-H(ABC)_{\rho}-H(C)_{\rho}.$$ In the general case, it is defined as [@S15; @S15squashed]$$\begin{gathered} I(A;B|C)_{\rho}\equiv\\ \sup_{P_{A}}\left\{ I(A;BC)_{Q\rho Q}-I(A;C)_{Q\rho Q}:Q=P_{A}\otimes I_{BC}\right\} ,\end{gathered}$$ where the supremum is with respect to all finite-rank projections $P_{A}\in\mathcal{B}(\mathcal{H}_{A})$ and we take the convention that $I(A;BC)_{Q\rho Q}=\lambda I(A;BC)_{Q\rho Q/\lambda}$ where $\lambda =\operatorname{Tr}\{Q\rho_{ABC}Q\}$. The above definition guarantees that many properties of CQMI in finite dimensions carry over to the general case [@S15; @S15squashed]. 
In particular, the following chain rule holds for a four-party state $\rho_{ABCD}\in\mathcal{D}(\mathcal{H}_{ABCD})$:$$I(A;BC|D)_{\rho}=I(A;C|D)_{\rho}+I(A;B|CD)_{\rho}.$$ Fano’s inequality is the statement that for random variables $X$ and $Y$ with alphabets $\mathcal{X}$ and $\mathcal{Y}$, respectively, the following inequality holds$$H(X|Y)\leq\varepsilon\log_{2}(\left\vert \mathcal{X}\right\vert -1)+h_{2}(\varepsilon), \label{eq:fano}$$ where$$\begin{aligned} \varepsilon & \equiv\Pr\{X\neq Y\},\\ h_{2}(\varepsilon) & \equiv-\varepsilon\log_{2}\varepsilon-(1-\varepsilon )\log_{2}(1-\varepsilon).\end{aligned}$$ Observe that $\lim_{\varepsilon\rightarrow0}h_{2}(\varepsilon)=0$. Let $\rho_{AB},\sigma_{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})$ with $\dim(\mathcal{H}_{A})<\infty$, $\varepsilon\in\left[ 0,1\right] $, and suppose that $\left\Vert \rho_{AB}-\sigma_{AB}\right\Vert _{1}/2\leq \varepsilon$. The Alicki–Fannes–Winter (AFW) inequality is as follows [@AF04; @Winter15]:$$\left\vert H(A|B)_{\rho}-H(A|B)_{\sigma}\right\vert \leq2\varepsilon\log _{2}\dim(\mathcal{H}_{A})+g(\varepsilon),$$ where$$g(\varepsilon)\equiv\left( \varepsilon+1\right) \log_{2}\left( \varepsilon+1\right) -\varepsilon\log_{2}\varepsilon. \label{eq:g-function}$$ Observe that $\lim_{\varepsilon\rightarrow0}g(\varepsilon)=0$. If the states are classical on the first system, as in –, and $\dim(\mathcal{H}_{X})<\infty$ and $\left\Vert \rho_{XB}-\sigma_{XB}\right\Vert _{1}/2\leq\varepsilon$, then the inequality can be strengthened to [@W16 Theorem 11.10.3]$$\left\vert H(X|B)_{\rho}-H(X|B)_{\sigma}\right\vert \leq\varepsilon\log _{2}\dim(\mathcal{H}_{X})+g(\varepsilon). \label{eq:AFW-cq}$$ Energy-constrained quantum and private capacities {#sec:energy-constrained-caps} ================================================= In this section, we define various notions of energy-constrained quantum and private capacity of quantum channels. 
We start by defining an energy observable (see [@H12 Definition 11.3]): \[Energy observable\]\[def:energy-obs\]Let $G$ be a positive semi-definite operator, i.e., $G\in\mathcal{P}(\mathcal{H}_{A})$. Throughout, we refer to $G$ as an energy observable. In more detail, we define $G$ as follows: let $\{|e_{j}\rangle\}_{j}$ be an orthonormal basis for a Hilbert space $\mathcal{H}$, and let $\{g_{j}\}_{j}$ be a sequence of non-negative real numbers bounded from below. Then the following formula$$G|\psi\rangle=\sum_{j=1}^{\infty}g_{j}|e_{j}\rangle\langle e_{j}|\psi\rangle$$ defines a self-adjoint operator $G$ on the dense domain $\{|\psi\rangle :\sum_{j=1}^{\infty}g_{j}^{2}\left\vert \left\langle e_{j}|\psi\right\rangle \right\vert ^{2}<\infty\}$, for which $|e_{j}\rangle$ is an eigenvector with corresponding eigenvalue $g_{j}$. For a state $\rho\in\mathcal{D}(\mathcal{H}_{A})$, we follow the convention [@HS12] that$$\operatorname{Tr}\{G\rho\}\equiv\sup_{n}\operatorname{Tr}\{\Pi_{n}G\Pi_{n}\rho\},$$ where $\Pi_{n}$ denotes the spectral projection of $G$ corresponding to the interval $[0,n]$. The $n$th extension $\overline{G}_{n}$ of an energy observable $G$ is defined as$$\overline{G}_{n}\equiv\frac{1}{n}\left( G\otimes I\otimes\cdots\otimes I+\cdots+I\otimes\cdots\otimes I\otimes G\right) ,$$ where each of the $n$ summands applies $G$ to a distinct tensor factor. In the subsections that follow, let $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ denote a quantum channel, and let $G$ be an energy observable. Let $n\in\mathbb{N}$ denote the number of channel uses, $M\in\mathbb{N}$ the size of a code, $P\in\lbrack0,\infty)$ an energy parameter, and $\varepsilon\in\left[ 0,1\right] $ an error parameter. In what follows, we discuss four different notions of capacity: quantum communication with a uniform energy constraint, entanglement transmission with an average energy constraint, private communication with a uniform energy constraint, and secret key transmission with an average energy constraint.
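The convention $\operatorname{Tr}\{G\rho\}\equiv\sup_{n}\operatorname{Tr}\{\Pi_{n}G\Pi_{n}\rho\}$ can be illustrated numerically. In the sketch below (assuming NumPy; the number-operator-like eigenvalues $g_{j}=j$, the truncation dimension, and the geometric "thermal-like" state are illustrative assumptions, not taken from this paper), the truncated expectations increase monotonically toward the mean energy:

```python
import numpy as np

# Illustrative energy observable: eigenvalues g_j = j, truncated to d levels.
d = 400
g = np.arange(d, dtype=float)

# Geometric (thermal-like) diagonal state with mean energy Nbar.
Nbar = 5.0
p = (1.0 / (1.0 + Nbar)) * (Nbar / (1.0 + Nbar)) ** g
rho_diag = p / p.sum()

def truncated_energy(n):
    """Tr{Pi_n G Pi_n rho}, with Pi_n the spectral projection of G onto [0, n]."""
    mask = g <= n
    return float(np.sum(g[mask] * rho_diag[mask]))

# The sequence is nondecreasing in n and its supremum is Tr{G rho} (~ Nbar here).
vals = [truncated_energy(n) for n in (5, 20, 100, d - 1)]
print(all(a <= b for a, b in zip(vals, vals[1:])))  # True
```

For a diagonal state the supremum is simply the limit of the partial sums, which is how the convention extends the expectation to an unbounded $G$.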
Quantum communication with a uniform energy constraint ------------------------------------------------------ An $(n,M,G,P,\varepsilon)$ code for quantum communication with uniform energy constraint consists of an encoding channel $\mathcal{E}^{n}:\mathcal{T}(\mathcal{H}_{S})\rightarrow\mathcal{T}(\mathcal{H}_{A}^{\otimes n})$ and a decoding channel $\mathcal{D}^{n}:\mathcal{T}(\mathcal{H}_{B}^{\otimes n})\rightarrow\mathcal{T}(\mathcal{H}_{S})$, where $M=\dim(\mathcal{H}_{S})$. The energy constraint is uniform, in the sense that the following bound is required to hold for all states resulting from the output of the encoding channel $\mathcal{E}^{n}$: $$\operatorname{Tr}\left\{ \overline{G}_{n}\mathcal{E}^{n}(\rho_{S})\right\} \leq P, \label{eq:q-code-unif-energy-const}$$ where $\rho_{S}\in\mathcal{D}(\mathcal{H}_{S})$. Note that$$\operatorname{Tr}\left\{ \overline{G}_{n}\mathcal{E}^{n}(\rho_{S})\right\} =\operatorname{Tr}\left\{ G\overline{\rho}_{n}\right\} ,$$ where$$\overline{\rho}_{n}\equiv\frac{1}{n}\sum_{i=1}^{n}\operatorname{Tr}_{A^{n}\backslash A_{i}}\{\mathcal{E}^{n}(\rho_{S})\},$$ due to the i.i.d. nature of the observable $\overline{G}_{n}$. Furthermore, the encoding and decoding channels are good for quantum communication, in the sense that for all pure states $\phi_{RS}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{S})$, where $\mathcal{H}_{R}$ is isomorphic to $\mathcal{H}_{S}$, the following entanglement fidelity criterion holds$$F(\phi_{RS},(\operatorname{id}_{R}\otimes\lbrack\mathcal{D}^{n}\circ \mathcal{N}^{\otimes n}\circ\mathcal{E}^{n}])(\phi_{RS}))\geq1-\varepsilon. \label{eq:q-code-fidelity}$$ A rate $R$ is achievable for quantum communication over $\mathcal{N}$ subject to the uniform energy constraint $P$ if for all $\varepsilon\in(0,1)$, $\delta>0$, and sufficiently large $n$, there exists an $(n,2^{n[R-\delta ]},G,P,\varepsilon)$ quantum communication code with uniform energy constraint.
The quantum capacity $Q(\mathcal{N},G,P)$ of $\mathcal{N}$ with uniform energy constraint is equal to the supremum of all achievable rates. Entanglement transmission with an average energy constraint ----------------------------------------------------------- An $(n,M,G,P,\varepsilon)$ code for entanglement transmission with average energy constraint is defined very similarly to the above, except that the requirements are less stringent. The energy constraint holds on average, in the sense that it need only hold for the maximally mixed state $\pi_{S}$ input to the encoding channel $\mathcal{E}^{n}$:$$\operatorname{Tr}\left\{ \overline{G}_{n}\mathcal{E}^{n}(\pi_{S})\right\} \leq P. \label{eq:EG-avg-energy-constraint}$$ Furthermore, we only demand that the particular maximally entangled state $\Phi_{RS}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{S})$, defined as$$\Phi_{RS}\equiv\frac{1}{M}\sum_{m,m^{\prime}=1}^{M}|m\rangle\langle m^{\prime }|_{R}\otimes|m\rangle\langle m^{\prime}|_{S}, \label{eq:MES-EG-AVG}$$ is preserved with good fidelity:$$F(\Phi_{RS},(\operatorname{id}_{R}\otimes\lbrack\mathcal{D}^{n}\circ \mathcal{N}^{\otimes n}\circ\mathcal{E}^{n}])(\Phi_{RS}))\geq1-\varepsilon. \label{eq:EG-good-reliable-code}$$ A rate $R$ is achievable for entanglement transmission over $\mathcal{N}$ subject to the average energy constraint $P$ if for all $\varepsilon\in (0,1)$, $\delta>0$, and sufficiently large $n$, there exists an $(n,2^{n[R-\delta]},G,P,\varepsilon)$ entanglement transmission code with average energy constraint. The entanglement transmission capacity $E(\mathcal{N},G,P)$ of $\mathcal{N}$ with average energy constraint is equal to the supremum of all achievable rates. From the definitions, it immediately follows that quantum capacity with uniform energy constraint can never exceed entanglement transmission capacity with average energy constraint:$$Q(\mathcal{N},G,P)\leq E(\mathcal{N},G,P).
\label{eq:Q-less-than-E}$$ In Section \[sec:cap-imps\], we establish the opposite inequality. Private communication with a uniform energy constraint ------------------------------------------------------ An $(n,M,G,P,\varepsilon)$ code for private communication consists of a set $\{\rho_{A^{n}}^{m}\}_{m=1}^{M}$ of quantum states, each in $\mathcal{D}(\mathcal{H}_{A}^{\otimes n})$, and a POVM $\{\Lambda_{B^{n}}^{m}\}_{m=1}^{M}$ such that$$\begin{aligned} \operatorname{Tr}\left\{ \overline{G}_{n}\rho_{A^{n}}^{m}\right\} & \leq P,\label{eq:energy-constraint}\\ \operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\} & \geq1-\varepsilon,\label{eq:private-good-comm}\\ \frac{1}{2}\left\Vert \mathcal{\hat{N}}^{\otimes n}(\rho_{A^{n}}^{m})-\omega_{E^{n}}\right\Vert _{1} & \leq\varepsilon, \label{eq:security-cond}\end{aligned}$$ for all $m\in\left\{ 1,\ldots,M\right\} $, with $\omega_{E^{n}}$ some fixed state in $\mathcal{D}(\mathcal{H}_{E}^{\otimes n})$. In the above, $\mathcal{\hat{N}}$ is a channel complementary to $\mathcal{N}$. Observe that$$\operatorname{Tr}\left\{ \overline{G}_{n}\rho_{A^{n}}^{m}\right\} =\operatorname{Tr}\left\{ G\overline{\rho}_{A}^{m}\right\} ,$$ where$$\overline{\rho}_{A}^{m}\equiv\frac{1}{n}\sum_{i=1}^{n}\operatorname{Tr}_{A^{n}\backslash A_{i}}\{\rho_{A^{n}}^{m}\}. \label{eq:avg-state-energy}$$ A rate $R$ is achievable for private communication over $\mathcal{N}$ subject to uniform energy constraint $P$ if for all $\varepsilon\in(0,1)$, $\delta >0$, and sufficiently large $n$, there exists an $(n,2^{n[R-\delta ]},G,P,\varepsilon)$ private communication code. The private capacity $P(\mathcal{N},G,P)$ of $\mathcal{N}$ with uniform energy constraint is equal to the supremum of all achievable rates.
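The identity $\operatorname{Tr}\{\overline{G}_{n}\rho_{A^{n}}^{m}\}=\operatorname{Tr}\{G\overline{\rho}_{A}^{m}\}$ invoked above (and in the earlier subsection on quantum communication) is a short linear-algebra fact. The following sketch verifies it for $n=2$ (assuming NumPy; the dimension, the toy observable, and the random two-copy state are illustrative, not taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
G = np.diag(np.arange(d, dtype=float))   # toy energy observable
I = np.eye(d)

# random two-copy state sigma_{A_1 A_2} (illustrative only)
X = rng.normal(size=(d * d, d * d)) + 1j * rng.normal(size=(d * d, d * d))
sigma = X @ X.conj().T
sigma /= np.trace(sigma).real

Gbar2 = 0.5 * (np.kron(G, I) + np.kron(I, G))   # the n = 2 extension of G

def marginal(state, keep):
    """Reduced state on subsystem A_1 (keep=0) or A_2 (keep=1)."""
    s = state.reshape(d, d, d, d)
    return np.trace(s, axis1=1, axis2=3) if keep == 0 else np.trace(s, axis1=0, axis2=2)

rho_bar = 0.5 * (marginal(sigma, 0) + marginal(sigma, 1))  # averaged marginal state
lhs = np.trace(Gbar2 @ sigma).real
rhs = np.trace(G @ rho_bar).real
print(abs(lhs - rhs) < 1e-12)  # True
```

Each term $\operatorname{Tr}\{(G\otimes I)\sigma\}=\operatorname{Tr}\{G\sigma_{A_{1}}\}$ contributes one marginal, which is why the extension's expectation only sees the averaged reduced state.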
Secret key transmission with an average energy constraint {#sec:SKT-AVG-code} --------------------------------------------------------- An $(n,M,G,P,\varepsilon)$ code for secret key transmission with average energy constraint is defined very similarly to the above, except that the requirements are less stringent. The energy constraint holds on average, in the sense that it need only hold for the average input state:$$\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\left\{ \overline{G}_{n}\rho _{A^{n}}^{m}\right\} \leq P. \label{eq:SKT-energy-constraint}$$ Furthermore, we only demand that the conditions in – hold on average:$$\begin{aligned} \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\} & \geq1-\varepsilon ,\label{eq:private-good-comm-SKT}\\ \frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{\hat{N}}^{\otimes n}(\rho_{A^{n}}^{m})-\omega_{E^{n}}\right\Vert _{1} & \leq\varepsilon, \label{eq:security-cond-SKT}\end{aligned}$$ with $\omega_{E^{n}}$ some fixed state in $\mathcal{D}(\mathcal{H}_{E}^{\otimes n})$. A rate $R$ is achievable for secret key transmission over $\mathcal{N}$ subject to the average energy constraint $P$ if for all $\varepsilon\in (0,1)$, $\delta>0$, and sufficiently large $n$, there exists an $(n,2^{n[R-\delta]},G,P,\varepsilon)$ secret key transmission code with average energy constraint. The secret key transmission capacity $K(\mathcal{N},G,P)$ of $\mathcal{N}$ with average energy constraint is equal to the supremum of all achievable rates. From the definitions, it immediately follows that private capacity with uniform energy constraint can never exceed secret key transmission capacity with average energy constraint:$$P(\mathcal{N},G,P)\leq K(\mathcal{N},G,P). \label{eq:P-less-than-K}$$ In Section \[sec:cap-imps\], we establish the opposite inequality.
Code conversions\[sec:code-conversions\] ======================================== In this section, we establish several code conversions, which allow for converting one type of code into another type of code along with some loss in the code parameters. In particular, in the forthcoming subsections, we show how to convert 1. an entanglement transmission code with an average energy constraint to a quantum communication code with a uniform energy constraint, 2. a quantum communication code with a uniform energy constraint to a private communication code with a uniform energy constraint, 3. and a secret key transmission code with an average energy constraint to a private communication code with a uniform energy constraint. These code conversions then allow us to establish several non-trivial relations between the corresponding capacities, which we do in Section \[sec:cap-imps\]. Entanglement transmission with an average energy constraint to quantum communication with a uniform energy constraint --------------------------------------------------------------------------------------------------------------------- In this subsection, we show how an entanglement transmission code with an average energy constraint implies the existence of a quantum communication code with a uniform energy constraint, such that there is a loss in performance in the resulting code with respect to several code parameters. A result like this was first established in [@BKN98] and reviewed in [@KW04; @K07; @Wat16], under the assumption that there is no energy constraint. Here we follow the proof approach available in [@K07; @Wat16], but we make several modifications in order to deal with going from an average energy constraint to a uniform energy constraint. 
\[thm:EG-2-QC\]For all $\delta\in(1/M,1/2)$, the existence of an $(n,M,G,P,\varepsilon)$ entanglement transmission code with average energy constraint implies the existence of an $(n,\left\lfloor \delta M\right\rfloor ,G,P/\left( 1-2\delta\right) ,2\sqrt{\varepsilon/[\delta-1/M]})$ quantum communication code with uniform energy constraint. Suppose that an $(n,M,G,P,\varepsilon)$ entanglement transmission code with average energy constraint exists. This implies that the conditions in  and  hold. Let $\mathcal{C}^{n}:\mathcal{T}(\mathcal{H}_{S})\rightarrow\mathcal{T}(\mathcal{H}_{S})$ denote the finite-dimensional channel consisting of the encoding, communication channel, and decoding:$$\mathcal{C}^{n}\equiv\mathcal{D}^{n}\circ\mathcal{N}^{\otimes n}\circ\mathcal{E}^{n}.$$ We proceed with the following algorithm: 1. Set $k=M$, $\mathcal{H}_{M}=\mathcal{H}_{S}$, and $\delta\in\left( 1/M,1/2\right) $. Suppose for now that $\delta M$ is a positive integer. 2. Set $|\phi_{k}\rangle\in\mathcal{H}_{k}$ to be a state vector such that the input-output fidelity is minimized:$$|\phi_{k}\rangle\equiv\arg\min_{|\phi\rangle\in\mathcal{H}_{k}}\langle \phi|\mathcal{C}^{n}(|\phi\rangle\langle\phi|)|\phi\rangle,$$ and set the fidelity $F_{k}$ and energy $E_{k}$ of $|\phi_{k}\rangle$ as follows:$$\begin{aligned} F_{k} & \equiv\min_{|\phi\rangle\in\mathcal{H}_{k}}\langle\phi |\mathcal{C}^{n}(|\phi\rangle\langle\phi|)|\phi\rangle\\ & =\langle\phi_{k}|\mathcal{C}^{n}(|\phi_{k}\rangle\langle\phi_{k}|)|\phi _{k}\rangle,\\ E_{k} & \equiv\operatorname{Tr}\{\overline{G}_{n}\mathcal{E}^{n}(|\phi _{k}\rangle\langle\phi_{k}|)\}.\end{aligned}$$ 3. Set$$\mathcal{H}_{k-1}\equiv\operatorname{span}\{|\psi\rangle\in\mathcal{H}_{k}:\left\vert \left\langle \psi|\phi_{k}\right\rangle \right\vert =0\}.$$ That is, $\mathcal{H}_{k-1}$ is set to the orthogonal complement of $|\phi _{k}\rangle$ in $\mathcal{H}_{k}$, so that $\mathcal{H}_{k}=\mathcal{H}_{k-1}\oplus\operatorname{span}\{|\phi_{k}\rangle\}$. 
Set $k:=k-1$. 4. Repeat steps 2-3 until $k=\left( 1-\delta\right) M$ after step 3. 5. Let $|\phi_{k}\rangle\in\mathcal{H}_{k}$ be a state vector such that the input energy is maximized:$$|\phi_{k}\rangle\equiv\arg\max_{|\phi\rangle\in\mathcal{H}_{k}}\operatorname{Tr}\{\overline{G}_{n}\mathcal{E}^{n}(|\phi\rangle\langle \phi|)\},$$ and set the fidelity $F_{k}$ and energy $E_{k}$ of $|\phi_{k}\rangle$ as follows:$$\begin{aligned} F_{k} & \equiv\langle\phi_{k}|\mathcal{C}^{n}(|\phi_{k}\rangle\langle \phi_{k}|)|\phi_{k}\rangle\\ E_{k} & \equiv\max_{|\phi\rangle\in\mathcal{H}_{k}}\operatorname{Tr}\{\overline{G}_{n}\mathcal{E}^{n}(|\phi\rangle\langle\phi|)\}\\ & =\operatorname{Tr}\{\overline{G}_{n}\mathcal{E}^{n}(|\phi_{k}\rangle \langle\phi_{k}|)\}.\end{aligned}$$ 6. Set$$\mathcal{H}_{k-1}\equiv\operatorname{span}\{|\psi\rangle\in\mathcal{H}_{k}:\left\vert \left\langle \psi|\phi_{k}\right\rangle \right\vert =0\}.$$ Set $k:=k-1$. 7. Repeat steps 5-6 until $k=0$ after step 6. The idea behind this algorithm is to successively remove minimum fidelity states from $\mathcal{H}_{S}$ until $k=\left( 1-\delta\right) M$. By the structure of the algorithm and some analysis given below, we are then guaranteed for this $k$ and lower that$$1-\min_{|\phi\rangle\in\mathcal{H}_{k}}\langle\phi|\mathcal{C}^{n}(|\phi\rangle\langle\phi|)|\phi\rangle\leq\varepsilon/\delta.$$ That is, the subspace $\mathcal{H}_{k}$ is good for quantum communication with fidelity at least $1-\varepsilon/\delta$. After this $k$, we then successively remove maximum energy states from $\mathcal{H}_{k}$ until the algorithm terminates. 
Furthermore, the algorithm implies that$$\begin{aligned} F_{M} & \leq F_{M-1}\leq\cdots\leq F_{\left( 1-\delta\right) M+1},\label{eq:fidelity-ordering}\\ E_{\left( 1-\delta\right) M} & \geq E_{\left( 1-\delta\right) M-1}\geq\cdots\geq E_{1},\label{eq:energy-ordering}\\ \mathcal{H}_{M} & \supseteq\mathcal{H}_{M-1}\supseteq\cdots\supseteq \mathcal{H}_{1}.\end{aligned}$$ Also, $\{|\phi_{k}\rangle\}_{k=1}^{l}$ is an orthonormal basis for $\mathcal{H}_{l}$, where $l\in\{1,\ldots,M\}$. We now analyze the result of this algorithm by employing Markov’s inequality and some other tools. From the condition in that the original code is good for entanglement transmission, we have that$$F(\Phi_{RS},(\operatorname{id}_{R}\otimes\mathcal{C}^{n})(\Phi_{RS}))\geq1-\varepsilon.$$ Since $\{|\phi_{k}\rangle\}_{k=1}^{M}$ is an orthonormal basis for $\mathcal{H}_{M}$, we can write$$|\Phi\rangle_{RS}=\frac{1}{\sqrt{M}}\sum_{k=1}^{M}|\phi_{k}^{\ast}\rangle _{R}\otimes|\phi_{k}\rangle_{S},$$ where $\ast$ denotes complex conjugate with respect to the basis in , and the reduced state can be written as $\Phi_{S}=\frac{1}{M}\sum_{k=1}^{M}|\phi_{k}\rangle\langle\phi_{k}|_{S}$. 
A consequence of [@W16 Exercise 9.5.1] is that$$\begin{aligned} F(\Phi_{RS},(\operatorname{id}_{R}\otimes\mathcal{C}^{n})(\Phi_{RS})) & \leq\frac{1}{M}\sum_{k}\langle\phi_{k}|\mathcal{C}^{n}(|\phi_{k}\rangle \langle\phi_{k}|)|\phi_{k}\rangle\nonumber\\ & =\frac{1}{M}\sum_{k}F_{k}.\end{aligned}$$ So this means that$$\frac{1}{M}\sum_{k}F_{k}\geq1-\varepsilon\quad\Leftrightarrow\quad\frac{1}{M}\sum_{k}\left( 1-F_{k}\right) \leq\varepsilon.$$ Now taking $K$ as a uniform random variable with realizations $k\in\left\{ 1,\ldots,M\right\} $ and applying Markov’s inequality, we find that$$\Pr_{K}\{1-F_{K}\geq\varepsilon/\delta\}\leq\frac{\mathbb{E}_{K}\{1-F_{K}\}}{\varepsilon/\delta}\leq\frac{\varepsilon}{\varepsilon/\delta}=\delta.$$ So this implies that $\left( 1-\delta\right) M$ of the $F_{k}$ values are such that $F_{k}\geq1-\varepsilon/\delta$. Since they are ordered as given in , we can conclude that $\mathcal{H}_{\left( 1-\delta\right) M}$ is a subspace good for quantum communication in the following sense:$$\min_{|\phi\rangle\in\mathcal{H}_{\left( 1-\delta\right) M}}\langle \phi|\mathcal{C}^{n}(|\phi\rangle\langle\phi|)|\phi\rangle\geq1-\varepsilon /\delta.$$ Now consider from the average energy constraint in that$$\begin{aligned} P & \geq\operatorname{Tr}\left\{ \overline{G}_{n}\mathcal{E}^{n}(\pi _{S})\right\} \\ & =\frac{1}{M}\sum_{k=1}^{M}\operatorname{Tr}\left\{ \overline{G}_{n}\mathcal{E}^{n}(|\phi_{k}\rangle\langle\phi_{k}|_{S})\right\} \\ & =\frac{1}{M}\sum_{k=1}^{M}E_{k}\\ & \geq\frac{1-\delta}{\left( 1-\delta\right) M}\sum_{k=1}^{\left( 1-\delta\right) M}E_{k},\end{aligned}$$ which we can rewrite as$$\frac{1}{\left( 1-\delta\right) M}\sum_{k=1}^{\left( 1-\delta\right) M}E_{k}\leq P/\left( 1-\delta\right) .$$ Taking $K^{\prime}$ as a uniform random variable with realizations $k\in\left\{ 1,\ldots,\left( 1-\delta\right) M\right\} $ and applying Markov’s inequality, we find that$$\begin{aligned} \Pr_{K^{\prime}}\left\{ E_{K^{\prime}}\geq P/\left( 
1-2\delta\right) \right\} & \leq\frac{P/\left( 1-\delta\right) }{P/\left( 1-2\delta \right) }\\ & =\frac{1-2\delta}{1-\delta}.\end{aligned}$$ Rewriting this, we find that$$\begin{aligned} \Pr_{K^{\prime}}\left\{ E_{K^{\prime}}\leq P/\left( 1-2\delta\right) \right\} & \geq1-\frac{1-2\delta}{1-\delta}\\ & =\frac{\delta}{1-\delta}.\end{aligned}$$ Thus, a fraction $\delta/\left( 1-\delta\right) $ of the remaining $\left( 1-\delta\right) M$ state vectors $|\phi_{k}\rangle$ are such that $E_{k}\leq P/\left( 1-2\delta\right) $. Since they are ordered as in , this means that $\left\{ |\phi_{\delta M}\rangle,\ldots,|\phi_{1}\rangle\right\} $ have this property. We can then conclude that the subspace $\mathcal{H}_{\delta M}$ is such that$$\begin{aligned} \dim(\mathcal{H}_{\delta M}) & =\delta M,\label{eq:resulting-code-size-1}\\ \min_{|\phi\rangle\in\mathcal{H}_{\delta M}}\langle\phi|\mathcal{C}^{n}(|\phi\rangle\langle\phi|)|\phi\rangle & \geq1-\varepsilon/\delta ,\label{eq:min-fid-condition-almost-done}\\ \max_{|\phi\rangle\in\mathcal{H}_{\delta M}}\operatorname{Tr}\{\overline {G}_{n}\mathcal{E}^{n}(|\phi\rangle\langle\phi|)\} & \leq P/\left( 1-2\delta\right) . \label{eq:resulting-code-power-constr}\end{aligned}$$ Now applying Proposition \[prop:min-fid-to-min-ent-fid\] (in the appendix) to , we can conclude that the minimum entanglement fidelity obeys the following bound:$$\min_{|\psi\rangle\in\mathcal{H}_{\delta M}^{\prime}\otimes\mathcal{H}_{\delta M}}\langle\psi|(\operatorname{id}_{\mathcal{H}_{\delta M}^{\prime}}\otimes\mathcal{C}^{n})(|\psi\rangle\langle\psi|)|\psi\rangle\geq 1-2\sqrt{\varepsilon/\delta}. \label{eq:resulting-code-fid-1}$$ To finish off the proof, suppose that $\delta M$ is not an integer. Then there exists a $\delta^{\prime}<\delta$ such that $\delta^{\prime}M=\left\lfloor \delta M\right\rfloor $ is a positive integer. By the above reasoning, there exists a code with parameters as given in –, except with $\delta$ replaced by $\delta^{\prime}$.
Then the code dimension is equal to $\left\lfloor \delta M\right\rfloor $. Using that $\delta^{\prime }M=\left\lfloor \delta M\right\rfloor >\delta M-1$, we find that $\delta^{\prime}>\delta-1/M$, which implies that $1-2\sqrt{\varepsilon /\delta^{\prime}}>1-2\sqrt{\varepsilon/[\delta-1/M]}$. We also have that $P/\left( 1-2\delta^{\prime}\right) <P/\left( 1-2\delta\right) $. This concludes the proof. Quantum communication with a uniform energy constraint implies private communication with a uniform energy constraint --------------------------------------------------------------------------------------------------------------------- This subsection establishes that a quantum communication code with uniform energy constraint can always be converted to one for private communication with uniform energy constraint, such that there is negligible loss with respect to code parameters. \[thm:QC-to-PC\]The existence of an $(n,M,G,P,\varepsilon)$ quantum communication code with uniform energy constraint implies the existence of an $(n,\left\lfloor M/2\right\rfloor ,G,P,2\sqrt{\varepsilon})$ code for private communication with uniform energy constraint. Starting from an $(n,M,G,P,\varepsilon)$ quantum communication code with uniform energy constraint, we can use it to transmit a maximally entangled state$$\Phi_{RS}\equiv\frac{1}{M}\sum_{m,m^{\prime}=1}^{M}|m\rangle\langle m^{\prime }|_{R}\otimes|m\rangle\langle m^{\prime}|_{S}$$ of Schmidt rank $M$ faithfully, by applying :$$F(\Phi_{RS},(\operatorname{id}_{R}\otimes\mathcal{D}^{n}\circ\mathcal{N}^{\otimes n}\circ\mathcal{E}^{n})(\Phi_{RS}))\geq1-\varepsilon. \label{eq:fid-crit-q-to-priv}$$ Consider that the state$$\sigma_{RSE^{n}}\equiv(\operatorname{id}_{R}\otimes\mathcal{D}^{n}\circ \lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n}\circ\mathcal{E}^{n})(\Phi_{RS})$$ extends the state output from the actual protocol. 
By Uhlmann’s theorem (see ), there exists an extension of $\Phi_{RS}$ such that the fidelity between this extension and the state $\sigma_{RSE^{n}}$ is equal to the fidelity in . However, the maximally entangled state $\Phi_{RS}$ is unextendible in the sense that the only possible extension is a tensor-product state $\Phi_{RS}\otimes\omega_{E^{n}}$ for some state $\omega_{E^{n}}$. So, putting these statements together, we find that$$F(\Phi_{RS}\otimes\omega_{E^{n}},(\operatorname{id}_{R}\otimes\mathcal{D}^{n}\circ\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n}\circ\mathcal{E}^{n})(\Phi_{RS}))\geq1-\varepsilon.$$ Furthermore, measuring the $R$ and $S$ systems locally in the Schmidt basis of $\Phi_{RS}$ only increases the fidelity, so that$$F(\overline{\Phi}_{RS}\otimes\omega_{E^{n}},(\operatorname{id}_{R}\otimes\overline{\mathcal{D}}^{n}\circ\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n}\circ\mathcal{E}^{n})(\overline{\Phi}_{RS}))\geq1-\varepsilon,$$ where $\overline{\mathcal{D}}^{n}$ denotes the concatenation of the original decoder $\mathcal{D}^{n}$ followed by the local measurement:$$\begin{aligned} \overline{\mathcal{D}}^{n}(\cdot) & \equiv\sum_{m}|m\rangle\langle m|\mathcal{D}^{n}(\cdot)|m\rangle\langle m|\\ & =\sum_{m}\operatorname{Tr}\{\mathcal{D}^{n\dag}[|m\rangle\langle m|](\cdot)\}|m\rangle\langle m|.\end{aligned}$$ Observe that $\{\mathcal{D}^{n\dag}[|m\rangle\langle m|]\}_{m}$ is a valid POVM. 
Employing the inequalities in , we can conclude that$$\frac{1}{2}\left\Vert \overline{\Phi}_{RS}\otimes\omega_{E^{n}}-(\operatorname{id}_{R}\otimes\overline{\mathcal{D}}^{n}\circ\lbrack \mathcal{U}^{\mathcal{N}}]^{\otimes n}\circ\mathcal{E}^{n})(\overline{\Phi }_{RS})\right\Vert _{1}\leq\sqrt{\varepsilon}.$$ Using the direct sum property of the trace distance from and defining $\rho_{A^{n}}^{m}\equiv\mathcal{E}^{n}(|m\rangle\langle m|_{S})$, we can then rewrite this as$$\frac{1}{2M}\sum_{m=1}^{M}\left\Vert |m\rangle\langle m|_{S}\otimes \omega_{E^{n}}-(\overline{\mathcal{D}}^{n}\circ\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n})(\rho_{A^{n}}^{m})\right\Vert _{1}\leq \sqrt{\varepsilon}.$$ Markov’s inequality then guarantees that there exists a subset $\mathcal{M}^{\prime}$ of $\left[ M\right] $ of size $\left\lfloor M/2\right\rfloor $ such that the following condition holds for all $m\in\mathcal{M}^{\prime}$:$$\frac{1}{2}\left\Vert |m\rangle\langle m|_{S}\otimes\omega_{E^{n}}-(\overline{\mathcal{D}}^{n}\circ\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n})(\rho_{A^{n}}^{m})\right\Vert _{1}\leq2\sqrt{\varepsilon}. \label{eq:q-to-p-good-condition}$$ We now define the private communication code to consist of codewords $\{\rho_{A^{n}}^{m}\equiv\mathcal{E}^{n}(|m\rangle\langle m|_{S})\}_{m\in\mathcal{M}^{\prime}}$ and the decoding POVM to be$$\begin{gathered} \{\Lambda_{B^{n}}^{m}\equiv\mathcal{D}^{n\dag}(|m\rangle\langle m|)\}_{m\in \mathcal{M}^{\prime}}\\ \cup\left\{ \Lambda_{B^{n}}^{0}\equiv\mathcal{D}^{n\dag}\!\left( \sum_{m\not \in \mathcal{M}^{\prime}}|m\rangle\langle m|\right) \right\} .\end{gathered}$$ Note that the energy constraint holds for all codewords$$\operatorname{Tr}\{\overline{G}_{n}\rho_{A^{n}}^{m}\}\leq P,$$ due to the assumption that we start from a quantum communication code with uniform energy constraint as given in . 
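The passage above from the fidelity criterion to trace-distance bounds rests on one of the Fuchs–van de Graaf inequalities, $\frac{1}{2}\left\Vert \rho-\sigma\right\Vert _{1}\leq\sqrt{1-F(\rho,\sigma)}$. A quick numerical check (assuming NumPy; the random nearby states are illustrative, not taken from this paper):

```python
import numpy as np

def fidelity(rho, sigma):
    """F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    w, V = np.linalg.eigh(rho)
    sqrt_rho = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    ev = np.linalg.eigvalsh(sqrt_rho @ sigma @ sqrt_rho)
    return float(np.sum(np.sqrt(np.clip(ev, 0, None))) ** 2)

def trace_distance(rho, sigma):
    """(1/2) || rho - sigma ||_1 for Hermitian rho, sigma."""
    ev = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(ev)))

rng = np.random.default_rng(7)

def random_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho).real

rho = random_state(4)
sigma = 0.95 * rho + 0.05 * random_state(4)   # a state close to rho

eps = 1.0 - fidelity(rho, sigma)              # so F(rho, sigma) = 1 - eps
print(trace_distance(rho, sigma) <= np.sqrt(eps) + 1e-12)  # True
```

This is the quantitative sense in which a fidelity of $1-\varepsilon$ translates into a trace-distance bound of $\sqrt{\varepsilon}$, as used in the proof.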
Applying monotonicity of partial trace to with respect to system $S$, we find that the following condition holds for all $m\in\mathcal{M}^{\prime}$:$$\frac{1}{2}\left\Vert \omega_{E^{n}}-\mathcal{\hat{N}}^{\otimes n}(\rho _{A^{n}}^{m})\right\Vert _{1}\leq2\sqrt{\varepsilon},$$ which gives the desired security condition in . Applying monotonicity of partial trace to  with respect to system $E^{n}$ gives that$$\frac{1}{2}\left\Vert |m\rangle\langle m|_{S}-(\overline{\mathcal{D}}^{n}\circ\mathcal{N}^{\otimes n})(\rho_{A^{n}}^{m})\right\Vert _{1}\leq 2\sqrt{\varepsilon}, \label{eq:q-to-p-good-decoding-1}$$ for all $m\in\mathcal{M}^{\prime}$. Abbreviating $\Gamma^{m'}_{B^n} \equiv \mathcal{D}^{n\dag}(|m'\rangle\langle m'|)$, consider then that for all $m \in \mathcal{M}'$ $$\begin{aligned} & \frac{1}{2}\left\Vert |m\rangle\langle m|_{S}-(\overline{\mathcal{D}}^{n}\circ\mathcal{N}^{\otimes n})(\rho_{A^{n}}^{m})\right\Vert _{1}\nonumber\\ & =\frac{1}{2}\left\Vert |m\rangle\langle m|_{S}-\sum_{m^{\prime}=1}^{M }\operatorname{Tr}\{\Gamma^{m'}_{B^n}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}|m^{\prime}\rangle\langle m^{\prime}|\right\Vert _{1}\nonumber\\ & =\frac{1}{2}\left\Vert p_{e}|m\rangle\langle m|_{S}-\sum_{m^{\prime}\neq m}\operatorname{Tr}\{\Gamma^{m'}_{B^n}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}|m^{\prime}\rangle\langle m^{\prime}|\right\Vert _{1}\nonumber\\ & =\frac{1}{2}\left( p_{e}+\sum_{m^{\prime}\neq m}\operatorname{Tr}\{\Gamma^{m'}_{B^n}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}\right) \nonumber\\ & =1-\operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\},\end{aligned}$$ where $p_{e}\equiv1-\operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}$. 
Combining this equality with gives the desired reliable decoding condition in for all $m \in \mathcal{M}'$ $$\operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}\geq1-2\sqrt{\varepsilon}.$$ Thus, we have shown that from an $(n,M,G,P,\varepsilon)$ quantum communication code with uniform energy constraint, one can realize an $(n,\left\lfloor M/2\right\rfloor ,G,P,2\sqrt{\varepsilon})$ code for private communication with uniform energy constraint. That a quantum communication code can be easily converted to a private communication code is part of the folklore of quantum information theory. Ref. [@ieee2005dev] proved that the unconstrained quantum capacity never exceeds the unconstrained private capacity, but we are not aware of an explicit code conversion statement of the form given in Theorem \[thm:QC-to-PC\]. Secret key transmission with an average energy constraint implies private communication with a uniform energy constraint ------------------------------------------------------------------------------------------------------------------------ We finally establish that a secret key transmission code with average energy constraint can be converted to a private communication code with uniform energy constraint. \[thm:SK-to-PC\]For $\delta\in(1/M,1/3)$, the existence of an $(n,M,G,P,\varepsilon)$ secret key transmission code with average energy constraint implies the existence of an $(n,\left\lfloor \delta M\right\rfloor ,G,P/(1-3\delta),\varepsilon/[\delta-1/M])$ private communication code with uniform energy constraint. To begin with, suppose that $\delta M$ is an integer. 
The existence of an $(n,M,G,P,\varepsilon)$ secret key transmission code with average energy constraint implies that the following three conditions hold: $$\begin{aligned} \frac{1}{M}\sum_{m=1}^{M}E_{m} & \leq P~,\quad\frac{1}{M}\sum_{m=1}^{M}T_{m}\geq1-\varepsilon~,\label{eq:average-E-SK-code}\\ \frac{1}{M}\sum_{m=1}^{M}D_{m} & \leq\varepsilon~,\end{aligned}$$ where$$\begin{aligned} E_{m} & \equiv\operatorname{Tr}\{\overline{G}_{n}\rho_{A^{n}}^{m}\}~,\\ T_{m} & \equiv\operatorname{Tr}\{\Lambda_{B^{n}}^{m}\mathcal{N}^{\otimes n}(\rho_{A^{n}}^{m})\}~,\\ D_{m} & \equiv\frac{1}{2}\left\Vert \mathcal{\hat{N}}^{\otimes n}(\rho_{A^{n}}^{m})-\omega_{E^{n}}\right\Vert _{1}~.\end{aligned}$$ Now taking $\hat{M}$ as a uniform random variable with realizations $m\in\left\{ {1,\ldots,M}\right\} $ and applying Markov’s inequality, we have for $\delta\in(0,1/3)$ that$$\Pr_{\hat{M}}\{1-T_{\hat{M}}\geq\varepsilon/\delta\}\leq\frac{\mathbb{E}_{\hat{M}}\{1-T_{\hat{M}}\}}{\varepsilon/\delta}\leq\frac{\varepsilon }{\varepsilon/\delta}~.$$ This implies that $(1-\delta)M$ of the $T_{m}$ values are such that $T_{m}\geq1-\varepsilon/\delta$. We then rearrange the order of $T_{m}$, $D_{m}$, and $E_{m}$ using a label $m^{\prime}$ such that the first $(1-\delta)M$ of the $T_{m^{\prime}}$ variables satisfy the condition $T_{m^{\prime}}\geq1-\varepsilon/\delta$. 
Now from , we have that$$\varepsilon\geq\frac{1}{M}\sum_{m^{\prime}=1}^{M}D_{m^{\prime}}\geq \frac{1-\delta}{(1-\delta)M}\sum_{m^{\prime}=1}^{(1-\delta)M}D_{m^{\prime}}~,$$ which can be rewritten as$$\frac{1}{(1-\delta)M}\sum_{m^{\prime}=1}^{(1-\delta)M}D_{m^{\prime}}\leq\frac {\varepsilon}{1-\delta}.$$ Now taking $\hat{M}^{\prime}$ as a uniform random variable with realizations $m^{\prime}\in\{1,\ldots,(1-\delta)M\}$ and applying Markov’s inequality, we find that$$\begin{aligned} \Pr_{\hat{M}^{\prime}}\left\{ D_{\hat{M}^{\prime}}\geq\varepsilon /\delta\right\} & \leq\frac{\mathbb{E}_{\hat{M}^{\prime}}\{D_{\hat {M}^{\prime}}\}}{\varepsilon/\delta}\\ & \leq\frac{\varepsilon/(1-\delta)}{\varepsilon/\delta}\\ & =\frac{\delta}{1-\delta}~.\end{aligned}$$ Thus a fraction $1-\left[ \delta/(1-\delta)\right] =(1-2\delta)/(1-\delta)$ of the first $(1-\delta)M$ variables $D_{m^{\prime}}$ satisfy $D_{m^{\prime}}\leq\varepsilon/\delta$. Now rearrange the order of $T_{m^{\prime}}$, $D_{m^{\prime}}$, and $E_{m^{\prime}}$ with label $m^{\prime\prime}$ such that the first $(1-2\delta)M$ of them satisfy $$\begin{aligned} T_{m^{\prime\prime}} & \geq1-\varepsilon/\delta~,\\ D_{m^{\prime\prime}} & \leq\varepsilon/\delta~.\end{aligned}$$ From , we get that$$P\geq\frac{1}{M}\sum_{m^{\prime\prime}=1}^{M}E_{m^{\prime\prime}}\geq \frac{1-2\delta}{(1-2\delta)M}\sum_{m^{\prime\prime}=1}^{(1-2\delta )M}E_{m^{\prime\prime}}~,$$ which can be rewritten as$$\frac{1}{(1-2\delta)M}\sum_{m^{\prime\prime}=1}^{(1-2\delta)M}E_{m^{\prime \prime}}\leq\frac{P}{1-2\delta}~.$$ Taking $\hat{M}^{\prime\prime}$ as a uniform random variable with realizations $m^{\prime\prime}\in\{1,\ldots,(1-2\delta)M\}$ and applying Markov’s inequality, we find that$$\begin{aligned} \Pr_{\hat{M}^{\prime\prime}}\left\{ E_{\hat{M}^{\prime\prime}}\geq P/(1-3\delta)\right\} & \leq\frac{\mathbb{E}_{\hat{M}^{\prime\prime}}\{E_{\hat{M}^{\prime\prime}}\}}{P/(1-3\delta)}\\ & \leq\frac{P/(1-2\delta)}{P/(1-3\delta)}\\ &
=\frac{1-3\delta}{1-2\delta}~.\end{aligned}$$ Thus at least a fraction $1-(1-3\delta)/(1-2\delta)=\delta/(1-2\delta)$ of the first $(1-2\delta)M$ variables $E_{m^{\prime\prime}}$ satisfy the condition $E_{\hat{M}^{\prime\prime}}\leq P/(1-3\delta)$. We can finally relabel $T_{m^{\prime\prime}}$, $D_{m^{\prime\prime}}$, and $E_{m^{\prime\prime}}$ with a label $m^{\prime\prime\prime}$ such that the first $\delta M$ of them satisfy $$\begin{aligned} E_{m^{\prime\prime\prime}} & \leq P/(1-3\delta )~,\label{eq:resulting-code-energy-2}\\ T_{m^{\prime\prime\prime}} & \geq1-\varepsilon/\delta~,\\ D_{m^{\prime\prime\prime}} & \leq\varepsilon/\delta~. \label{eq:resulting-code-secrecy-2}\end{aligned}$$ The corresponding codewords then constitute an $(n,\delta M,G,P/(1-3\delta ),\varepsilon/\delta)$ private communication code with uniform energy constraint. To finish off the proof, suppose that $\delta M$ is not an integer. Then there exists a $\delta^{\prime}<\delta$ such that $\delta^{\prime}M=\left\lfloor \delta M\right\rfloor $ is a positive integer. By the above reasoning, there exists a code with parameters as given in –, except with $\delta$ replaced by $\delta^{\prime}$. Then the code size is equal to $\left\lfloor \delta M\right\rfloor $. Using that $\delta^{\prime }M=\left\lfloor \delta M\right\rfloor >\delta M-1$, we find that $\delta^{\prime}>\delta-1/M$, which implies that $1-\varepsilon/\delta ^{\prime}>1-\varepsilon/[\delta-1/M]$ and $\varepsilon/\delta^{\prime }<\varepsilon/\left[ \delta-1/M\right] $. We also have that $P/\left( 1-3\delta^{\prime}\right) <P/\left( 1-3\delta\right) $. This concludes the proof. Implications of code conversions for capacities {#sec:cap-imps} =============================================== In this brief section, we show how the various code conversions from Section \[sec:code-conversions\] have implications for the capacities defined in Section \[sec:energy-constrained-caps\].
The main result is the following theorem: \[thm:cap-relations\]Let $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ be a quantum channel, $G\in\mathcal{P}(\mathcal{H}_{A})$ an energy observable, and $P\in \lbrack0,\infty)$. Then the following relations hold for the capacities defined in Section \[sec:energy-constrained-caps\]:$$\begin{aligned} Q(\mathcal{N},G,P) & =E(\mathcal{N},G,P)\nonumber\\ & \leq P(\mathcal{N},G,P)=K(\mathcal{N},G,P).\end{aligned}$$ As a consequence of the definitions of these capacities and as remarked in and , we have that$$\begin{aligned} Q(\mathcal{N},G,P) & \leq E(\mathcal{N},G,P),\\ P(\mathcal{N},G,P) & \leq K(\mathcal{N},G,P).\end{aligned}$$ So it suffices to prove the following three inequalities:$$\begin{aligned} Q(\mathcal{N},G,P) & \geq E(\mathcal{N},G,P),\label{eq:Q>=E}\\ Q(\mathcal{N},G,P) & \leq P(\mathcal{N},G,P),\label{eq:Q<=P}\\ P(\mathcal{N},G,P) & \geq K(\mathcal{N},G,P). \label{eq:K<=P}\end{aligned}$$ These follow from Theorems \[thm:EG-2-QC\], \[thm:QC-to-PC\], and \[thm:SK-to-PC\], respectively. Let us establish . Suppose that $R$ is an achievable rate for entanglement transmission with an average energy constraint. This implies the existence of a sequence of $(n,M_{n},G,P,\varepsilon_{n})$ codes such that$$\begin{aligned} \liminf_{n\rightarrow\infty}\frac{1}{n}\log M_{n} & =R,\\ \lim_{n\rightarrow\infty}\varepsilon_{n} & =0.\end{aligned}$$ Suppose that the sequence is such that $M_{n}$ is non-decreasing with $n$ (if it is not the case, then pick out a subsequence for which it is the case). Fix a constant $\delta\in(0,1/2)$. Now pick $n$ large enough such that $\delta \geq1/M_{n}$. Invoking Theorem \[thm:EG-2-QC\], there exists an $(n,\left\lfloor \delta M_{n}\right\rfloor ,G,P/(1-2\delta),2\sqrt {\varepsilon_{n}/\left[ \delta-1/M_{n}\right] })$ quantum communication code with uniform energy constraint.
From the facts that$$\begin{aligned} \liminf_{n\rightarrow\infty}\frac{1}{n}\log\left( \left\lfloor \delta M_{n}\right\rfloor \right) & =\liminf_{n\rightarrow\infty}\frac{1}{n}\log M_{n}\\ & =R,\\ \limsup_{n\rightarrow\infty}2\sqrt{\varepsilon_{n}/\left[ \delta -1/M_{n}\right] } & =0,\end{aligned}$$ we can conclude that $R$ is an achievable rate for quantum communication with uniform energy constraint $P/(1-2\delta)$. However, since we have shown this latter statement to be true for all $\delta\in(0,1/2)$, we can then conclude that the rate $R$ is achievable with uniform energy constraint $\inf _{\delta\in(0,1/2)}P/(1-2\delta)=P$. So this implies . We can argue the other inequalities in and  similarly, by applying Theorems \[thm:QC-to-PC\] and \[thm:SK-to-PC\], respectively. Achievability of regularized, energy-constrained coherent information for energy-constrained quantum communication\[sec:coh-info-ach\] ====================================================================================================================================== The main result of this section is Theorem \[thm:coh-info-ach\], which shows that the regularized energy-constrained coherent information is achievable for energy-constrained quantum communication. In order to do so, we need to restrict the energy observables and channels that we consider. We impose two arguably natural constraints: that the energy observable be a Gibbs observable as given in Definition \[def:Gibbs-obs\] and that the channel have finite output entropy as given in Condition \[cond:finite-out-entropy\]. Gibbs observables have been considered in several prior works [@H03; @H04; @HS06; @Holevo2010; @H12; @Winter15] as well as finite output-entropy channels [@H03; @H04; @H12]. When defining a Gibbs observable, we follow [@H12 Lemma 11.8] and [@Winter15 Section IV]: \[Gibbs observable\]\[def:Gibbs-obs\]Let $G$ be an energy observable as given in Definition \[def:energy-obs\]. 
Such an operator $G$ is a Gibbs observable if for all $\beta>0$, the following holds$$\operatorname{Tr}\{\exp(-\beta G)\}<\infty. \label{eq:thermal-well-defined}$$ The above condition implies that a Gibbs observable$~G$ always has a finite value of the partition function $\operatorname{Tr}\{\exp(-\beta G)\}$ for all $\beta>0$ and thus a well-defined thermal state for all $\beta>0$, given by $e^{-\beta G}/\operatorname{Tr}\{e^{-\beta G}\}$. \[Finite output entropy\]\[cond:finite-out-entropy\]Let $G$ be a Gibbs observable and $P\in\lbrack0,\infty)$. A quantum channel $\mathcal{N}$ satisfies the finite-output entropy condition with respect to $G$ and $P$ if$$\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho))<\infty. \label{eq:finiteness-cap}$$ \[lem:env-out-ent\]Let $\mathcal{N}$ denote a quantum channel satisfying Condition \[cond:finite-out-entropy\], $G$ a Gibbs observable, and $P\in\lbrack0,\infty)$. Then any complementary channel $\mathcal{\hat{N}}$ of $\mathcal{N}$ satisfies the finite-entropy condition$$\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{\hat{N}}(\rho))<\infty. \label{eq:finite-entropy-comp-ch}$$ Let $\rho$ be a density operator satisfying $\operatorname{Tr}\{G\rho\}\leq P$, and let $\sum_{i}p_{i}|i\rangle\langle i|$ be a spectral decomposition of $\rho$. Let$$\theta_{\beta}\equiv e^{-\beta G}/\operatorname{Tr}\{e^{-\beta G}\}$$ denote a thermal state of $G$ with inverse temperature $\beta>0$. Consider that $H(\rho)$ is finite because a rewriting of $D(\rho\Vert\theta_{\beta })\geq0$ implies that$$\begin{aligned} H(\rho) & \leq\beta\operatorname{Tr}\{G\rho\}+\log\operatorname{Tr}\{e^{-\beta G}\}\\ & \leq\beta P+\log\operatorname{Tr}\{e^{-\beta G}\}<\infty, \label{eq:input-finite-entropy}\end{aligned}$$ where the last inequality follows from and from the assumption that $P<\infty$.
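As a numerical sanity check of the entropy bound just derived, the sketch below instantiates a stand-in Gibbs observable with linear spectrum (the dimension, $\beta$, and $P$ are assumptions for illustration only) and verifies $H(\rho)\leq\beta P+\log\operatorname{Tr}\{e^{-\beta G}\}$ for a simple test state:

```python
import numpy as np

# A stand-in Gibbs observable: G = diag(0, 1, ..., d-1), an oscillator-like
# spectrum truncated to dimension d (all parameters illustrative).
d, beta, P = 50, 0.5, 6.0
g = np.arange(d)
Z = np.exp(-beta * g).sum()              # partition function Tr{e^{-beta G}}
theta = np.exp(-beta * g) / Z            # spectrum of the thermal state e^{-beta G}/Z
assert abs(theta.sum() - 1.0) < 1e-12    # the thermal state is normalized

# Any rho with Tr{G rho} <= P obeys H(rho) <= beta*P + log Z (in nats),
# a rewriting of D(rho || theta_beta) >= 0.  Check it for a uniform test state.
k = 10
p = np.zeros(d); p[:k] = 1.0 / k         # uniform over the first k levels
H_rho = np.log(k)                        # entropy of the uniform spectrum
energy = p @ g                           # Tr{G rho} = (k - 1)/2
assert energy <= P
assert H_rho <= beta * P + np.log(Z)
```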
Consider that $|\psi^{\rho}\rangle =\sum_{i}\sqrt{p_{i}}|i\rangle\otimes|i\rangle$ is a purification of $\rho$ and satisfies$$\begin{aligned} H(\mathcal{\hat{N}}(\rho)) & =H((\operatorname{id}\otimes\mathcal{N})(|\psi^{\rho}\rangle\langle\psi^{\rho}|))\\ & \leq H(\rho)+H(\mathcal{N}(\rho))<\infty.\end{aligned}$$ The equality follows because the marginals of a pure bipartite state have the same entropy. The first inequality follows from subadditivity of entropy, and the last from and the assumption that Condition \[cond:finite-out-entropy\] holds. We have shown that the entropy $H(\mathcal{\hat{N}}(\rho))$ is finite for all states satisfying $\operatorname{Tr}\{G\rho\}\leq P$, and so holds. \[thm:coh-info-ach\]Let $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ denote a quantum channel satisfying Condition \[cond:finite-out-entropy\], $G$ a Gibbs observable, and $P\in\lbrack0,\infty)$. Then the energy-constrained entanglement transmission capacity $E(\mathcal{N},G,P)$ is bounded from below by the regularized energy-constrained coherent information of the channel $\mathcal{N}$:$$E(\mathcal{N},G,P)\geq\lim_{k\rightarrow\infty}\frac{1}{k}I_{c}(\mathcal{N}^{\otimes k},\overline{G}_{k},P),\nonumber$$ where the energy-constrained coherent information of $\mathcal{N}$ is defined as$$I_{c}(\mathcal{N},G,P)\equiv\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho)),$$ and $\mathcal{\hat{N}}$ denotes a complementary channel of $\mathcal{N}$. The main challenge in proving this theorem is to have codes achieving the coherent information while meeting the average energy constraint. We prove the theorem by combining Klesse’s technique for constructing entanglement transmission codes [@K07; @qcap2008second] with an adaptation of Holevo’s technique of approximation and constructing codes meeting an energy constraint [@H03; @H04]. 
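The subadditivity step in the proof of Lemma \[lem:env-out-ent\] can also be checked numerically on a small channel. The sketch below uses a qubit amplitude-damping channel with a hypothetical damping parameter and computes the environment output of the isometric extension directly from the Kraus operators:

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Amplitude-damping channel (gamma is an illustrative choice), Kraus ops K0, K1.
gamma = 0.3
K = [np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]]),
     np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])]

rho = np.array([[0.6, 0.2], [0.2, 0.4]])
out = sum(Ki @ rho @ Ki.conj().T for Ki in K)          # N(rho)
# Environment state of the isometric extension: [N_hat(rho)]_{xy} = Tr{K_x rho K_y^dag}.
comp = np.array([[np.trace(Kx @ rho @ Ky.conj().T) for Ky in K] for Kx in K])

# The lemma's chain: H(N_hat(rho)) = H((id x N)(psi_rho)) <= H(rho) + H(N(rho)).
assert entropy(comp) <= entropy(rho) + entropy(out) + 1e-9
```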
We follow their arguments very closely and show how to combine the techniques to achieve the desired result. First, we recall what Klesse accomplished in [@K07] (see also the companion paper [@qcap2008second]). Let $\mathcal{M}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ denote a quantum channel satisfying Condition \[cond:finite-out-entropy\] for some Gibbs observable and energy constraint, so that the receiver entropy is finite, as well as the environment entropy by Lemma \[lem:env-out-ent\]. This implies that entropy-typical subspaces and sequences corresponding to these entropies are well defined and finite, a fact of which we make use. Let $V$ denote a finite-dimensional linear subspace of $\mathcal{H}_{A}$. Set $L\equiv\dim(V)$, and let $\mathcal{L}$ denote a channel defined to be the restriction of $\mathcal{M}$ to states with support contained in $V$. Let $\{K_{y}\}_{y}$ be a set of Kraus operators for $\mathcal{M}$ and define the probability $p_{Y}(y)$ by$$p_{Y}(y)\equiv\frac{1}{L}\operatorname{Tr}\{\Pi_{V}K_{y}^{\dag}K_{y}\Pi_{V}\},$$ where $\Pi_{V}$ is a projection onto $V$. As discussed in [@K07], there is unitary freedom in the choice of the Kraus operators, and they can be chosen diagonal, so that $\operatorname{Tr}\{\Pi_{V}K_{y}^{\dag}K_{x}\Pi_{V}\}=0$ for $x\neq y$. Let $T_{Y}^{n,\delta}$ denote the $\delta$-entropy-typical set for $p_{Y}$, defined as$$T_{Y}^{n,\delta}\equiv\left\{ y^{n}:\left\vert -\left[ \log p_{Y^{n}}(y^{n})\right] /n-H(Y)\right\vert \leq\delta\right\} ,$$ for integer $n\geq1$ and real $\delta>0$, where $p_{Y^{n}}(y^{n})\equiv p_{Y}(y_{1})p_{Y}(y_{2})\cdots p_{Y}(y_{n})$. Let $K_{y^{n}}\equiv K_{y_{1}}\otimes K_{y_{2}}\otimes\cdots\otimes K_{y_{n}}$. Now define the (trace-non-increasing) quantum operation $\mathcal{L}^{n,\delta}$ to be a map consisting of only the entropy-typical Kraus operators $K_{y^{n}}$ such that $y^{n}\in T_{Y}^{n,\delta}$. 
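The entropy-typical set $T_{Y}^{n,\delta}$ just defined can be explored numerically. The sketch below (the distribution $p_Y$, $n$, and $\delta$ are illustrative) confirms that i.i.d. sequences drawn from $p_Y$ land in the typical set with high probability:

```python
import math, random

# An illustrative distribution p_Y and its entropy-typical membership test.
p = {0: 0.7, 1: 0.3}
H = -sum(q * math.log2(q) for q in p.values())
n, delta = 2000, 0.05

def is_typical(seq):
    """Check |(-log2 p(y^n))/n - H(Y)| <= delta."""
    logp = sum(math.log2(p[y]) for y in seq)
    return abs(-logp / n - H) <= delta

random.seed(1)
samples = 500
hits = sum(is_typical(random.choices([0, 1], weights=[p[0], p[1]], k=n))
           for _ in range(samples))
# By the law of large numbers, i.i.d. sequences are typical with high probability.
assert hits / samples > 0.95
```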
The number of such Kraus operators is no larger than $2^{n\left[ H(Y)+\delta\right] }$, and one can show that $H(Y)=H(\mathcal{\hat{M}}(\pi_{V}))$, where $\mathcal{\hat{M}}$ is a channel complementary to $\mathcal{M}$ and $\pi_{V}\equiv\Pi_{V}/L$ denotes the maximally mixed state on $V$ [@K07]. One can then further reduce the quantum operation $\mathcal{L}^{n,\delta}$ to another one $\widetilde{\mathcal{L}}^{n,\delta}$ defined by projecting the output of $\mathcal{L}^{n,\delta}$ to the entropy-typical subspace of the density operator $\mathcal{L}(\pi_{V})=\mathcal{M}(\pi_{V})$. The entropy-typical subspace of a density operator $\sigma$ with spectral decomposition $\sigma=\sum_{z}p_{Z}(z)|z\rangle\langle z|$ is defined as$$T_{\sigma}^{n,\delta}\equiv\operatorname{span}\{|z^{n}\rangle:\left\vert -\left[ \log p_{Z^{n}}(z^{n})\right] /n-H(\sigma)\right\vert \leq\delta\},$$ for integer $n\geq1$ and real $\delta>0$. The resulting quantum operation $\widetilde{\mathcal{L}}^{n,\delta}$ is thus finite-dimensional and has a finite number of Kraus operators. We then have the following bounds argued in [@K07]:$$\begin{aligned} \widetilde{L}^{n,\delta} & \leq2^{n\left[ H(\mathcal{\hat{M}}(\pi _{V}))+\delta\right] },\label{eq:klesse-1}\\ \operatorname{Tr}\{\widetilde{\mathcal{L}}^{n,\delta}(\pi_{V^{\otimes n}})\} & \geq1-\varepsilon_{1},\\ \left\Vert \widetilde{\mathcal{L}}^{n,\delta}(\pi_{V^{\otimes n}})\right\Vert _{2}^{2} & \leq2^{-n\left[ H(\mathcal{M}(\pi_{V}))-3\delta\right] },\\ F_{e}(C_{n},\mathcal{L}^{\otimes n}) & \geq F_{e}(C_{n},\widetilde {\mathcal{L}}^{n,\delta}), \label{eq:klesse-4}\end{aligned}$$ where $\widetilde{L}^{n,\delta}$ denotes the number of Kraus operators for $\widetilde{\mathcal{L}}^{n,\delta}$ and the second inequality holds for all $\varepsilon_{1}\in(0,1)$ and sufficiently large $n$.
Note that for this latter estimate, we require the law of large numbers to hold when we only know that the entropy is finite (this can be accomplished using the technique discussed in [@T12]). In the last line, we have written the entanglement fidelity of a code $C_{n}$ (some subspace of $V^{\otimes n}$), which is defined as$$F_{e}(C_{n},\mathcal{L}^{\otimes n})\equiv\sup_{\mathcal{R}^{n}}\langle \Phi_{C_{n}}|(\operatorname{id}\otimes\lbrack\mathcal{R}^{n}\circ \mathcal{L}^{\otimes n}])(\Phi_{C_{n}})|\Phi_{C_{n}}\rangle,$$ where $|\Phi_{C_{n}}\rangle$ denotes a maximally entangled state built from an orthonormal basis of $C_{n}$ and the optimization is with respect to recovery channels $\mathcal{R}^{n}$. Let $K_{n}\equiv\dim C_{n}$. From the developments in [@K07], the following bound holds$$\begin{gathered} \mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{F_{e}(U_{K_{n}}C_{n},\widetilde {\mathcal{L}}^{n,\delta})\}\\ \geq\operatorname{Tr}\{\widetilde{\mathcal{L}}^{n,\delta}(\pi_{V^{\otimes n}})\}-\sqrt{K_{n}\widetilde{L}^{n,\delta}\left\Vert \widetilde{\mathcal{L}}^{n,\delta}(\pi_{V^{\otimes n}})\right\Vert _{2}^{2}},\end{gathered}$$ where $\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}$ denotes the expected entanglement fidelity when we apply a randomly selected unitary $U_{K_{n}}$ to the codespace $C_{n}$, taking it to some different subspace of $V^{\otimes n}$. The unitary $U_{K_{n}}$ is selected according to the unitarily invariant measure on the group $\mathbf{U}(V^{\otimes n})$ of unitaries acting on the subspace $V^{\otimes n}$. Combining with the inequalities in –, we find that$$\begin{gathered} \mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{F_{e}(U_{K_{n}}C_{n},\mathcal{L}^{\otimes n})\}\\ \geq1-\varepsilon_{1}-\left[ 2^{-n\left[ H(\mathcal{M}(\pi_{V}))-H(\mathcal{\hat{M}}(\pi_{V}))-R-4\delta\right] }\right] ^{\frac{1}{2}},\end{gathered}$$ where the rate $R$ of entanglement transmission is defined as $R\equiv\left[ \log K_{n}\right] /n$.
Thus, if we choose$$R=H(\mathcal{M}(\pi_{V}))-H(\mathcal{\hat{M}}(\pi_{V}))-5\delta,$$ then we find that$$\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{F_{e}(U_{K_{n}}C_{n},\widetilde {\mathcal{L}}^{n,\delta})\}\geq1-\varepsilon_{1}-2^{-n\delta/2}, \label{eq:good-q-klesse-code}$$ since this choice of $R$ makes the exponent $H(\mathcal{M}(\pi_{V}))-H(\mathcal{\hat{M}}(\pi_{V}))-R-4\delta$ equal to $\delta$, and we see that the RHS can be made arbitrarily close to one by taking $n$ large enough. We can then conclude that there exists a unitary $U_{K_{n}}$, such that the codespace defined by $U_{K_{n}}C_{n}$ achieves the same entanglement fidelity given above, implying that the rate $H(\mathcal{M}(\pi_{V}))-H(\mathcal{\hat{M}}(\pi_{V}))$ is achievable for entanglement transmission over $\mathcal{M}$. Now we apply the methods of Holevo [@H04] and further arguments of Klesse [@K07] to see how to achieve the rate given in the statement of the theorem for the channel$~\mathcal{N}$ while meeting the desired energy constraint. We follow the reasoning in [@H04] very closely. Consider that $G$ is a non-constant operator. Thus, the image of the convex set of all density operators under the map $\rho\rightarrow\operatorname{Tr}\{G\rho\}$ is an interval. Suppose first that $P$ is not equal to the minimum eigenvalue of $G$. Then there exists a real number $P^{\prime}$ and a density operator $\rho$ in $\mathcal{D}(\mathcal{H}_{A})$ such that$$\operatorname{Tr}\{G\rho\}\leq P^{\prime}<P.$$ Let $\rho=\sum_{j=1}^{\infty}\lambda_{j}|j\rangle\langle j|$ be a spectral decomposition of $\rho$, and define$$\begin{aligned} \rho_{d} & \equiv\sum_{j=1}^{d}\tilde{\lambda}_{j}|j\rangle\langle j|,\ \ \text{where}\\ \tilde{\lambda}_{j} & \equiv\lambda_{j}\left( \sum_{j^{\prime}=1}^{d}\lambda_{j^{\prime}}\right) ^{-1}.\end{aligned}$$ Then $\left\Vert \rho-\rho_{d}\right\Vert _{1}\rightarrow0$ as $d\rightarrow \infty$. Let $g(j)\equiv\langle j|G|j\rangle$, so that$$\operatorname{Tr}\{G\rho_{d}\}=\sum_{j=1}^{d}\tilde{\lambda}_{j}g(j)=P^{\prime}+\varepsilon_{d},$$ where $\varepsilon_{d}\rightarrow0$ as $d\rightarrow\infty$.
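The truncation $\rho\mapsto\rho_{d}$ can be illustrated numerically. The sketch below assumes a geometric spectrum $\lambda_{j}=(1-q)q^{j}$ and energies $g(j)=j$ (both illustrative stand-ins) and checks that the trace distance and the energy shift $\varepsilon_{d}$ vanish as $d$ grows:

```python
import numpy as np

# Spectral truncation rho -> rho_d for a state with geometric spectrum
# lambda_j = (1 - q) q^j and g(j) = j (all values illustrative).
q, jmax = 0.5, 200                       # jmax stands in for the infinite tail
lam = (1 - q) * q ** np.arange(jmax)
g = np.arange(jmax)

def truncate(d):
    """Renormalized truncated spectrum tilde-lambda_j."""
    tl = np.zeros(jmax)
    tl[:d] = lam[:d] / lam[:d].sum()
    return tl

tl = truncate(80)
# ||rho - rho_d||_1 -> 0 and Tr{G rho_d} -> Tr{G rho} as d grows.
assert 0.5 * np.abs(lam - tl).sum() < 1e-6      # (1/2)||rho - rho_d||_1
assert abs(tl @ g - lam @ g) < 1e-4             # |epsilon_d|
```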
Consider the density operator $\rho_{d}^{\otimes m}$, and let $\Pi_{d}^{m,\delta}$ denote its strongly typical projector, defined as the projection onto the strongly typical subspace$$\operatorname{span}\{|j^{m}\rangle:\left\vert N(j|j^{m})/m-\tilde{\lambda}_{j}\right\vert \leq\delta\},$$ where $|j^{m}\rangle\equiv|j_{1}\rangle\otimes\cdots\otimes|j_{m}\rangle$ and $N(j|j^{m})$ denotes the number of appearances of the symbol $j$ in the sequence $j^{m}$. Let$$\pi_{d}^{m,\delta}\equiv\Pi_{d}^{m,\delta}/\operatorname{Tr}\{\Pi _{d}^{m,\delta}\}$$ denote the maximally mixed state on the strongly typical subspace. We then find that for positive integers$~m$ and$~n$,$$\begin{aligned} & \operatorname{Tr}\left\{ \overline{G}_{mn}\left( \left[ \pi _{d}^{m,\delta}\right] ^{\otimes n}-\rho_{d}^{\otimes mn}\right) \right\} \nonumber\\ & =\operatorname{Tr}\left\{ \overline{\left( \overline{G}_{m}\right) }_{n}\left( \left[ \pi_{d}^{m,\delta}\right] ^{\otimes n}-\rho_{d}^{\otimes mn}\right) \right\} \\ & =\operatorname{Tr}\left\{ \overline{G}_{m}\left( \pi_{d}^{m,\delta}-\rho_{d}^{\otimes m}\right) \right\} \leq\delta\max_{j\in\left[ d\right] }g(j),\end{aligned}$$ where $\left[ d\right] \equiv\{1,\ldots,d\}$ and the inequality follows from applying a bound from [@Hol01a] (also called typical average lemma in [@el2010lecture]). Now we can apply the above inequality to find that$$\begin{aligned} & \operatorname{Tr}\left\{ \overline{G}_{mn}\left[ \pi_{d}^{m,\delta }\right] ^{\otimes n}\right\} \nonumber\\ & \leq\operatorname{Tr}\{\overline{G}_{m}\rho_{d}^{\otimes m}\}+\delta \max_{j\in\left[ d\right] }g(j)\\ & =\operatorname{Tr}\{G\rho_{d}\}+\delta\max_{j\in\left[ d\right] }g(j)\\ & =P^{\prime}+\varepsilon_{d}+\delta\max_{j\in\left[ d\right] }g(j).\end{aligned}$$ For all $d$ large enough, we can then find $\delta_{0}$ such that the last line above is $\leq P/(1+\delta_{1})$ for $\delta,\delta_{1}\in(0,\delta _{0}]$. 
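Strong typicality as just defined can be checked on sample data. The sketch below draws an i.i.d. sequence from an illustrative spectrum $\tilde{\lambda}$ and verifies that every empirical frequency $N(j|j^{m})/m$ lies within $\delta$ of $\tilde{\lambda}_{j}$:

```python
import random
from collections import Counter

# Strong typicality for an illustrative truncated spectrum tilde-lambda.
lam = [0.5, 0.3, 0.2]
m, delta = 20000, 0.02

def strongly_typical(seq):
    """Check |N(j | j^m)/m - lambda_j| <= delta for every symbol j."""
    counts = Counter(seq)
    return all(abs(counts.get(j, 0) / m - lam[j]) <= delta
               for j in range(len(lam)))

random.seed(2)
seq = random.choices(range(len(lam)), weights=lam, k=m)
# An i.i.d. draw from lambda is strongly typical with overwhelming probability.
assert strongly_typical(seq)
```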
The quantum coding scheme we use is that of Klesse [@K07] discussed previously, now setting $\mathcal{M}=\mathcal{N}^{\otimes m}$ and the subspace $V$ to be the frequency-typical subspace of $\rho_{d}^{\otimes m}$, so that $\Pi_{V}=\Pi_{d}^{m,\delta}$. Letting $\pi_{C_{n}}$ denote the maximally mixed state on the codespace $C_{n}\subset V^{\otimes n}$, we find that [@K07 Section 5.3]$$\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{U_{K_{n}}\pi_{C_{n}}U_{K_{n}}^{\dag }\}=\pi_{V^{\otimes n}}=\left[ \pi_{d}^{m,\delta}\right] ^{\otimes n}.$$ So this and the reasoning directly above imply that$$\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{\operatorname{Tr}\{\overline{G_{mn}}U_{K_{n}}\pi_{C_{n}}U_{K_{n}}^{\dag}\}\}\leq P/(1+\delta_{1}),$$ for $\delta,\delta_{1}\leq\delta_{0}$. Furthermore, from , for arbitrary $\varepsilon\in(0,1)$ and sufficiently large $n$, we find that$$\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{1-F_{e}(U_{K_{n}}C_{n},\mathcal{N}^{\otimes mn})\}\leq\varepsilon,$$ as long as the rate satisfies$$R=[H(\mathcal{N}^{\otimes m}(\pi_{d}^{m,\delta}))-H(\mathcal{\hat{N}}^{\otimes m}(\pi_{d}^{m,\delta}))]/m-\delta^{\prime}$$ for some $\delta^{\prime}>0$. At this point, we would like to argue the existence of a code that has arbitrarily small error and meets the energy constraint. Let $E_{0}$ denote the event $1-F_{e}(U_{K_{n}}C_{n},\mathcal{N}^{\otimes mn})\leq\sqrt{\varepsilon}$ and let $E_{1}$ denote the event $\operatorname{Tr}\{\overline{G_{mn}}U_{K_{n}}\pi_{C_{n}}U_{K_{n}}^{\dag }\}\leq P$.
We can apply the union bound and Markov’s inequality to find that$$\begin{aligned} & \Pr_{U_{K_{n}}(V^{\otimes n})}\{\overline{E_{0}\cap E_{1}}\}\nonumber\\ & =\Pr_{U_{K_{n}}(V^{\otimes n})}\{E_{0}^{c}\cup E_{1}^{c}\}\\ & \leq\Pr_{U_{K_{n}}(V^{\otimes n})}\{1-F_{e}(U_{K_{n}}C_{n},\mathcal{N}^{\otimes mn})\geq\sqrt{\varepsilon}\}\nonumber\\ & \qquad+\Pr_{U_{K_{n}}(V^{\otimes n})}\left\{ \operatorname{Tr}\{\overline{G_{mn}}U_{K_{n}}\pi_{C_{n}}U_{K_{n}}^{\dag}\}\geq P\right\} \\ & \leq\frac{1}{\sqrt{\varepsilon}}\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{1-F_{e}(U_{K_{n}}C_{n},\mathcal{N}^{\otimes mn})\}\nonumber\\ & \qquad+\frac{1}{P}\mathbb{E}_{U_{K_{n}}(V^{\otimes n})}\{\operatorname{Tr}\{\overline{G_{mn}}U_{K_{n}}\pi_{C_{n}}U_{K_{n}}^{\dag}\}\}\\ & \leq\sqrt{\varepsilon}+1/(1+\delta_{1}).\end{aligned}$$ Since we can choose $n$ large enough to have $\varepsilon$ arbitrarily small, there exists such an $n$ such that the last line is strictly less than one. This then implies the existence of a code $C_{n}$ such that $F_{e}(C_{n},\mathcal{N}^{\otimes mn})\geq1-\sqrt{\varepsilon}$ and $\operatorname{Tr}\{\overline{G_{mn}}\pi_{C_{n}}\}\leq P$ (i.e., it has arbitrarily good entanglement fidelity and meets the average energy constraint). Furthermore, the rate achievable using this code is equal to $[H(\mathcal{N}^{\otimes m}(\pi_{d}^{m,\delta}))-H(\mathcal{\hat{N}}^{\otimes m}(\pi_{d}^{m,\delta}))]/m$. We have shown that this rate is achievable for all $\delta>0$ and all integer $m\geq1$. By applying the limiting argument from [@Hol01a] (see also [@ieee2002bennett]), we thus have that the following is an achievable rate as well:$$\begin{gathered} \lim_{\delta\rightarrow0}\lim_{m\rightarrow\infty}\frac{1}{m}[H(\mathcal{N}^{\otimes m}(\pi_{d}^{m,\delta}))-H(\mathcal{\hat{N}}^{\otimes m}(\pi _{d}^{m,\delta}))]\\ =H(\mathcal{N}(\rho_{d}))-H(\mathcal{\hat{N}}(\rho_{d})),\end{gathered}$$ where $\operatorname{Tr}\{G\rho_{d}\}\leq P^{\prime}+\varepsilon_{d}\leq P$. 
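The existence step above rests on the failure bound $\sqrt{\varepsilon}+1/(1+\delta_{1})$ being strictly less than one; a minimal numerical check (the values of $\varepsilon$ and $\delta_{1}$ are illustrative):

```python
import math

# The union-bound-plus-Markov estimate bounds the probability that a random
# codespace is bad by sqrt(eps) + 1/(1 + delta_1); a good code exists whenever
# this is strictly below one (delta_1 and eps are illustrative choices).
delta_1 = 0.1
failure_bound = lambda eps: math.sqrt(eps) + 1.0 / (1.0 + delta_1)
assert failure_bound(1e-4) < 1.0        # small eps: a good code must exist
assert failure_bound(0.25) >= 1.0       # too-large eps makes the bound vacuous
```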
Given that both $H(\mathcal{N}(\rho_{d}))$ and $H(\mathcal{\hat{N}}(\rho _{d}))$ are finite, we can apply – and rewrite $$H(\mathcal{N}(\rho_{d}))-H(\mathcal{\hat{N}}(\rho_{d}))=I_{c}(\rho _{d},\mathcal{N}).$$ Finally, we take the limit $d\rightarrow\infty$ and find that$$\liminf_{d\rightarrow\infty}I_{c}(\rho_{d},\mathcal{N})\geq I_{c}(\rho,\mathcal{N}),$$ where we have used the representation$$I_{c}(\rho_{d},\mathcal{N})=I(\rho_{d},\mathcal{N})-H(\rho_{d}),$$ applied that the mutual information is lower semicontinuous [@HS10 Proposition 1], the entropy $H$ is continuous for all states $\sigma$ such that $\operatorname{Tr}\{G\sigma\}<P$ (following from a variation of [@H12 Lemma 11.8]), and the fact that a purification $|\psi_{d}^{\rho}\rangle\equiv\sum_{j=1}^{d}\tilde{\lambda}_{j}^{1/2}|j\rangle\otimes|j\rangle$ has the convergence $\left\Vert |\psi_{d}^{\rho }\rangle\langle\psi_{d}^{\rho}|-|\psi^{\rho}\rangle\langle\psi^{\rho }|\right\Vert _{1}\rightarrow0$ as $d\rightarrow\infty$. Now since $H(\mathcal{N}(\rho))$ and $H(\mathcal{\hat{N}}(\rho))$ are each finite, we can rewrite$$I_{c}(\rho,\mathcal{N})=H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho)).$$ We have thus proven that the rate $H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho))$ is achievable for entanglement transmission with average energy constraint for all $\rho$ satisfying $\operatorname{Tr}\{G\rho\}<P$. We can extend this argument to operators $\rho$ such that $\operatorname{Tr}\{G\rho\}=P$ by approximating them with operators $\rho_{\xi}=(1-\xi)\rho +\xi|e\rangle\langle e|$, where $|e\rangle$ is chosen such that $\langle e|G|e\rangle<P$. Suppose now that $P$ is the minimum eigenvalue of $G$. In this case, the condition $\operatorname{Tr}\{G\rho\}\leq P$ reduces to the support of $\rho$ being contained in the spectral projection of $G$ corresponding to this minimum eigenvalue. 
The condition in Definition \[def:Gibbs-obs\] implies that the eigenvalues of $G$ have finite multiplicity, and so the support of $\rho$ is a fixed finite-dimensional subspace. Thus we can take $\rho_{d}=\rho$, and we can repeat the above argument with the equality $\operatorname{Tr}\{G\rho\}=P$ holding at each step. As a consequence, we can conclude that$$\sup_{\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho))-H(\mathcal{\hat {N}}(\rho))$$ is achievable as well. Finally, we can repeat the whole argument for all $\rho^{(k)}\in\mathcal{D}(\mathcal{H}_{A}^{\otimes k})$ satisfying $\operatorname{Tr}\{\overline{G}_{k}\rho^{(k)}\}\leq P$, take the channel as $\mathcal{N}^{\otimes k}$, and conclude that the following rate is achievable:$$\frac{1}{k}\sup_{\operatorname{Tr}\{\overline{G}_{k}\rho^{(k)}\}\leq P}H(\mathcal{N}^{\otimes k}(\rho^{(k)}))-H(\mathcal{\hat{N}}^{\otimes k}(\rho^{(k)})).$$ Taking the limit as $k\rightarrow\infty$ gives the statement of the theorem. Energy-constrained quantum and private capacity of degradable channels {#sec:degradable-channels} ====================================================================== It is unknown how to compute the quantum and private capacities of general channels, but if they are degradable, the task simplifies considerably. That is, it is known from [@cmp2005dev] and [@S08], respectively, that both the unconstrained quantum and private capacities of a degradable channel $\mathcal{N}$ are given by the following formula:$$Q(\mathcal{N})=P(\mathcal{N})=\sup_{\rho}I_{c}(\rho,\mathcal{N}).$$ Here we prove the following theorem, which holds for the energy-constrained quantum and private capacities of a channel $\mathcal{N}$: \[thm:-energy-constr-q-p-cap\]Let $G$ be a Gibbs observable and $P\in\lbrack0,\infty)$. Let a quantum channel $\mathcal{N}$ be degradable and satisfy Condition \[cond:finite-out-entropy\]. 
Then the energy-constrained capacities $Q(\mathcal{N},G,P)$, $E(\mathcal{N},G,P)$, $P(\mathcal{N},G,P)$, and $K(\mathcal{N},G,P)$ are finite, equal, and given by the following formula:$$\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho ))-H(\mathcal{\hat{N}}(\rho)), \label{eq:coh-info-1-letter}$$ where $\mathcal{\hat{N}}$ denotes a complementary channel of $\mathcal{N}$. That the quantity in is finite follows directly from the assumption in Condition \[cond:finite-out-entropy\] and Lemma \[lem:env-out-ent\]. From Theorem \[thm:cap-relations\], we have that$$\begin{aligned} Q(\mathcal{N},G,P) & =E(\mathcal{N},G,P)\nonumber\\ & \leq P(\mathcal{N},G,P)=K(\mathcal{N},G,P).\end{aligned}$$ Theorem \[thm:coh-info-ach\] implies that the rate in is achievable. So this gives that$$\begin{gathered} \sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho ))-H(\mathcal{\hat{N}}(\rho))\\ \leq Q(\mathcal{N},G,P)=E(\mathcal{N},G,P).\end{gathered}$$ To establish the theorem, it thus suffices to prove the following converse inequality$$K(\mathcal{N},G,P)\leq\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho)). \label{eq:key-less-than-coh-info}$$ To do so, we make use of several ideas from [@ieee2005dev; @cmp2005dev; @S08; @YHD05MQAC]. Consider an $(n,M,G,P,\varepsilon )$ code for secret key transmission with an average energy constraint, as described in Section \[sec:SKT-AVG-code\]. 
Using such a code, we take a uniform distribution over the codewords, and the state resulting from an isometric extension of the channel is as follows:$$\sigma_{\hat{M}B^{n}E^{n}}\equiv\frac{1}{M}\sum_{m=1}^{M}|m\rangle\langle m|_{\hat{M}}\otimes\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n}(\rho_{A^{n}}^{m}).$$ Now consider that each codeword in such a code has a spectral decomposition as follows:$$\rho_{A^{n}}^{m}\equiv\sum_{l=1}^{\infty}p_{L|\hat{M}}(l|m)|\psi^{l,m}\rangle\langle\psi^{l,m}|_{A^{n}},$$ for a probability distribution $p_{L|\hat{M}}$ and some orthonormal basis $\{|\psi^{l,m}\rangle_{A^{n}}\}_{l}$ for $\mathcal{H}_{A^{n}}$. Then the state $\sigma_{\hat{M}B^{n}E^{n}}$ has the following extension:$$\begin{gathered} \sigma_{L\hat{M}B^{n}E^{n}}\equiv\frac{1}{M}\sum_{m=1}^{M}\sum_{l=1}^{\infty }p_{L|\hat{M}}(l|m)|l\rangle\langle l|_{L}\otimes|m\rangle\langle m|_{\hat{M}}\\ \otimes\lbrack\mathcal{U}^{\mathcal{N}}]^{\otimes n}(|\psi^{l,m}\rangle \langle\psi^{l,m}|_{A^{n}}).\end{gathered}$$ We can also define the state after the decoding measurement acts as$$\begin{gathered} \sigma_{L\hat{M}M^{\prime}E^{n}}\equiv\frac{1}{M}\sum_{m,m^{\prime}=1}^{M}\sum_{l=1}^{\infty}p_{L|\hat{M}}(l|m)|l\rangle\langle l|_{L}\otimes |m\rangle\langle m|_{\hat{M}}\\ \otimes\operatorname{Tr}_{B^{n}}\{\Lambda_{B^{n}}^{m^{\prime}}[\mathcal{U}^{\mathcal{N}}]^{\otimes n}(|\psi^{l,m}\rangle\langle\psi^{l,m}|_{A^{n}})\}\otimes|m^{\prime}\rangle\langle m^{\prime}|_{M^{\prime}}.\end{gathered}$$ Let $\overline{\rho}_{A}$ denote the average single-channel input state, defined as$$\overline{\rho}_{A}\equiv\frac{1}{Mn}\sum_{m=1}^{M}\sum_{i=1}^{n}\operatorname{Tr}_{A^{n}\backslash A_{i}}\{\rho_{A^{n}}^{m}\}. \label{eq:avg-input-state-conv-prf}$$ Applying the partial trace and the assumption in , it follows that$$\operatorname{Tr}\{G\overline{\rho}_{A}\}=\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\{\overline{G}_{n}\rho_{A^{n}}^{m}\}\leq P. 
\label{eq:conv-pf-energy-constr}$$ Let $\overline{\sigma}_{B}$ denote the average single-channel output state:$$\overline{\sigma}_{B}\equiv\mathcal{N}(\overline{\rho}_{A})=\frac{1}{n}\sum_{i=1}^{n}\operatorname{Tr}_{B^{n}\backslash B_{i}}\{\sigma_{B^{n}}\},$$ and let $\overline{\sigma}_{E}$ denote the average single-channel environment state:$$\overline{\sigma}_{E}\equiv\mathcal{\hat{N}}(\overline{\rho}_{A})=\frac{1}{n}\sum_{i=1}^{n}\operatorname{Tr}_{E^{n}\backslash E_{i}}\{\sigma_{E^{n}}\}.$$ It follows from non-negativity, subadditivity of entropy, concavity of entropy, , and the assumption that $G$ is a Gibbs observable that$$\begin{gathered} 0\leq H\left( \frac{1}{M}\sum_{m=1}^{M}\rho_{A^{n}}^{m}\right) \\ \leq\sum_{i=1}^{n}H\left( \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}_{A^{n}\backslash A_{i}}\{\rho_{A^{n}}^{m}\}\right) \\ \leq nH(\overline{\rho}_{A})<\infty.\end{gathered}$$ Similar reasoning but applying Condition \[cond:finite-out-entropy\] implies that$$0\leq H(B^{n})_{\sigma}\leq\sum_{i=1}^{n}H(B_{i})_{\sigma}\leq nH(B)_{\overline{\sigma}}<\infty.$$ Similar reasoning but applying Lemma \[lem:env-out-ent\] implies that$$0\leq H(E^{n})_{\sigma}\leq\sum_{i=1}^{n}H(E_{i})_{\sigma}\leq nH(E)_{\overline{\sigma}}<\infty.$$ Furthermore, the entropy $H(\hat{M})_{\sigma}=\log_{2}M$ because the reduced state $\sigma_{\hat{M}}$ is maximally mixed with dimension equal to $M$. Our analysis makes use of several other entropic quantities, each of which we need to argue is finitely bounded from above and below and thus can be added or subtracted at will in our analysis.
The quantities involved are as follows, along with bounds for them [@Lindblad1973; @K11; @S15]:$$\begin{aligned} 0 & \leq I(\hat{M};B^{n})_{\sigma}\leq\min\{\log_{2}M,nH(B)_{\overline {\sigma}}\},\\ 0 & \leq I(\hat{M};E^{n})_{\sigma}\leq\min\{\log_{2}M,nH(E)_{\overline {\sigma}}\},\\ 0 & \leq H(\hat{M}|E^{n})_{\sigma}\leq\log_{2}M,\end{aligned}$$ as well as$$\begin{gathered} 0\leq I(\hat{M}L;B^{n})_{\sigma},\ I(L;B^{n}|\hat{M})_{\sigma},\\ H(B^{n}|L\hat{M})_{\sigma}\leq nH(B)_{\overline{\sigma}},\end{gathered}$$ and$$\begin{gathered} 0\leq I(\hat{M}L;E^{n})_{\sigma},\ I(L;E^{n}|\hat{M})_{\sigma},\\ H(E^{n}|L\hat{M})_{\sigma}\leq nH(E)_{\overline{\sigma}}.\end{gathered}$$ We now proceed with the converse proof:$$\begin{aligned} \log_{2}M & =H(\hat{M})_{\sigma}\\ & =I(\hat{M};M^{\prime})_{\sigma}+H(\hat{M}|M^{\prime})_{\sigma}\\ & \leq I(\hat{M};M^{\prime})_{\sigma}+h_{2}(\varepsilon)+\varepsilon\log _{2}(M-1)\\ & \leq I(\hat{M};B^{n})_{\sigma}+h_{2}(\varepsilon)+\varepsilon\log_{2}M. \label{eq:1st-block-last-line}\end{aligned}$$ The first equality follows because the entropy of a uniform distribution is equal to the logarithm of its cardinality. The second equality is an identity. The first inequality follows from applying Fano’s inequality in  to the condition in . The second inequality follows from applying the Holevo bound [@Holevo73; @PhysRevLett.70.363].
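The Fano step above can be rearranged to isolate $\log_{2}M$; the sketch below checks the single-$\varepsilon$ rearrangement $\log_{2}M\leq[I(\hat{M};B^{n})+h_{2}(\varepsilon)]/(1-\varepsilon)$ on illustrative numbers (the mutual-information value is hypothetical):

```python
import math

def h2(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

# From log2 M <= I(M^;B^n) + h2(eps) + eps*log2 M (as in the chain above),
# one gets log2 M <= [I(M^;B^n) + h2(eps)] / (1 - eps) for eps < 1.
eps, I_MB = 0.01, 10.2                  # illustrative error and mutual information
log_M = 10.0                            # a hypothetical code size M = 2^10
assert log_M <= I_MB + h2(eps) + eps * log_M
assert log_M <= (I_MB + h2(eps)) / (1 - eps)
```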
The direct sum property of the trace distance and the security condition in imply that$$\begin{gathered} \frac{1}{2}\left\Vert \sigma_{\hat{M}E^{n}}-\pi_{\hat{M}}\otimes\omega_{E^{n}}\right\Vert _{1}\\ =\frac{1}{M}\sum_{m=1}^{M}\frac{1}{2}\left\Vert \mathcal{\hat{N}}^{\otimes n}(\rho_{A^{n}}^{m})-\omega_{E^{n}}\right\Vert _{1}\leq\varepsilon,\end{gathered}$$ which, by the AFW inequality in  for classical–quantum states, means that$$\left\vert H(\hat{M}|E^{n})_{\pi\otimes\omega}-H(\hat{M}|E^{n})_{\sigma }\right\vert \leq\varepsilon\log_{2}(M)+g(\varepsilon).$$ But$$\begin{aligned} & H(\hat{M}|E^{n})_{\pi\otimes\omega}-H(\hat{M}|E^{n})_{\sigma}\nonumber\\ & =H(\hat{M})_{\pi}-H(\hat{M}|E^{n})_{\sigma}\\ & =H(\hat{M})_{\sigma}-H(\hat{M}|E^{n})_{\sigma}\\ & =I(\hat{M};E^{n})_{\sigma},\end{aligned}$$ so then$$I(\hat{M};E^{n})_{\sigma}\leq\varepsilon\log_{2}(M)+g(\varepsilon). \label{eq:eve-holevo-upper}$$ Returning to  and inserting , we find that$$\begin{gathered} \log_{2}M\leq I(\hat{M};B^{n})_{\sigma}-I(\hat{M};E^{n})_{\sigma}\\ +2\varepsilon\log_{2}M+h_{2}(\varepsilon)+g(\varepsilon).\end{gathered}$$ We now focus on bounding the term $I(\hat{M};B^{n})_{\sigma}-I(\hat{M};E^{n})_{\sigma}$:$$\begin{aligned} & I(\hat{M};B^{n})_{\sigma}-I(\hat{M};E^{n})_{\sigma}\nonumber\\ & =I(\hat{M}L;B^{n})_{\sigma}-I(L;B^{n}|\hat{M})_{\sigma}\nonumber\\ & \qquad-\left[ I(\hat{M}L;E^{n})_{\sigma}-I(L;E^{n}|\hat{M})_{\sigma }\right] \\ & =I(\hat{M}L;B^{n})_{\sigma}-I(\hat{M}L;E^{n})_{\sigma}\nonumber\\ & \qquad-\left[ I(L;B^{n}|\hat{M})_{\sigma}-I(L;E^{n}|\hat{M})_{\sigma }\right] \\ & \leq I(\hat{M}L;B^{n})_{\sigma}-I(\hat{M}L;E^{n})_{\sigma}\\ & =H(B^{n})_{\sigma}-H(B^{n}|L\hat{M})_{\sigma}\nonumber\\ & \qquad-\left[ H(E^{n})_{\sigma}-H(E^{n}|L\hat{M})_{\sigma}\right] \\ & =H(B^{n})_{\sigma}-H(B^{n}|L\hat{M})_{\sigma}\nonumber\\ & \qquad-\left[ H(E^{n})_{\sigma}-H(B^{n}|L\hat{M})_{\sigma}\right] \\ & =H(B^{n})_{\sigma}-H(E^{n})_{\sigma}. 
\label{eq:second-block-last-line}$$ The first equality follows from the chain rule for mutual information. The second equality follows from a rearrangement. The first inequality follows from the assumption of degradability of the channel, which implies that Bob’s mutual information is never smaller than Eve’s: $I(L;B^{n}|\hat{M})_{\sigma }\geq I(L;E^{n}|\hat{M})_{\sigma}$. The third equality follows from definitions. The fourth equality follows because the marginal entropies of a pure state are equal, i.e.,$$\begin{aligned} & H(B^{n}|L\hat{M})_{\sigma}\nonumber\\ & =\frac{1}{M}\sum_{l,m}p_{L|\hat{M}}(l|m)H(\operatorname{Tr}_{E^{n}}\{[\mathcal{U}^{\mathcal{N}}]^{\otimes n}(|\psi^{l,m}\rangle\langle\psi ^{l,m}|_{A^{n}})\})\nonumber\\ & =\frac{1}{M}\sum_{l,m}p_{L|\hat{M}}(l|m)H(\operatorname{Tr}_{B^{n}}\{[\mathcal{U}^{\mathcal{N}}]^{\otimes n}(|\psi^{l,m}\rangle\langle\psi ^{l,m}|_{A^{n}})\})\nonumber\\ & =H(E^{n}|L\hat{M})_{\sigma}.\end{aligned}$$ Continuing, we have that$$\begin{aligned} \eqref{eq:second-block-last-line} & =H(B_{1})_{\sigma}-H(E_{1})_{\sigma }+H(B_{2}\cdots B_{n})_{\sigma}\nonumber\\ & \qquad-H(E_{1}\cdots E_{n})_{\sigma}\nonumber\\ & \qquad-\left[ I(B_{1};B_{2}\cdots B_{n})_{\sigma}-I(E_{1};E_{2}\cdots E_{n})_{\sigma}\right] \\ & \leq H(B_{1})_{\sigma}-H(E_{1})_{\sigma}\nonumber\\ & \qquad+H(B_{2}\cdots B_{n})_{\sigma}-H(E_{1}\cdots E_{n})_{\sigma}\\ & \leq\sum_{i=1}^{n}H(B_{i})_{\sigma}-H(E_{i})_{\sigma}\\ & \leq n\left[ H(B)_{\mathcal{U}(\overline{\rho})}-H(E)_{\mathcal{U}(\overline{\rho})}\right] \\ & \leq n\left[ \sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho))\right] .\end{aligned}$$ The first equality follows by exploiting the definition of mutual information. The first inequality follows from the assumption of degradability, which implies that $I(B_{1};B_{2}\cdots B_{n})_{\sigma}\geq I(E_{1};E_{2}\cdots E_{n})_{\sigma}$. The second inequality follows by iterating the argument. 
The third inequality follows from the concavity of the coherent information for degradable channels (Proposition \[prop:concave-degrad\]), with $\overline{\rho}_{A}$ defined as in and satisfying . The final inequality follows because we can optimize the coherent information with respect to all density operators satisfying the energy constraint. Putting everything together and assuming that $\varepsilon<1/2$, we find the following bound for all $\left( n,M,G,P,\varepsilon\right) $ private communication codes:$$\begin{gathered} \left( 1-2\varepsilon\right) \frac{1}{n}\log_{2}M-\frac{1}{n}\left[ h_{2}(\varepsilon)+g(\varepsilon)\right] \\ \leq\sup_{\rho:\operatorname{Tr}\{G\rho\}\leq P}H(\mathcal{N}(\rho ))-H(\mathcal{\hat{N}}(\rho)).\end{gathered}$$ Now taking the limit as $n\rightarrow\infty$ and then as $\varepsilon \rightarrow0$, we can conclude the inequality in . This concludes the proof. Thermal state as the optimizer ============================== In this section, we prove that the function$$\sup_{\operatorname{Tr}\{G\rho\}=P}H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho))$$ is optimized by a thermal state input if the channel $\mathcal{N}$ is degradable and satisfies certain other properties. In what follows, for a Gibbs observable $G$, we define the thermal state $\theta_{\beta}$ of inverse temperature $\beta>0$ as$$\theta_{\beta}\equiv\frac{e^{-\beta G}}{\operatorname{Tr}\{e^{-\beta G}\}}. \label{eq:thermal-state-beta-G}$$ \[thm:thermal-optimal-degrad\]Let $G$ be a Gibbs observable and $P\in\lbrack0,\infty)$. Let $\mathcal{N}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{B})$ be a degradable quantum channel satisfying Condition \[cond:finite-out-entropy\]. Let $\theta_{\beta}$ denote the thermal state of $G$, as in , satisfying $\operatorname{Tr}\{G\theta_{\beta}\}=P$ for some $\beta>0$.
Suppose that $\mathcal{N}$ and a complementary channel $\mathcal{\hat{N}}:\mathcal{T}(\mathcal{H}_{A})\rightarrow\mathcal{T}(\mathcal{H}_{E})$ are Gibbs preserving, in the sense that there exist $\beta_{1},\beta_{2}>0$ such that$$\mathcal{N}(\theta_{\beta})=\theta_{\beta_{1}},\qquad\mathcal{\hat{N}}(\theta_{\beta})=\theta_{\beta_{2}}.$$ Set$$P_{1}\equiv\operatorname{Tr}\{G\mathcal{N}(\theta_{\beta})\},\qquad P_{2}\equiv\operatorname{Tr}\{G\mathcal{\hat{N}}(\theta_{\beta})\}.$$ Suppose further that $\mathcal{N}$ and $\mathcal{\hat{N}}$ are such that, for all input states $\rho$ such that $\operatorname{Tr}\{G\rho\}=P$, the output energies satisfy$$\operatorname{Tr}\{G\mathcal{N}(\rho)\}\leq P_{1},\qquad\operatorname{Tr}\{G\mathcal{\hat{N}}(\rho)\}\geq P_{2}.$$ Then the function$$\sup_{\operatorname{Tr}\{G\rho\}=P}H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho))$$ is optimized by the thermal state $\theta_{\beta}$. Let $\mathcal{D}:\mathcal{T}(\mathcal{H}_{B})\rightarrow\mathcal{T}(\mathcal{H}_{E})$ be a degrading channel such that $\mathcal{D}\circ\mathcal{N}=\mathcal{\hat{N}}$. Consider a state $\rho$ such that $\operatorname{Tr}\{G\rho\}=P$. The monotonicity of quantum relative entropy with respect to quantum channels (see ) implies that$$\begin{aligned} D(\mathcal{N}(\rho)\Vert\mathcal{N}(\theta_{\beta})) & \geq D((\mathcal{D}\circ\mathcal{N})(\rho)\Vert(\mathcal{D}\circ\mathcal{N})(\theta_{\beta}))\\ & =D(\mathcal{\hat{N}}(\rho)\Vert\mathcal{\hat{N}}(\theta_{\beta})).\end{aligned}$$ By the assumption of the theorem, this means that$$D(\mathcal{N}(\rho)\Vert\theta_{\beta_{1}})\geq D(\mathcal{\hat{N}}(\rho )\Vert\theta_{\beta_{2}}),$$ where $\beta_{1}$ and $\beta_{2}$ are such that $\operatorname{Tr}\{G\theta_{\beta_{1}}\}=P_{1}$ and $\operatorname{Tr}\{G\theta_{\beta_{2}}\}=P_{2}$.
After a rewriting using definitions, the inequality above becomes$$\begin{gathered} \operatorname{Tr}\{\mathcal{\hat{N}}(\rho)\log\theta_{\beta_{2}}\}-\operatorname{Tr}\{\mathcal{N}(\rho)\log\theta_{\beta_{1}}\}\\ \geq H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho)).\end{gathered}$$ Set $Z_{1}\equiv\operatorname{Tr}\{e^{-\beta_{1}G}\}$ and $Z_{2}\equiv\operatorname{Tr}\{e^{-\beta_{2}G}\}$. We can then rewrite the upper bound as$$\begin{aligned} & \operatorname{Tr}\{\mathcal{\hat{N}}(\rho)\log\theta_{\beta_{2}}\}-\operatorname{Tr}\{\mathcal{N}(\rho)\log\theta_{\beta_{1}}\}\nonumber\\ & =\operatorname{Tr}\{\mathcal{\hat{N}}(\rho)\log\left[ e^{-\beta_{2}G}/Z_{2}\right] \}\nonumber\\ & \qquad-\operatorname{Tr}\{\mathcal{N}(\rho)\log\left[ e^{-\beta_{1}G}/Z_{1}\right] \}\\ & =\log\left[ Z_{1}/Z_{2}\right] -\beta_{2}\operatorname{Tr}\{G\mathcal{\hat{N}}(\rho)\}+\beta_{1}\operatorname{Tr}\{G\mathcal{N}(\rho)\}\\ & \leq\log\left[ Z_{1}/Z_{2}\right] -\beta_{2}P_{2}+\beta_{1}P_{1}.\end{aligned}$$ Thus, we have established a uniform upper bound on the coherent information of states subject to the constraints given in the theorem:$$H(\mathcal{N}(\rho))-H(\mathcal{\hat{N}}(\rho))\leq\log\left[ Z_{1}/Z_{2}\right] -\beta_{2}P_{2}+\beta_{1}P_{1}.$$ This bound is saturated when we choose the input $\rho=\theta_{\beta}$, where $\beta$ is such that $\operatorname{Tr}\{G\theta_{\beta}\}=P$, because$$\log\left[ Z_{1}/Z_{2}\right] -\beta_{2}P_{2}+\beta_{1}P_{1}=H(\mathcal{N}(\theta_{\beta}))-H(\mathcal{\hat{N}}(\theta_{\beta})).$$ This concludes the proof. Note that we can also conclude that $P_{1}\geq P_{2}$ for channels satisfying the hypotheses of the above theorem because the channel is degradable, implying that $H(\theta_{\beta_{1}})\geq H(\theta_{\beta_{2}})$, and the entropy of a thermal state is a strictly increasing function of the energy (and thus invertible) [@Winter15 Proposition 10]. 
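The monotonicity claim at the end of the remark — that the entropy of a thermal state is a strictly increasing function of the energy, so that energy determines the state — can be illustrated for a finite truncation of a single-mode Gibbs observable $G=\operatorname{diag}(0,\omega,2\omega,\dots)$. A minimal numerical sketch (all names are ours):

```python
import math

def thermal_probs(beta: float, omega: float = 1.0, cutoff: int = 400):
    """Occupation probabilities of theta_beta = exp(-beta*G)/Z for the
    truncated oscillator Gibbs observable G = diag(0, w, 2w, ...)."""
    weights = [math.exp(-beta * omega * n) for n in range(cutoff)]
    Z = sum(weights)
    return [w / Z for w in weights]

def energy(probs, omega: float = 1.0):
    """Mean energy Tr{G theta_beta}."""
    return sum(omega * n * p for n, p in enumerate(probs))

def entropy(probs):
    """Von Neumann entropy in bits (the state is diagonal)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Energy decreases with beta (so the map beta -> P is invertible), and the
# entropy increases with the energy, as used in the remark above.
p_hot, p_cold = thermal_probs(0.5), thermal_probs(1.0)
print(energy(p_hot), energy(p_cold))
print(entropy(p_hot), entropy(p_cold))
```

For $\omega=1$ the truncated mean energy agrees with the closed form $1/(e^{\beta}-1)$ to within the truncation error.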
Application to Gaussian quantum channels\[sec:Gaussian-results\] ================================================================ We can now apply all of the results from previous sections to the particular case of quantum bosonic Gaussian channels [@EW07; @WPGCRSL12]. These channels model natural physical processes such as photon loss, photon amplification, thermalizing noise, or random kicks in phase space. They satisfy Condition \[cond:finite-out-entropy\] when the Gibbs observable for $m$ modes is taken to be$$\hat{E}_{m}\equiv\sum_{j=1}^{m}\omega_{j}\hat{a}_{j}^{\dag}\hat{a}_{j}, \label{eq:photon-num-op-freqs}$$ where $\omega_{j}>0$ is the frequency of the $j$th mode and $\hat{a}_{j}$ is the photon annihilation operator for the $j$th mode, so that $\hat{a}_{j}^{\dag}\hat{a}_{j}$ is the photon number operator for the $j$th mode. We start with a brief review of Gaussian states and channels (see [@EW07; @WPGCRSL12; @PLOB15] for more comprehensive reviews, but note that here we mostly follow the conventions of [@EW07]). Let$$\hat{x}\equiv\left[ \hat{q}_{1},\ldots,\hat{q}_{m},\hat{p}_{1},\ldots,\hat {p}_{m}\right] \equiv\left[ \hat{x}_{1},\ldots,\hat{x}_{2m}\right]$$ denote a vector of position- and momentum-quadrature operators, satisfying the canonical commutation relations:$$\left[ \hat{x}_{j},\hat{x}_{k}\right] =i\Omega_{j,k},\quad\text{where}\quad\Omega\equiv\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix} \otimes I_{m},$$ and $I_{m}$ denotes the $m\times m$ identity matrix. We take the annihilation operator for the $j$th mode as $\hat{a}_{j}=(\hat{q}_{j}+i\hat{p}_{j})/\sqrt{2}$. For $\xi\in\mathbb{R}^{2m}$, we define the unitary displacement operator $D(\xi)\equiv\exp(i\xi^{T}\Omega\hat{x})$. 
Displacement operators satisfy the following relation:$$D(\xi)^{\dag}D(\xi^{\prime})=D(\xi^{\prime})D(\xi)^{\dag}\exp(i\xi^{T}\Omega\xi^{\prime}).$$ Every state $\rho\in\mathcal{D}(\mathcal{H})$ has a corresponding Wigner characteristic function, defined as$$\chi_{\rho}(\xi)\equiv\operatorname{Tr}\{D(\xi)\rho\},$$ and from which we can obtain the state $\rho$ as$$\rho=\frac{1}{\left( 2\pi\right) ^{m}}\int d^{2m}\xi\ \chi_{\rho}(\xi)\ D^{\dag}(\xi).$$ A quantum state $\rho$ is Gaussian if its Wigner characteristic function has a Gaussian form as$$\chi_{\rho}(\xi)=\exp\left( -\frac{1}{4}\left[ \Omega\xi\right] ^{T}V^{\rho}\Omega\xi+\left[ \Omega\mu^{\rho}\right] ^{T}\xi\right) ,$$ where $\mu^{\rho}$ is the $2m\times1$ mean vector of $\rho$, whose entries are defined by $\mu_{j}^{\rho}\equiv\left\langle \hat{x}_{j}\right\rangle _{\rho}$ and $V^{\rho}$ is the $2m\times2m$ covariance matrix of $\rho$, whose entries are defined as$$V_{j,k}^{\rho}\equiv\langle\{\hat{x}_{j}-\mu_{j}^{\rho},\hat{x}_{k}-\mu _{k}^{\rho}\}\rangle.$$ The following condition holds for a valid covariance matrix: $V+i\Omega\geq0$, which is a manifestation of the uncertainty principle. A thermal Gaussian state $\theta_{\beta}$ of $m$ modes with respect to $\hat{E}_{m}$ from  and having inverse temperature $\beta>0$ thus has the following form:$$\theta_{\beta}=e^{-\beta\hat{E}_{m}}/\operatorname{Tr}\{e^{-\beta\hat{E}_{m}}\}, \label{eq:thermal-E-m-op}$$ and has a mean vector equal to zero and a diagonal $2m\times2m$ covariance matrix. One can calculate that the photon number in this state is equal to$$\sum_{j}\frac{1}{e^{\beta\omega_{j}}-1}.$$ It is also well known that thermal states can be written as a Gaussian mixture of displacement operators acting on the vacuum state:$$\theta_{\beta}=\int d^{2m}\xi\ p(\xi)\ D(\xi)\left[ |0\rangle\langle 0|\right] ^{\otimes m}D^{\dag}(\xi),$$ where $p(\xi)$ is a zero-mean, circularly symmetric Gaussian distribution. 
From this, it also follows that randomly displacing a thermal state in such a way leads to another thermal state of higher temperature:$$\theta_{\beta}=\int d^{2m}\xi\ q(\xi)\ D(\xi)\theta_{\beta^{\prime}}D^{\dag }(\xi), \label{eq:displaced-thermal-is-thermal}$$ where $\beta^{\prime}\geq\beta$ and $q(\xi)$ is a particular circularly symmetric Gaussian distribution. A $2m\times2m$ matrix $S$ is symplectic if it preserves the symplectic form: $S\Omega S^{T}=\Omega$. According to Williamson’s theorem [@W36], there is a diagonalization of the covariance matrix $V^{\rho}$ of the form, $$V^{\rho}=S^{\rho}\left( D^{\rho}\oplus D^{\rho}\right) \left( S^{\rho }\right) ^{T},$$ where $S^{\rho}$ is a symplectic matrix and $D^{\rho}\equiv\operatorname{diag}(\nu_{1},\ldots,\nu_{m})$ is a diagonal matrix of symplectic eigenvalues such that $\nu_{i}\geq1$ for all $i\in\left\{ 1,\ldots,m\right\} $. Computing this decomposition is equivalent to diagonalizing the matrix $iV^{\rho}\Omega$ [@WTLB16 Appendix A]. The entropy $H(\rho)$ of a quantum Gaussian state $\rho$ is a direct function of the symplectic eigenvalues of its covariance matrix $V^{\rho}$ [@EW07]:$$H(\rho)=\sum_{j=1}^{m}g((\nu_{j}-1)/2)\equiv g(V^{\rho}),$$ where $g(\cdot)$ is defined in  and we have indicated a shorthand for this entropy as $g(V^{\rho})$. 
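For a single mode, the Williamson decomposition reduces to a scalar statement: the symplectic eigenvalue of a $2\times2$ covariance matrix $V$ is $\nu=\sqrt{\det V}$. The sketch below assumes the standard form $g(x)=(x+1)\log_{2}(x+1)-x\log_{2}x$ for the function $g$ defined earlier, and checks that a thermal state with mean photon number $N$, whose covariance matrix is $(2N+1)I$, has entropy $g(N)$ (helper names are ours):

```python
import math

def g(x: float) -> float:
    """Thermal entropy function (standard form, assumed here):
    g(x) = (x+1)log2(x+1) - x log2 x."""
    return (x + 1) * math.log2(x + 1) - (x * math.log2(x) if x > 0 else 0.0)

def symplectic_eigenvalue(V):
    """For a single mode the symplectic eigenvalue is sqrt(det V)."""
    (a, b), (c, d) = V
    return math.sqrt(a * d - b * c)

def gaussian_entropy(V) -> float:
    """Entropy of a single-mode Gaussian state: g((nu - 1)/2)."""
    nu = symplectic_eigenvalue(V)
    return g((nu - 1) / 2)

# Thermal state with mean photon number N: V = (2N+1) I, entropy g(N).
N = 2.0
V_th = [[2 * N + 1, 0.0], [0.0, 2 * N + 1]]
print(gaussian_entropy(V_th))
```

The vacuum ($N=0$, $V=I$) correctly gives $\nu=1$ and zero entropy.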
A Gaussian quantum channel $\mathcal{N}_{X,Y}$ from $m$ modes to $m$ modes has the following effect on a displacement operator $D(\xi)$ [@EW07]:$$D(\xi)\longmapsto D(X\xi)\exp\left( -\frac{1}{2}\xi^{T}Y\xi+i\xi^{T}\Omega d\right) ,$$ where $X$ is a real $2m\times2m$ matrix, $Y$ is a real $2m\times2m$ positive semi-definite matrix, and $d\in\mathbb{R}^{2m}$, such that they satisfy$$Y+i\Omega-iX\Omega X^{T}\geq0.$$ The effect of the channel on the mean vector $\mu^{\rho}$ and the covariance matrix $V^{\rho}$ is thus as follows:$$\begin{aligned} \mu^{\rho} & \longmapsto X\mu^{\rho}+d,\\ V^{\rho} & \longmapsto XV^{\rho}X^{T}+Y.\end{aligned}$$ All Gaussian channels are covariant with respect to displacement operators. That is, the following relation holds$$\mathcal{N}_{X,Y}(D(\xi)\rho D^{\dag}(\xi))=D(X\xi)\mathcal{N}_{X,Y}(\rho)D^{\dag}(X\xi). \label{eq:covariance-gaussian}$$ Just as every quantum channel can be implemented as a unitary transformation on a larger space followed by a partial trace, so can Gaussian channels be implemented as a Gaussian unitary on a larger space with some extra modes prepared in the vacuum state, followed by a partial trace [@CEGH08]. Given a Gaussian channel $\mathcal{N}_{X,Y}$ with $Z$ such that $Y=ZZ^{T}$ we can find two other matrices $X_{E}$ and $Z_{E}$ such that there is a symplectic matrix $$S=\begin{bmatrix} X & Z\\ X_{E} & Z_{E}\end{bmatrix} , \label{eq:gaussian-dilation}$$ which corresponds to the Gaussian unitary transformation on a larger space. The complementary channel $\mathcal{\hat{N}}_{X_{E},Y_{E}}$ from input to the environment then effects the following transformation on mean vectors and covariance matrices:$$\begin{aligned} \mu^{\rho} & \longmapsto X_{E}\mu^{\rho},\\ V^{\rho} & \longmapsto X_{E}V^{\rho}X_{E}^{T}+Y_{E},\end{aligned}$$ where $Y_{E}\equiv Z_{E}Z_{E}^{T}$. 
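The covariance-matrix action $V\mapsto XVX^{T}+Y$ can be illustrated with the single-mode pure-loss channel, for which $X=\sqrt{\eta}I_{2}$ and $Y=(1-\eta)I_{2}$: a thermal input with mean photon number $N$ stays thermal, with mean photon number $\eta N$. A minimal sketch (helper names are ours):

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def channel_on_covariance(V, X, Y):
    """Gaussian channel action on a covariance matrix: V -> X V X^T + Y."""
    Xt = [[X[j][i] for j in range(2)] for i in range(2)]
    XVXt = mat_mul(mat_mul(X, V), Xt)
    return [[XVXt[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

# Pure-loss channel on a thermal input: (2N+1) I  ->  (2*eta*N + 1) I.
eta, N = 0.7, 3.0
X = [[math.sqrt(eta), 0.0], [0.0, math.sqrt(eta)]]
Y = [[1 - eta, 0.0], [0.0, 1 - eta]]
V_in = [[2 * N + 1, 0.0], [0.0, 2 * N + 1]]
V_out = channel_on_covariance(V_in, X, Y)
print(V_out)
```

The output covariance $\eta(2N+1)+(1-\eta)=2\eta N+1$ confirms the thermal-to-thermal behavior used repeatedly below.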
A quantum Gaussian channel for which $X=X^{\prime}\oplus X^{\prime}$, $Y=Y^{\prime}\oplus Y^{\prime}$, and $d=d^{\prime}\oplus d^{\prime}$ is known as a phase-insensitive Gaussian channel, because it does not have a bias to either quadrature when applying noise to the input state. The main result of this section is the following theorem, which gives an explicit expression for the energy-constrained capacities of all phase-insensitive degradable Gaussian channels that satisfy the conditions of Theorem \[thm:thermal-optimal-degrad\] for all $\beta>0$: \[thm:PI-Gauss-degrad-caps\]Let $\mathcal{N}_{X,Y}$ be a phase-insensitive degradable Gaussian channel, having a dilation of the form in . Suppose that $\mathcal{N}_{X,Y}$ satisfies the conditions of Theorem \[thm:thermal-optimal-degrad\] for all $\beta>0$. Then its energy-constrained capacities $Q(\mathcal{N}_{X,Y},\hat{E}_{m},P)$, $E(\mathcal{N}_{X,Y},\hat{E}_{m},P)$, $P(\mathcal{N}_{X,Y},\hat{E}_{m},P)$, and $K(\mathcal{N}_{X,Y},\hat{E}_{m},P)$ are equal and given by the following formula:$$g(XV^{\theta_{\beta}}X^{T}+Y)-g(X_{E}V^{\theta_{\beta}}X_{E}^{T}+Y_{E}),$$ where $\theta_{\beta}$ is a thermal state of mean photon number $P$. 
Since the channel is degradable, satisfies Condition \[cond:finite-out-entropy\], and $\hat{E}_{m}$ is a Gibbs observable, Theorem \[thm:-energy-constr-q-p-cap\] applies and these capacities are given by the following formula:$$\sup_{\rho:\operatorname{Tr}\{\hat{E}_{m}\rho\}\leq P}H(\mathcal{N}_{X,Y}(\rho))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\rho)).$$ By assumption, the channel satisfies the conditions of Theorem \[thm:thermal-optimal-degrad\] as well for all $\beta>0$, so that the following function is optimized by a thermal state $\theta_{\beta}$ of mean photon number $P$:$$\begin{gathered} \sup_{\rho:\operatorname{Tr}\{\hat{E}_{m}\rho\}=P}H(\mathcal{N}_{X,Y}(\rho))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\rho))\\ =H(\mathcal{N}_{X,Y}(\theta_{\beta}))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta})).\end{gathered}$$ It thus remains to prove that $H(\mathcal{N}_{X,Y}(\theta_{\beta }))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta}))$ is increasing with decreasing $\beta$. This follows from the covariance property in , the concavity of coherent information in the input for degradable channels (Proposition \[prop:concave-degrad\]), and the fact that thermal states can be realized by random Gaussian displacements of thermal states with lower temperature. 
Consider that$$\begin{aligned} & H(\mathcal{N}_{X,Y}(\theta_{\beta^{\prime}}))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta^{\prime}}))\nonumber\\ & =\int d^{2m}\xi\ q(\xi)\ \left[ H(\mathcal{N}_{X,Y}(\theta_{\beta^{\prime }}))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta^{\prime}}))\right] \\ & =\int d^{2m}\xi\ q(\xi)\ \Big[H(D(X\xi)\mathcal{N}_{X,Y}(\theta _{\beta^{\prime}})D^{\dag}(X\xi))\nonumber\\ & \qquad-H(D(X_{E}\xi)\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta^{\prime}})D^{\dag}(X_{E}\xi))\Big]\\ & =\int d^{2m}\xi\ q(\xi)\ \Big[H(\mathcal{N}_{X,Y}(D(\xi)\theta _{\beta^{\prime}}D^{\dag}(\xi)))\nonumber\\ & \qquad-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(D(\xi)\theta_{\beta^{\prime}}D^{\dag}(\xi)))\Big]\\ & \leq H(\mathcal{N}_{X,Y}(\theta_{\beta}))-H(\mathcal{\hat{N}}_{X_{E},Y_{E}}(\theta_{\beta})).\end{aligned}$$ The first equality follows by placing a probability distribution in front, and the second follows from the unitary invariance of quantum entropy. The third equality follows from the covariance property of quantum Gaussian channels, given in . The inequality follows because degradable channels are concave in the input state (Proposition \[prop:concave-degrad\]) and from . Special cases: Pure-loss and quantum-limited amplifier channels --------------------------------------------------------------- We can now discuss some special cases of the above result, some of which have already been known in the literature. Suppose that the channel is a single-mode pure-loss channel $\mathcal{L}_{\eta}$, where $\eta\in\left[ 1/2,1\right] $ characterizes the average fraction of photons that make it through the channel from sender to receiver [^1]. In this case, the channel has $X=\sqrt{\eta}I_{2}$ and $Y=(1-\eta)I_{2}$. We take the Gibbs observable to be the photon-number operator $\hat{a}^{\dag}\hat{a}$ and the energy constraint to be $N_{S}\in\lbrack0,\infty)$. 
Such a channel is degradable [@CG06] and was conjectured [@GSE08] to have energy-constrained quantum and private capacities equal to$$g(\eta N_{S})-g((1-\eta)N_{S}). \label{eq:q-cap-loss}$$ This conjecture was proven for the quantum capacity in [@PhysRevA.86.062306], and the present paper establishes the statement for the private capacity. The former result was argued by exploiting particular properties of the $g$ function (established in great detail in [@G08thesis]) to show that the thermal state input is optimal for any fixed energy constraint. Here we can see this latter result as a consequence of the more general statements in Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\], which are based on the monotonicity of relative entropy and other properties of this channel, such as covariance and degradability. Taking the limit $N_{S}\rightarrow\infty$, the formula in  converges to$$\log_{2}(\eta/[1-\eta]), \label{eq:loss-unconstrained}$$ which is consistent with the formula stated in [@WPG07]. Suppose that the channel is a single-mode quantum-limited amplifier channel $\mathcal{A}_{\kappa}$ of gain $\kappa\geq1$. In this case, the channel has $X=\sqrt{\kappa}I_{2}$ and $Y=(\kappa-1)I_{2}$. Again we take the energy operator and constraint as above. This channel is degradable [@CG06] and was recently proven [@QW16] to have energy-constrained quantum and private capacity equal to$$g(\kappa N_{S}+\kappa-1)-g(\left[ \kappa-1\right] \left[ N_{S}+1\right] ).$$ The result was established by exploiting particular properties of the $g$ function in addition to other arguments. However, we can again see this result as a consequence of the more general statements given in Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\].
Taking the limit $N_{S}\rightarrow\infty$, the formula converges to$$\log_{2}(\kappa/\left[ \kappa-1\right] ), \label{eq:amp-unconstrained}$$ which is consistent with the formula stated in [@WPG07] and recently proven in [@PLOB15]. \[rem:WPG07\]Ref. [@WPG07] has been widely accepted to have provided a complete proof of the unconstrained quantum capacity formulas given in and . The important developments of [@WPG07] were to identify that it suffices to optimize coherent information of these channels with respect to a single channel use and Gaussian input states. The issue is that [@WPG07] relied on an optimization procedure carried out in [@HW01] in order to establish the infinite-energy quantum capacity formula given there (see just before [@WPG07 Eq. (12)]). However, a careful inspection of [@HW01 Section V-B] reveals that no explicit optimization procedure is given there. The contentious point is that it is necessary to show that, among all Gaussian states, the thermal state is the input state optimizing the coherent information of the quantum-limited attenuator and amplifier channels. This point is not argued or in any way justified in [@HW01 Section V-B] or in any subsequent work or review on the topic [@Holevo2007; @CGH06; @HG12; @H12]. As a consequence, we have been left to conclude that the proof from [@WPG07] features a gap which was subsequently closed in [@PhysRevA.86.062306 Section III-G-1] and [@QW16]. The result in [@PLOB15 Eq. (21)] gives a completely different approach for establishing the unconstrained quantum and private capacities of the quantum-limited amplifier channel, which preceded the development in [@QW16]. Our results from Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] allow for making more general statements, applicable to broadband scenarios considered in prior works for other capacities [@GLMS03; @GGLMSY04; @Guha04]. 
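Both constrained capacity formulas and their unconstrained limits are straightforward to evaluate numerically. A minimal sketch, again assuming the standard form $g(x)=(x+1)\log_{2}(x+1)-x\log_{2}x$ (function names are ours):

```python
import math

def g(x: float) -> float:
    """g(x) = (x+1)log2(x+1) - x log2 x (standard form, assumed here)."""
    return (x + 1) * math.log2(x + 1) - (x * math.log2(x) if x > 0 else 0.0)

def loss_capacity(eta: float, NS: float) -> float:
    """Energy-constrained quantum/private capacity of the pure-loss channel."""
    return g(eta * NS) - g((1 - eta) * NS)

def amp_capacity(kappa: float, NS: float) -> float:
    """Energy-constrained quantum/private capacity of the quantum-limited
    amplifier channel."""
    return g(kappa * NS + kappa - 1) - g((kappa - 1) * (NS + 1))

# As NS grows the formulas approach the unconstrained capacities
# log2(eta/(1-eta)) and log2(kappa/(kappa-1)), respectively.
print(loss_capacity(0.8, 1e6), math.log2(0.8 / 0.2))
print(amp_capacity(2.0, 1e6), math.log2(2.0 / 1.0))
```

At $\eta=1/2$ the pure-loss capacity vanishes for every energy constraint, consistent with the channel becoming antidegradable below that transmissivity.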
Let the Gibbs observable be $\hat{E}_{m}$, as given in , and suppose that the energy constraint is $P\in\lbrack0,\infty)$. Suppose that the channel is an $m$-mode channel consisting of $m$ parallel pure-loss channels $\mathcal{L}_{\eta}$, each with the same transmissivity $\eta\in\left[ 1/2,1\right] $. Then for $\hat{E}_{m}$ and such an $m$-mode channel, the conditions of Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] are satisfied, so that the energy-constrained quantum and private capacities are given by$$\sum_{j=1}^{m}g(\eta N_{j}(\beta))-g((1-\eta)N_{j}(\beta)),$$ where$$N_{j}(\beta)\equiv1/(e^{\beta\omega_{j}}-1),$$ and $\beta$ is chosen such that $P=\sum_{j=1}^{m}N_{j}(\beta)$, so that the energy constraint is satisfied. A similar statement applies to $m$ parallel quantum-limited amplifier channels each having the same gain $\kappa\geq1$. In this case, the conditions of Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] are satisfied, so that the energy-constrained quantum and private capacities are given by$$\sum_{j=1}^{m}g(\kappa N_{j}(\beta)+\kappa-1)-g(\left[ \kappa-1\right] \left[ N_{j}(\beta)+1\right] ),$$ where $N_{j}(\beta)$ is as defined above and $\beta$ is chosen to satisfy $P=\sum_{j=1}^{m}N_{j}(\beta)$. Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] can be applied indirectly to a more general scenario. Let $m=k+l$, where $k$ and $l$ are positive integers. Suppose that the channel consists of $k$ pure-loss channels $\mathcal{L}_{\eta_{i}}$, each of transmissivity $\eta_{i}\in\lbrack1/2,1]$, and $l$ quantum-limited amplifier channels $\mathcal{A}_{\kappa_{j}}$, each of gain $\kappa_{j}$ for $j\in\left\{ 1,\ldots,l\right\} $. In this scenario, Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] apply to the individual channels, so that we know that a thermal state is the optimal input to each of them for a fixed input energy.
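In the parallel-channel formulas, the inverse temperature $\beta$ is fixed implicitly by the constraint $P=\sum_{j}N_{j}(\beta)$; since the total mean photon number is strictly decreasing in $\beta$, it can be found by bisection. A minimal sketch for the parallel pure-loss case (all names are ours):

```python
import math

def mean_photons(beta, omegas):
    """Sum of N_j(beta) = 1/(exp(beta*omega_j) - 1) over the modes."""
    return sum(1.0 / (math.exp(beta * w) - 1.0) for w in omegas)

def solve_beta(P, omegas, lo=1e-6, hi=50.0, iters=200):
    """Bisection for beta with sum_j N_j(beta) = P; mean_photons is
    strictly decreasing in beta, so the root is unique."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_photons(mid, omegas) > P:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g(x):
    """g(x) = (x+1)log2(x+1) - x log2 x (standard form, assumed here)."""
    return (x + 1) * math.log2(x + 1) - (x * math.log2(x) if x > 0 else 0.0)

def broadband_loss_capacity(eta, P, omegas):
    """Capacity of parallel pure-loss channels under total constraint P."""
    beta = solve_beta(P, omegas)
    Ns = [1.0 / (math.exp(beta * w) - 1.0) for w in omegas]
    return sum(g(eta * N) - g((1 - eta) * N) for N in Ns)

omegas = [1.0, 1.5, 2.0]
print(broadband_loss_capacity(0.8, 5.0, omegas))
```

The same root-finding step applies verbatim to the parallel amplifier case, with only the per-mode capacity expression changed.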
The task is then to determine how to allocate the energy such that the resulting capacity is optimal. Let $P$ denote the total energy budget, and suppose that a particular allocation $\{\{N_{i}\}_{i=1}^{k},\{M_{j}\}_{j=1}^{l}\}$ is made such that$$P=\sum_{i=1}^{k}\omega_{i}N_{i}+\sum_{j=1}^{l}\omega_{k+j}M_{j}.$$ Then Theorems \[thm:thermal-optimal-degrad\] and \[thm:PI-Gauss-degrad-caps\] apply to the scenario when the allocation is fixed and imply that the resulting quantum and private capacities are equal and given by$$\begin{gathered} \sum_{i=1}^{k}g(\eta_{i}N_{i})-g((1-\eta_{i})N_{i})\\ +\sum_{j=1}^{l}g(\kappa_{j}M_{j}+\kappa_{j}-1)-g(\left[ \kappa_{j}-1\right] \left[ M_{j}+1\right] ).\end{gathered}$$ However, we can then optimize this expression with respect to the energy allocation, leading to the following constrained optimization problem:$$\begin{gathered} \max_{\{\{N_{i}\}_{i=1}^{k},\{M_{j}\}_{j=1}^{l}\}}\sum_{i=1}^{k}g(\eta _{i}N_{i})-g((1-\eta_{i})N_{i})\\ +\sum_{j=1}^{l}g(\kappa_{j}M_{j}+\kappa_{j}-1)-g(\left[ \kappa_{j}-1\right] \left[ M_{j}+1\right] ),\end{gathered}$$ such that$$P=\sum_{i=1}^{k}\omega_{i}N_{i}+\sum_{j=1}^{l}\omega_{k+j}M_{j}.$$ This problem can be approached using Lagrange multiplier methods; some cases can be handled analytically, while others must be treated numerically. Many different scenarios were considered already in [@GLMS03], to which we point the interested reader. However, we should note that [@GLMS03] was developed when the formulas above were only conjectured to be equal to the capacity and not proven to be so. Conclusion\[sec:conclusion\] ============================ This paper has provided a general theory of energy-constrained quantum and private communication over quantum channels. We defined several communication tasks (Section \[sec:energy-constrained-caps\]), and then established ways of converting a code for one task to that of another task (Section \[sec:code-conversions\]).
These code conversions have implications for capacities, establishing non-trivial relations between them (Section \[sec:cap-imps\]). We showed that the regularized, energy-constrained coherent information is achievable for entanglement transmission with an average energy constraint, under the assumption that the energy observable is of the Gibbs form (Definition \[def:Gibbs-obs\]) and the channel satisfies the finite-output entropy condition (Condition \[cond:finite-out-entropy\]). We then proved that the various quantum and private capacities of degradable channels are equal and characterized by the single-letter, energy-constrained coherent information (Section \[sec:degradable-channels\]). We finally applied our results to Gaussian channels and recovered some results already known in the literature in addition to establishing new ones. Going forward from here, a great challenge is to establish a general theory of energy-constrained private and quantum communication with a limited number of channel uses. Recent progress in these scenarios without energy constraints [@TBR15; @WTB16] suggests that this might be amenable to analysis. Another question is to identify and explore other physical systems, beyond bosonic channels, to which the general framework could apply. It could be interesting to explore generalizations of the results and settings from [@B05; @PhysRevLett.95.260503; @PhysRevA.75.022331; @G13; @GG16]. A more particular question we would like to see answered is whether concavity of coherent information of degradable channels could hold in settings beyond that considered in Proposition \[prop:concave-degrad\]. We suspect that an approximation argument along the lines of that given in the proof of [@HS10 Proposition 1] should make this possible, but we leave this for future endeavors. We are grateful to Saikat Guha for discussions related to this paper. MMW acknowledges the NSF under Award No. CCF-1350397.
HQ is supported by the Air Force Office of Scientific Research, the Army Research Office and the National Science Foundation. Minimum fidelity and minimum entanglement fidelity ================================================== The following proposition states that a quantum code with good minimum fidelity also has good minimum entanglement fidelity, with negligible loss in parameters. This was first established in [@BKN98] and reviewed in [@KW04]. Here we follow the proof available in [@Wat16], which establishes a relation between the trace distance and the diamond distance of an arbitrary channel from the identity channel. \[prop:min-fid-to-min-ent-fid\]Let $\mathcal{C}:\mathcal{T}(\mathcal{H})\rightarrow\mathcal{T}(\mathcal{H})$ be a quantum channel with finite-dimensional input and output. Let $\mathcal{H}^{\prime}$ be a Hilbert space isomorphic to $\mathcal{H}$. If$$\min_{|\phi\rangle\in\mathcal{H}}\langle\phi|\mathcal{C}(|\phi\rangle \langle\phi|)|\phi\rangle\geq1-\varepsilon, \label{eq:min-fid-starting-pnt}$$ then$$\min_{|\psi\rangle\in\mathcal{H}^{\prime}\otimes\mathcal{H}}\langle \psi|(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes\mathcal{C})(|\psi \rangle\langle\psi|)|\psi\rangle\geq1-2\sqrt{\varepsilon},$$ where the optimizations are with respect to state vectors. The inequality in implies that the following inequality holds for all state vectors $|\phi\rangle\in\mathcal{H}$:$$\langle\phi|\left[ |\phi\rangle\langle\phi|-\mathcal{C}(|\phi\rangle \langle\phi|)\right] |\phi\rangle\leq\varepsilon.$$ By the inequalities in , this implies that$$\left\Vert |\phi\rangle\langle\phi|-\mathcal{C}(|\phi\rangle\langle \phi|)\right\Vert _{1}\leq2\sqrt{\varepsilon}, \label{eq:min-to-ent-t-norm-bnd}$$ for all state vectors $|\phi\rangle\in\mathcal{H}$.
We will show that$$\left\vert \langle\phi|\left[ |\phi\rangle\langle\phi^{\bot}|-\mathcal{C}(|\phi\rangle\langle\phi^{\bot}|)\right] |\phi^{\bot}\rangle\right\vert \leq2\sqrt{\varepsilon}, \label{eq:ortho-pairs-fid-bnd}$$ for every orthonormal pair $\left\{ |\phi\rangle,|\phi^{\bot}\rangle\right\} $ of state vectors in $\mathcal{H}$. Set$$|w_{k}\rangle\equiv\frac{|\phi\rangle+i^{k}|\phi^{\bot}\rangle}{\sqrt{2}}$$ for $k\in\{0,1,2,3\}$. Then it follows that$$|\phi\rangle\langle\phi^{\bot}|=\frac{1}{2}\sum_{k=0}^{3}i^{k}|w_{k}\rangle\langle w_{k}|. \label{eq:ortho-pairs-char}$$ Consider now that$$\begin{aligned} & \left\vert \langle\phi|\left[ |\phi\rangle\langle\phi^{\bot}|-\mathcal{C}(|\phi\rangle\langle\phi^{\bot}|)\right] |\phi^{\bot}\rangle\right\vert \nonumber\\ & \leq\left\Vert |\phi\rangle\langle\phi^{\bot}|-\mathcal{C}(|\phi \rangle\langle\phi^{\bot}|)\right\Vert _{\infty}\\ & \leq\frac{1}{2}\sum_{k=0}^{3}\left\Vert |w_{k}\rangle\langle w_{k}|-\mathcal{C}(|w_{k}\rangle\langle w_{k}|)\right\Vert _{\infty}\\ & \leq\frac{1}{4}\sum_{k=0}^{3}\left\Vert |w_{k}\rangle\langle w_{k}|-\mathcal{C}(|w_{k}\rangle\langle w_{k}|)\right\Vert _{1}\\ & \leq2\sqrt{\varepsilon}.\end{aligned}$$ The first inequality follows from the characterization of the operator norm as $\left\Vert A\right\Vert _{\infty}=\sup_{|\phi\rangle,|\psi\rangle}\left\vert \langle\phi|A|\psi\rangle\right\vert $, where the optimization is with respect to state vectors $|\phi\rangle$ and $|\psi\rangle$. The second inequality follows from substituting and applying the triangle inequality and homogeneity of the $\infty$-norm. The third inequality follows because the $\infty$-norm of a traceless Hermitian operator is bounded from above by half of its trace norm [@AE05 Lemma 4]. The final inequality follows from applying . Let $|\psi\rangle\in\mathcal{H}^{\prime}\otimes\mathcal{H}$ be an arbitrary state vector.
All such state vectors have a Schmidt decomposition of the following form:$$|\psi\rangle=\sum_{x}\sqrt{p(x)}|\zeta_{x}\rangle\otimes|\varphi_{x}\rangle,$$ where $\{p(x)\}_{x}$ is a probability distribution and $\{|\zeta_{x}\rangle\}_{x}$ and $\{|\varphi_{x}\rangle\}_{x}$ are orthonormal sets, respectively. Then consider that$$\begin{aligned} & 1-\langle\psi|(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes \mathcal{C})(|\psi\rangle\langle\psi|)|\psi\rangle\nonumber\\ & =\langle\psi|(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes \operatorname{id}_{\mathcal{H}}-\operatorname{id}_{\mathcal{H}^{\prime}}\otimes\mathcal{C})(|\psi\rangle\langle\psi|)|\psi\rangle\nonumber\\ & =\langle\psi|(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes\left[ \operatorname{id}_{\mathcal{H}}-\mathcal{C}\right] )(|\psi\rangle\langle \psi|)|\psi\rangle\nonumber\\ & =\sum_{x,y}p(x)p(y)\langle\varphi_{x}|\left[ |\varphi_{x}\rangle \langle\varphi_{y}|-\mathcal{C}(|\varphi_{x}\rangle\langle\varphi _{y}|)\right] |\varphi_{y}\rangle.\end{aligned}$$ Now applying the triangle inequality and , we find that$$\begin{aligned} & 1-\langle\psi|(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes \mathcal{C})(|\psi\rangle\langle\psi|)|\psi\rangle\nonumber\\ & =\left\vert \sum_{x,y}p(x)p(y)\langle\varphi_{x}|\left[ |\varphi _{x}\rangle\langle\varphi_{y}|-\mathcal{C}(|\varphi_{x}\rangle\langle \varphi_{y}|)\right] |\varphi_{y}\rangle\right\vert \nonumber\\ & \leq\sum_{x,y}p(x)p(y)\left\vert \langle\varphi_{x}|\left[ |\varphi _{x}\rangle\langle\varphi_{y}|-\mathcal{C}(|\varphi_{x}\rangle\langle \varphi_{y}|)\right] |\varphi_{y}\rangle\right\vert \nonumber\\ & \leq2\sqrt{\varepsilon}.\end{aligned}$$ This concludes the proof. [^1]: We do not consider transmissivities $\eta\in\left[ 0,1/2\right] $ because the quantum capacity vanishes in this range since the channel becomes antidegradable.
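The decomposition $|\phi\rangle\langle\phi^{\bot}|=\frac{1}{2}\sum_{k=0}^{3}i^{k}|w_{k}\rangle\langle w_{k}|$ used in the proof above is purely algebraic and can be checked numerically; a minimal sketch (the dimension $d=4$ and the random seed are arbitrary illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # arbitrary dimension

# random orthonormal pair {|phi>, |phi_perp>} via Gram-Schmidt
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
phi /= np.linalg.norm(phi)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
v -= np.vdot(phi, v) * phi              # remove the component along |phi>
phi_perp = v / np.linalg.norm(v)

# |phi><phi_perp| = (1/2) sum_k i^k |w_k><w_k|, with |w_k> = (|phi> + i^k |phi_perp>)/sqrt(2)
lhs = np.outer(phi, phi_perp.conj())
rhs = np.zeros((d, d), dtype=complex)
for k in range(4):
    w = (phi + 1j**k * phi_perp) / np.sqrt(2)
    rhs += 1j**k * np.outer(w, w.conj())
rhs /= 2
assert np.allclose(lhs, rhs)
```

The check passes for any dimension and any orthonormal pair, since the identity holds at the operator level.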
--- abstract: 'We discuss a class of generalized divided difference operators which give rise to a representation of Nichols-Woronowicz algebras associated to Weyl groups. For the root system of type $A,$ we also study the condition for the deformations of the Fomin-Kirillov quadratic algebra, which is a quadratic lift of the Nichols-Woronowicz algebra, to admit a representation given by generalized divided difference operators. The relations satisfied by the mutually commuting elements called Dunkl elements in the deformed Fomin-Kirillov algebra are determined. The Dunkl elements correspond to the truncated elliptic Dunkl operators via the representation given by the generalized divided difference operators.' author: - 'Anatol N. Kirillov and Toshiaki Maeno[^1]' title: | Braided differential structure on Weyl groups, quadratic algebras and elliptic functions\ *[To the memory of Leonid Vaksman]{}* --- Introduction {#introduction .unnumbered} ============ The rational Dunkl operators, which were introduced in [@Du] for any finite Coxeter group, constitute a remarkable family of operators of differential-difference type. The Dunkl operators are defined to be the ones acting on the functions on the reflection representation $V$ of the corresponding Weyl group $W.$ For the root system of type $A_{n-1},$ the Dunkl operators $D_1,\ldots,D_n$ are defined by the formula $$D_i:= \frac{\partial}{\partial x_i}+ \sum_{j \not = i}\frac{1-s_{ij}}{x_i-x_j} ,$$ where $s_{ij}$ is the transposition of $i$ and $j.$ They are $S_n$-invariant and mutually commute. The Dunkl operators play an important role in the representation theory and in the study of integrable systems. Here we would like to mention only a remarkable result, due to Dunkl, that the algebra generated by [*truncated Dunkl operators*]{} is isomorphic to the coinvariant algebra of the corresponding finite Coxeter group [@Du], [@Ba].
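The asserted commutativity $[D_i,D_j]=0$ can be verified symbolically. A minimal sketch with sympy, for $n=3$ and coupling constant $1$ (both illustrative choices), applies the two compositions to a test polynomial:

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')

def swap(f, i, j):
    # action of the transposition s_ij on a function of (x1, x2, x3)
    return f.subs({x[i]: x[j], x[j]: x[i]}, simultaneous=True)

def D(i, f):
    # Dunkl operator D_i = d/dx_i + sum_{j != i} (1 - s_ij)/(x_i - x_j),
    # with coupling constant 1; each difference quotient is again a polynomial
    out = sp.diff(f, x[i])
    for j in range(3):
        if j != i:
            out += sp.cancel((f - swap(f, i, j)) / (x[i] - x[j]))
    return sp.expand(out)

f = x[0]**3 * x[1] + x[1]**2 * x[2]       # arbitrary test polynomial
comm = sp.expand(D(0, D(1, f)) - D(1, D(0, f)))
assert comm == 0
```

The same computation vanishes for every pair $(i,j)$ and every polynomial input, in line with the commutativity statement above.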
A trigonometric generalization of Dunkl operators has been proposed by Cherednik [@Ch], and an elliptic one by Buchstaber, Felder and Veselov [@BFV]. The basic requirement for such generalizations is that the operators to be constructed commute pairwise. Another important property of rational Dunkl operators, namely their $W$-invariance, may be broken for such generalizations. For a crystallographic irreducible root system $R,$ Buchstaber, Felder and Veselov [@BFV] have determined the conditions on the functions $f_{\alpha}(z),$ $\alpha\in R,$ so that the operators $$\nabla_{\xi}=\partial_{\xi}+\sum_{\alpha\in R_+}(\alpha,\xi)f_{\alpha}((\alpha,x)) s_{\alpha}$$ satisfy the commutativity condition $[ \nabla_{\xi},\nabla_{\eta} ]=0$ for all $\xi,\eta \in V.$ Here, we denote by $R_+$ the set of positive roots and by $s_{\alpha}$ the reflection corresponding to a root $\alpha.$ Under the assumption of the $W$-invariance of $\nabla_{\xi},$ they proved that the solutions of the functional equation for $f_{\alpha}$ must be rational unless $R$ is of type $B_2.$ Without the assumption of the $W$-invariance, some elliptic solutions given by Kronecker’s $\sigma$-function may appear. If $R$ is of type $A_n,$ such functions exhaust the general solution. The present paper contains two main results. The first one concerns the existence of a representation given by the generalized divided difference operators for (a certain extension of) the Nichols-Woronowicz algebra $\B_W$ corresponding to a Weyl group $W.$ Our second main result describes relations among the Dunkl elements in the elliptic extension of the Fomin-Kirillov algebra introduced originally in [@K1].
In particular, we describe the relations among truncated elliptic Dunkl operators of type $A_{n-1}.$ By analogy with Dunkl’s theorem mentioned above, one can consider the algebra generated by truncated elliptic Dunkl operators of type $A_{n-1}$ as an elliptic deformation of the cohomology ring of the flag variety $Fl_n.$ We also prove an elliptic analogue of the Pieri rule in the elliptic extension of the Fomin-Kirillov algebra. These results can be considered as further generalizations of those obtained in [@FK], [@Po], since the latter correspond to certain degenerations of the elliptic case, see Section 4 for details. The Nichols-Woronowicz algebra $\B(M)$ is a braided analogue of the symmetric algebra, which is defined for a given braided vector space $M.$ Nichols [@Ni] studied graded bialgebras generated by the primitive elements of degree one. The braided Hopf algebra $\B(M)$ satisfying such a condition was called a Nichols algebra by Andruskiewitsch and Schneider [@AS]. The algebra $\B(M)$ has also been constructed in the theory of the differential forms on quantum groups due to Woronowicz [@Wo]. Woronowicz constructed $\B(M)$ as a braided symmetric (or exterior) algebra based on the construction of his (anti-)symmetrizer. The Nichols-Woronowicz algebra provides a natural framework for the braided differential calculus, which was developed by Majid [@M1]. In this paper we are interested in the Nichols-Woronowicz algebra associated to a particular kind of braided vector space called a Yetter-Drinfeld module. See [@Ba] for more details of the general construction of $\B(M).$ In our case, we use a $\C$-vector space $M_W$ spanned by the symbols $[\alpha]=-[-\alpha],$ $\alpha \in R,$ with the braiding $\psi : M_W^{\otimes 2}\rightarrow M_W^{\otimes 2},$ $[\alpha]\otimes [\beta] \mapsto [s_{\alpha}(\beta)] \otimes [\alpha].$ The algebra $\B_W=\B(M_W)$ of our interest is defined to be the quotient of the tensor algebra of $M_W$ by the kernel of the braided symmetrizer.
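Being the braiding of a Yetter-Drinfeld module, $\psi$ satisfies the braid relation $(\psi\otimes\operatorname{id})(\operatorname{id}\otimes\psi)(\psi\otimes\operatorname{id})=(\operatorname{id}\otimes\psi)(\psi\otimes\operatorname{id})(\operatorname{id}\otimes\psi)$ on $M_W^{\otimes 3}.$ A small sketch checks this on basis vectors for the $A_2$ root system; the encoding of a root $\pm(\epsilon_i-\epsilon_j)$ as an ordered index pair with a sign is our illustrative choice, not the paper's notation:

```python
from itertools import product

pos = [(1, 2), (2, 3), (1, 3)]   # positive roots of A_2, (i, j) standing for e_i - e_j

def s(alpha, beta):
    """Apply the reflection s_alpha to the root beta; return (positive root, sign)."""
    i, j = alpha
    sw = {i: j, j: i}
    p, q = (sw.get(t, t) for t in beta)
    return ((p, q), 1) if p < q else ((q, p), -1)   # [j i] = -[i j]

def psi12(state):                # psi tensor id on a signed basis vector of M_W^{x3}
    (a, b, c), sgn = state
    sb, eps = s(a, b)
    return ((sb, a, c), sgn * eps)

def psi23(state):                # id tensor psi
    (a, b, c), sgn = state
    sc, eps = s(b, c)
    return ((a, sc, b), sgn * eps)

# braid relation on all triples of positive-root basis vectors
for a, b, c in product(pos, repeat=3):
    t = ((a, b, c), 1)
    assert psi12(psi23(psi12(t))) == psi23(psi12(psi23(t)))
```

The relation reduces to $s_{s_{\alpha}(\beta)}=s_{\alpha}s_{\beta}s_{\alpha}$ together with the sign rule $[-\alpha]=-[\alpha]$, which the loop verifies exhaustively for $A_2$.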
Milinski and Schneider [@MS] and Majid [@M2] have pointed out that the algebra $\B_W$ for $W=S_n$ is a quotient of the Fomin-Kirillov quadratic algebra $\E_n$ defined in [@FK]. The algebra $\B_{S_n}$ is conjectured to be isomorphic to $\E_n.$ Fomin and the first author introduced the algebra $\E_n$ to construct a model of the cohomology ring of the flag variety $Fl_n.$ In [@Ba], Bazlov has reformulated their construction of the model of the cohomology ring in terms of the Nichols-Woronowicz algebra $\B_W,$ and generalized it to arbitrary finite Coxeter groups. The braided differential operators on the algebra $\B_W,$ which were used by Majid [@M2] for the root system of type $A,$ play an essential role in Bazlov’s construction. His construction also has an important implication for the representation of $\B_W,$ since the braided differential operators act on the coinvariant algebra of $W$ as the divided difference operators $\partial_{\alpha}=(1-s_{\alpha})/\alpha,$ $\alpha\in R.$ In Section 1, we discuss the conditions for the generalized divided difference operators $$\DD_{\alpha}=f_{\alpha}((\alpha,\xi))+g_{\alpha}((\alpha,\xi))s_{\alpha}$$ to give rise to a representation of $\B_W.$ These conditions are interpreted as functional equations for $f_{\alpha}$ and $g_{\alpha}.$ We prove that the operators corresponding to the $W$-invariant solutions described in [@BFV] define a representation of $\B_W.$ Komori [@Ko] studied when the operators $\DD_{\alpha}$ satisfy the Yang-Baxter equation. Since the generators $[\alpha]$ of the algebra $\B_W$ satisfy the Yang-Baxter equation, our operators also correspond to a special part of the solutions found in [@Ko].
In order to get a more general class of solutions like elliptic functions, we have to relax part of the defining relations of $\B_W.$ In Section 2 we introduce a deformed version $\tilde{\E}_n(\psi_{ij})$ of the Fomin-Kirillov quadratic algebra, which is defined for a given family of meromorphic functions $\psi_{ij}(z),$ $1\leq i,j \leq n,$ $i\not=j.$ The algebra $\tilde{\E}_n(\psi_{ij})$ admits the representation by the operators $\DD_{\alpha}$ only when $\psi_{ij}(z)$ is given by the Weierstrass $\wp$-function or its degenerations. In this case, the operator $\DD_{\alpha}$ exactly corresponds to the general solution for the $A_{n-1}$-system obtained in [@BFV Theorem 16]. Our second main result is the study of relations among the Dunkl elements in the elliptic extension $\tilde{\E}_n(\psi_{ij})$ of the Fomin-Kirillov algebra. The Dunkl elements $\theta_1,\ldots,\theta_n \in \tilde{\E}_n(\psi_{ij})$ are mutually commuting elements defined by $\theta_i=\sum_{j\not=i}[ij].$ The images of the Dunkl elements, via the representation $[\alpha] \mapsto \DD_{\alpha},$ become the so-called truncated (or level zero) elliptic Dunkl operators, cf. [@BFV]. It is well known that the (truncated) rational or trigonometric Dunkl operators can be obtained as certain degenerations of the (truncated) elliptic Dunkl operators. The identities among the Dunkl elements in $\tilde{\E}_n(\psi_{ij})$ are also satisfied by the corresponding truncated elliptic Dunkl operators or their degenerations. In the context of Schubert calculus, the Dunkl elements describe the multiplication by the classes of standard line bundles in the cohomology ring of the flag variety. The formula for the elementary symmetric polynomials in the Dunkl elements in the Fomin-Kirillov algebra reflects the Pieri formula.
In Section 3 we give a formula for the deformed elementary symmetric polynomial $E_k(\theta_i \; | \; i\in I)$ in the algebra $\tilde{\E}_n(\psi_{ij}).$ The algebra $\tilde{\E}_n(\psi_{ij})$ has degenerations to variants of the deformation of the Fomin-Kirillov algebra. In particular, the multiparameter deformation $\E^p_n$ studied in [@FK] and [@Po], and the extended quadratic algebra $\tilde{\E}_n \langle R \rangle [t]$ defined in [@KM2] after the specialization $t=0$ can be regarded as degenerations of $\tilde{\E}_n(\psi_{ij}).$ In Section 4 we show that our algebra recovers the Pieri formulas in the corresponding degenerations. Representation of Nichols-Woronowicz algebra ============================================ Let us consider the reflection representation $V$ of the Weyl group $W.$ Denote by $R\subset V$ the set of roots for the Weyl group $W.$ Fix a set of positive roots $R_+$ in $R.$ Let $\{ \alpha_1,\ldots , \alpha_r \} \subset R_+$ be the set of simple roots. The Weyl group $W$ naturally acts on the space $\M=\M(V_{\C})$ of meromorphic functions on $V_{\C}.$ We also denote by $\M_0$ the space of meromorphic functions on $\C.$ We discuss the generalized Calogero-Moser representation of the Nichols-Woronowicz algebra $\B_W$ for the Weyl group $W.$ The Nichols-Woronowicz algebra $\B_W=\B(M_W)$ is associated to the Yetter-Drinfeld module $M_W$ generated by the symbols $[\alpha],$ $\alpha \in R.$ Define the operator $\DD_{\alpha},$ $\alpha \in R,$ acting on $\M$ by $$\DD_{\alpha}=f_{\alpha}((\alpha,\xi))+g_{\alpha}((\alpha,\xi))s_{\alpha}, \; \; \xi \in V,$$ where $s_{\alpha}$ is the reflection with respect to $\alpha,$ and $f_{\alpha},g_{\alpha}\in \M_0.$ We assume that $f_{-\alpha}(z)=-f_{\alpha}(-z)$ and $g_{-\alpha}(z)=-g_{\alpha}(-z)$ so that $\DD_{-\alpha}=-\DD_{\alpha}.$ The divided difference operator $\partial_{\alpha}=(1-s_{\alpha})/(\alpha,\xi)$ gives a well-defined representation of $\B_W$ on $P.$ [*Proof.*]{} From the construction of the
model of the coinvariant algebra $P_W$ in [@Ba], we can see that the natural action of the braided differential operator $\overleftarrow{D}_{\alpha}$ on $P_W$ coincides with the divided difference operator $\partial_{\alpha}.$ Since $P=P^W \otimes P_W,$ we can extend $P^W$-linearly the action of $\B_W$ on $P_W$ to that on $P.$ If $[\alpha] \mapsto \DD_{\alpha}$ defines a representation of $\B_W,$ then $f_{\alpha}$ must be an odd function, and $$g_{\alpha}(z)=f_{\alpha}(z) \phi_{\alpha}(z),$$ where $\phi_{\alpha}(z)\phi_{\alpha}(-z)=1.$ [*Proof.*]{} The condition $\DD_{\alpha}^2=0$ is equivalent to the equations $$f_{\alpha}(z)^2+g_{\alpha}(-z)g_{\alpha}(z)=0$$ and $$g_{\alpha}(z)\cdot (f_{\alpha}(z)+f_{\alpha}(-z))=0 .$$ The second equation shows that $f_{\alpha}$ is odd. Define the function $\phi_{\alpha}(z)$ by $$\phi_{\alpha}(z)= \frac{g_{\alpha}(z)}{f_{\alpha}(z)} .$$ Then the first equation can be written as $$\phi_{\alpha}(z)\phi_{\alpha}(-z)=1.$$ We take the standard realization of the root systems of type $A_n$ and $B_n$ as follows: $$R(A_n)= \{ ij=\epsilon_i-\epsilon_j \; | \; 1\leq i,j \leq n, i\not= j \} ,$$ $$R(B_n)= \{ ij=\epsilon_i-\epsilon_j, \overline{ij}=\epsilon_i+\epsilon_j, i=\epsilon_i | \; 1\leq i,j \leq n, i\not= j \} ,$$ where $(\epsilon_1,\ldots ,\epsilon_n)$ is an orthonormal basis of $V.$ Suppose that $R$ is not of type $A_1$ or $B_2.$ If the operators $\DD_{\alpha}$ give a representation of $\B_W,$ then $f_{\alpha}(z)= k_{\alpha}/z$ and $g_{\alpha}(z)=\pm k_{\alpha}e^{\lambda_{\alpha}z}/z,$ where $k_{\alpha}$ are $W$-invariant constants and the choice of the signature $\pm$ is independent of the root $\alpha.$ The constants $\lambda_{\alpha}$ are obtained as $\lambda_{\alpha}=\lambda(\alpha^{\vee})$ from an element $\lambda \in V^*.$ Conversely, the operators $\DD_{\alpha}$ corresponding to the above solutions give a representation of $\B_W.$ [*Proof.*]{} When $R$ is of type $A_2,$ we have the functional equations $$\begin{aligned}
f_{12}(x-y)f_{23}(y-z)+f_{23}(y-z)f_{31}(z-x)+f_{31}(z-x)f_{12}(x-y)=0, \\ g_{12}(x-y)g_{23}(x-z)+g_{23}(y-z)g_{31}(y-x)+g_{31}(z-x)g_{12}(z-y)=0. \end{aligned}$$ If $f_{12}$ is regular at the origin, then we have $f_{12}(0)=0$ since $f_{12}$ is odd. We have $f_{23}(x-z)f_{31}(z-x)=0$ by putting $x=y$ in the equation (1), and hence $f_{12},$ $f_{23}$ and $f_{13}$ must be identically zero. So we may assume that $f_{12},$ $f_{23}$ and $f_{13}$ have a pole at the origin. Now the equation (1) shows $$f_{31}(z-x)^{-1}+ f_{12}(x-y)^{-1} + f_{23}(y-z)^{-1}=0 .$$ Therefore we have $$f_{12}(x)=f_{23}(x)=f_{13}(x)= \frac{k}{x}$$ for some constant $k.$ From Lemma 1.2, we can write $$g_{ij}(x)=\frac{k\phi_{ij}(x)}{x},$$ where $\phi_{ij}(x)\phi_{ij}(-x)=1.$ From the results in [@BFV Theorem 16], we can conclude that $$g_{ij}(x)=\pm \frac{ke^{\lambda_{ij}x}}{x},$$ where $\lambda_{12}+\lambda_{23}+\lambda_{31}=0.$ When $R$ contains $B_2$ as a subsystem, a similar argument applies. If $R$ contains the subsystem $\{\pm 12,\pm \overline{12},\pm 1, \pm 2 \}$ of type $B_2,$ we have the functional equations $$\begin{aligned} f_{12}(x-y)f_1(x)-f_2(y)f_{12}(x-y)+f_{\overline{12}}(x+y)f_2(y) + f_1(x)f_{\overline{12}}(x+y)=0, \\ g_{12}(x-y)g_1(y)-g_2(y)g_{12}(x+y)+g_{\overline{12}}(x+y)g_2(-x) + g_1(x)g_{\overline{12}}(-x+y)=0.
\end{aligned}$$ Since $R$ is not of type $A_1$ or $B_2,$ $R$ contains a subsystem of type $A_2.$ We may assume that $f_{12},$ $f_{\overline{12}},$ $g_{12}$ and $g_{\overline{12}}$ are determined from the subsystems of type $A_2$ in $R$ as follows: $$f_{12}(x)=f_{\overline{12}}(x)=\frac{k}{x}, \; \; g_{12}(x)=\frac{ke^{\lambda_{12}x}}{x}, \; \; g_{\overline{12}}(x)= \frac{ke^{\lambda_{\overline{12}}x}}{x}.$$ Then the functional equations (3) and (4) can be written as $$\begin{aligned} \left( \frac{1}{x-y}+\frac{1}{x+y} \right)f_1(x) + \left( -\frac{1}{x-y}+\frac{1}{x+y} \right)f_2(y) =0, \\ \frac{e^{\lambda_{12}(x-y)}}{x-y}g_1(y)- \frac{e^{\lambda_{12}(x+y)}}{x+y} g_2(y) + \frac{e^{\lambda_{\overline{12}}(x+y)}}{x+y}g_2(-x)+ \frac{e^{\lambda_{\overline{12}}(-x+y)}}{-x+y} g_1(x) =0. \end{aligned}$$ Hence we get $$f_1(x)=f_2(x)=\frac{k'}{x}$$ from the equation (5). The equation (6) is written as $$(x+y)e^{\lambda_{12}(x-y)}\frac{\phi_1(y)}{y}-(x-y)e^{\lambda_{12}(x+y)}\frac{\phi_2(y)}{y}$$ $$- (x-y)e^{\lambda_{\overline{12}}(x+y)}\frac{\phi_2(-x)}{x}- (x+y)e^{\lambda_{\overline{12}}(-x+y)}\frac{\phi_1(x)}{x}$$ $$\begin{aligned} = 0. 
\end{aligned}$$ We obtain, by taking the limit $y \rightarrow 0,$ $$e^{-\lambda_{\overline{12}}x}\phi_1(x)+e^{\lambda_{\overline{12}}x}\phi_2(-x) =e^{\lambda_{12}x}(2+x(\phi'_1(0)-\phi'_2(0)-2\lambda_{12})),$$ and by taking the limit $x \rightarrow 0,$ $$e^{-\lambda_{12}y}\phi_1(y)+e^{\lambda_{12}y}\phi_2(y)=e^{\lambda_{\overline{12}}y} (2+y(\phi'_1(0)+\phi'_2(0)-2\lambda_{\overline{12}})).$$ After eliminating $\phi_2(y)$ and $\phi_2(-x)$ from the equation (7), we have $$e^{-(\lambda_{12}+\lambda_{\overline{12}})x}\frac{\phi_1(x)}{x^2}-\frac{1}{x^2}- \frac{\phi'_1(0)-\lambda_{12}-\lambda_{\overline{12}}}{x}$$ $$=e^{-(\lambda_{12}+\lambda_{\overline{12}})y}\frac{\phi_1(y)}{y^2}-\frac{1}{y^2}- \frac{\phi'_1(0)-\lambda_{12}-\lambda_{\overline{12}}}{y}.$$ This means that both sides must be a constant $C.$ Hence, we have $$\phi_1(x)=e^{(\lambda_{12}+\lambda_{\overline{12}})x} (1+(\phi'_1(0)-\lambda_{12}-\lambda_{\overline{12}})x+Cx^2) .$$ From the condition $\phi_1(x)\phi_1(-x)=1,$ we get $$\phi'_1(0)=\lambda_{12}+\lambda_{\overline{12}}, \; \; C=0.$$ Therefore we conclude that $$g_1(x)=\pm \frac{k'e^{\lambda_1 x}}{x}, \; g_2(x)= \pm \frac{k'e^{\lambda_2 x}}{x},$$ where $\lambda_1=\lambda_{12}+\lambda_{\overline{12}},$ $\lambda_2=-\lambda_{12}+\lambda_{\overline{12}}.$ When $R_+ = \{ \alpha_1,\alpha_1+\alpha_2, 2\alpha_1+3\alpha_2, \alpha_1+2\alpha_2, \alpha_1+3\alpha_2, \alpha_2 \} $ is of type $G_2,$ we have the quadratic relation in the algebra $\B_W$ as follows: $$[\alpha_1][\alpha_1+\alpha_2]+[\alpha_1+\alpha_2][2\alpha_1+3\alpha_2] +[2\alpha_1+3\alpha_2][\alpha_1+2\alpha_2]$$ $$+ [\alpha_1+2\alpha_2][\alpha_1+3\alpha_2] + [\alpha_1 + 3\alpha_2][\alpha_2] =[\alpha_2][\alpha_1] .$$ This equation shows that the constants $(\lambda_{\gamma})_{\gamma \in R_+}$ are subject to the following constraints
$$\lambda_{\alpha_1+\alpha_2}=3\lambda_{\alpha_1}+\lambda_{\alpha_2}, \lambda_{2\alpha_1+3\alpha_2}= 2\lambda_{\alpha_1}+\lambda_{\alpha_2}, \lambda_{\alpha_1+2\alpha_2}=3\lambda_{\alpha_1}+2\lambda_{\alpha_2}, \lambda_{\alpha_1+3\alpha_2}= \lambda_{\alpha_1}+\lambda_{\alpha_2} .$$ This means that $\lambda_{\gamma}=\lambda(\gamma^{\vee})$ for some $\lambda\in V^*.$ Consider the multiplication operators $${\bf e} = e^{\sum_{i=1}^r \lambda_{\alpha_i}\pi_i(\xi)}$$ and $${\bf e}_+=(\prod_{\beta \in R_+}\beta)e^{\sum_{i=1}^r \lambda_{\alpha_i}\pi_i(\xi)},$$ where $\pi_i$ is the fundamental dominant weight corresponding to $\alpha_i.$ For the operator $\DD_{\alpha}=k_{\alpha}(1-e^{\lambda_{\alpha}(\alpha,\xi)}s_{\alpha})/(\alpha,\xi),$ we have $$\DD_{\alpha}= k_{\alpha} {\bf e} \circ \partial_{\alpha} \circ {\bf e}^{-1}.$$ For the operator $\DD_{\alpha}=k_{\alpha}(1+e^{\lambda_{\alpha}(\alpha,\xi)}s_{\alpha})/(\alpha,\xi),$ we have $$\DD_{\alpha}= k_{\alpha} {\bf e}_+ \circ \partial_{\alpha} \circ {\bf e}_+^{-1}.$$ Namely, $\DD_{\alpha}$ is conjugate to $\partial_{\alpha}$ up to a constant $k_{\alpha}.$ Hence the operators $\DD_{\alpha}$ give rise to a representation of $\B_W$ from Lemma 1.1. If $R$ is of type $B_2,$ then $\B_W$ is a $64$-dimensional algebra defined by the following relations:\ [(i)]{} $\; [12]^2=\overline{[12]}^2=[1]^2=[2]^2=0,$\ [(ii)]{} $\; [12]\overline{[12]}=\overline{[12]}[12],$ $[1][2]=[2][1],$\ [(iii)]{} $\; [12][1]-[2][12]+[1]\overline{[12]}+\overline{[12]}[2]=0,$ $[1][12]-[12][2]+ \overline{[12]}[1]+[2]\overline{[12]}=0,$\ [(iv)]{} $\; [12][1]\overline{[12]}[1]+\overline{[12]}[1][12][1]+[1][12][1]\overline{[12]}+ [1]\overline{[12]}[1][12]=0,$\ [(v)]{} $\; [1][12][1][12]=[12][1][12][1].$ The relations above were considered in [@KM1] and [@MS]. 
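For the divided difference operators $\partial_{\alpha},$ which realize the generators $[\alpha]$ in the representation discussed above, relations (i) and (iii) can be checked by direct computation. A sketch with sympy on the coordinates $(x,y)$ of the reflection representation of $B_2$ (the test polynomial is an arbitrary choice):

```python
import sympy as sp

x, y = sp.symbols('x y')

# reflections of B_2 acting on functions of (x, y)
s12  = lambda f: f.subs({x: y, y: x}, simultaneous=True)     # root 12 = e1 - e2
s12b = lambda f: f.subs({x: -y, y: -x}, simultaneous=True)   # root 12-bar = e1 + e2
s1   = lambda f: f.subs(x, -x)                               # root 1 = e1
s2   = lambda f: f.subs(y, -y)                               # root 2 = e2

# divided difference operators partial_alpha = (1 - s_alpha)/(alpha, xi)
D12  = lambda f: sp.cancel((f - s12(f))  / (x - y))
D12b = lambda f: sp.cancel((f - s12b(f)) / (x + y))
D1   = lambda f: sp.cancel((f - s1(f))   / x)
D2   = lambda f: sp.cancel((f - s2(f))   / y)

f = x**3 * y + x * y**2                  # arbitrary test polynomial
# relation (i): nilpotency of the generators
assert D12(D12(f)) == 0 and D1(D1(f)) == 0
# relation (iii): [12][1] - [2][12] + [1][12-bar] + [12-bar][2] = 0
lhs = D12(D1(f)) - D2(D12(f)) + D1(D12b(f)) + D12b(D2(f))
assert sp.simplify(lhs) == 0
```

The remaining relations (ii), (iv), (v) can be checked on test polynomials in the same manner.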
The algebra defined by these relations is a finite-dimensional algebra with the Hilbert polynomial $(1+t)^4(1+t^2)^2.$ Milinski and Schneider [@MS] and Bazlov [@Ba] have shown that these relations are also satisfied in the algebra $\B_W.$ They also checked that the algebra $\B_W$ has dimension 64. Hence, the relations above exhaust the independent defining relations for the algebra $\B_W$ in the $B_2$ case. Let $R$ be of type $B_2.$\ [(i)]{} The functions $f_{\alpha}$ must be as follows: $$f_1(x)=f_2(x)=\frac{A}{{\rm sn}(ax,k)},$$ $$f_{12}(x)=f_{\overline{12}}(x)=\frac{B}{{\rm sn}(\varepsilon a x,\tilde{k})},$$ where $A,$ $B,$ $a,$ $k$ are arbitrary constants, and $\tilde{k}=(1-k)/(1+k),$ $\varepsilon =(1+k)/\sqrt{-1}.$\ [(ii)]{} If one assumes the $W$-invariance $w \circ \DD_{\alpha} \circ w^{-1} = \DD_{w(\alpha)},$ $w\in W,$ then $$g_{\alpha}(x)=\pm f_{\alpha}(x) ,$$ where the choice of the signature is independent of $\alpha.$\ [(iii)]{} If the functions $f_{\alpha}(z)$ are chosen as in [(i)]{} and $g_{\alpha}(x)= \pm e^{\lambda(\alpha^{\vee})x}f_{\alpha}(x),$ $\lambda \in V^*,$ then the operators $\DD_{\alpha}$ give a representation of $\B_W.$ [*Proof.*]{} (i) This follows from the 4-term quadratic equations and [@BFV Theorem 6]. The relation $$[12][1]-[2][12]+\overline{[12]}[2]+[1]\overline{[12]}=0$$ implies $$f_{12}(x-y)f_1(x)-f_2(y)f_{12}(x-y)+f_{\overline{12}}(x+y)f_2(y)+f_1(x)f_{\overline{12}}(x+y)=0.$$ From the equations $$(f_{12}(x-y)+f_{\overline{12}}(-x+y))g_1(x)=0, \;(f_1(y)-f_2(y))g_{12}(x-y)=0,$$ we have $f_{12}=f_{\overline{12}}$ and $f_1=f_2.$ Hence the functions $f_{12}=f_{\overline{12}}$ and $f_1=f_2$ are the solutions found in [@BFV] in the invariant case, i.e., $$f_1(x)=f_2(x)=\frac{A}{{\rm sn}(ax,k)},$$ $$f_{12}(x)=f_{\overline{12}}(x)=\frac{B}{{\rm sn}(\varepsilon a x,\tilde{k})}.$$ (ii) The $W$-invariance shows $g_{12}(z)=g_{\overline{12}}(z)$ and $g_1(z)=g_2(z).$ Moreover, the functions $g_{\alpha}(z)$ must be odd functions.
On the other hand, we may set $g_{\alpha}(z)=f_{\alpha}(z)\phi_{\alpha}(z)$ with $\phi_{\alpha}(z)\phi_{\alpha}(-z)=1$ from Lemma 1.2. Since both $f_{\alpha}$ and $g_{\alpha}$ are odd functions, $\phi_{\alpha}$ must be an even function. Hence, we have $\phi_{\alpha}(z)=\pm 1.$\ (iii) In this case, we can check that the operators $\DD_{\alpha}$ satisfy all the relations listed in Proposition 1.2 by direct computation. Representation of quadratic algebra =================================== [For a given family of functions $\varphi_{ij}(z)=-\varphi_{ji}(-z),$ $\psi_{ij}(z)=\psi_{ji}(z)\in \M_0,$ $1\leq i,j \leq n,$ $i\not=j,$ the algebra $\tilde{\E}_n(\varphi_{ij},\psi_{ij})$ is a $\C$-algebra generated by the symbols $\la ij\ra$ and functions $f(\xi)$ in $\M$ subject to the relations:\ (i) $\la ij \ra^2=\psi_{ij}(x_i-x_j),$\ (ii) $\la ij \ra \la kl \ra = \la kl \ra \la ij\ra$ for $\{ i,j \} \cap \{ k,l \} = \emptyset,$\ (iii) $\la ij \ra \la jk \ra + \la jk \ra \la ki\ra + \la ki \ra \la ij \ra=0,$\ (iv) $(\la ij \ra -\varphi_{ij}(x_i-x_j)) f(\xi)= f(s_{ij}\xi)(\la ij \ra-\varphi_{ij}(x_i-x_j)).$ ]{} [The algebra $\M^{S_n}$ of $S_n$-invariant functions is contained in the center of $\tilde{\E}_n(\varphi_{ij},\psi_{ij}).$ Hence $\tilde{\E}_n(\varphi_{ij},\psi_{ij})$ has a structure of an $\M^{S_n}$-algebra.]{} In this section we consider when the quadratic algebra $\tilde{\E}_n(\varphi_{ij},\psi_{ij})$ has a generalization of the Calogero-Moser representation.
For $\lambda \in \C \setminus (\Z+\Z \tau),$ define the function $\sigma_{\lambda}(z)=\sigma_{\lambda}(z|\tau)$ by the formula $$\sigma_{\lambda}(z)= \frac{\vartheta_1(z-\lambda)\vartheta_1'(0)}{\vartheta_1(z)\vartheta_1(-\lambda)},$$ where $\vartheta_1(z)$ is Jacobi’s theta function $$\vartheta_1(z)= -\sum_{n=-\infty}^{+\infty}\exp \left( 2\pi \sqrt{-1} \left( (z+\frac{1}{2})(n+\frac{1}{2}) + \frac{\tau}{2} (n+\frac{1}{2})^2 \right) \right) .$$ The algebra $\tilde{\E}_n(\varphi_{ij},\psi_{ij})$ has the generalized Calogero-Moser representation if and only if $\varphi_{ij}(z)=a/z$ and the functions $\psi_{ij}$ have one of the following forms:\ [(i)]{} $$\psi_{ij}(z)= \frac{A}{z^2}-K(\wp(bz)-\wp(\lambda_i-\lambda_j)),$$ [(ii)]{} $$\psi_{ij}(z)= \frac{A}{z^2}-K\left(\frac{1}{\sin^2(bz)}-\frac{1}{\sin^2(b(\lambda_i-\lambda_j))}\right),$$ [(iii)]{} $$\psi_{ij}(z)= \frac{A-K}{z^2}+\frac{K}{(\lambda_i-\lambda_j)^2}.$$ Here, $A=a^2,$ $K$ and $b$ are parameters. [*Proof.*]{} If the generalized Calogero-Moser representation $$\la ij \ra \mapsto \DD_{ij}= f_{ij}(x_i-x_j)+g_{ij}(x_i-x_j)s_{ij}$$ is well-defined for the algebra $\tilde{\E}_n(\varphi_{ij},\psi_{ij}),$ then $\varphi_{ij}(z)=f_{ij}(z)$ must be a rational function $a/z$ as we have seen in the proof of Proposition 1.1. The functions $g_{ij}$ are also determined from [@BFV Theorem 16].
Hence the operator $\DD_{ij}$ must be one of the following:\ (i) $$\DD_{ij}= \frac{a}{x_i-x_j} + k \sigma_{\lambda_i-\lambda_j}(b(x_i-x_j)) e^{(\alpha_i-\alpha_j)(x_i-x_j)}s_{ij} ,$$ (ii) $$\DD_{ij}= \frac{a}{x_i-x_j} + k \frac{\sin(b(x_i-x_j-\lambda_i+\lambda_j))}{\sin(b(x_i-x_j))\sin(b(\lambda_i-\lambda_j))}e^{(\alpha_i-\alpha_j)(x_i-x_j)}s_{ij},$$ (iii) $$\DD_{ij}= \frac{a}{x_i-x_j} + k \left(\frac{1}{x_i-x_j}-\frac{1}{\lambda_i-\lambda_j} \right)e^{(\alpha_i-\alpha_j)(x_i-x_j)}s_{ij}.$$ In case (i), we have $$\psi_{ij}(x_i-x_j)= \DD_{ij}^2= \frac{A}{(x_i-x_j)^2}-K(\wp(b(x_i-x_j))-\wp(\lambda_i -\lambda_j))$$ with $A=a^2,$ $K=k^2.$ We also have the desired result in cases (ii) and (iii) in a similar way. [ The trigonometric solution (ii) is obtained from the elliptic solution (i) by taking the limit $\tau \rightarrow +\infty \sqrt{-1}$ and replacing $\lambda_i$ by $b\lambda_i.$ The rational solution (iii) is obtained from the trigonometric solution by taking the limit $b\rightarrow 0$ after replacing $K$ by $Kb^2.$]{} Under the assumption of Proposition 2.1, the functions $\varphi_{ij}(z)$ are determined to be the rational function $a/z.$ In the rest of this paper, we denote just by $\tilde{\E}_n(\psi_{ij})$ the quadratic algebra $\tilde{\E}_n(\varphi_{ij},\psi_{ij})$ with $\varphi_{ij}(z)=a/z.$ If we introduce a new set of generators $[ij]=\la ij \ra -a/(x_i-x_j),$ then the algebra $\tilde{\E}_n(\psi_{ij})$ is defined by the following relations:\ ${\rm (i)}'$ $[ij]^2=\psi_{ij}(x_i-x_j)-A/(x_i-x_j)^2,$\ ${\rm (ii)}'$ $[ij][kl]=[kl][ij]$ for $\{ i,j \} \cap \{ k,l \} = \emptyset,$\ ${\rm (iii)}'$ $[ij][jk]+[jk][ki]+[ki][ij]=0,$\ ${\rm (iv)}'$ $[ij]f(\xi)=f(s_{ij}\xi)[ij].$ Subalgebra generated by Dunkl elements ====================================== In this section, the functions $\psi_{ij}$ are assumed to be chosen as in Proposition 2.1 (i) with $K=b=1$ for simplicity.
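For the rational solution (iii) of Proposition 2.1, the relation $\la ij \ra^2=\psi_{ij}(x_i-x_j)$ can be confirmed symbolically. A sketch for $n=2$ with sympy, taking the exponential factor equal to $1$ (i.e. $\alpha_1=\alpha_2,$ an illustrative simplification):

```python
import sympy as sp

x1, x2, lam1, lam2, a, k = sp.symbols('x1 x2 lambda1 lambda2 a k')

s12 = lambda f: f.subs({x1: x2, x2: x1}, simultaneous=True)

def D12(f):
    # rational solution (iii) with the exponential factor set to 1:
    # D_12 = a/(x1-x2) + k*(1/(x1-x2) - 1/(lambda1-lambda2)) s_12
    z, lam = x1 - x2, lam1 - lam2
    return a/z * f + k * (1/z - 1/lam) * s12(f)

f = x1**2 * x2                            # arbitrary test function
# psi_12 from form (iii) of Proposition 2.1
psi = (a**2 - k**2)/(x1 - x2)**2 + k**2/(lam1 - lam2)**2
assert sp.simplify(D12(D12(f)) - psi * f) == 0
```

The reflection part squares to $k^2(1/\lambda_{12}^2-1/z^2)$ because $s_{12}$ reverses the sign of $z=x_1-x_2,$ which is exactly the mechanism behind the computation of $\DD_{ij}^2$ in the proof.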
We define the Dunkl elements $\theta_i$ in the algebra $\tilde{\E}_n(\psi_{ij})$ by the formula $$\theta_i =\sum_{j\not= i} [ij] .$$ We can easily see the following from the defining quadratic relations for $\tilde{\E}_n(\psi_{ij}).$ The Dunkl elements $\theta_1,\ldots,\theta_n$ commute pairwise. In the rest of this section, we discuss the structure of the commutative subalgebra generated by the Dunkl elements $\theta_1,\ldots, \theta_n$ over $\M^{S_n}$ in the algebra $\tilde{\E}_n(\psi_{ij}).$ We use an abbreviation $x_{ij}:=x_i-x_j,$ $\lambda_{ij}:=\lambda_i-\lambda_j$ in the following. [([@FK Lemma 7.3])]{} For distinct $i_1,\ldots,i_k,$ one has the following relation in the algebra $\tilde{\E}_n(\psi_{ij})$ for $k\geq 3.$ $$\sum_{a=1}^k [i_a \; i_{a+1}][i_a \; i_{a+2}] \cdots [i_a \; i_k] \cdot [i_a \; i_1] [i_a \; i_2] \cdots [i_a \; i_{a-1}] = 0.$$ [*Proof.*]{} The proof is done by induction on $k.$ For $k=3,$ the relation (8) is just the 3-term relation $$[i_1\; i_2][i_2 \; i_3]+[i_2\; i_3][i_3\; i_1]+[i_3\; i_1][i_1\; i_2]=0 .$$ Let $Q_k(i_1,\ldots,i_k)$ denote the left-hand side of the above relation. By using the 3-term relation $$[i_a\; i_{k-1}][i_a\; i_k]=[i_{k-1}\; i_k][i_a\; i_{k-1}]-[i_a\; i_k][i_{k-1}\; i_k] ,$$ we get $$\begin{aligned} \lefteqn{\sum_{a=1}^k [i_a \; i_{a+1}][i_a \; i_{a+2}] \cdots [i_a \; i_k] \cdot [i_a \; i_1] [i_a \; i_2] \cdots [i_a \; i_{a-1}] } \\ &=& \sum_{a=1}^{k-2} [i_a \; i_{a+1}] \cdots [i_a \; i_{k-2}] \cdot \Big( [i_{k-1}\; i_k][i_a\; i_{k-1}]-[i_a\; i_k][i_{k-1}\; i_k] \Big) \cdot [i_a \; i_1] \cdots [i_a \; i_{a-1}] \\ & & {} + [i_{k-1}\; i_k][i_{k-1}\; i_1]\cdots [i_{k-1}\; i_{k-2}] + [i_k\; i_1][i_k\; i_2] \cdots [i_k\; i_{k-1}] \\ &=& [i_{k-1}\; i_k] Q_{k-1}(i_1,\ldots , i_{k-1})-Q_{k-1}(i_1,\ldots,i_{k-2},i_k)[i_{k-1}\; i_k] =0. 
\end{aligned}$$ For distinct $i_1,\ldots,i_k,m,$ one has the following relation in the algebra $\tilde{\E}_n(\psi_{ij})$ for $k\geq 2.$ $$(-1)^{k+1}\sum_{a=1}^k [i_a \; m][i_{a+1} \; m] \cdots [i_k \; m] \cdot [i_1 \; m] [i_2 \; m] \cdots [i_{a-1} \; m] [i_a \; m]$$ $$= \sum_{a=1}^k \wp(\lambda_{i_am}) [i_a \; i_{a+1}][i_a \; i_{a+2}] \cdots [i_a \; i_k ] \cdot [i_a \; i_1] [i_a \; i_2] \cdots [i_a \; i_{a-1}]$$ [*Proof.*]{} The proof is done by induction on $k.$ For $k=2,$ we have $$\begin{aligned} \lefteqn{[i_1\; m][i_2\; m][i_1\; m]+[i_2\; m][i_1\; m][i_2\; m]} \\ &=& \Big( [i_2\; m][i_1\; i_2]-[i_1\; i_2][i_1\; m] \Big) [i_1\; m] + [i_2\; m] \Big( [i_2\; m][i_1\; i_2]-[i_1\; i_2][i_1\; m] \Big) \\ &=& -[i_1\; i_2](\psi_{i_1 \; m}(x_{i_1 \; m})-Ax_{i_1 \; m}^{-2})+ (\psi_{i_2\; m}(x_{i_2\; m})-Ax_{i_2\; m}^{-2})[i_1\; i_2] \\ &=& \Big( \psi_{i_2\; m}(x_{i_2\; m}) - \psi_{i_1 \; m}(x_{i_2\; m})\Big) [i_1\; i_2] \\ &=& \Big( \wp(\lambda_{i_2\; m}) - \wp(\lambda_{i_1\; m}) \Big) [i_1\; i_2] . \end{aligned}$$ Let $P_k(i_1,\ldots,i_k;m)$ denote the left-hand side of the relation (9). Here we show only the relation $$P_k(1,2,\ldots,k;m) = \sum_{a=1}^k \wp(\lambda_{am}) [a \; a+1][a \; a+2] \cdots [a \; k ] \cdot [a \; 1] [a \; 2] \cdots [a \; a-1],$$ since the general relations can be proved in similar manner. 
By using the quadratic relation $[i_{k-1} \; m][i_k\; m] = [i_k\; m][i_{k-1}\; i_k] -[i_{k-1}\; i_k][i_{k-1}\; m]$ and the assumption of the induction, we obtain $$\begin{aligned} \lefteqn{P_k(1,\ldots,k;m)}\\ &=& [k-1\; k] \cdot P_{k-1}(1,\ldots,k-2,k-1;m)- P_{k-1}(1,\ldots,k-2,k;m)\cdot [k-1\; k] \\ &=& [k-1\; k] \cdot \sum_{a=1}^{k-1} \wp(\lambda_{am}) [a \; a+1] \cdots [a \; k-1 ] \cdot [a \; 1] \cdots [a \; a-1] \\ & & {} - \sum_{a=1}^{k-2} \wp(\lambda_{am}) [a \; a+1] \cdots [a \; k-2 ] [a \; k] \cdot [a \; 1] \cdots [a \; a-1] \cdot [k-1 \; k] \\ & & {} - \wp(\lambda_{km}) [k \; 1][k\; 2] \cdots [k \; k-2] [k-1 \; k] \\ &=& \sum_{a=1}^{k-2} \wp(\lambda_{am}) [a \; a+1] \cdots [a \; k-2 ] \Big( [k-1 \; k][a \; k-1] - [a \; k][k-1 \; k] \Big) [a \; 1] \cdots [a \; a-1] \\ & & {} + \wp(\lambda_{k-1 \; m})[k-1 \; k][k-1\; 1] \cdots [k-1 \; k-2] \\ & & {} + \wp(\lambda_{km}) [k \; 1][k\; 2] \cdots [k \; k-2] [k\; k-1] \\ &=& \sum_{a=1}^{k-2} \wp(\lambda_{am}) [a \; a+1] \cdots [a \; k-2 ] \Big( [a \; k-1][a \; k] \Big) [a \; 1] \cdots [a \; a-1] \\ & & {} + \wp(\lambda_{k-1 \; m})[k-1 \; k][k-1\; 1] \cdots [k-1 \; k-2] \\ & & {} + \wp(\lambda_{km}) [k \; 1][k\; 2] \cdots [k \; k-2] [k\; k-1] \\ &=& \sum_{a=1}^k \wp(\lambda_{am}) [a \; a+1][a \; a+2] \cdots [a \; k ] \cdot [a \; 1] [a \; 2] \cdots [a \; a-1]. 
\end{aligned}$$\ [**Example.**]{} $(k=4)$ $$\begin{aligned} \lefteqn{[1m][2m][3m][4m][1m]+ [2m][3m][4m][1m][2m]}\\ \lefteqn{+[3m][4m][1m][2m][3m]+ [4m][1m][2m][3m][4m]} \\ &=& -[1m][2m][34][3m][1m]+[1m][2m][4m][34][1m] \\ & & {} -[2m][34][3m][1m][2m]+[2m][4m][34][1m][2m] \\ & & {} -[34][3m][1m][2m][3m]+[4m][34][1m][2m][3m] \\ & & {} -[4m][1m][2m][34][3m]+[4m][1m][2m][4m][34] \\ &=& - [34] \Big( [1m][2m][3m][1m]+[2m][3m][1m][2m]+[3m][1m][2m][3m] \Big) \\ & & {} + \Big( [1m][2m][4m][1m]+[2m][4m][1m][2m]+[4m][1m][2m][4m] \Big) [34] \\ &=& -[34] \Big( \wp(\lambda_{1m}) [12][13] + \wp(\lambda_{2m}) [23][21] + \wp(\lambda_{3m}) [31][32] \Big) \\ & & + \Big( \wp(\lambda_{1m}) [12][14] + \wp(\lambda_{2m}) [24][21] + \wp(\lambda_{4m}) [41][42] \Big) [34] \\ &=& -\wp(\lambda_{1m}) [12] ([34][13]-[14][34]) - \wp(\lambda_{2m})([34][23]-[24][34])[21] \\ & & - \wp(\lambda_{3m}) [34][31][32] - \wp(\lambda_{4m})[41][42][43] \\ &=& -\wp(\lambda_{1m}) [12] [13][14] - \wp(\lambda_{2m}) [23][24][21] - \wp(\lambda_{3m}) [34] [31][32] - \wp(\lambda_{4m}) [41] [42][43] \end{aligned}$$ [Lemma 3.2 is a deformed version of [@FK Lemma 7.2] and [@Po Lemma 5.3]. Though the identity (9) looks similar to the one in [@Po Lemma 5.3], they are different formulas. 
In our case, $[ij]^2=\psi_{ij}(x_{ij})-Ax_{ij}^{-2}$ is not central, and $[ij]^2\not= \wp(\lambda_{ij}).$]{} For a subset $I\subset \{ 1,\ldots ,n \}$ with $\# I=2k,$ define the function $\phi(I)=\phi(x_i|i\in I)$ by the following formula: $$\phi(I):= \sum \prod_{i=1}^k \wp(x_{a_i b_i}) ,$$ where the summation is taken over the choice of pairs $(a_i,b_i),$ $1\leq i \leq k,$ such that $I=\{ a_1,\ldots,a_k, b_1,\ldots, b_k \},$ $a_1< \cdots < a_k$ and $a_i<b_i.$ We also define the deformed elementary symmetric polynomial $E_k(I)=E_k(X_i\, |\, i \in I)$ by the recursion relations: $$E_0(I)=1, \; E_k(I\cup \{ j \})=E_k(I)+E_{k-1}(I)X_j+\sum_{i\in I} \wp(\lambda_{ij})E_{k-2}(I\setminus \{ i \}).$$ One has the following formula in the algebra $\tilde{\E}_n(\psi_{ij}):$ $$E_k(\theta_i \; | \; i\in I)= \sum_{l=0}^{[k/2]}\sum_{I_0 \subset I, \# I_0=2l} \phi(I_0) \sum_{(*)} [a_1 \; b_1] \cdots [a_{k-2l} \; b_{k-2l}] ,$$ where $(*)$ stands for the conditions that $a_i \in I \setminus I_0$; $b_i \not\in I$; $a_1,\ldots ,a_{k-2l}$ are distinct; $b_1\leq \cdots \leq b_{k-2l}.$ [ $$E_k(\theta_1,\ldots,\theta_n)= \left\{ \begin{array}{cc} \sum_{I_0\subset I, \# I_0=k} \phi(I_0) & \textrm{if $k$ is even,} \\ 0 & \textrm{if $k$ is odd.} \end{array} \right.$$ ]{} [*Proof of Theorem 3.1.*]{} Denote by $F_k(I)$ the right-hand side of the formula (10). For $I\subset \{ 1, \ldots, n\}$ and $j\not\in I,$ we will show the recursive formula $$F_k(I\cup \{ j \})=F_k(I)+\theta_jF_{k-1}(I)+\sum_{i\in I} \wp(\lambda_{ij})F_{k-2}(I\setminus \{ i \}).$$ Let $J=\{ j_1=j,\ldots ,j_d \}$ be the set $\{ 1,\ldots , n \} \setminus I.$ For $L=\{ l_1, \ldots, l_m \} \subset \{ 1,\ldots , n \}$ and $r \not\in L,$ we define $$\lla L \; | \; r \rra := \sum_{w \in S_m} [l_{w(1)} \; r][l_{w(2)} \; r] \cdots [l_{w(m)} \; r] .$$ In order to show the formula above, we use the following decompositions which are similar to those used in [@Po].
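As an aside, the two definitions just introduced can be checked numerically. The plain-Python sketch below is our own illustration (the $\wp$-values are replaced by arbitrary numeric stand-ins): it enumerates the perfect matchings defining $\phi(I)$ and implements the recursion for $E_k$. With all $\wp(\lambda_{ij})=0$ the recursion reproduces the ordinary elementary symmetric polynomials $e_k$.

```python
# Illustrative numeric check of the definitions of phi(I) and E_k(I).
# wp maps an unordered pair {i,j} to a numeric stand-in for wp(lambda_{ij}).

def matchings(I):
    """Enumerate perfect matchings of the set I as lists of pairs (a,b), a<b."""
    I = sorted(I)
    if not I:
        yield []
        return
    a = I[0]
    for b in I[1:]:
        rest = [x for x in I if x not in (a, b)]
        for m in matchings(rest):
            yield [(a, b)] + m

def phi(I, wp):
    """phi(I) = sum over perfect matchings of prod wp({a_i, b_i})."""
    total = 0
    for m in matchings(I):
        term = 1
        for (a, b) in m:
            term *= wp[frozenset((a, b))]
        total += term
    return total

def E(k, I, X, wp):
    """Deformed elementary symmetric polynomial via the recursion
    E_0 = 1,  E_k(I + {j}) = E_k(I) + E_{k-1}(I) X_j
                             + sum_{i in I} wp(lambda_{ij}) E_{k-2}(I - {i})."""
    if k == 0:
        return 1
    if k < 0 or not I:
        return 0
    j, rest = I[-1], I[:-1]
    return (E(k, rest, X, wp)
            + E(k - 1, rest, X, wp) * X[j]
            + sum(wp.get(frozenset((i, j)), 0)
                  * E(k - 2, [x for x in rest if x != i], X, wp)
                  for i in rest))

X = {1: 2, 2: 3, 3: 5, 4: 7}
# With all wp-values zero, E_k degenerates to the elementary symmetric e_k.
assert E(2, [1, 2, 3, 4], X, {}) == 101          # e_2(2,3,5,7) = 101
# For a two-element set, E_2({i,j}) = X_i X_j + wp(lambda_{ij}).
assert E(2, [1, 2], X, {frozenset((1, 2)): 11}) == 2 * 3 + 11
# A 4-element set has 3 perfect matchings.
assert len(list(matchings([1, 2, 3, 4]))) == 3
```

The $E_2$ identity in the last check follows directly from the recursion with $E_0=1$ and $E_1(\{i\})=X_i$.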
In the following, the symbol $I_1\cdots I_d \subset_m I$ means that $I_1,\ldots,I_d \subset I$ are disjoint and $\# I_1 + \cdots + \# I_d=m.$ Here, some of $I_1, \ldots, I_d$ may be empty. Let us consider the decompositions: $$\begin{aligned} F_k(I) &=& \sum_{l=0}^{[k/2]}\sum_{I_0\subset_{2l} I}\phi(I_0) \sum_{I_1\ldots I_d \subset_{k-2l} I\setminus I_0} \lla I_1 \; | \; j_1 \rra \lla I_2 \; | \; j_2 \rra \cdots \lla I_d \; | \; j_d \rra \\ &=& A_1 + A_2, \\ F_k(I\cup \{ j \}) &=& \sum_{l=0}^{[k/2]}\sum_{I'_0\subset_{2l} I\cup \{ j \} }\!\!\!\! \phi(I'_0)\!\!\!\!\! \sum_{I'_2\ldots I'_d \subset_{k-2l} (I\cup \{ j \} ) \setminus I'_0} \!\!\!\!\!\!\!\!\!\!\!\!\! \lla I'_2 \; | \; j_2 \rra \lla I'_3 \; | \; j_3 \rra \cdots \lla I'_d \; | \; j_d \rra \\ &=& B_1 + B_2 + B_3, \\ \theta_jF_{k-1}(I) &=& \sum_{s\not=j}[js]\sum_{l=0}^{[(k-1)/2]}\sum_{I''_0\subset_{2l} I}\!\!\! \phi(I''_0) \!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-2l} I\setminus I''_0} \!\!\!\!\!\! \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \\ &=& C_1+C_2+C_3+C_4, \end{aligned}$$ where $A_1$ is the sum of terms with $I_1=\emptyset$; $A_2$ is the sum of terms with $I_1\not= \emptyset$; $B_1$ is the sum of the terms with $j\not\in I_0 \cup I'_2 \cup \cdots \cup I'_d$; $B_2$ is the sum of terms with $j\in I'_2 \cup \cdots \cup I'_d$; $B_3$ is the sum of terms with $j\in I''_0$; $C_1$ is the sum of terms with $s\in I\setminus (I''_0\cup I''_1 \cup \cdots \cup I''_d)$; $C_2$ is the sum of terms with $s\in I''_2 \cup \cdots \cup I''_d \cup J$; $C_3$ is the sum of terms with $s\in I''_0$; $C_4$ is the sum of terms with $s\in I''_1.$ Then we can see that $A_1=B_1,$ $A_2+C_1=0$ and $B_2=C_2$ by the same argument in [@Po]. 
Note that the formula in Lemma 3.2 holds only for $k\geq 2.$ For any subset $K=\{ k_1,\ldots,k_m \}$ with $j\not\in K,$ we have $$\begin{aligned} \lefteqn{\sum_{s\in K}\left( [js] \lla K \; | \; j \rra + \sum_{L\subset K\setminus \{ s \}} \wp(\lambda_{js})\lla L \; | \; s \rra \lla K\setminus L \setminus \{ s \} \; | \; j \rra \right)} \\ &=& \sum_{s\in K} \left( [js][sj]\lla K\setminus \{ s \} \; | \; j \rra + [js] \sum_{w\in S_m, k_{w(1)}\not=s} [ k_{w(1)}\; j] \cdots [k_{w(k)} \; j ] \right. \\ & & \left. + \wp(\lambda_{js})\lla K \setminus \{ s \} \; | \; j \rra + \sum_{L\subset K\setminus \{ s \},L\not=\emptyset} \wp(\lambda_{js})\lla L \; | \; s \rra \lla K\setminus L \setminus \{ s \} \; | \; j \rra \right) \\ &=& \sum_{s\in K}\wp(x_{js}) \lla K \setminus \{ s \} \; | \; j \rra \end{aligned}$$ from Lemma 3.2 and $[ij]^2=\psi_{ij}(x_{ij})-Ax_{ij}^{-2}= -(\wp(x_{ij})-\wp(\lambda_{ij})).$ This shows $$\begin{aligned} \lefteqn{C_3+C_4+\sum_{i\in I}\wp(\lambda_{ij})F_{k-2}(I\setminus \{ i \})} \\ & = & \sum_{l=1}^{[(k-1)/2]}\!\!\! \sum_{I''_0\subset_{2l} I} \sum_{s\in I''_0} \phi(\{ j \}\cup I''_0 \setminus \{ s \})\cdot [js] \!\! \sum_{I''_1\cdots I''_d \subset_{k-1-2l}I\setminus I_0''}\!\!\!\!\!\! \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & + & \!\!\!\!\!\! \sum_{l=0}^{[(k-1)/2]}\!\!\!\!\! \sum_{I''_0\subset_{2l} I}\!\! \phi(I''_0) \!\!\! 
\sum_{I''_1\cdots I''_d \subset_{k-1-2l}I\setminus I''_0}\sum_{s\in I''_1} \wp(x_{js}) \lla I''_1 \setminus \{ s \} | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & = & -\sum_{l=1}^{[(k-1)/2]}\sum_{I''_0\subset_{2l} I,j\in I''_0}\phi(I_0'') \sum_{I''_1\cdots I''_d \subset_{k-2l}I\setminus I''_0, I''_1\not=\emptyset} \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & + & \sum_{l=0}^{[(k-1)/2]}\sum_{I''_0\subset_{2l+2} I,j\in I''_0}\phi(I_0'') \sum_{I''_1\cdots I''_d \subset_{k-2l-2}I\setminus I''_0} \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & = & \sum_{l=1}^{[k/2]}\sum_{I''_0\subset_{2l} I,j\in I''_0}\phi(I_0'') \sum_{I''_2\cdots I''_d \subset_{k-2l}I\setminus I''_0} \lla I''_2 \; | \; j_2 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & = & B_3. \end{aligned}$$ [**Example.**]{} One has the following formula for $E_3(\theta_1,\theta_2,\theta_3)$ in $\tilde{\E}_5(\psi_{ij}):$ $$\theta_1\theta_2\theta_3+\wp(\lambda_{23})\theta_1+\wp(\lambda_{13})\theta_2 + \wp(\lambda_{12})\theta_3=$$ $$\sum_{(**)}[a_1\; b_1][a_2\; b_2][a_3\; b_3]+\psi_{12}(x_{12})([34]+[35])+\psi_{13}(x_{13})([24]+[25])+ \psi_{23}(x_{23})([14]+[15]),$$ where $(**)$ stands for the condition that $\{ a_1,a_2,a_3 \} =\{ 1,2,3 \};$ $b_1,b_2,b_3\in \{ 4,5 \}$ and $b_1\leq b_2 \leq b_3.$ Degenerations ============= Some variants of the cohomology ring of the flag variety $$Fl_n= SL_n(\C)/\textrm{(upper triangular matrices)}$$ have the model as the commutative subalgebra in deformations of the quadratic algebra $\E_n.$ We see how the deformations of $\E_n$ used for the constructions of the model of the cohomology rings can be recovered as degenerations of our algebra $\tilde{\E}_n(\psi_{ij}).$ Let $T\subset SL_n(\C)$ be the torus consisting of the diagonal matrices. 
We identify the polynomial ring $R=\Z[x_1,\ldots,x_n]$ with the $T$-equivariant cohomology ring $H_T(\textrm{pt.}).$ The authors introduced the extended quadratic algebra $\tilde{\E}_n \langle R \rangle [t]$ to construct a model of the $T$-equivariant cohomology ring $H_T(Fl_n)$ in [@KM2]. In the case where $\psi_{ij}(z)=0$ for any distinct $i$ and $j,$ the algebra $\tilde{\E}_n(\psi_{ij}=0)$ is defined by the relations $[ij]^2=0,$ $[ij][kl]=[kl][ij]$ for $\{ i,j \} \cap \{ k,l \} = \emptyset,$ $[ij][jk]+[jk][ki]+[ki][ij]=0$ and $[ij]x_i=x_j[ij].$ Since these relations are the same as the defining relations for the algebra $\tilde{\E}_n \langle R \rangle [t]|_{t=0}$ introduced in [@KM2], the $\C$-subalgebra of $\tilde{\E}_n(\psi_{ij}=0)$ generated by $[ij]$’s and $x_1,\ldots,x_n$ is isomorphic to $\tilde{\E}_n \langle R \rangle [t]|_{t=0}.$ The subsequent result shows that the elements $$\theta'_i:=x_i+\theta_i=x_i+\sum_{j\not=i}[ij], \; \; \; i=1,\ldots,n,$$ generate a commutative $R$-subalgebra of $\tilde{\E}_n(\psi_{ij}=0)\otimes_{R^{S_n}}R$ which is isomorphic to the $T$-equivariant cohomology ring $H_T(Fl_n).$ [([@KM2 Corollary 2.2])]{} Let $I$ be a subset of $\{ 1,\ldots,n \}.$ In the algebra $\tilde{\E}_n(\psi_{ij}=0),$ one has $$e_k(\theta'_i\; | \; i\in I) = \sum_{m=0}^k\sum_{I_0\subset_m I}(\prod_{i\in I_0}x_i) \sum_{(*)}[a_1\; b_1]\cdots [a_{k-m}\; b_{k-m}] ,$$ where $(*)$ stands for the conditions that $a_i \in I\setminus I_0$; $b_i\not\in I$; $a_1,\ldots,a_{k-m}$ are distinct; $b_1\leq \cdots \leq b_{k-m}.$ In particular, one has $$e_k(\theta'_1,\ldots,\theta'_n)=e_k(x_1,\ldots,x_n), \; \; \; 1\leq k \leq n.$$ [*Proof.*]{} The idea is similar to the proof of Theorem 3.1. Denote by $F'_k(I)$ the right-hand side of (11). For $j\not\in I,$ we will show that $$F'_k(I\cup \{ j \})=F'_k(I)+ F'_{k-1}(I)(x_j+\theta_j).$$ We use the same notation as the one used in the proof of Theorem 3.1.
Let us consider the decompositions: $$\begin{aligned} F'_k(I) &=& \sum_{m=0}^{k}\sum_{I_0\subset_m I}(\prod_{i\in I_0}x_i) \sum_{I_1\ldots I_d \subset_{k-m} I\setminus I_0} \lla I_1 \; | \; j_1 \rra \lla I_2 \; | \; j_2 \rra \cdots \lla I_d \; | \; j_d \rra \\ &=& A'_1 + A'_2, \\ F'_k(I\cup \{ j \}) &=& \sum_{m=0}^{k}\sum_{I'_0\subset_m I\cup \{ j \} }\! (\prod_{i\in I'_0}x_i)\!\!\!\!\! \sum_{I'_2\ldots I'_d \subset_{k-m} (I\cup \{ j \} ) \setminus I'_0} \!\!\!\!\!\!\!\!\!\!\!\!\! \lla I'_2 \; | \; j_2 \rra \lla I'_3 \; | \; j_3 \rra \cdots \lla I'_d \; | \; j_d \rra \\ &=& B'_1 + B'_2 + B'_3, \\ F'_{k-1}(I)\theta_j &=& \sum_{m=0}^{k-1}\sum_{I''_0\subset_m I}\! (\prod_{i\in I''_0}x_i) \!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0} \!\!\!\!\!\! \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \sum_{s\not=j}[js] \\ &=& C'_1+C'_2+C'_3+C'_4, \end{aligned}$$ where $A'_1$ is the sum of terms with $I_1=\emptyset$; $A'_2$ is the sum of terms with $I_1\not= \emptyset$; $B'_1$ is the sum of the terms with $j\not\in I_0 \cup I'_2 \cup \cdots \cup I'_d$; $B'_2$ is the sum of terms with $j\in I'_2 \cup \cdots \cup I'_d$; $B'_3$ is the sum of terms with $j\in I''_0$; $C'_1$ is the sum of terms with $s\in I\setminus (I''_0\cup I''_1 \cup \cdots \cup I''_d)$; $C'_2$ is the sum of terms with $s\in I''_2 \cup \cdots \cup I''_d \cup J$; $C'_3$ is the sum of terms with $s\in I''_0$; $C'_4$ is the sum of terms with $s\in I''_1.$ Moreover, we decompose $F'_{k-1}(I)x_j$ as follows: $$\begin{aligned} F'_{k-1}(I)x_j & = & \sum_{m=0}^{k-1}\sum_{I''_0\subset_m I} (\prod_{i\in I''_0}x_i) \!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0} \!\!\!\!\!\! 
\lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra x_j \\ &=& D'_1+D'_2, \end{aligned}$$ where $D'_1$ is the sum of terms with $I''_1=\emptyset$ and $D'_2$ is the sum of terms with $I''_1 \not= \emptyset.$ As before, we can easily see that $A'_1=B'_1,$ $A'_2+C'_1=0$ and $B'_2=C'_2.$ It is also clear that $B'_3=D'_1.$ Since the relations $[ij]^2=0$ are assumed, the degenerate version of the formula (9), which is same as [@FK Lemma 7.2], holds in $\tilde{\E}_n(\psi_{ij}=0):$ $$\sum_{a=1}^k [i_a \; m][i_{a+1} \; m] \cdots [i_k \; m] \cdot [i_1 \; m] [i_2 \; m] \cdots [i_{a-1} \; m] [i_a \; m] =0, \; \; \textrm{for $k\geq 1.$}$$ This formula implies $C'_4=0.$ Finally, the following computation completes the proof: $$\begin{aligned} \lefteqn{C'_3+D'_2} \\ & = & \sum_{m=1}^{k-1}\sum_{I''_0\subset_m I} (\prod_{i\in I''_0}x_i) \!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0} \!\!\!\!\!\! \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra \sum_{s \in I''_0}[js] \\ & + & \sum_{m=0}^{k-1}\sum_{I''_0\subset_m I} (\prod_{i\in I''_0}x_i) \!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0,I''_1\not=\emptyset} \!\!\!\!\!\! \lla I''_1 \; | \; j_1 \rra \cdots \lla I''_d \; | \; j_d \rra x_j \\ & = & \!\!\! -\sum_{m=1}^{k-1}\sum_{I''_0\subset_m I} (\prod_{i\in I''_0}x_i) \!\!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0} \sum_{s \in I''_0} \lla I''_1 \; | \; j_1 \rra [sj] \lla I_2 \; | \; j_2 \rra \cdots \lla I''_d \; | \; j_d \rra \\ & + & \!\!\!\! \sum_{m=0}^{k-1}\sum_{I''_0\subset_m I} (\prod_{i\in I''_0}x_i) \!\!\!\!\! \sum_{I''_1\ldots I''_d \subset_{k-1-m} I\setminus I''_0,I''_1\not=\emptyset} \sum_{s\in I''_1}x_s\lla I''_1\setminus \! \{ s \} | j_1 \rra [sj] \lla I_2 | j_2 \rra \cdots \lla I''_d | j_d \rra \\ & = & 0. \end{aligned}$$ Let us consider another kind of degeneration. Consider the elliptic solution obtained in Proposition 2.1 (i). 
If we put $K=\kappa \delta^2$ and $\lambda_{ij}=\delta \Lambda_{ij},$ then we have $$\lim_{\delta \rightarrow 0} \psi_{ij}(x_{ij})=Ax_{ij}^{-2}+\kappa \Lambda_{ij}^{-2}.$$ In this situation, the functions $[ij]^2=\psi_{ij}(x_{ij})-Ax_{ij}^{-2}$ become central parameters $\kappa \Lambda_{ij}^{-2}.$ Then the $\C$-algebra generated by the brackets $[ij]$ in $\tilde{\E}_n(\psi_{ij}=\kappa \Lambda_{ij}^{-2})$ is isomorphic to the multiparameter deformation of $\E_n$ introduced in [@FK Section 15], which is denoted by $\E_n^p$ in [@Po], under the identification of the parameters $p_{ij}=p_{ji}=\kappa \Lambda_{ij}^{-2}.$ In this case, the functions $\phi(I)$ are constantly zero, so Theorem 3.1 is reduced to the following: [([@FK Conjecture 15.1], [@Po Theorem 3.1])]{} Assume that the functions $\psi_{ij}$ are chosen as in Proposition 2.1 (iii) with $K=\kappa \delta^2,$ $\lambda_{ij}=\delta \Lambda_{ij}.$ In the limit $\delta \rightarrow 0,$ one has $$E_k(\theta_i , i\in I ; p)= \sum_{(*)'} [a_1 \; b_1] \cdots [a_k \; b_k] ,$$ where $(*)'$ stands for the conditions that $a_i \in I$; $b_i \not\in I$; $a_1,\ldots ,a_k$ are distinct; $b_1\leq \cdots \leq b_k.$ [99]{} N. Andruskiewitsch and H.-J. Schneider, [*Finite quantum groups and Cartan matrices,*]{} Adv. Math., [**154**]{} (2000), 1-45. Y. Bazlov, [*Nichols-Woronowicz algebra model for Schubert calculus on Coxeter groups,*]{} J. Algebra, [**297**]{} (2006), 372-399. V. M. Buchstaber, G. Felder and A. P. Veselov, [*Elliptic Dunkl operators, root systems, and functional equations,*]{} Duke Math. J., [**76**]{} (1994), no. 3, 885-911. I. Cherednik, [*Double Affine Hecke algebras,*]{} London Math. Soc. Lecture Notes Series 319, Cambridge Univ. Press, Cambridge, 2005. C. F. Dunkl, [*Differential-difference operators associated to reflection groups,*]{} Trans. Amer. Math. Soc., [**311**]{} (1989), 167-183. S. Fomin and A. N. Kirillov, [*Quadratic algebras, Dunkl elements and Schubert calculus,*]{} Advances in Geometry, (J.-L. 
Brylinski, R. Brylinski, V. Nistor, B. Tsygan, and P. Xu, eds. ) Progress in Math., [**172**]{}, Birkhäuser, 1995, 147-182. A. N. Kirillov, [*On some quadratic algebras, II*]{}, Preprint. A. N. Kirillov and T. Maeno, [*Noncommutative algebras related with Schubert calculus on Coxeter groups,*]{} European J. Combin., [**25**]{} (2004), 1301-1325. A. N. Kirillov and T. Maeno, [*Extended quadratic algebra and a model of the equivariant cohomology ring of flag varieties,*]{} Proc. of 19th International Conference on Formal Power Series and Algebraic Combinatorics, Tianjin, China, 2007. Y. Komori, [*Elliptic Ruijsenaars operators and functional equations,*]{} J. Math. Phys., [**43**]{} (2002), 5637-5653. S. Majid, [*Free braided differential calculus, braided binomial theorem, and the braided exponential map,*]{} J. Math. Phys., [**34**]{}(10) (1993), 4843-4856. S. Majid, [*Noncommutitive differentials and Yang-Mills on permutation groups $S_N,$*]{} Hopf algebras in noncommutative geometry and physics, 189-213, Lecture Notes in Pure and Appl. Math., 239, Dekker, New York, 2005. A. Milinski and H.-J. Schneider, [*Pointed indecomposable Hopf algebras over Coxeter groups,*]{} Contemp. Math., [**267**]{} (2000), 215-236. W. D. Nichols, [*Bialgebras of type one,*]{} Comm. Algebra, [**6**]{} (1978), 1521-1552. A. Postnikov, [*On a quantum version of Pieri’s formula,*]{} Advances in Geometry, (J.-L. Brylinski, R. Brylinski, V. Nistor, B. Tsygan and P. Xu, eds.) Progress in Math., [**172**]{} Birkhäuser, 1995, 371-383. S. L. Woronowicz, [*Differential calculus on compact matrix pseudogroups (quantum groups),*]{} Commun. Math. Phys., [**122**]{} (1989), 125-170. 
Research Institute for Mathematical Sciences\ Kyoto University\ Sakyo-ku, Kyoto 606-8502, Japan\ e-mail: [kirillov@kurims.kyoto-u.ac.jp]{}\ URL: [http://www.kurims.kyoto-u.ac.jp/\~kirillov]{}\ Department of Electrical Engineering,\ Kyoto University,\ Sakyo-ku, Kyoto 606-8501, Japan\ e-mail: [maeno@kuee.kyoto-u.ac.jp]{} [^1]: Supported by Grant-in-Aid for Scientific Research.
---
abstract: 'Picat, a new member of the logic programming family, follows a different doctrine than Prolog in offering the core logic programming concepts: arrays and maps as built-in data types; implicit pattern matching with explicit unification and explicit non-determinism; functions for deterministic computations; and loops for convenient scripting and modeling purposes. Picat provides facilities for solving combinatorial search problems, including a common interface with CP, SAT, and MIP solvers, tabling for dynamic programming, and a module for planning. Picat’s planner module, which is implemented by the use of tabling, has produced surprising and encouraging results. Thanks to term-sharing and resource-bounded tabled search, Picat overwhelmingly outperforms the cutting-edge ASP and PDDL planners on the planning benchmarks used in recent ASP competitions.'
title: Combinatorial Search With Picat
---

\[firstpage\]

Introduction
============

Picat is a simple, yet powerful, logic-based multi-paradigm programming language. The desire for a logic-based general-purpose programming language that is as powerful as Python for scripting, and on a par with OPL [@Hentenryck02] and MiniZinc [@NethercoteSBBDT07] for modeling combinatorial problems, led to the design of Picat. Early attempts to introduce arrays and loops into Prolog for modeling failed to produce a satisfactory language: most notably, array accesses are treated as functions only in certain contexts; and loops require the declaration of global variables in ECLiPSe [@Schimpf02] and local variables in B-Prolog [@Zhou12]. Picat departs from Prolog in many aspects, including the successful introduction of arrays and loops. Picat uses pattern-matching rather than unification in the selection of rules. Unification might be a natural choice in Horn clause resolution [@kowalski1971] for theorem proving, but its power is rarely needed for general programming tasks.
Pattern-matching rules are fully indexed, and therefore Picat can be more scalable than Prolog. Unification can be considered as an equation over terms [@Colmerauer84], and just like constraints over finite domains, Picat supports unification as an explicit call. Non-determinism, a powerful feature of logic programming, makes concise solutions possible for many problems, including simulation of non-deterministic automata, parsers of ambiguous grammars, and search problems. Nevertheless, non-determinism is not needed for deterministic computations. In Prolog, Horn clauses are backtrackable by default. As it is undecidable to detect determinism in general [@Debray88], programmers tend to use the cut operator excessively to prune unnecessary clauses. Picat supports explicit non-determinism, which renders the cut operator unnecessary. Rules are deterministic unless they are explicitly denoted as backtrackable. Picat supports functions, like many other logic-based languages, such as Curry [@Hanus13], Erlang [@Armstrong13], and Mozart-Oz [@RoyH2004]. In Prolog, queries often fail, and the system gives no clue about the source of the failure. Functions should be used instead of relations, unless multiple answers are required. Functions are more convenient to use than predicates because (1) functions are guaranteed to succeed with a return value; (2) function calls can be nested; and (3) the directionality of functions enhances the readability. Many combinatorial problems can be formulated as constraint satisfaction problems (CSPs). There are three kinds of systematic solvers for solving CSPs, namely, Constraint Programming (CP), Mixed Integer Programming (MIP), and SAT solving. CP uses constraint propagation to prune search spaces, and uses heuristics to guide search [@Rossi06]. MIP relies on LP relaxation and branch-and-cut to find optimal integer solutions [@Appa10].
SAT performs unit propagation and clause learning to prune search spaces, and employs heuristics and learned clauses to perform non-chronological backtracking [@MalikZ09]. No solver is superior all the time; sometimes, extensive experimentation is necessary to find a suitable solver. Picat provides a common interface with CP, SAT, and MIP solvers for solving CSPs. For each solver, Picat provides a separate module of built-ins for creating decision variables, specifying constraints, and invoking the solver. The common interface allows for seamless switching from one solver to another. The basic language constructs, such as arrays and loops, make Picat a powerful modeling language for these solvers. Tabling [@warren92] can be used to cache the results of certain calculations in memory and reuse them in subsequent calculations through a quick table lookup. As computer memory grows, tabling is becoming increasingly important for offering dynamic programming solutions for many problems. Picat’s tabling system is inherited from B-Prolog [@zhou08tab]. Picat has a planner module. For a planning problem, the programmer only needs to specify conditions on the final states and the set of actions, and to call the planner on an initial state to find a plan or an optimal plan. The planner, which is implemented by the use of tabling, performs a state-space search and tables every state that is encountered during search. A joint effort by the system and the programmer is needed to deal with the state explosion problem. The Picat system stores all structured ground terms in a table, so ground terms that are shared by states are only tabled once. The enhanced [*hash-consing*]{} technique [@ZhouH12] also stores hash codes in order to speed up computation of hash codes and equality tests of terms. The Picat system also performs [*resource-bounded tabled search*]{}, which prunes parts of the search space that cannot lead to acceptable plans. 
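Hash-consing is a general technique, independent of Picat: every structured term is kept in a global table keyed by its contents, so structurally equal terms are stored only once and term equality reduces to a pointer comparison. The Python sketch below is our own minimal illustration of the idea, not Picat's actual implementation (which additionally caches hash codes in terms).

```python
# Minimal hash-consing sketch: terms are interned tuples (functor, arg1, ...),
# so structurally equal terms share one representative in memory.
_table = {}

def hcons(functor, *args):
    """Return the canonical representative of the term functor(args...).
    Arguments must themselves be hash-consed terms or atomic values."""
    key = (functor,) + args
    # setdefault stores key on first sight and returns the stored copy after
    return _table.setdefault(key, key)

t1 = hcons("f", hcons("g", 1), 2)
t2 = hcons("f", hcons("g", 1), 2)
assert t1 is t2            # shared: equality is now a pointer comparison
assert len(_table) == 2    # only g(1) and f(g(1),2) are stored
```

In a tabled state-space search, states built from such interned terms share all common substructure, which is what keeps the table memory manageable.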
In order to exploit these techniques, the programmer needs to design a good representation for states that facilitates sharing and removes symmetries. For certain problems, the programmer can also employ domain knowledge and heuristics to help prune the search space. Picat’s planner has produced surprising and encouraging results. It overwhelmingly outperforms the cutting-edge ASP and PDDL planners on many benchmarks used in recent ASP and IPC competitions. The Picat encodings of the benchmarks, which are as compact as the ASP and PDDL encodings, are available at [picat-lang.org](picat-lang.org). This paper gives an overview of Picat’s facilities for combinatorial search. It also offers a glimpse of the language features. The readers are referred to [@PicatGuide; @Kjellerstrand14] for the details of the language.

\[sec:overview\]An Overview of Picat
====================================

Picat follows a different doctrine than Prolog in offering the core logic programming concepts. This section gives a brief overview of Picat’s basic language constructs. The facilities for combinatorial search, including tabling, solver modules for CSPs, and a module for planning, will be described later. Other features of Picat, which are not covered in this overview, include assignments, global maps, action rules for defining event-driven actors, a simple module system, modules for everyday programming tasks ([basic]{}, [math]{}, [io]{}, [util]{}, and [os]{}), and a module for probabilistic reasoning and learning with PRISM [@prism:website].

Logic Variables and Data Types
------------------------------

A logic variable is a value holder, and a value is a term, which can be another variable. In addition to the basic data types in Prolog, Picat also provides strings, arrays, and maps. A double-quoted string is represented as a list of single-character atoms, and all of the built-ins on lists, such as the concatenation function `++`, can also be applied to strings.
An *array* takes the form `{t_1,\ldots,t_{n}}`. In the current implementation, an array is a special structure with the name `{}`. A *map* is a hash-table that is represented as a structure, containing a set of key-value pairs. Picat allows function calls in arguments. For this reason, it requires structures to be preceded with a dollar sign in order for them to be treated as data. For example, `$student(mary,cs,3.8)` is a structure, not a function call. Special structures, such as [(A,B)]{} and [(A;B)]{}, as well as head patterns, are not required to have a dollar sign. For each type, Picat provides a set of built-in functions and predicates. The index notation `X[I]`, where $X$ references a compound value and $I$ is an integer expression, is a special function that returns the component of $X$ at index $I$. The index of the first element of a list or a structure is 1. Picat also allows OOP notations for accessing attributes and for calling predicates and functions. The notation `A_1.f(A_2,\ldots,A_k)` is the same as `f(A_1,A_2,\ldots,A_k)`, unless $A_1$ is an atom, in which case $A_1$ must be a module qualifier for $f$. The notation $A.Attr$, where $Attr$ is not in the form $f(\ldots)$, is the same as the function call `get(A,Attr)`. A structure is assumed to have two attributes called `name` and `length`.

Pattern-matching Rules and Explicit Non-determinism
---------------------------------------------------

In Picat, predicates and functions are defined with pattern-matching rules. Picat has two types of rules: the non-backtrackable rule $Head, Cond\ $`=>`$\ Body$, and the backtrackable rule $Head, Cond\ $`?=>`$\ Body$. In a predicate definition, the $Head$ takes the form $p(t_1,\ldots,t_n)$, where $p$ is called the predicate name, and $n$ is called the arity. The condition $Cond$, which is an optional goal, specifies a condition under which the rule is applicable.
For a call $C$, if $C$ matches $Head$ and $Cond$ succeeds, then the rule is said to be *applicable* to $C$. For a head in which a variable occurs more than once, such as [p(X,X)]{}, a call matches the pattern only if the arguments are identical. Unlike the pattern matching that is used in concurrent logic languages [@SHA89], a call fails rather than freezes when it contains insufficiently instantiated arguments. A pattern can contain *as-patterns* in the form `V@Pattern`, where $V$ is a new variable in the head, and $Pattern$ is a non-variable term. The as-pattern `V@Pattern` is the same as `Pattern` in pattern matching, but after pattern matching succeeds, $V$ is made to reference the term that matches $Pattern$. As-patterns can be used to avoid re-constructing existing terms. When applying a rule to call $C$, Picat rewrites $C$ into $Body$. If the used rule is non-backtrackable, then the rewriting is a commitment, and the program can never backtrack to $C$. However, if the used rule is backtrackable, then the program will backtrack to $C$ once $Body$ fails, meaning that $Body$ will be rewritten back to $C$, and the next applicable rule will be tried on $C$. Pattern matching does not change the status of the variables in a call. In order to bind a variable $X$ in a call to a value $Y$, users can call the unification $X=Y$. While it is not illegal to bind variables in $Cond$, $Cond$ normally contains only tests, and all unification calls should be written in $Body$. For example,

member(X,[Y|_]) ?=> X=Y.
member(X,[_|L]) => member(X,L).

The first rule is backtrackable. This predicate can be used to retrieve elements from a given list one by one through backtracking.

Functions
---------

A function call always succeeds with a return value, unless an exception occurs. Functions are defined with non-backtrackable rules in the form $F$`=`$Exp, Cond\ $`=>`$\ Body$, where $F$ is a function pattern in the form $f(t_1,\ldots, t_n)$, and $Exp$ is an expression.
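The backtrackable rule of the `member` predicate above plays the role of a choice point. For readers more familiar with mainstream languages, roughly the same behaviour can be mimicked with Python generators, where each `yield` corresponds to a backtrackable answer and exhausting the generator corresponds to backtracking. This is an informal analogy of ours, not a translation of Picat's semantics (in particular, generators do not undo bindings).

```python
def member(lst):
    """Rough analogue of backtrackable member/2: each yielded value is one
    answer; the caller 'backtracks' by asking for the next one."""
    for x in lst:
        yield x

def pairs_summing_to(lst, target):
    """Non-deterministic search: enumerate (x, y) with x + y == target,
    nesting two choice points just as nested calls to member/2 would."""
    for x in member(lst):
        for y in member(lst):
            if x + y == target:
                yield (x, y)

assert list(member([3, 1, 2])) == [3, 1, 2]
assert list(pairs_summing_to([1, 2, 3], 4)) == [(1, 3), (2, 2), (3, 1)]
```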
When both $Cond$ and $Body$ are [true]{}, the rule can be written as $F$`=`$Exp$. Functions are compiled into predicates. A function call never fails due to failures in pattern matching. If no rule is applicable to a function call, then the system throws an [unresolved\_function\_call]{} exception.

Loops and List Comprehension
----------------------------

Picat allows loops in rule bodies. Loops are compiled into tail-recursive predicates. A `foreach` statement takes the form

`foreach (E_1 in D_1, Cond_1, \ldots, E_n in D_n, Cond_n)`\
$Goal$\
`end`

where each iterator, $E_i\ in\ D_i$, can be followed by an optional condition $Cond_i$. Within each iterator, $E_i$ is an iterating pattern, and $D_i$ is an expression that gives a compound value. The `foreach` statement means that $Goal$ is executed for every possible combination of values $E_1 \in D_1$, $\ldots$, $E_n \in D_n$ that satisfies the conditions `Cond_1`, $\ldots$, `Cond_n`. Picat adopts the following simple scoping rule: [*variables that occur only in a loop, but do not occur before the loop in the outer scope, are local to each iteration of the loop*]{}. For example,

p(A) =>
    q(X),
    foreach (I in 1 .. A.length)
        A[I] = (X,Y)
    end.

The loop unifies each element [A\[I\]]{} of array [A]{} with a tuple [(X,Y)]{}, where [X]{} is global and is the same for every iteration, and [Y]{} is local and is new to each iteration. A list comprehension, which takes the following form, is a special functional notation for creating lists:

`[T : E_1 in D_1, Cond_1, \ldots, E_n in D_n, Cond_n]`

where $T$ is an expression. This list comprehension means that for every tuple of values $E_1 \in D_1$, $\ldots$, $E_n \in D_n$, if the conditions are true, then the value of $T$ is added into the list. Picat supports the assignment operator [:=]{}. The assignment $X$ [:=]{} $Y$, where $X$ is a variable, does not actually assign the value of $Y$ to $X$.
It creates a new variable for $X$ to hold the value of $Y$. After the assignment, whenever $X$ is accessed in the body, the new variable is accessed. With assignments, a list comprehension can be easily compiled into a [foreach]{} loop that uses an assignment to accumulate the list. Loops are convenient for scripting and modeling purposes. Figure \[fig:loops\] gives three example functions that would be difficult to write without using loops or list comprehension.

    power_set([]) = [[]].
    power_set([H|T]) = P1++P2 =>
        P1 = power_set(T),
        P2 = [[H|S] : S in P1].

    perm([]) = [[]].
    perm(Lst) = [[E|P] : E in Lst, P in perm(Lst.delete(E))].

    matrix_multi(A,B) = C =>
        C = new_array(A.length,B[1].length),
        foreach (I in 1..A.length, J in 1..B[1].length)
            C[I,J] = sum([A[I,K]*B[K,J] : K in 1..A[1].length])
        end.

A Common Interface With CP, SAT, and MIP
========================================

Picat provides three solver modules: `cp`, `sat`, and `mip`. Each of the three solver types has its strengths and weaknesses. In practice, extensive experimentation is required in order to determine a proper model and to find a suitable solver. All three modules implement the same interface, which makes it seamless to switch from one solver to another.

The Common Interface
--------------------

The common interface consists of built-ins for creating decision variables, specifying constraints, and invoking the imported solver. In order to use a solver, users must first import the module. A decision variable is a logic variable with a domain. The domain constraint [$Vs$ :: $D$]{} narrows the domains of the variables $Vs$ to $D$. $Vs$ is a variable, a list of variables, or an array of variables. $D$ is an expression that gives a list of integers. An arithmetic constraint takes the form $E_1\ R\ E_2$, where $E_1$ and $E_2$ are two arithmetic expressions, and $R$ is one of the following constraint operators: `#=` (equal), `#!=` (not equal), `#>=`, `#>`, `#=<` (`#<=`), and `#<`.
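As a small self-contained sketch (ours, not from the paper; the predicate name `digits` is made up), the domain and arithmetic constraints above can be combined as follows:

```
import cp.

% find digits X < Y with X + Y = 10
digits(X,Y) =>
    [X,Y] :: 1..9,
    X + Y #= 10,
    X #< Y,
    solve([X,Y]).
```

On backtracking, `solve` enumerates the remaining solutions of the constraint store one by one.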
An arithmetic expression is made from integers, domain variables, and built-in arithmetic functions. A basic Boolean expression is made from constants (0 and 1), Boolean variables, and the following operators: `#/\` (and), `#\/` (or), `#~` (not), `#^` (xor), `#<=>` (equivalent), and `#=>` (implication). An extended Boolean expression can also include arithmetic and domain constraints as operands. In particular, the constraint `B #<=> (E1 #= E2)` is called a [*reification*]{} constraint, which uses a Boolean variable [B]{} to indicate the satisfiability of the arithmetic constraint `E1 #= E2`. A *table constraint*, or an *extensional constraint*, over a tuple of variables specifies a set of tuples that are allowed ([table\_in]{}) or disallowed ([table\_notin]{}) for the variables. The interface also contains the commonly used global constraints, such as the [all\_different]{}, [element]{}, [circuit]{}, and [cumulative]{} constraints. The built-in predicate `solve(Options, Vars)` calls the imported solver to label the variables $Vars$ with values, where $Options$ is a list of options for the solver. When the option [min($E$)]{} or [max($E$)]{} is included, the solver returns an optimal answer. Figure \[fig:queens\] gives a program for the N-queens problem.

    import cp.

    queens(N, Q) =>
        Q = new_list(N),
        Q :: 1..N,
        all_different(Q),
        all_different([$Q[I]-I : I in 1..N]),
        all_different([$Q[I]+I : I in 1..N]),
        solve([ff],Q).

Implementation of the Solver Modules
------------------------------------

An underlying solver is used for each of the solver modules: the [cp]{} module uses a solver inherited from B-Prolog; the [sat]{} module uses Lingeling[^1] on Linux and MiniSat[^2] on Windows; the [mip]{} module uses GLPK[^3]. For the [cp]{} module, constraints are compiled into propagators that are defined in the AR (Action Rules) language [@zhou06ar], which are compiled further into abstract machine instructions. The abstract machine provides native support for fast propagation.
In particular, it stores propagators on the stack for fast context switching and provides specialized instructions for encoding commonly used propagators [@zhou06ar]. The solver, which has competed in numerous solver competitions since 2005, is robust and efficient. For example, Picat solves the N-queens problem for N=1500 in less than 10 seconds on an Intel i5 machine. For the [sat]{} module, constraints are compiled into a logic formula in conjunctive normal form (CNF) for the underlying SAT solver. Picat employs the so-called [*log-encoding*]{} for compiling domain variables and constraints. For a domain variable, $\lceil \log_2(n)\rceil$ Boolean variables are used, where $n$ is the maximum absolute value of the domain. If the domain contains both negative and positive values, then another Boolean variable is used to encode the sign. Each combination of values of these Boolean variables represents a valuation for the domain variable. If there are holes in the domain, then disequality ($\neq$) constraints are generated in order to disallow assignments of those hole values to the variable. Equality and disequality constraints are flattened to two types of primitive constraints of the form $x>y$ and $x+y=z$, which are compiled further into logic comparators and adders in CNF. For other types of constraints, clauses are generated in order to disallow conflicting values for the variables. The same log-encoding is used by the FlatZinc SAT compiler [@Huang08]. Log-encoding has less propagation power than [*direct*]{} and [*support*]{} encodings for certain constraints [@Gavanelli07], but is much more compact than other encodings, including the [*order*]{} encoding, which is adopted by the Sugar [@TamuraTKB09] and the BEE [@MetodiC12] compilers. The [sat]{} module has solved many problems that are hard to solve with the [cp]{} module. The MIP solver is still the first choice for many Operations Research applications [@Appa10].
For the [mip]{} module, constraints are compiled into inequality ($\le$) constraints. The compilation follows the standard textbook recipe. For example, the constraint `X #!= Y` is first translated to `X #=< Y-1 #\/ X #>= Y+1`, which is then translated to `B1 #\/ B2`, where

    B1 #<=> (X #=< Y-1)
    B2 #<=> (X #>= Y+1)

The reification constraint `B #<=> (X #=< Y)` is compiled to `X-Y-M1*(1-B) #=< 0` and `Y-X+1-M2*B #=< 0`, where [M1]{} and [M2]{} are constants:

    M1 = ubd(X)-lbd(Y)+1
    M2 = ubd(Y)-lbd(X)+2

where [lbd(X)]{} is the lower bound of the domain of [X]{}, and [ubd(X)]{} is the upper bound.

Tabling for Dynamic Programming
===============================

The idea of tabling is to store tabled calls and their answers in a table, and to use the answers to resolve subsequent variant calls. This idea has been used in functional and logic programming for several decades, dating back to [@Michie68] and [@Tamaki86]. As computer memory grows and advanced implementation techniques are invented, tabling is becoming increasingly important for offering dynamic programming solutions for many problems. Picat’s tabling system is inherited from B-Prolog. In order to have all of the calls and answers of a predicate or a function tabled, users just need to add the keyword `table` before the first rule. Picat supports mode-directed tabling for dynamic programming problems [@GuoG08]. For a tabled predicate, users can give a *table mode declaration* in the form [table($M_{1},M_{2},\ldots,M_{n}$)]{}, where each $M_{i}$ is one of the following: a plus-sign (+) indicates input, a minus-sign (-) indicates output, `max` indicates that the corresponding argument is maximized, and `min` indicates that the corresponding argument is minimized. The last mode, $M_{n}$, can be `nt`, which indicates that the argument is not tabled.
Two types of data can be passed to a tabled predicate as an `nt` argument: (1) global data that are the same for all of the calls of the predicate, and (2) data that are functionally dependent on the input arguments. When a table mode declaration is provided, Picat only tables the current best answer for each tuple of input arguments. Picat uses linear tabling [@zhou08tab] to iteratively evaluate looping calls until an optimal answer is found. Mode-directed tabling assumes that the objective function grows or declines monotonically. For example, the following tabled predicate searches for a path with the maximum total sum from top to bottom in a triangle.

    table (+,+,max,nt)
    path(Row,Col,Sum,Tri),Row==Tri.length => Sum=Tri[Row,Col].
    path(Row,Col,Sum,Tri) ?=>
        path(Row+1,Col,Sum1,Tri),
        Sum = Sum1+Tri[Row,Col].
    path(Row,Col,Sum,Tri) =>
        path(Row+1,Col+1,Sum1,Tri),
        Sum = Sum1+Tri[Row,Col].

The triangle, which is represented as an array of arrays, is passed as an [nt]{} argument. If the current row is at the bottom of the triangle ([Row==Tri.length]{}), then the leaf value is returned. Otherwise, the predicate makes a non-deterministic choice between two branches, one going straight down, and the other going down to the adjacent number. This program is compact, and runs very fast. For the 100-row triangle that is provided by the Euler project,[^4] this program finds an answer in only 0.01 seconds on an Intel i5 machine. The above program can be generalized for classic planning. Given an initial state, a set of final states, and a set of possible actions, the classic planning problem is to find a plan that transforms the initial state to a final state. Figure \[fig:plan\] shows the framework of a tabled planner.

    table (+,-,min)
    plan(S,Plan,Cost),final(S) => Plan=[],Cost=0.
    plan(S,Plan,Cost) =>
        action(S,S1,Action,ActionCost),
        plan(S1,Plan1,Cost1),
        Plan = [Action|Plan1],
        Cost = Cost1+ActionCost.
The call [plan(S,Plan,Cost)]{} binds [Plan]{} to an optimal plan that can transform state [S]{} to a final state. The predicate [final(S)]{} succeeds if [S]{} is a final state, and the predicate [action]{} encodes the set of actions in the problem. The tabled program performs a state-space graph search: for a state that occurs in multiple branches in the search tree, the tabled program only expands it once. This framework demonstrated surprisingly good performance on the Sokoban problem [@ZhouD13], which was a benchmark used in the ASP and IPC competitions. The same framework was also used in a program for the Petrobras logistic problem [@BartakZ14]. The above framework performs depth-unbounded search. For many planning problems, branch and bound is useful for finding optimal solutions. Another argument can be added to the [plan]{} predicate in order to indicate the current resource limit. If the resource limit is negative, then the current branch can be pruned. The problem is determining which mode to use for the resource-limit argument. If it is treated as an input argument with the mode (+), then calls with the same state and different resource limits are no longer variants, and will be resolved separately. If the resource limit is passed as an [nt]{} argument, then the framework no longer guarantees completeness or soundness, because the [nt]{} argument is disregarded in variant checking, and once a call is completed with a failure it will fail forever, no matter how big the resource limit is. This problem is nicely fixed by the [*resource-bounded tabled search*]{} technique, which will be described in the next section.

The [planner]{} Module of Picat
===============================

Planning has been a target problem for logic programming since its inception.
The first logic programming language, PLANNER [@Hewitt69], was designed as “a language for proving theorems and manipulating models in a robot”, and planning has been an important problem domain for Prolog [@Kowalski79; @warplan]. Nevertheless, Prolog is not recognized as an effective tool for planning. Answer Set Programming (ASP), which is based on the satisfiability approach to planning [@KautzS92; @Rintanen12], has had more successes than Prolog in solving planning problems [@Lifschitz02; @gekakasc12a]. Other logic-based languages, including action languages [@DFP11] and transaction logic [@FodorK10], have also been designed for planning. The [planner]{} module of Picat is based on the framework given in Figure \[fig:plan\]. For a planning problem, users only need to specify conditions on the final states and the set of actions, and call one of the search predicates in the module on an initial state in order to find a plan or an optimal plan. The module provides predicates for both [*resource-unbounded*]{} search and [*resource-bounded*]{} search. The following two predicates perform resource-bounded search:

- `plan(S,Limit,Plan,PlanCost)`: This predicate, if it succeeds, binds $Plan$ to a plan that can transform state $S$ to a final state. $PlanCost$ is the cost of $Plan$, which cannot exceed $Limit$, a given non-negative integer.

- `best_plan(S,Limit,Plan,PlanCost)`: This predicate iteratively uses [plan/4]{} to search for an optimal plan, starting with the resource limit 0 and incrementally increasing the limit until a plan is found, or until the resource limit exceeds $Limit$, in which case the call fails.

The implementation of [plan/4]{} follows the framework in Figure \[fig:plan\]. The resource limit argument is treated in such a way that it is tabled but not used in variant checking.
This predicate searches for a plan by performing [*resource-bounded*]{} search, which only expands a state if the state is new and its resource limit is non-negative, or if the state has previously failed but the current occurrence has a higher resource limit than before. The implementation of [best\_plan]{} also takes advantage of the tabled states and their tabled resource limits. Unlike the IDA\* search algorithm [@Korf85], which starts a new round from scratch, Picat reuses the states that were tabled in the previous rounds: when the current state does not have a higher resource limit than the most recent occurrence, Picat immediately fails the state. The [planner]{} module also provides a function, named [current\_resource()]{}, which returns the resource limit of the current call to [plan/4]{}. This amount can be used to check against a heuristic value. If the heuristic estimate of the cost to travel from the current state to a final state is greater than the resource limit, then the current state should be failed. Figure \[fig:ricochet\] gives a program for the Ricochet Robots problem [@ButoLR05]. Given an $N\times N$ grid board with predefined horizontal and vertical barriers between some of the adjacent board positions, a set of robots of distinct colors on different board positions, and a target position, the goal of the game is to guide a robot of a given color to the target position via a sequence of robot moves. A robot can move horizontally or vertically from its current position. Once a direction is chosen, the robot moves in that direction until encountering an obstacle, i.e. a barrier, another robot, or the edge of the board. This problem is one of the benchmarks used in the ASP’13 Competition [@ASP13]. The ASP encoding for the Potassco solver is given in [@Gebser13]. 
A state is represented by a structure of the form

    s([CurLoc|TargetLoc], ORobotLocs)

where the first argument is a cons that holds the current position and the target position of the target robot, and the second argument is a sorted list of positions of the other robots. A state is final if the current position and the target position are the same. Note that colors of robots are not included in the representation, which makes non-target robots indistinguishable during search. This representation facilitates sharing, because lists are sorted and their common suffixes are only tabled once. This representation also breaks symmetries. Two configurations of non-target robots are treated as the same if they only differ by robots’ colors. This kind of symmetry is not easy to remove when only flat facts are used, as in ASP and PDDL. The actions are specified with two rules. The first rule chooses a stopping position for the target robot, and moves the target robot there. The predicate [choose\_move\_dest]{} non-deterministically chooses one of the four directions, and returns the position right before the first obstacle in that direction. On backtracking, it chooses an alternative direction. The second rule chooses a non-target robot to move.

    import planner.

    main =>
        init_state(S0),
        best_plan(S0,Plan),
        writeln(Plan).

    final(s([Loc|Loc],_)) => true.

    action(s([From|To],ORobotLocs),NextState,Action,ActionCost) ?=>
        NextState = $s([Stop|To],ORobotLocs),
        Action = [From|Stop],
        ActionCost = 1,
        choose_move_dest(From,ORobotLocs,Stop).
    action(s(FromTo@[From|_],ORobotLocs),NextState,Action,ActionCost) =>
        NextState = $s(FromTo,ORobotLocs2),
        Action = [RFrom|RTo],
        ActionCost = 1,
        select(RFrom,ORobotLocs,ORobotLocs1),
        choose_move_dest(RFrom,[From|ORobotLocs1],RTo),
        ORobotLocs2 = insert_ordered(ORobotLocs1,RTo).

The program can be improved by using a heuristic function.
At the end of each rule for [action]{}, the following condition can be added:

    current_resource() > heuristic_val(NextState)

This ensures that the resource limit is greater than the estimated number of steps required to transform [NextState]{} to a final state. For example, the current state is at least three steps away from the final state if the target robot is not in the same row or the same column, and the target position has no obstacle around it. Picat has demonstrated surprisingly good performance on many benchmarks. For the four planning benchmarks used in the ASP’13 competition ([*Nomystery*]{}, [*Ricochet*]{}, [*Sokoban*]{}, and [*Solitaire*]{}), Picat is one to three orders of magnitude faster than Potassco, the winner of the competition. FastDownward, a winner of IPC’11, also competed in the ASP’11 Model&Solve competition. The competition results on the planning benchmarks showed that FastDownward was not as competitive as the best-performing ASP solvers. On the Ricochet benchmark, both Picat and Potassco solved all 30 instances that were used in the ASP competition; on average, Potassco took 49.5 seconds per instance, while Picat took 9.3 seconds when no heuristic was used, and 2.2 seconds when the above heuristic was used.

Conclusion
==========

This paper has presented the Picat language, focusing on its modeling and solving power for combinatorial problems. Lorenz Schiffmann wrote the following in his review of an alpha release of Picat in June 2013, which nicely summarizes the features of Picat: [*The Picat language is really cool; it’s a very usable mix of logic, functional, constraint, and imperative programming. Scripts can be made quite short but also easily readable. And the built-in tabling is really cool for speeding up recursive programs.
I think Picat is like a perfect Swiss army knife that you can do anything with.*]{} Future work includes engineering an optimizing SAT compiler; applying tabled planning to more domains, including model-checking domains; automatic translation of action languages, such as PDDL and HTN, to Picat; and program analyzers in Picat, both for Picat itself, and for other languages. Acknowledgements {#acknowledgements .unnumbered} ================ As acknowledged in the User’s Guide [@PicatGuide], many people have contributed in one form or another to the Picat project. The author thanks Jonathan Fruhman, Hakan Kjellerstrand, and Yanhong Annie Liu for reviewing this paper. This work was supported in part by the NSF under grant number CCF1018006. , [Pitsoulis, L.]{}, [and]{} [Williams, H. P.]{} 2010. . Springer. 2013\. . Pragmatic Press. 2014\. Using tabled logic programming to solve the [Petrobras]{} planning problem. In [*ICLP*]{}. , [Lehmann, K. A.]{}, [and]{} [Ramenzoni, V.]{} 2005. Ricochet robots - a case study for human complex problem solving. . 1984\. Equations and inequations on finite and infinite trees. In [*[Proceedings of FGCS]{}*]{}. ICOT, 85–99. 1989\. Static inference of modes and data dependencies in logic programs.  [*11,*]{} 3, 418–450. , [Formisano, A.]{}, [and]{} [Pontelli, E.]{} 2011. Perspectives on logic-based approaches for reasoning about actions and change. In [*LNCS*]{}. Vol. 6565. 259–279. 2010\. Tabling for transaction logic. In [*PPDP*]{}. 199–208. 2007\. The log-support encoding of [CSP]{} into [SAT]{}. In [*CP*]{}. 815–822. , [Jost, H.]{}, [Kaminski, R.]{}, [Obermeier, P.]{}, [ Sabuncu, O.]{}, [Schaub, T.]{}, [and]{} [Schneider, M.]{} 2013. Ricochet robots: A transverse [ASP]{} benchmark. In [*LPNMR*]{}. , [Kaminski, R.]{}, [Kaufmann, B.]{}, [and]{} [ Schaub, T.]{} 2012. . Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan and Claypool Publishers. 2008\. Simplifying dynamic programming via mode-directed tabling.  
[*38,*]{} 1, 75–94. 2013\. Functional logic programming: From theory to [Curry]{}. In [*Programming Logics*]{}. 123–168. 2002\. Constraint and integer programming in [OPL]{}.  [*14*]{}, 2002. 1969\. Planner: A language for proving theorems in robots. In [*IJCAI*]{}. 295–302. 2008\. Universal [Booleanization]{} of constraint models. In [*CP*]{}. 144–158. 2013\. competition 2013. 1992\. Planning as satisfiability. In [*ECAI*]{}. 359–363. 2014\. My first look at [Picat]{} as a modeling language for constraint solving and planning. In [*www.hakank.org*]{}. 1985\. Depth-first iterative-deepening: An optimal admissible tree search.  [*27,*]{} 1, 97–109. 1979\. . North Holland, Elsevier. 1971\. Linear resolution with selection function.  [*2,*]{} 3–4, 227–260. 2002\. Answer set programming and plan generation.  [*138,*]{} 1-2, 39–54. 2009\. satisfiability: from theoretical hardness to practical success.  [*52,*]{} 8, 76–82. 2012\. Compiling finite domain constraints to [SAT]{} with [BEE]{}.  [*12,*]{} 4-5, 465–483. 1968\. “memo” functions and machine learning. , 19–22. , [Stuckey, P. J.]{}, [Becket, R.]{}, [Brand, S.]{}, [Duck, G. J.]{}, [and]{} [Tack, G.]{} 2007. Minizinc: Towards a standard [CP]{} modelling language. In [*CP*]{}. 529–543. 2012\. Planning as satisfiability: Heuristics.  [*193*]{}, 45–86. , [van Beek, P.]{}, [and]{} [Walsh, T.]{} 2006. . Elsevier. 2004\. . MIT Press. , [Zhou, N.-F.]{}, [Kameya, Y.]{}, [and]{} [Yizumi, Y.]{} 2012. The [PRISM]{} user’s manual. . 2002\. Logical loops. In [*[ICLP]{}*]{}. 224–238. 1989\. The family of concurrent logic programming languages.  [*21*]{}, 412–510. 1986\. . In [*[ICLP]{}*]{}. 84–98. , [Taga, A.]{}, [Kitagawa, S.]{}, [and]{} [Banbara, M.]{} 2009. Compiling finite linear [CSP]{} into [SAT]{}.  [*14,*]{} 2, 254–272. 1974\. : A system for generating plans. Tech. Rep. DCL Memo 76, University of Edinburgh. 1992\. Memoing for logic programs.  [ *35*]{}, 93–111. 2006\. 
Programming finite-domain constraint propagators in action rules.  [*6,*]{} 5, 483–508. 2012\. The language features and architecture of [B-Prolog]{}.  [*12,*]{} 1-2, 189–218. 2013\. A tabled [Prolog]{} program for solving [Sokoban]{}.  [*124,*]{} 4, 561–575. 2014\. (picat-lang.org). 2012\. Efficient tabling of structured data with enhanced hash-consing.  [*12,*]{} 4-5, 547–563. , [Sato, T.]{}, [and]{} [Shen, Y.-D.]{} 2008. Linear tabling strategies and optimizations.  [*8,*]{} 1, 81–109. [^1]: fmv.jku.at/lingeling [^2]: minisat.se/ [^3]: www.gnu.org/software/glpk/ [^4]: http://projecteuler.net/problem=67
[**Axions without Peccei-Quinn Symmetry**]{}

[**Adam Latosinski$^1$, Krzysztof A. Meissner$^{1,2}$ and Hermann Nicolai$^3$**]{}

Introduction
============

The solution of the strong CP problem by means of the Peccei-Quinn mechanism [@PQ] is commonly assumed to require the presence of a [*chiral*]{} $U(1)_{PQ}$ symmetry (Peccei-Quinn symmetry) which is not part of the standard model (SM), as well as an independent new scale $ \geq \! \cO (10^{10})\,\GeV$ beyond the SM. When spontaneously broken, the PQ symmetry gives rise to a (pseudo-)Goldstone boson, the [*axion*]{} [@W1; @W2]. The latter is usually described by a pseudoscalar field transforming by constant shifts under $U(1)_{PQ}$. The absence of CP violation in the strong interactions is then explained by the fact that any contribution to the $\theta$ parameter can be absorbed into such a shift, so the problem is solved if the axion vacuum expectation value dynamically adjusts itself to zero [@VW]. To accommodate the extra $U(1)_{PQ}$ the available models realizing this idea invariably need to introduce (so far unobserved) new particles and large scales beyond the SM, such as new heavy quarks or non-standard Higgs fields [@Kim; @Dine]. In [@MN] a minimal extension of the SM was proposed, based on the hypothesis that quantum mechanically broken conformal symmetry stabilizes the electroweak hierarchy, with only the right-chiral neutrinos $\nu_R^i$ and one complex scalar field $$\label{phi} \phi(x) = \frac{1}{\sqrt{2}}\,\varphi(x)\, e^{\,i a(x)/\mu}$$ as new ingredients (for alternative models based on conformal symmetry, see [@CW]; a similarly minimalistic scenario without exact conformal symmetry had already been developed in great detail in [@Shap]). The field $\phi$ is a singlet under the SM gauge symmetries and couples only to right-chiral neutrinos, see (\[L\]) below. If $\phi$ acquires a vacuum expectation value by (possibly radiatively induced) spontaneous symmetry breaking, a Majorana mass term is generated for the right-chiral neutrinos.
The phase $a(x)$ then gives rise to a (pseudo-)Goldstone particle (called ‘Majoron’) associated with the spontaneous breaking of global $U(1)_L$ lepton number symmetry [@Mo1]. The crucial feature of our proposal is that it requires all mass scales to arise from the quantum mechanical breaking of classical conformal invariance. Therefore in any consistent implementation of this scheme [*there cannot exist intermediate scales of any kind between the electroweak scale and the Planck scale.*]{} This holds in particular true for the masses of the light neutrinos whose smallness is naturally explained here with appropriate neutrino Yukawa couplings $\sim \cO(10^{-5})$ [^1] and without the need to introduce a large Majorana mass ‘by hand’. In this paper we show that likewise, and contrary to widely held expectations, no extra large scale is required for the solution of the strong CP problem either. As argued in [@MN] the Majoron has several features in common with the axion, and the smallness of its couplings can be tied to the smallness of neutrino masses. In this Letter, we go one step further and propose that the Majoron actually [*is*]{} the axion, with [*computable*]{} effective couplings to SM particles, and the neutrino Yukawa couplings as the only unknown parameters (a possible link between light neutrinos and the invisible axion had already been suggested in [@Mo3; @LPY]). In other words, we claim that lepton number symmetry $U(1)_L$ is transmuted, via electroweak parity violation and neutrino mixing, into a $U(1)$ symmetry that, in relation to the strong interactions, is indistinguishable from the standard axial Peccei-Quinn symmetry at low energies. We present exact expressions for the (UV finite) two-loop integrals describing the coupling of the axion to photons and (light) quarks; the main technical novelty here is the consistent use of the off-diagonal neutrino propagators (\[nuProp\]) below. 
From the quark couplings one can estimate the coupling of the axion to gluons, which comes out naturally tiny. On general grounds the effective couplings of $a(x)$ can only be of a very restricted type. Because Goldstone bosons interact only via derivatives, the perturbative effective action at low energies contains only terms $\propto\X^\mu \partial_\mu a$, where $\X^\mu$ are local expressions in the SM quantum fields. At lowest order there are only three candidates for $\X^\mu$: ($i$) a Chern-Simons current, which by partial integration is equivalent to a coupling $a \, {\rm Tr}\, W_{\mu\nu}\tW^{\mu\nu}$ (where $W_\mu$ can be any SM gauge connection), ($ii$) a vector current $\cJ_V^\mu$ and ($iii$) an axial current $\cJ_A^\mu$. Being mediated by the weak interactions, the fermionic bilinears contributing to $\X^\mu$ and involving charged SM fermions all appear in ‘V – A’ form. Therefore, whenever $\partial_\mu\cJ_V^\mu\approx 0$ by some approximate[^2] conservation law, $a(x)$ couples like a [*pseudoscalar*]{} to photons, gluons, quarks and electrons.

Neutrino Lagrangian and propagators
===================================

We refer to [@EPP; @Pok] for basic properties of the SM, and here only quote the Yukawa couplings $$\label{L} \cL_{Y} = -\big(\bar{L}^i\Phi\, Y_{ij}^E E^j + \bar{Q}^i\Phi\, Y_{ij}^D D^j + \bar{Q}^i\Phi^* Y_{ij}^U U^j + \bar{L}^i\Phi^* Y_{ij}^N N^j + N^{iT}\phi\, C^{-1} Y^M_{ij} N^j + {\rm h.c.}\big)$$ and the neutrino terms in the Lagrangian, see (\[Lneutrino\]) below. Here $Q^i$ and $L^i$ are the left-chiral quark and lepton doublets, $U^i$ and $D^i$ the right-chiral up- and down-like quarks, while $E^i$ are the right-chiral electron-like leptons, and $N^i\equiv\nu^i_R$ the right-chiral neutrinos (we suppress all indices except the family indices $i,j=1,2,3$). $\Phi$ is the usual Higgs doublet, and $\phi$ is the new complex scalar field introduced in (\[phi\]).
As is well known, one can use global redefinitions of the fermion fields to transform the Yukawa matrices $Y_{ij}^E$, $Y_{ij}^U$ and $Y_{ij}^M$ to real diagonal matrices. By contrast, the matrices $Y_{ij}^D$ and $Y^\nu_{ij}$ may exhibit (strong) mixing. Besides the standard (local) $SU(3)_c\times SU(2)_w \times U(1)_Y$ symmetries, the Lagrangian (\[L\]) admits two [*global*]{} $U(1)$ symmetries, baryon number symmetry $U(1)_B$ and lepton number symmetry $U(1)_L$. The latter is associated with the Noether current \[JL\] \^\_L := \^i\^L\^i +\^i\^E\^i+\^i\^N\^i -2 i \^ The fact that $\phi$ carries lepton charge is crucial for the proposed transmutation of $U(1)_L$ into a PQ-like symmetry. For the computation of loop diagrams it is convenient to employ $SL(2,\mathbb{C})$ spinors [@MN]. With $\nu^i_L \equiv \frac12(1-\g^5)\nu^i\equiv \bn^{i\da}$ and $\nu^i_R \equiv\frac12 (1+\ga^5)\nu^i \equiv N^i_\a$, the neutrino part of the free Lagrangian reads (see [@BW] for conventions) \[Lneutrino\] = 2 ( \^[i]{} \_ \^[i]{} + N\^[i]{} \_ \^[i]{} ) + m\_[ij]{} \^[i]{} N\^j\_ + 12 M\_[ij]{} N\^[i]{} N\^j\_ + c.c. after spontaneous breaking of conformal and electroweak symmetries. Consequently, the (complex) Dirac and Majorana mass matrices are given by $m_{ij} = Y^\nu_{ij}\langle H \rangle$ and $M_{ij}= Y^M_{ij} \langle \vp \rangle$, respectively (where $\langle H\rangle^2\equiv \langle \Phi^\dagger\Phi\rangle$). Rather than diagonalize the fields w.r.t. these mass terms, we work with [*non-diagonal propagators*]{} and the interaction vertices from (\[L\]). Defining (p) &:=& \^[-1]{} we obtain the matrix propagators (in momentum space) \[nuProp\] \^i\_\^j\_&=& \^[ij]{} \_ \^i\_|\^j\_ &=& \^[ij]{} \_ N\^i\_N\^j\_&=& \^[ij]{} \_ N\^i\_|[N]{}\^j\_ &=& \^[ij]{} \_ \^i\_N\^j\_&=& \^[ij]{} \_ \^i\_|[N]{}\^j\_ &=& -\^[ij]{} \_ , together with their complex conjugate components. 
Evidently, these propagators allow for maximal mixing in the sense that every neutrino component can oscillate into any other (also across families). For the UV finiteness of the diagrams to be computed below it is essential that some of the propagator components fall off like $\sim p^{-3}$, unlike the standard Dirac propagator. Taking $M_{ij}$ diagonal it is not difficult to recover the mass eigenvalues as predicted by the standard see-saw formula [@Min; @seesaw; @Yan; @Mo2]. With the above propagators and the (extended) SM Lagrangian we can now proceed to compute various effective low energy couplings involving the ‘axion’ $a$ which are mediated by neutrino mixing via two or three-loop diagrams. Here we present only the results for photon-axion and quark-axion couplings, cf. the diagrams depicted below. Further results and detailed derivations will be given in a forthcoming publication [@LMN]. Photon-axion vertex =================== For the low energy effective action we need only retain contributions where all particles circulating inside the loops are much heavier than the external particles. As our first example we determine the effective coupling of the axion to photons via the two-loop diagram in Fig. 1. For small axion momentum $q^\mu= k_1^\mu - k_2^\mu$ it is possible to derive a closed form expression for the two-loop integral and for arbitrary mixing matrices [@LMN]. 
Setting $\mu=\langle\varphi\rangle$ in (\[phi\]) and denoting by $M_j$ the eigenvalues of the (diagonal) matrix $M_{ij}$, a lengthy calculation gives the expected kinematical factor $\eps^{\mu\nu\lambda\rho}F_{\mu\nu} F_{\lambda\rho} \propto \eps^{\mu\nu\lambda\rho}k_{1\lambda} k_{2\rho}$, with a coefficient function given by a sum over lepton families of four-fold Feynman-parameter integrals of the form $$\sum_{i,j}\int_0^1 dx \int_0^1 dy \int_0^1 dz \int_0^1 dt\;\, x(1-x)\, y^2\, z(1-z)\, t^3\, \big[\,\cdots\,\big]$$ where the rational integrand depends on the charged lepton masses $m_{e_i} \equiv (m_e, m_\mu, m_\tau)$ through denominators built from $$\Delta^2_{ij}(x,y,z,t) := xyzt\, M_j^2 + (1-y)\,\cdots$$ The above integral is cumbersome to evaluate in general form, but for small photon momenta $k_1^\mu\approx k_2^\mu$ it reduces to a simple closed-form expression of the schematic form $\sum_{i,j}(\cdots)^2$. Of course, the precise value of the effective low energy coupling depends on the (unknown) values of the Yukawa mass matrices $m_{ij}$ and $M_{ij}= M_j \delta_{ij}$. For $M_j \! \sim \! M\!\sim\!\langle\varphi\rangle$ the axion-photon vertex is well approximated by \[fa\] $$\mathcal{L}_{eff}^{a\gamma\gamma} = \frac{1}{4f_a}\; a\, F^{\mu\nu} \tilde{F}_{\mu\nu}\,,$$ with $f_a^{-1}$ given by the square of a combination determined by the neutrino mass matrices, consistent with the standard see-saw relation $\sum m_\nu \sim \sum |m_{ij}|^2/M$. Substituting numbers we find $f_a= \cO( 10^{16} \, \GeV)$, which is outside the range of existing or planned experiments [@OSQAR]. Thus the smallness of the axion couplings gets directly tied to the smallness of the light neutrino masses via (\[fa\]). Quarks and gluons ================= The effective low energy couplings to light quarks can be analyzed in a similar way. With $P_L\equiv \frac12(1-\gamma^5)$ we parametrize these couplings as[^3] \[qa\] $$\mathcal{L}_{eff}^{aqq} = \partial_\mu a \left( c_{ij}^{aUU}\, \bar{u}^i\gamma^\mu P_L u^j + c_{ij}^{aDD}\, \bar{d}^i\gamma^\mu P_L d^j \right)$$ Again one can obtain an exact formula for the (UV finite) two-loop integrals; e.g. for the up-like quarks, $c^{aUU}_{ij}$ is given by a sum over internal flavors $k,r,s$ of four-fold Feynman-parameter integrals \[ca\] $$c^{aUU}_{ij} = \sum_{k,r,s}\int_0^1 dx \int_0^1 dy \int_0^1 dt \int_0^1 dz\;\, x(1-x)\, y^3\, (1-z)\, t^3\, \big[\,\cdots\,\big]$$ with the CKM matrix $V^{ij}$ entering the integrand. A similar (but not the same) formula is obtained for $c^{aDD}_{ij}$ [@LMN].
In principle, there are also contributions from diagrams with $Z$-boson exchange, but these can be disregarded for the effective low energy Lagrangian because they involve a pure neutrino triangle with one light neutrino (which is lighter than any external quark). To estimate the integral, we set $m_{e_i} = m_{d_i}=0$ in (\[ca\]) (which still leaves a convergent integral that can be calculated exactly [@LMN]). Because the CKM matrix is unitary, both $c^{aUU}_{ij}$ and $c^{aDD}_{ij}$ then become proportional to $\delta_{ij}$ to leading order. Keeping only one lepton flavor in (\[ca\]) we here quote the result only for two limiting cases: for $M_j \!\sim \! M \gg \! M_W$ we get $$c^{aUU}_{ij} = \sum_{k,l}\,\big[\,\cdots\,\big]\;\delta_{ij}\,.$$ If instead $M \!\sim\! M_W$, the exact result replaces the square bracket by $0.71$. Note that the Majorana mass $M$ is much closer to the weak scale in [@Shap; @MN] than in the usual see-saw scenario, favoring the second value. By the approximate conservation of the up and down quark vector currents, we can now drop the vectorlike contribution in the effective Lagrangian, which thus becomes purely axial to leading order, [*viz.*]{} \[qa1\] $$\mathcal{L}_{eff}^{aqq} \simeq \partial_\mu a \left( g_{aUU}^{-1}\, \bar{u}^j\gamma^5\gamma^\mu u^j + g_{aDD}^{-1}\, \bar{d}^j\gamma^5\gamma^\mu d^j \right)$$ At subleading order, off-diagonal contributions to $c_{ij}^{aUU}$ and $c_{ij}^{aDD}$ will appear, with both vector and axial vector interactions. The numerical values of the effective coupling constants can be read off from the above results. Their precise values are subject to the same caveats as mentioned before (\[fa\]). With the same assumptions on the Yukawa mass matrices as for (\[fa\]) we get \[qa2\] $$g_a^{-1}\equiv g_{aUU}^{-1} \sim g_{aDD}^{-1} \sim \cO(10^{-3})\,\big[\,\cdots\,\big]$$ If $M$ is not very much larger than the weak scale $M_W$, we get $g_{aUU} \sim 10^{18}\ \GeV$ for $\sum m_\nu\approx 1\ \eV$. The axion-gluon coupling involves various three-loop diagrams, now with all six quarks in the loop [@MN].
For a rough estimate we can shortcut this calculation by integrating the effective vertex (\[qa1\]) by parts, using the anomalous conservation law of the axial (color singlet) quark current[^4] (see e.g. [@Bertlmann]) $$\partial_\mu \left( i\, \bar{q}\gamma^5\gamma^\mu q \right) = \frac{g_s^2}{16\pi^2}\, \mathrm{Tr}\; G^{\mu\nu} \tilde{G}_{\mu\nu} =: \cQ(x)$$ with the gluonic topological density $\cQ(x)$ (in principle there could appear extra terms $\propto m_q \bar{q}\gamma^5 q$ on the r.h.s., but Goldstone’s Theorem assures us that such non-derivative terms must drop out in the final result (\[ga\])). Summing over the six quark flavors and now also the three lepton flavors we thus obtain \[ga\] $$\mathcal{L}_{eff}^{agg} = \frac18\, g_a^{-1}\, a\, \cQ(x)$$ When the quark mass matrix $m_q$ is complex there is an extra contribution to this term from the anomalous chiral rotation required to render the quark mass matrix real, resulting in a shift \[ta\] $$\frac18\, g_a^{-1}\, a \ \longrightarrow\ \frac18\, g_a^{-1}\, \ta \,\equiv\, \frac18\, g_a^{-1}\, a + \operatorname{arg}\det m_q$$ Because $a$ is a Goldstone boson this shift does not affect any other terms in the effective Lagrangian, but merely replaces $a$ by $\ta$ in (\[ga\]). Axion potential =============== Being a Goldstone boson, the axion cannot acquire a mass in perturbation theory; likewise its vacuum expectation value remains undetermined in perturbation theory. However, non-perturbative effects can generate a potential for the axion and thereby lift the vacuum degeneracy. To compute it we use the standard cumulant expansion of the vacuum energy in powers of $F\equiv\frac18\, g_a^{-1}\,\ta\,\cQ$. Except for possible contributions from the weak interactions, which we ignore, there is no $G\tG$ condensate and we have $\langle\cQ(x)\rangle =0$ (likewise $\langle \cQ^n \rangle$ vanishes for all odd $n$). Hence the axion potential is \[axpot\] $$V_{\rm axion}(a) = \frac12\, m_{\rm axion}^2\, \ta^2 + \cO(\ta^4)$$ It is important that this potential is written as a function of the [*shifted*]{} axion field $\ta$ introduced in (\[ta\]).
The axion mass is therefore $$m_{\rm axion} = \frac18\, g_a^{-1} \left[ \int d^4x\, \big\langle \cQ(x)\, \cQ(0) \big\rangle \right]^{1/2}$$ We conclude that (1) indeed $\theta\equiv\langle \ta \rangle = 0$, as required for the solution of the strong CP problem, and (2) an axion mass term is generated by non-perturbative effects. Although the value of the $(G\tG)^2$ condensate is apparently not known, we can estimate $m_{\rm axion} \sim \frac18\, g_a^{-1} \Lambda_{QCD}^2 \sim 10^{-8}\ \eV$, which may still be compatible with the axion being a (cold) dark matter candidate, at least according to standard reasoning [@Mu; @Sikivie], and bearing in mind the considerable uncertainties in these numbers. From (\[qa2\]) it is evident that the viability of this dark matter scenario requires the Majorana scale $M$ to be not much larger than $M_W$, in contrast to the standard see-saw proposal [@Min; @seesaw; @Yan]. This is a main new feature of the present proposal: if true, it could be interpreted as additional evidence for a hidden conformal symmetry of the SM [@Bardeen; @MN; @CW], such that the observed diversity of scales in particle physics could be explained via quantum mechanically (or even quantum gravitationally) induced logarithmic effects [@MN1]. The main virtue of the present proposal is that it provides a [*single*]{} source of explanation for axion couplings and neutrino masses, tying together in a most economical manner features of the SM previously thought to be unrelated. Given the known SM parameters, and parametrizing the unknown physics in terms of just the Yukawa mass matrices, all relevant couplings are entirely calculable in terms of UV finite diagrams, and [*naturally*]{} come out to be [*very small*]{} without the need for any fine tuning. Finally, we note that all results in this Letter can be equivalently obtained if we take the scalar field $\phi(x)$ in (\[phi\]) to be [*real*]{}, absorbing the phase $a(x)$ into a redefinition of the lepton fields. This point will be discussed in much more detail in [@LMN].
The redefinition also shows that the apparent periodicity of $a(x)$ in (\[phi\]) is spurious because the redefined Lagrangian involves the field $a(x)$ only through its derivatives. Rather, the periodicity parameter for $a$ is set by the effective action (\[ga\]) and the fact that the gluon term is a topological density (see e.g. [@DiV]). [**[Acknowledgments:]{}**]{} AL and KAM thank the AEI for hospitality and support during this work. We are also grateful to Pierre Fayet for his incisive comments on a first version of this work. [99]{} R.D. Peccei and H. Quinn, Phys. Rev. Lett. [**38**]{} (1977) 1440; Phys. Rev. [**D16**]{} (1977) 1791. S. Weinberg, Phys. Rev. Lett. [**40**]{} (1978) 223. F. Wilczek, Phys. Rev. Lett. [**40**]{} (1978) 279. C. Vafa and E. Witten, Phys. Rev. Lett. [**53**]{} (1984) 535. J.E. Kim, Phys. Rev. Lett. [**43**]{} (1979) 103; M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. [**B166**]{} (1980) 493. A.P. Zhitniskii, Sov. J. Nucl. [**31**]{} (1980) 260; M. Dine, W. Fischler and M. Srednicki, Phys. Lett.  [**104B**]{} (1981) 199. K.A. Meissner and H. Nicolai, Phys. Lett. [**B648**]{} (2007) 312; Eur.Phys. J. [**C 57**]{} (2008) 493. M. Holthausen, M. Lindner and M.A. Schmidt, Phys. Rev. [**D82**]{}:055002 (2010); L.Alexander-Nunneley and A. Pilaftsis, JHEP [**1009**]{}:021 (2010); A.G. Dias and A.F. Ferrari, [arXiv:1006.5672\[hep-th\]]{}; R. Foot, A. Kobakhidze and R. Volkas, Phys. Rev. [**D82**]{}:035005 (2010); S. Ito, N. Okada and Y. Orikasa, Phys. Rev. [**D80**]{}: 115007 (2009); and references therein. M. Shaposhnikov, arXiv:0708.3550 \[hep-th\] and references therein. Y. Chikashige, R.N. Mohapatra and R.D. Peccei, Phys. Lett. [**98**]{} (1981) 265 R.N. Mohapatra and G. Senjanovic, Z. Phys. [**C 17**]{} (1983) 53 P. Langacker, R.D. Peccei and T. Yanagida, Mod. Phys. Lett. [**A1**]{} (1986) 541. O. Nachtmann, [*Elementary Particle Physics: Concepts and Phenomena*]{}, Springer Verlag (1999). S. 
Pokorski [*Gauge Field Theories*]{}, Cambridge Univ. Press, 2nd edition (2000). J. Bagger and J. Wess, [*Supersymmetry and Supergravity*]{}, Princeton University Press, 1984. P. Minkowski, Phys. Lett. [**B67**]{} (1977) 421. M. Gell-Mann, P. Ramond and R. Slansky, in Supergravity, P. van Nieuwenhuizen and D.Z. Freedman (eds.) (North-Holland) (1979) 315. T. Yanagida, Prog.Theor.Phys. [**64**]{} (1980) 1103. R.N. Mohapatra and G. Senjanovic, Phys. Rev. Lett. [**44**]{} (1980) 912 A. Latosinski, K.A. Meissner and H. Nicolai, in preparation. P. Pugnat et al., [arXiv:0712.3362 \[hep-ex\]]{}. R. A. Bertlmann, [*Anomalies in quantum field theory*]{}, Clarendon Press, Oxford, UK (1996). V. Mukhanov, [*Physical Foundations of Cosmology*]{}, Cambridge University Press (2005). P. Sikivie, Lect. Notes Phys.  [**741**]{} (2008) 19; astro-ph/0610440; arXiv:0910.5914\[astro-ph.CO\]. W.A. Bardeen, [*On Naturalness in the Standard Model*]{}, preprint FERMILAB-CONF-95-391-T. K.A. Meissner and H. Nicolai, Phys. Rev. [**D80**]{} (2009) 086005. P. Di Vecchia, [*The physics of the $\theta$ angle*]{}, NORDITA preprint. Fig.1. Axion-photon-photon and axion-quark-quark effective couplings [^1]: Recall that the appearance of a similar ratio for the charged leptons is an experimental fact: $m_e/m_\tau < 10^{-5}$. [^2]: By this we mean neglecting all terms involving neutrinos or the scalar field $\phi$ in the relevant currents, as well as baryon or lepton number violating ‘sphaleron-like’ contributions, because these will give negligible contributions to all processes considered in this paper. [^3]: While we use capital letters $U,D,...$ in (\[L\]) to designate [*chiral*]{} spinors, we use small letters $u,d,...$ for the full (non-chiral) spinors here and below. [^4]: The actual result for the effective coupling (\[ga\]) follows from a UV finite, hence non-anomalous 3-loop diagram [@MN]. 
Within the present scheme, it is ultimately the [*conformal anomaly*]{} which accounts for the non-vanishing coupling in (\[ga\]).
--- abstract: 'In the last two decades the Anti-de Sitter/Conformal Field Theory correspondence (AdS/CFT) has emerged as a focal point of many research interests. In particular, it functions as a stepping stone to a still missing full quantum theory of gravity. In this context, a pivotal question is whether and how cosmological physics can be studied using AdS/CFT. Motivated by string theory, braneworld cosmologies propose that our universe is a four-dimensional membrane embedded in a bulk five-dimensional AdS spacetime. We show how such a scenario can be microscopically realized in AdS/CFT using special field theory states dual to an “end-of-the-world brane” moving in a charged black hole spacetime. Observers on the brane experience cosmological physics and approximately four-dimensional gravity, at least locally in spacetime. This result opens a new path towards a description of quantum cosmology and the simulation of cosmology on quantum machines.' author: - Stefano Antonini - Brian Swingle bibliography: - 'references.bib' title: '**Cosmology at the end of the world**' --- Introduction ============ The achievement of a quantum mechanical description of spacetime is one of the most challenging issues facing modern physics. The Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence provides a promising starting point to accomplish this goal. As a concrete realization of the holographic principle[@thooft; @susskind], it relates the observables of a full quantum gravity theory living in a ($d+1$)-dimensional spacetime (the bulk) to those of a $d$-dimensional dual field theory without gravity associated with the boundary of the bulk spacetime[@review]. However, at this stage AdS/CFT must still be regarded as a toy model of the physical world, since the asymptotically AdS spacetimes it describes are different from the universe we observe.
In this paper we exhibit a viable way to study certain Friedmann-Lemaître-Robertson-Walker (FLRW) cosmologies using the AdS/CFT correspondence, realizing a recently proposed new perspective on braneworld cosmology and quantum gravity in a cosmological universe[@bhmicrostate]. AdS/CFT associates different states of the CFT to different geometries of the bulk spacetime. In particular, certain highly excited pure states of the CFT have a dual geometry hosting a black hole and a dynamical end-of-the-world (ETW) brane[@bound1; @bound2; @bound3; @bound4; @bound5]. This bulk configuration is built from a two-sided wormhole spacetime by replacing the left boundary with the ETW brane (see Figure \[penrose\]). The original wormhole spacetime describes an entangled state of two CFTs, and the ETW brane corresponds to a complete measurement of one CFT to leave a pure state of the remaining CFT. Hence, observables in the CFT must probe the physics behind the horizon of the black hole, including the remaining part of the left asymptotic region[@bth1; @bth2; @bth3; @bth4] and the evolution of the ETW brane[@hartmalda; @bhmicrostate]. ![**Brane trajectory.** Maximally extended Penrose diagram for the AdS-Reissner-Nordström black hole. Only the time and radial directions are represented, while the other spatial dimensions are suppressed. Light rays move at an angle of $45^\circ$. More patches of the AdS-RN spacetime are glued together here. The ETW brane oscillates inside and outside the two horizons of the black hole, cutting off the left asymptotic region. 
The red region, generally part of the maximally extended charged black hole, is not present in our spacetime.[]{data-label="penrose"}](penrose.pdf) The presence of the brane opens an interesting scenario: when the bulk spacetime is 5-dimensional, an observer living on the 4-dimensional ETW brane would interpret the motion of the brane as the evolution of a FLRW universe, where the radial position plays the role of the scale factor[@friedmann; @padilla1; @padilla2; @cosmoholors1; @cosmoholors2; @cosmoholors3]. Hence, holographic duality allows us to describe such “braneworld cosmologies” using CFT observables associated with the right asymptotic region of the black hole[@bhmicrostate]. Holographic cosmologies have been considered previously[@cosmoholo1; @cosmoholo2; @cosmoholo3], also in the context of de Sitter holography[@dsds] and braneworld holography[@cosmoholors1; @cosmoholors2; @cosmoholors3; @cosmoholors4]. However, in the latter cases the holographic CFT lives on the brane and is coupled to gravity, whereas we are proposing a non-perturbative CFT description of the entire spacetime including the brane. Now, a braneworld cosmological model can only be realistic if, for an observer living on the brane, gravity is effectively 4-dimensional and localized on the brane. The Randall-Sundrum model[@RS1; @RS2] provides a mechanism for gravity localization on 4-dimensional branes embedded in an AdS$_5$ bulk, and the mechanism can be generalized to different brane geometries[@karchrandall] and to the presence of black holes[@resonances2; @resonances]. If the brane is not too close to the black hole horizon, gravity is localized, but only locally in spacetime: an experimentalist on the brane would observe ordinary gravity only on non-cosmological scales, and for a limited range of time.
In this work we show that, in the presence of a charged black hole in the bulk (AdS-Reissner-Nordström)[@dias; @huang; @2horizons; @btz; @RN; @tempestr], such gravity localization is achievable without losing the dual CFT description of the spacetime. This proof-of-principle result, not attainable in the simplest setup of an AdS-Schwarzschild black hole and a pure tension brane[@bhmicrostate], opens the door to a new formulation of cosmology within AdS/CFT. Our choice of units is $\hbar=c=\varepsilon_0=k_B=1$. Euclidean analysis {#euclsec} ================== The spacetime described above is dual to a CFT state that can be prepared in principle using a quantum computer. Theoretically, it is convenient to describe it starting from a Euclidean path integral. Extending previous works[@tfdrn; @bhmicrostate], the dual state is a fixed charge boundary state of the CFT on a spatial sphere $\mathbb{S}^{d-1}$ evolved for an imaginary (Euclidean) preparation time $\tau_0$: $$\ket{\Psi}=\textrm{e}^{-\tau_0 H}\ket{B,Q}. \label{state}$$ The ETW brane is associated to the past boundary condition in the path integral. In order to have a well defined CFT state and a sensible Euclidean geometry (see Figure \[eucl\]), $\tau_0$ must be positive[@bhmicrostate]. In Euclidean signature, the brane is the bulk extension of the CFT boundary[@bound2; @bound3; @bhmicrostate]. Its trajectory starts at the asymptotic boundary $r=\infty$ at $\tau=-\tau_0$, reaches a minimum radius $r=r_0$ and ends again on the boundary at $\tau=\tau_0$. Because the state is highly excited, the bulk geometry will also typically contain a black hole. The total Euclidean periodicity is given by the inverse temperature $\beta$ of the black hole, determined by its size and charge. The brane trajectory excises part of this total range of Euclidean time, leaving only the interval $[-\tau_0,\tau_0]$ where the CFT is defined. 
Thus, the preparation time is determined by $2\tau_0=\beta-2\Delta\tau$, where $\Delta\tau$ is the total Euclidean time needed for the brane to cover half of its trajectory. ![**Euclidean brane trajectory.** The Euclidean path integral on this spacetime computes the norm of the CFT state. The red region is cut off by the ETW brane.[]{data-label="eucl"}](eucl.pdf){height="3.5cm"} The bulk physics is specified by the Euclidean action, which for the Einstein-Maxwell system considered here is $I=I_{bulk}+I_{ETW}$. $I_{bulk}$ is the Einstein-Maxwell action with a negative cosmological constant $\Lambda=-d(d-1)/(2L^2_{AdS})$, including a Gibbons-Hawking-York term and an electromagnetic boundary term (needed if the charge is held fixed[@hawkingboundary; @1-; @phase]) for the asymptotic boundary. The brane action reads: $$I_{ETW}=-\frac{1}{8\pi G}\int_{ETW}d^dx\sqrt{h}\left[K-(d-1)T\right]+I^{em}_{ETW}, \label{actmain}$$ where $G$ is the $d$-dimensional Newton constant, $K$ is the trace of the extrinsic curvature of the brane, $T$ is the tension, $h$ is the determinant of the metric induced on the brane and $I^{em}_{ETW}$ is an electromagnetic boundary term. The variation of the bulk action leads to the Einstein-Maxwell equations with a negative cosmological constant, whose solution is the AdS-Reissner-Nordström (AdS-RN) metric (in Euclidean signature): $$ds^2=f(r)d\tau^2+\frac{dr^2}{f(r)}+r^2d\Omega_{d-1}^2 \label{linel}$$ with (for $d>2$) $$f(r)=1+\frac{r^2}{L^2_{AdS}}-\frac{2\mu}{r^{d-2}}+\frac{Q^2}{r^{2(d-2)}} \label{00}$$ where $\mu$ and $Q$ are the mass and charge parameters of the black hole. They can be expressed as a function of the inner (Cauchy) and outer (event) horizons of the black hole, solutions of the equations $f(r_+)=f(r_-)=0$. The black hole is called extremal when $r_-=r_+$, corresponding to the maximum admissible charge for a fixed mass. 
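Since $f(r_+)=f(r_-)=0$ is linear in $(2\mu, Q^2)$, the mass and charge parameters can be read off directly from the horizon radii. A small numerical sketch for $d=4$ (the values of $L_{AdS}$ and $r_\pm$ are illustrative, chosen in the near-extremal large black hole regime discussed later); it also evaluates the inverse temperature via the standard surface-gravity formula $\beta=4\pi/f'(r_+)$, which is finite here and diverges only in the extremal limit $r_-\to r_+$:

```python
import numpy as np

d = 4
L = 1.0                  # AdS radius L_AdS (illustrative)
rp, rm = 100.0, 99.9     # outer and inner horizon radii (illustrative, near-extremal)

# f(r_+) = f(r_-) = 0  <=>  2*mu*r^2 - Q^2 = r^4 + r^6/L^2  at r = r_+, r_-
A = np.array([[rp**2, -1.0],
              [rm**2, -1.0]])
b = np.array([rp**4 + rp**6 / L**2,
              rm**4 + rm**6 / L**2])
two_mu, Q2 = np.linalg.solve(A, b)
mu, Q = two_mu / 2.0, np.sqrt(Q2)

def f(r):
    return 1 + r**2 / L**2 - 2 * mu / r**(d - 2) + Q**2 / r**(2 * (d - 2))

# inverse temperature from the surface gravity: beta = 4*pi / f'(r_+)
fp = 2 * rp / L**2 + 2 * (d - 2) * mu / rp**(d - 1) - 2 * (d - 2) * Q**2 / rp**(2 * d - 3)
beta = 4 * np.pi / fp
print(mu, Q, beta)
```

Both horizon conditions are recovered to numerical precision, and $\beta$ is small compared to the naive scale $L_{AdS}$ only because the chosen parameters sit close to extremality.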
Varying the brane action, the equation $$K_{ab}-Kh_{ab}=(1-d)Th_{ab} \label{brane}$$ is obtained, with $a,b=0,...,d-1$. By parameterizing the trajectory of the brane with $r=r(\tau)$, equation (\[brane\]) yields $$\frac{dr}{d\tau}=\pm \frac{f(r)}{Tr}\sqrt{f(r)-T^2r^2} \label{traj}$$ where the sign depends on whether the brane is expanding or contracting. For $d>2$ and $T<T_{crit}=1/L_{AdS}$, a numerical evaluation gives two solutions for the equation $f(r)=T^2r^2$, corresponding to the minimum radius of the brane: $r_0^+>r_+$ and $r_0^-<r_-$. Between the two solutions the square root takes imaginary values. Since the brane is contracting from and expanding to $r=\infty$, the minimum radius is $r_0=r_0^+$. The role of the solution $r_0^-$ will become clear later. Given values of $d$, $r_+$, $r_-$ and $T$, it is possible to evaluate the Euclidean time $\Delta\tau$ and therefore the preparation time $\tau_0$. We find that, for a black hole sufficiently close to extremality, i.e. for $r_-\to r_+$, it is possible to approach the critical value of the tension $T_{crit}$ while retaining a sensible Euclidean solution, i.e. with $\tau_0>0$. At the same time, when $T\to T_{crit}$ the ratio between the minimum radius of the brane and the outer horizon radius becomes very large. We further remark (see Appendix \[actcomp\]) that this non-extremal black hole solution is always dominant in the thermodynamic ensemble when $\tau_0>0$. This result was not feasible in the AdS-Schwarzschild case[@bhmicrostate] and is the first main result of this work. Lorentzian analysis and braneworld cosmology ============================================ In order to study cosmological physics, we must find the Lorentzian geometry associated with the Euclidean picture outlined above. Consider the state (\[state\]), obtained by taking the $\tau=0,\pm\beta/2$ slice of the thermal circle (where the brane reaches its minimum radius $r_0$), and evolve it in Lorentzian time.
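The two roots $r_0^\pm$ of $f(r)=T^2r^2$ discussed above can be exhibited numerically. A sketch for $d=4$ with illustrative near-extremal parameters and a tension close to $T_{crit}$ (plain bisection, no external solver); it confirms $r_0^+>r_+$, $r_0^-<r_-$, and that the ratio $r_0^+/r_+$ becomes large as $T\to T_{crit}$:

```python
import numpy as np

L, d = 1.0, 4
rp, rm = 100.0, 99.9     # illustrative near-extremal horizon radii
T = 0.999                # tension just below T_crit = 1/L_AdS = 1

# fix mu and Q^2 from f(r_+) = f(r_-) = 0 (linear in 2*mu and Q^2)
A = np.array([[rp**2, -1.0], [rm**2, -1.0]])
b = np.array([rp**4 + rp**6 / L**2, rm**4 + rm**6 / L**2])
two_mu, Q2 = np.linalg.solve(A, b)

def g(r):
    """g(r) = f(r) - T^2 r^2: its zeros are the turning points r_0^+ and r_0^-."""
    f = 1 + r**2 / L**2 - two_mu / r**2 + Q2 / r**4
    return f - T**2 * r**2

def bisect(lo, hi, steps=200):
    # plain bisection; assumes g changes sign on [lo, hi]
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r0_plus = bisect(rp, 1000.0)   # root above the outer horizon
r0_minus = bisect(50.0, rm)    # root below the inner horizon
print(r0_plus, r0_minus, r0_plus / rp)
```

With these (assumed) numbers the outer turning point sits several horizon radii outside the black hole, matching the statement that $r_0^+/r_+$ grows near the critical tension.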
The corresponding geometry is a maximally extended AdS-RN black hole with the ETW brane cutting off part of the left asymptotic region. The effect of the transition to Lorentzian time on equation (\[traj\]) is to flip the sign of the radicand, thus the RHS is now real only if $r_0^-<r<r_0^+$. The minimum radius in Euclidean signature is a maximum radius in Lorentzian signature. Therefore the brane expands and contracts crossing the two horizons of the black hole, but, differently from the AdS-Schwarzschild case, it never reaches the singularity (see Figure \[penrose\]). Defining the brane proper time $d\lambda^2=[f(r)-r'^2/f(r)]dt^2$, the metric induced on the brane takes the form $$ds_{ETW}^2=-d\lambda^2+r^2(\lambda)d\Omega_{d-1}^2$$ which is a closed FLRW metric where the brane radius $r(\lambda)$ plays the role of the scale factor, ranging between $r_0^-$ and $r_0^+$ and satisfying the Friedmann equation $$\left(\frac{\dot{r}}{r}\right)^2=-\frac{1}{r^2}+\frac{2\mu}{r^d}-\frac{Q^2}{r^{2d-2}}+\left(T^2-\frac{1}{L^2_{AdS}}\right) \label{friedmain}$$ where the dot indicates a derivative with respect to the proper time. For $d=4$, this result (already obtained in refs.[@padilla2; @cosmoholors3]) shows that an observer comoving with the brane interprets the motion of the brane in the bulk as the expansion and contraction of a closed FLRW universe in the presence of radiation (with energy density proportional to the mass of the black hole), a negative cosmological constant $\Lambda_4=3(T^2-1/L^2_{AdS})$ (which is very small when $T\to T_{crit}$) and stiff matter[@stiff; @cosmoholors3] with negative energy density (proportional to the charge of the black hole). From the bulk point of view, the existence of a minimum radius is due to the repulsive nature of the RN singularity, while from the braneworld point of view, the (repulsive) stiff matter term is responsible for the absence of the cosmological singularity, replaced by a Big Bounce[@stiff]. 
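It is worth noting that the right-hand side of the Friedmann equation above is identically $T^2 - f(r)/r^2$, so the cosmological turning points coincide with the roots $r_0^\pm$ of $f(r)=T^2r^2$ found in the Euclidean analysis. A quick symbolic check of this identity, valid for general $d$:

```python
import sympy as sp

r, mu, Q, T, L, d = sp.symbols('r mu Q T L d', positive=True)

# AdS-Reissner-Nordstrom metric function f(r)
f = 1 + r**2 / L**2 - 2 * mu / r**(d - 2) + Q**2 / r**(2 * (d - 2))

# right-hand side of the Friedmann equation for the brane scale factor
rhs = -1 / r**2 + 2 * mu / r**d - Q**2 / r**(2 * d - 2) + (T**2 - 1 / L**2)

# identity: (rdot/r)^2 = T^2 - f(r)/r^2
assert sp.simplify(rhs - (T**2 - f / r**2)) == 0
print("Friedmann RHS == T^2 - f(r)/r^2")
```

This makes manifest that the radicand of the Euclidean brane equation and the Friedmann right-hand side differ only by an overall sign, as stated in the text.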
In principle, gluing together more patches of the AdS-RN bulk spacetime, we can obtain a cyclical cosmology. However, the instability of the Cauchy horizon against even small perturbations suggests that, in the region near the bounce, the present description of the brane evolution is not reliable.[@instability] When the charge of the black hole vanishes, we recover the AdS-Schwarzschild description of a Big Bang-Big Crunch braneworld cosmology. We emphasize that the cosmological interpretation is not appropriate in the region where the Big Bounce takes place, because a 4-dimensional description of gravity localized on the brane is achievable only when the brane sits far from the black hole horizon. Gravity localization {#gravloc} ==================== The remaining issue is whether observers on the brane see approximately four-dimensional gravity. Randall and Sundrum showed[@RS1; @RS2] that a 4-dimensional Minkowski brane embedded in a warped AdS$_5$ spacetime supports a normalizable graviton zero-mode bound on the brane, reproducing 4-dimensional gravity. The bound mode is lost in the presence of an AdS-Schwarzschild black hole in the bulk[@resonances2], but a resonant quasi-bound mode with a finite lifetime persists when the brane is static and far from the black hole horizon[@resonances]. This is a metastable mode that, after a finite lifetime, “leaks” into the bulk and falls into the black hole horizon. The shorter the spatial scale of the gravitational perturbation, the longer it is bound on the brane. Gravity is therefore locally localized both in space and in time, meaning that it will look 4-dimensional to an observer living on the brane, but only on spatial and time scales smaller than the cosmological ones. Clarkson and Seahra[@resonances] focused their attention mainly on the small black hole case ($r_H<L_{AdS}$). 
Since sensible Euclidean solutions with $r_0/r_+\gg 1$ are achievable only if the black hole is large, our generalization of their work to the AdS-RN spacetime is focused on the $r_+\gg L_{AdS}$ case (see Appendix \[gravlocdet\]). Let us consider a linear perturbation of the metric $\delta g_{\mu\nu}=g_{\mu\nu}-g_{\mu\nu}^0$, $\mu,\nu=0,...,d$. As a tensor on a spatial slice at constant radius ($t=const$, $r=const$), it can be decomposed into scalar, vector and tensor components[@kodama]. The graviton mode of interest is the tensor component, which has $\delta g_{t\mu}=\delta g_{r\mu}=0$. We also use the transverse-traceless (TT) gauge condition $\delta g^\mu_\mu=0=\nabla^\mu\delta g_{\mu\nu}$. It is useful to introduce the dimensionless coordinates and parameters $y=r/r_+$, $\tilde{t}=t/r_+$, $\gamma=L_{AdS}/r_+$, $q=Q/r_+^{d-2}$ and to decompose the metric perturbation in terms of tensor harmonics $\mathbb{T}^{(k)}_{ij}$ (with $i,j=1,...,d-1$) on the unit ($d-1$)-sphere. They satisfy $\Delta_{d-1} \mathbb{T}^{(k)}_{ij}=-k^2\mathbb{T}^{(k)}_{ij}$, where $\Delta_{d-1}$ is the ($d-1$)-dimensional covariant Laplacian on the unit ($d-1$)-sphere and $k^2=l(l+d-2)-2$, with $l=1,2,...$ the generalized angular momentum. Defining the tortoise coordinate $dr^*=dy/f(y)$ and studying the problem in the frequency domain, the linearized Einstein equations can be recast in the form of a one-dimensional Schrödinger equation for each tensor mode: $$-\partial_{r^*}^2\psi_{k,\omega}(r^*)+V_k\left[y(r^*)\right]\psi_{k,\omega}(r^*)=\omega^2\psi_{k,\omega}(r^*) \label{schro}$$ where the potential reads: $$V_k(y)=f(y)\left[\frac{d^2-1}{4\gamma^2}+\frac{d^2-4d+11+k^2}{4y^2}+\frac{(d-1)^2\left(1+\frac{1}{\gamma^2}+q^2\right)}{4y^d}-\frac{(d-1)(3d-5)q^2}{4y^{2d-2}}\right]. \label{potential}$$ This potential diverges for $r^*=r^*_\infty$ (where $y(r^*_\infty)=\infty$) and vanishes exponentially for $r^*\to -\infty$, at the black hole horizon.
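A quick numerical sanity check of this potential: fixing the charge parameter by $f(1)=f(y_-)=0$ (with $y_-=r_-/r_+$), the combination $1+1/\gamma^2+q^2$ appearing in the third term is just $2\tilde\mu$ in horizon units, the potential vanishes at the horizon $y=1$ (since $f(1)=0$) and grows towards the boundary. The parameter values below are illustrative:

```python
import numpy as np

d = 4
gamma = 0.01    # gamma = L_AdS / r_+  (large black hole, illustrative)
ym = 0.999      # inner horizon in horizon units: y_- = r_- / r_+

# charge parameter fixed by f(1) = f(y_-) = 0; note 2*mu~ = 1 + 1/gamma^2 + q^2
q2 = (ym**2 * (1 + 1 / gamma**2) - ym**4 - ym**6 / gamma**2) / (1 - ym**2)

def f(y):
    return (1 + y**2 / gamma**2
            - (1 + 1 / gamma**2 + q2) / y**(d - 2)
            + q2 / y**(2 * (d - 2)))

def V(y, k2):
    # tensor-mode potential V_k(y)
    return f(y) * ((d**2 - 1) / (4 * gamma**2)
                   + (d**2 - 4 * d + 11 + k2) / (4 * y**2)
                   + (d - 1)**2 * (1 + 1 / gamma**2 + q2) / (4 * y**d)
                   - (d - 1) * (3 * d - 5) * q2 / (4 * y**(2 * d - 2)))

l = 2
k2 = l * (l + d - 2) - 2     # k^2 = l(l + d - 2) - 2
print(V(1.0, k2), V(10.0, k2), V(100.0, k2))
```

The overall factor $f(y)$ is what produces the exponential vanishing of $V_k$ in the tortoise coordinate near the horizon, while the $y^2/\gamma^2$ growth of $f$ accounts for the divergence at the boundary.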
![**Potential $V_k[y(r^*)]$ - Large black hole.** $r_+=100$, $r_-=99.9$, $L_{AdS}=1$. The potential diverges for $r^*_\infty=4.94\cdot 10^{-5}$ and vanishes exponentially at the horizon $r^*\to -\infty$.[]{data-label="fig4"}](potential.pdf){height="3.5cm"} Now we need to find a boundary condition on the ETW brane, which cuts off the radial coordinate at $r^*=r^*_b<r^*_\infty$. If the brane is moving in the bulk, this is a non-trivial task. We assume that the brane is moving adiabatically with respect to the time scale of the perturbation, and consider it effectively static at a position $r^*=r^*_b$, verifying later the reliability of the adiabatic approximation. Under this assumption, the linearized version of equation (\[brane\]) provides the boundary condition $\partial_{r^*}\psi_{k,\omega}|_{r^*=r^*_b}=(d-1)f(y)/(2y)\cdot \psi_{k,\omega}|_{r^*=r^*_b}$, with $y=y(r^*)$. Requiring this boundary condition is equivalent to adding to the potential (\[potential\]) a negative delta function at the position of the brane, whose depth is $(d-1)f(y)/y$. In the original Randall-Sundrum model, such a delta function guarantees the existence of the zero bound mode. The rapidly vanishing behaviour of the potential (\[potential\]) at the black hole horizon implies that only a metastable, quasi-bound mode with complex frequency $\omega=\bar{\omega}+i\Gamma/2$ and purely infalling boundary condition at the black hole horizon (typical of quasi-normal modes) can be present in our setup. In the large black hole case ($\gamma\ll 1$) this is the only resonant mode[@resonances]. Focusing on the $d=4$, $\gamma\ll 1$ case of interest, we use the trapping coefficient method[@resonances] to find the real and imaginary parts of the frequency of the quasi-bound mode. For a given value of the charge of the black hole, there are three control parameters: the size of the black hole $\gamma$, the position of the brane $r^*(y_b)$, and the angular momentum $l$.
We find that, when the brane is sufficiently far from the horizon of the black hole, $\bar{\omega}\gg\Gamma$ (i.e. the mode is long-lived) and the graviton is very well localized on the brane (see Figure \[mode\]). Such a localization is lost if the brane radius becomes too large. Increasing the size of the black hole, the brane must sit farther from the horizon to retain the long-lived quasi-bound mode. Gravity localization is also more efficient for higher values of the angular momentum $l$. Increasing $l$ means probing shorter distances on the brane. Therefore, this result is in accordance with our expectation to obtain a 4-dimensional effective description of gravity only locally. The last step is to verify that the adiabatic approximation is reasonable. One must compare the time scale of oscillation $t_o=1/\bar{\omega}$ with the “Hubble time” $T_H=y(t)/y'(t)$, determined by the Lorentzian version of equation (\[traj\]). The adiabatic approximation is reliable if $t_o\ll T_H$ and if the condition $(y')^2/f^2(y)\ll 1$ is satisfied. For given values of charge and tension, the approximation is arbitrarily good when approaching the inversion point in the trajectory of the brane, where $y'$ vanishes. But it holds also for a significant portion of the trajectory of the brane, as quantified by the ratio between the proper time spent to cover the part of trajectory where the approximation is reliable and the total amount of proper time for the entire trajectory. As an example, for the set of parameters considered in Table \[timescales\], if we accept $t_o\sim T_H/10$ as the threshold for the validity of the adiabatic assumption, the latter holds for $\sim 55\%$ of the total amount of brane proper time. The decay time $t_d=2/\Gamma$ is almost always considerably larger than the Hubble time, meaning that our analysis loses significance before the quasi-bound mode leaks into the bulk. 
We finally remark that the time of oscillation of the mode is very close to the one expected for a 4-dimensional metric perturbation of the Einstein-static universe $t_{GR}=y_b/\sqrt{f(y_b)(l+2)l}$. The local localization of gravity on the Euclidean-sensible braneworld solution is the second main result of this work. Discussion ========== In this work we proved that it is possible to build a holographic braneworld cosmology model which admits locally a 4-dimensional effective description of gravity. An interesting open question is whether this framework can be extended to different cosmological models, possibly including a de Sitter phase that could match our current observational data, and whether gravity localization is achievable on even larger scales in such setups. Understanding gravity localization beyond the adiabatic approximation would also be interesting. The holographic dual description presents many non-trivial issues to be explored. Determining which CFT observables are able to probe the physics behind the horizon of the black hole, and therefore the braneworld cosmology, is crucial. If the brane is not too far from the black hole horizon, one possibility is the entanglement entropy of large spatial regions of the CFT[@bhmicrostate; @rt1; @rt2], while additional information can be extracted from the holographic complexity, even if its CFT interpretation is not completely clear yet. It is also important to identify field theories with the right boundary states to make our construction work and to verify the stability of the solutions in the full theory. For example, one should check that scalar fields do not condense in the near-extremal black hole background (or that the desired properties persist in such a condensed state). Given such a theory, it should be possible to simulate it on a quantum computer[@eternal_wormhole; @building_tfd; @product_spectrum; @bound4], thereby opening up the possibility to experimentally study holographic braneworld cosmology.
Although many questions still need to be answered, the model presented in this paper demonstrates the possibility of holographically describing an FLRW cosmology using AdS/CFT, bringing us one step closer to the understanding of quantum gravity in a cosmological universe. Acknowledgments {#section .unnumbered} =============== This work was supported in part by the U.S. Department of Energy, Office of Science, Office of High Energy Physics QuantISED Award DE-SC0019380 and by the Simons Foundation via the It From Qubit collaboration. We thank M. Van Raamsdonk, C. Waddell, D. Wakeham, M. Rozali, S. Cooper, R. Sundrum, R. Bousso, and J. Maldacena for useful discussions. Euclidean action and Einstein-Maxwell equations {#euclidean} =============================================== The total Euclidean action for the bulk spacetime is given by $$I_{bulk}=-\frac{1}{16\pi G}\int_{\mathcal{M}}d^{d+1}x\sqrt{g}(R-2\Lambda-4\pi G F_{\mu\nu}F^{\mu\nu})+I_{GHY}+I_{bound}^{em} \label{bulkaction}$$ where $\mathcal{M}$ is the ($d+1$)-dimensional spacetime manifold, $g$ is the determinant of the bulk metric, $R$ is the Ricci scalar, $\Lambda=-d(d-1)/(2L^2_{AdS})$ is the cosmological constant, $I_{GHY}$ is the Gibbons-Hawking-York term for the asymptotic boundary and $$I_{bound}^{em}=-\int_{\partial \mathcal{M}_\infty} d^dx\sqrt{\gamma}F^{\mu\nu}n_\mu A_\nu \label{bound}$$ with $\gamma$ the determinant of the metric induced on the asymptotic boundary $\partial \mathcal{M}_\infty$ and $n_{\mu}$ the dual vector normal to the boundary. The term $I_{bound}^{em}$ is needed[@hawkingboundary; @1-; @phase] because we keep the charge fixed instead of the potential when we vary the action (\[bulkaction\]). This choice is preferable since we are interested in controlling the charge in order to approach the extremal black hole case and obtain a sensible Euclidean solution for a near-critical brane.
The electromagnetic four-potential $A_\mu=(A_t,\vec{0})$ for a point charge in the origin reads: $$A_t=\sqrt{\frac{d-1}{8\pi G(d-2)}}\left(\frac{Q}{r^{d-2}}-\frac{Q}{r_+^{d-2}}\right) \label{pot}$$ where we chose the outer (event) horizon of the black hole $r=r_+$ to be the zero-potential surface and $Q$ is the charge parameter of the black hole, related to the point charge $\tilde{Q}$ (the charge of the black hole) by[@dias; @1-; @phase] $$Q=\sqrt{\frac{8\pi G}{(d-1)(d-2)}}\frac{\tilde{Q}}{V_{d-1}} \label{chpar}$$ with $V_{d-1}$ the volume of the ($d-1$)-dimensional unit sphere. The electromagnetic tensor is defined as usual by $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. The action for the End-of-the-World brane is reported in equation (\[actmain\]), where $I_{ETW}^{em}$ has the same form (\[bound\]) as the electromagnetic boundary term for the asymptotic boundary, with $\gamma\to h$ and $n_\mu$ the dual vector normal to the ETW brane. The extrinsic curvature appearing there is defined as $$K_{ab}=\nabla_\mu n_{\nu}\textrm{e}^\mu_a\textrm{e}^\nu_b,\hspace{2cm} \textrm{e}^\mu_a=\frac{dx^\mu}{dy^a}. \label{extrinsic}$$ A variation of the total action for the bulk and the brane leads to a set of three equations: equation (\[brane\]) (describing the motion of the brane in the bulk spacetime), the Einstein-Maxwell equations for the bulk $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}(R-2\Lambda)=8\pi G T_{\mu\nu}^{bulk} \label{ein}$$ with $R_{\mu\nu}$ the Ricci tensor, and the Maxwell equation $$\nabla_\mu F^{\mu\nu}=\frac{1}{\sqrt{g}}\,\partial_\mu\left(\sqrt{g}\,F^{\mu\nu}\right)=0.$$ The bulk stress-energy tensor appearing in equation (\[ein\]) is the electromagnetic stress-energy tensor: $$T_{\mu\nu}^{bulk}=g^{\rho\sigma}F_{\mu\rho}F_{\nu\sigma}-\frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma}.
\label{stress}$$ The static and spherically symmetric metric reported in equation (\[linel\]) is a solution of the Einstein-Maxwell equations (\[ein\]) and the mass parameter $\mu$ appearing in equation (\[00\]) (which is valid only for $d>2$) is related to the ADM mass of the black hole by $$\mu=\frac{8\pi G M}{(d-1)V_{d-1}}.$$ In order to approach, in our numerical analysis, the extremal black hole case for a fixed event horizon size by controlling the Cauchy horizon radius, it is useful to express the mass and the charge parameters in terms of $r_+$ and $r_-$: $$\begin{aligned} &\mu=\frac{L^2_{AdS}\left[r_+^{2(d-2)}-r_-^{2(d-2)}\right]+r_+^{2d-2}-r_-^{2d-2}}{2L^2_{AdS}(r_+^{d-2}-r_-^{d-2})}; \label{massd3}\\[15pt] &Q^2=r_+^{d-2}r_-^{d-2}\left[\frac{L^2_{AdS}\left(r_+^{d-2}-r_-^{d-2}\right)+r_+^d-r_-^d}{L^2_{AdS}\left(r_+^{d-2}-r_-^{d-2}\right)}\right].\label{charged3}\end{aligned}$$ It is also convenient to eliminate the mass parameter from the expression of the $\tau\tau$ component of the metric $f(r)$: $$f(r)=1+\frac{r^2}{L^2_{AdS}}-\frac{r_+^{d-2}}{r^{d-2}}\left(1+\frac{r_+^2}{L^2_{AdS}}\right)+\frac{Q^2}{r^{d-2}}\left(\frac{1}{r^{d-2}}-\frac{1}{r_+^{d-2}}\right) \label{00charge}$$ which can be rewritten, using the adimensional coordinates introduced in Section \[gravloc\], in a form better suited to the gravity localization analysis $$f(y)=\frac{\left[y^4+\left(\gamma^2+1\right)y^2-\gamma^2q^2\right]\left(y^2-1\right)}{\gamma^2y^4}. \label{adimmetric}$$ For completeness, we report here also the form of the $\tau\tau$ component of the metric for the BTZ charged black hole (i.e. for $d=2$) as a function of the ADM mass $M$ and the charge of the black hole $\tilde{Q}$, in units such that $G_3=1/8$[@btz]: $$f(r)=\frac{r^2}{L_{AdS}^2}-M-\tilde{Q}^2\ln\left(\frac{r}{L_{AdS}}\right).$$ Finally, the Ricci scalar for a spherically symmetric metric of the form (\[linel\]) takes in general the form $$R=-\frac{r^2f''(r)+2r(d-1)f'(r)+(d-1)(d-2)f(r)-(d-1)(d-2)}{r^2}.
\label{ricciscalar}$$ Substituting equation (\[00\]), we obtain, for $d>2$ $$R=-\frac{d(d+1)}{L^2_{AdS}}-\frac{(d-3)(d-2)Q^2}{r^{2d-2}}. \label{scalar2}$$ Euclidean analysis - details ============================ In order to avoid a conical singularity, the Euclidean time $\tau$ must be periodic with periodicity given by $$\beta=\frac{2\pi}{k_g}. \label{periodicity}$$ Here $k_g$ is the surface gravity, which for static and spherically symmetric black holes reads $$k_g=\left.\frac{f'(r)}{2}\right|_{r=r_H}$$ where $r_H$ is the event horizon radius. For a Reissner-Nordström black hole $r_H=r_+$. Using equation (\[00charge\]), for $d>2$ we find the Euclidean periodicity $$\beta=\frac{4\pi L^2_{AdS}r_+^{2d-3}}{(d-2)L^2_{AdS}\left[r_+^{2(d-2)}-Q^2\right]+dr_+^{2d-2}}. \label{beta}$$ The spherically symmetric ETW brane can be parameterized by $r=r(\tau)$. Analogously to the AdS-Schwarzschild case[@bhmicrostate], the one-form dual to the unit vector normal to the brane is given by $$n_\mu=\gamma(-r',1,0,0),\hspace{2cm} \gamma=\sqrt{\frac{f(r)}{f^2(r)+(r')^2}} \label{normalvector}$$ with $r'=dr/d\tau$. The metric induced on the brane reads $$\begin{aligned} &h_{\tau\tau}=f(r)+\frac{(r')^2}{f(r)};\\ &h_{\phi_i\phi_i}=g_{\phi_i\phi_i} \hspace{1cm} i=1,..,d-1 \end{aligned}$$ where $\phi_i$ are the coordinates on the sphere directions. Using the definition of the extrinsic curvature (\[extrinsic\]) and equation (\[normalvector\]), the $\tau\tau$ component of equation (\[brane\]) leads to equation (\[traj\]), which we report here: $$\frac{dr}{d\tau}=\pm\frac{f(r)}{Tr}\sqrt{f(r)-T^2r^2}. \label{tautau}$$ The $+$ ($-$) sign corresponds to the contracting (expanding) phase. Indeed, as is clear from Figure \[eucl\], during the contraction the Euclidean time decreases (clockwise) from $\tau=-\tau_0$ to $\tau=-\beta/2$, and therefore $dr/d\tau>0$; during the expansion the time decreases from $\tau=\beta/2$ to $\tau=\tau_0$, and $dr/d\tau<0$.
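The algebra connecting (\[charged3\]), (\[00charge\]), (\[beta\]) and (\[scalar2\]) can be spot-checked numerically. The sketch below is our own illustration, not part of the original analysis; the sample values ($d=4$, $L_{AdS}=1$, $r_+=1$, $r_-=0.99$) are arbitrary, and derivatives are taken by finite differences:

```python
import math

# Illustrative parameters: d = 4, L_AdS = 1, outer/inner horizons r_+ = 1, r_- = 0.99
d, L, rp, rm = 4, 1.0, 1.0, 0.99

# Charge parameter from eq. (charged3)
Q2 = rp**(d-2)*rm**(d-2)*(L**2*(rp**(d-2)-rm**(d-2)) + rp**d - rm**d) \
     / (L**2*(rp**(d-2)-rm**(d-2)))

def f(r):
    """tau-tau metric component, eq. (00charge)."""
    return (1 + r**2/L**2
            - rp**(d-2)/r**(d-2)*(1 + rp**2/L**2)
            + Q2/r**(d-2)*(1/r**(d-2) - 1/rp**(d-2)))

# Both horizons must be zeros of f
assert abs(f(rp)) < 1e-10 and abs(f(rm)) < 1e-10

eps = 1e-5
# Surface gravity k_g = f'(r_+)/2 by central difference; beta = 2*pi/k_g
kg = (f(rp + eps) - f(rp - eps)) / (4*eps)
beta_formula = 4*math.pi*L**2*rp**(2*d-3) / (
    (d-2)*L**2*(rp**(2*(d-2)) - Q2) + d*rp**(2*d-2))
assert abs(2*math.pi/kg - beta_formula) < 1e-4*beta_formula

# Ricci scalar: eq. (ricciscalar) with finite differences must
# reproduce the closed form (scalar2)
def R_num(r):
    fr = f(r)
    f1 = (f(r + eps) - f(r - eps)) / (2*eps)
    f2 = (f(r + eps) - 2*fr + f(r - eps)) / eps**2
    return -(r**2*f2 + 2*r*(d-1)*f1 + (d-1)*(d-2)*fr - (d-1)*(d-2)) / r**2

r = 2.0
R_closed = -d*(d+1)/L**2 - (d-3)*(d-2)*Q2/r**(2*d-2)
assert abs(R_num(r) - R_closed) < 1e-3
```

The same check passes for other sub-extremal choices of $r_\pm$, since $f(r_\pm)=0$ and the $\beta$ and $R$ formulas hold for any $d>2$.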
It is easy to show that, if equation (\[tautau\]) is satisfied, the other components of equation (\[brane\]) are identically fulfilled. The minimum radius $r_0$ is defined by the largest zero of the RHS of equation (\[tautau\]), as discussed in Section \[euclsec\]. For a vanishing tension, clearly $r_0=r_+$, since $f(r_+)=0$ by definition of horizon. It is not difficult to show that for $d>2$ and a critical brane, i.e. for $T=T_{crit}=1/L_{AdS}$, the equation $f(r)=T^2r^2$ can be solved analytically, leading to two solutions $r_0^+>r_+$ and $r_0^-<r_-$, and therefore $r_0=r_0^+>r_+$. A numerical evaluation shows that for $0<T<T_{crit}$ the latter property still holds. The ratio $r_0/r_+$ increases when the tension increases. In particular, if the black hole is big (meaning $r_+\gg L_{AdS}$), it is possible to obtain a very large ratio $r_0/r_+$ by approaching the critical value of the tension (see Figure \[fig3\](b)). If the black hole is small, the ratio is never large, preventing, as we observed, gravity localization on the brane. ![**Minimum brane radius - Small black hole.** The minimum radius of the brane $r_0$ grows with the tension $T$, but it never becomes much larger than the black hole horizon. $d=4$, $r_+=1$, $r_-=0.99$, $L_{AdS}=1$.](s1.pdf){height="3.5cm"} Since we are interested in time-symmetric solutions, we can require the brane to reach its minimum radius for $\tau=\pm\beta/2$, defining $r_0=r(\tau=\pm\beta/2)$. Up to a constant $\beta/2$ (which cancels out when adding up the Euclidean times necessary for the expanding and the contracting phases), the brane locus is given by $$\tau(r)=\int_{r_0}^rd\hat{r}\frac{T\hat{r}}{f(\hat{r})\sqrt{f(\hat{r})-T^2\hat{r}^2}}. \label{taur}$$ The total Euclidean time necessary for the brane to go from $r=r_0$ to $r=\infty$, i.e. to cover half of its trajectory, is $$\Delta\tau=\int_{r_0}^\infty dr \frac{Tr}{f(r)\sqrt{f(r)-T^2r^2}}. 
\label{deltatau}$$ The Euclidean preparation time $\tau_0$ is given by the residual Euclidean periodicity: $$\tau_0=\frac{\beta-2\Delta\tau}{2}=\frac{2\pi L^2_{AdS}r_+^{2d-3}}{(d-2)L^2_{AdS}\left(r_+^{2(d-2)}-Q^2\right)+dr_+^{2d-2}}-\Delta\tau.$$ For a critical brane we obtain: $$\Delta\tau_{crit}=\frac{1}{L_{AdS}}\int_{r_0}^\infty dr \frac{r}{\left(1+\frac{r^2}{L^2_{AdS}}-\frac{2\mu}{r^{d-2}}+\frac{Q^2}{r^{2(d-2)}}\right)\sqrt{1-\frac{2\mu}{r^{d-2}}+\frac{Q^2}{r^{2(d-2)}}}}$$ and this expression is logarithmically divergent at infinity. For $T>T_{crit}$, the square root $\sqrt{f(r)-T^2r^2}$ becomes imaginary for large values of $r$. Thus, $\Delta\tau$, and therefore the preparation time $\tau_0$, are well defined only for $T<T_{crit}$. Our numerical evaluation shows that, given a value for the outer horizon radius $r_+$ and for the brane tension $T<T_{crit}$, it is always possible to obtain a positive preparation time (i.e. a sensible Euclidean solution, and a dual description in terms of a Euclidean-time-evolved charged boundary state of a CFT living on the boundary of the right asymptotic region of the black hole) by increasing the size of the inner horizon radius $r_-$, that is, by increasing the charge of the black hole. For $T\to T_{crit}$, we need $r_-\to r_+$ in order to retain a sensible Euclidean solution, i.e. the black hole must approach extremality. This feature guarantees that, for a large black hole sufficiently close to extremality, we can find a sensible Euclidean solution with the minimum radius of the brane (which is the maximum radius in Lorentzian signature) considerably larger than the black hole event horizon. As we have pointed out, this is a necessary condition in order to achieve gravity localization on the brane. In the AdS-Schwarzschild case, which is the $Q\to 0$ limit of our analysis, this result is clearly impossible to obtain.
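The positivity of the preparation time for a large near-extremal black hole can be reproduced with a few lines of numerics. The sketch below is our own illustration (it does not reproduce the paper's tables): it uses $d=4$, $L_{AdS}=1$, $r_+=100$, $r_-=99.9$ and a sample sub-critical tension $T=0.5$, finds $r_0$ by bisection, and evaluates $\Delta\tau$ with the substitution $r=r_0+u^2$, which absorbs the inverse-square-root singularity of the integrand at $r_0$:

```python
import math

d, L, rp, rm, T = 4, 1.0, 100.0, 99.9, 0.5   # sample values; T < T_crit = 1/L

Q2 = rp**(d-2)*rm**(d-2)*(L**2*(rp**(d-2)-rm**(d-2)) + rp**d - rm**d) \
     / (L**2*(rp**(d-2)-rm**(d-2)))

def f(r):   # eq. (00charge)
    return (1 + r**2/L**2 - rp**(d-2)/r**(d-2)*(1 + rp**2/L**2)
            + Q2/r**(d-2)*(1/r**(d-2) - 1/rp**(d-2)))

beta = 4*math.pi*L**2*rp**(2*d-3) / ((d-2)*L**2*(rp**(2*(d-2)) - Q2)
                                     + d*rp**(2*d-2))   # eq. (beta)

# r0 = largest zero of g(r) = f(r) - T^2 r^2  (g < 0 at r_+, g > 0 at large r)
g = lambda r: f(r) - T**2*r**2
a, b = rp, 100*rp
for _ in range(200):
    c = 0.5*(a + b)
    if g(c) < 0: a = c
    else: b = c
r0 = 0.5*(a + b)

# Delta tau = int_{r0}^infty T r dr / (f sqrt(g)); with r = r0 + u^2,
# dr = 2u du, the integrand stays regular at u = 0 (midpoint rule below)
Rmax = 1e5                      # large-r cutoff; tail falls off like 1/r^2
U, N = math.sqrt(Rmax - r0), 100000
dtau = 0.0
for i in range(N):
    u = (i + 0.5)*U/N
    r = r0 + u*u
    dtau += 2*u * T*r / (f(r)*math.sqrt(g(r))) * (U/N)

tau0 = 0.5*beta - dtau          # residual Euclidean periodicity
assert r0 > rp and 0 < dtau < 0.5*beta
assert tau0 > 0                 # sensible Euclidean solution for this choice
print(r0, beta, dtau, tau0)
```

Moving $r_-$ away from $r_+$ at fixed $T$ eventually drives $\tau_0$ negative, in line with the statement that near-critical branes require near-extremal black holes.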
Therefore, for an uncharged black hole, gravity localization and a positive preparation time mutually exclude each other, and a holographic braneworld cosmology picture is not feasible.[@bhmicrostate] Action comparison {#actcomp} ================= In order to understand if the non-extremal AdS-RN solution we studied is dominant in the Euclidean path integral, we must compare its on-shell action with other possible phases contributing to the path integral. The dominant phase will be the one with the smallest action. As we have already pointed out, we will focus our attention on the fixed charge case. This choice corresponds to a canonical ensemble, where the temperature and the charge are held fixed. Note that this implies that (unlike in the fixed potential case) the variation of the potential does not vanish on the boundary. For this reason, we need to add the electromagnetic boundary terms (\[bound\]) for the asymptotic boundary and the brane to the action. For fixed charge, the other possible phase is represented by the extremal AdS-RN black hole with the same charge as the corresponding non-extremal solution (for a complete treatment of the AdS-RN phase structure for fixed charge and fixed potential without the ETW brane see refs.[@1-; @phase]). The CFT state corresponding to the fixed charge ensemble is the Euclidean-time-evolved boundary state with fixed charge reported in equation (\[state\]). It is clear that the two Euclidean solutions contributing to the gravity path integral dual to the state (\[state\]) must have the same charge and the same preparation time $\tau_0$. We will use this property in order to match the two geometries in the asymptotic region, a procedure needed to regularize the Euclidean action and compare the two actions.[@1-; @phase; @bhmicrostate] We will now briefly review some useful properties of the extremal AdS-RN black hole.
Extremal AdS-RN black hole {#extremal-ads-rn-black-hole .unnumbered} -------------------------- For the extremal AdS-Reissner-Nordström black hole, the two solutions $r_+$ and $r_-$ of $f_e(r)=0$ coincide, defining the extremal horizon radius $r_e$. Therefore, the relation $\left.f'_e(r)\right|_{r=r_e}=0$ must also hold. By requiring the two conditions to be satisfied, we obtain for the extremal mass and charge parameters: $$\begin{aligned} &2\mu_e=2r_e^{d-2}+\frac{2d-2}{d-2}\frac{r_e^d}{L^2_{AdS}};\label{extm}\\ &Q^2_e=r_e^{2d-4}+\frac{d}{d-2}\frac{r_e^{2d-2}}{L^2_{AdS}}.\label{extch}\end{aligned}$$ We remark that, by fixing the charge of the black hole, the extremal horizon radius is uniquely determined. Another important feature of the extremal black hole is that, since $\left.f'_e(r)\right|_{r=r_e}=0$, the Euclidean periodicity (\[periodicity\]) diverges, i.e. the temperature of the extremal black hole vanishes. Nonetheless, it has been demonstrated [@1-; @phase; @tempestr] that an arbitrary Euclidean periodicity can be chosen for the extremal RN black hole without running into a conical singularity. Physically, this means that an extremal RN black hole can be in thermodynamic equilibrium with a thermal bath at an arbitrary temperature. In the following we will use this feature, fixing a total Euclidean periodicity $\beta_e$ for the extremal black hole that will be determined by matching the geometries of the non-extremal and the extremal black holes in the asymptotic regions. Given $\beta_e$, the extremal horizon radius $r_e$ and the expression (\[extch\]) for the extremal charge, the same Euclidean analysis carried out in Section \[euclidean\] can be applied to the extremal case. Before explicitly evaluating the difference of the two Euclidean actions for the non-extremal and extremal black holes, one more remark is needed. In general, the phase structure is more complicated than the one we are going to study.
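The two extremal relations can be cross-checked numerically: given a charge, solve (\[extch\]) for $r_e$ and verify that $f_e$ has a double zero there. The sketch below is our own illustration, with $d=4$, $L_{AdS}=1$ and an arbitrary sample charge:

```python
d, L = 4, 1.0
Q2 = 2.0e12                 # arbitrary sample value of the charge parameter squared

# Solve eq. (extch): Q^2 = r_e^{2d-4} + d/(d-2) r_e^{2d-2}/L^2, by bisection
# (the left-hand side is monotonically increasing in r_e)
h = lambda r: r**(2*d-4) + d/(d-2)*r**(2*d-2)/L**2 - Q2
a, b = 1e-6, 1e6
for _ in range(200):
    c = 0.5*(a + b)
    if h(c) < 0: a = c
    else: b = c
re = 0.5*(a + b)

# Extremal mass parameter from eq. (extm): mu_e = r_e^{d-2} + (d-1)/(d-2) r_e^d/L^2
mu_e = re**(d-2) + (d-1)/(d-2)*re**d/L**2

def fe(r):   # Reissner-Nordstrom form of the tau-tau component with (mu_e, Q2)
    return 1 + r**2/L**2 - 2*mu_e/r**(d-2) + Q2/r**(2*(d-2))

eps = 1e-3
fprime = (fe(re + eps) - fe(re - eps)) / (2*eps)
assert abs(fe(re)) < 1e-6     # f_e vanishes at r = r_e ...
assert abs(fprime) < 1e-3     # ... and so does f_e', i.e. a double zero
```

The uniqueness of $r_e$ at fixed charge, used below to write $r_e$ as a function of $r_+$ and $r_-$, follows from the monotonicity of the right-hand side of (\[extch\]) exploited by the bisection.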
In particular, for a given small charge $Q<Q_{crit}$ and some range of temperatures, three different non-extremal black holes with the same temperature but different horizon radii can exist [@1-]. Indeed, for $Q<Q_{crit}$, there exist turning points for the Euclidean periodicity $\beta$ as a function of the outer horizon radius $r_+$, which disappear for $Q>Q_{crit}$. The critical charge can be obtained from the condition $\partial_{r_+}\beta=0=\partial_{r_+}^2\beta$, and reads [@1-]: $$Q^2_{crit}=\frac{1}{(d-1)(2d-3)}\left[\frac{(d-2)^2}{d(d-1)}\right]L_{AdS}^{2d-4}.$$ The coexistence of different non-extremal black holes implies the necessity of studying which one of them has the smallest action, before a comparison with the extremal case can be made. To do so, we should fix a Euclidean periodicity $\beta$ and a charge $Q$, find the different corresponding values of $r_+$ and compare the actions for each of them. Nonetheless, since, as we have already pointed out, we are interested in solutions involving near-critical branes, and this requires a black hole near extremality in order to have a positive preparation time, we are safely in the region where only one non-extremal phase exists [@1-]. Therefore, we are allowed to choose $r_+$ and $r_-$ (and therefore the charge) independently, and compare directly the resulting action with the corresponding extremal action with the same charge. Bulk action {#bulk-action .unnumbered} ----------- Let us evaluate the three terms of the bulk action (\[bulkaction\]) separately in the non-extremal case. First, using equation (\[pot\]), the only non-vanishing components of the electromagnetic tensor are $F_{\tau r}=-F_{r\tau}$, and we obtain $$F_{\mu\nu}F^{\mu\nu}=-\frac{(d-1)(d-2)}{4\pi G}\frac{Q^2}{r^{2d-2}}.
\label{ff}$$ Using equation (\[ricciscalar\]) for the Ricci scalar, the relationship between $\Lambda$ and $L_{AdS}$, and noting that $\sqrt{g}=r^{d-1}$, the first term of the bulk action reads $$I_{bulk}^{(1)}=\frac{dV_{d-1}}{8\pi G L^2_{AdS}}\int drd\tau r^{d-1}-\frac{(d-2)V_{d-1}Q^2}{8\pi G}\int drd\tau\frac{1}{r^{d-1}}.$$ Recalling that part of the Euclidean geometry is cut off by the ETW brane parameterized by $\tau(r)$, we can rewrite this integral as the integral in the entire spacetime minus the excised part. In order to avoid divergences, we must also introduce a cut-off $R$ in the asymptotic region. After a few steps, the result turns out to be $$\begin{split} I_{bulk}^{(1)}=&\frac{V_{d-1}}{8\pi GL^2_{AdS}}\left[\beta(R^d-r_+^d)-2\left.(r^d\tau(r))\right|_{r_0}^R+2\int_{r_0}^Rdrr^d\frac{\partial\tau}{\partial r}\right]\\[15pt] &+\frac{V_{d-1}Q^2}{8\pi G}\left[\beta\left(\frac{1}{R^{d-2}}-\frac{1}{r_+^{d-2}}\right)-2\left.\left(\frac{\tau(r)}{r^{d-2}}\right)\right|_{r_0}^R+2\int_{r_0}^Rdr\frac{1}{r^{d-2}}\frac{\partial\tau}{\partial r}\right] \end{split} \label{totbulk}$$ where $\beta$ is given by equation (\[beta\]). The Gibbons-Hawking-York term $I_{GHY}$ gives a vanishing contribution to the action difference for asymptotically AdS spacetimes[@1-; @phase; @counterterm] and therefore we can neglect it. The electromagnetic boundary term finally reads $$I_{bound}^{em}=-\frac{(d-1)V_{d-1}Q^2\tau_0}{4\pi G}\left(\frac{1}{R^{d-2}}-\frac{1}{r_+^{d-2}}\right). \label{expbound}$$ Brane action {#brane-action .unnumbered} ------------ Tracing equation (\[brane\]) we obtain $$K-(d-1)T=T.$$ Parameterizing again the brane with $\tau(r)$, and using equation (\[tautau\]), the determinant of the metric induced on the brane reads: $$\sqrt{h}=\pm\frac{1}{T}f(r)r^{d-2}\frac{\partial\tau}{\partial r} \label{detmetricbrane}$$ where the $+$ ($-$) sign is for the contracting (expanding) phase. 
Considering the contributions of both phases and using the expression (\[normalvector\]) for the dual vector normal to the brane, the total action for the ETW brane reads $$\begin{split} I_{ETW}=&-\frac{V_{d-1}}{4\pi G}\int_{r_0}^Rdrf(r)r^{d-2}\frac{\partial\tau}{\partial r}\\[10pt] &+\frac{(d-1)V_{d-1}Q^2}{4\pi G r_+^{d-2}}\tau(r)\Big|_{r_0}^R-\frac{(d-1)V_{d-1}Q^2}{4\pi G}\int_{r_0}^Rdr\frac{1}{r^{d-2}}\frac{d\tau}{dr}. \end{split} \label{etwem}$$ Action difference {#action-difference .unnumbered} ----------------- Using the definition (\[taur\]), we note that $\tau(r_0)=0$. Additionally, $\lim_{R\to\infty}\tau(R)=\Delta\tau$ and $\beta-2\Delta\tau=2\tau_0$. Since, after subtracting the extremal action from the non-extremal one, we will take the limit $R\to\infty$, for simplicity of notation we will now substitute $\tau(R)\to\Delta\tau$ and $\beta-2\tau(R)\to 2\tau_0$, even if the limit has not been taken yet. Using these relations, the total action for the non-extremal black hole can be cast in the form $$\begin{split} I_{tot}=&\frac{V_{d-1}}{8\pi GL^2_{AdS}}\left\{2\tau_0R^d-\beta r_+^d-\frac{2(d-2)L^2_{AdS}Q^2}{R^{d-2}}\tau_0+\frac{(d-2)L^2_{AdS}Q^2}{r_+^{d-2}}\beta\right.\\ &\left.+2\int_{r_0}^Rdr\left[\left(r^d-\frac{(d-2)L^2_{AdS}Q^2}{r^{d-2}}-L^2_{AdS}r^{d-2}f(r)\right)\frac{\partial\tau}{\partial r}\right]\right\}. \end{split} \label{act}$$ We must now subtract from this action the equivalent action for the extremal black hole after matching the geometries in the asymptotic region, and then take the limit $R\to\infty$. The total action for the extremal black hole is given by equation (\[act\]) where all the quantities are substituted by their extremal counterparts ($r_+\to r_e$, $\tau_0\to\tau_0^e$, $\beta\to\beta_e$, $r_0\to r_0^e$, $f(r)\to f_e(r)$, $Q\to Q_e$).
As we have already mentioned, the Euclidean analysis carried out for the non-extremal black hole is still valid after substituting $r_+\to r_e$, and the total Euclidean periodicity $\beta_e$ can be chosen arbitrarily [@tempestr]. Since we are interested in the fixed charge case, we can choose a charge, which will be the same for both black holes ($Q=Q_e$), using equation (\[charged3\]) and selecting values for the outer and inner horizon radii ($r_+$ and $r_-$) of the non-extremal black hole. The extremal horizon radius $r_e$ is then uniquely determined by equation (\[extch\]). Therefore $r_e$ can be written as a function of $r_+$ and $r_-$ only. We will use this feature in our numerical analysis. The choice of the (arbitrary) Euclidean periodicity for the extremal black hole is determined by the matching of the two geometries in the asymptotic region. In particular, we must match the proper Euclidean preparation times in the asymptotic region (i.e. for $r=R$): $$2\tau_0\sqrt{f(R)}=2\tau^e_0\sqrt{f_e(R)}. \label{matching}$$ At lowest order in $1/R$, equation (\[matching\]) gives $$\tau_0^e\sim\tau_0(1+A) \label{match}$$ where we have defined: $$A=\frac{L^2_{AdS}}{R^d}(\mu_e-\mu)=\frac{L^2_{AdS}}{R^d}\left(r_e^{d-2}+\frac{d-1}{d-2}\frac{r_e^d}{L^2_{AdS}}-\frac{r_+^{d-2}}{2}-\frac{r_+^d}{2L^2_{AdS}}-\frac{Q^2}{2r_+^{d-2}}\right).$$ We emphasize that, at the asymptotic boundary (i.e. for $R\to\infty$), $\tau_0^e=\tau_0$, as we expect for two geometries dual to the state (\[state\]). The total Euclidean periodicity for the extremal black hole can then be set to be $$\beta_e=2\tau_0^e+2\Delta\tau_e=2\tau_0(1+A)+2\Delta\tau_e \label{tempe}$$ where $\Delta\tau_e$ is defined in perfect analogy with the non-extremal case.
Using equation (\[tempe\]) and the matching condition (\[match\]) and taking the limit $R\to\infty$, after some manipulation the difference between the non-extremal and the extremal action finally takes the form $$\begin{split} \Delta I\equiv&\frac{8\pi GL^2_{AdS}}{V_{d-1}}\lim_{R\to\infty}(I-I_{ext})=\\[10pt] &-\frac{2r_e^d\tau_0}{d-2}-2L^2_{AdS}r_e^{d-2}\tau_0+L^2_{AdS}r_+^{d-2}\tau_0+r_+^d\tau_0-\beta r_+^d+\frac{L^2_{AdS}Q^2\tau_0}{r_+^{d-2}}\\[10pt] &+2r_e^d\Delta\tau_e+\frac{(d-2)L^2_{AdS}Q^2\beta}{r_+^{d-2}}-\frac{2(d-2)L^2_{AdS}Q^2\tau_0}{r_e^{d-2}}-\frac{2(d-2)L^2_{AdS}Q^2\Delta\tau_e}{r_e^{d-2}}\\[10pt] &-2L^2_{AdS}\int_{r_0^e}^{r_0}dr\left\{\left[-r^{d-2}+r_e^{d-2}\left(1+\frac{r_e^2}{L^2_{AdS}}\right)+\frac{Q^2}{r_e^{d-2}}-\frac{(d-1)Q^2}{r^{d-2}}\right]\frac{Tr}{f_e(r)\sqrt{f_e(r)-T^2r^2}}\right\}\\[10pt] &+2L^2_{AdS}\int_{r_0}^\infty dr\left\{\left[-r^{d-2}+r_+^{d-2}\left(1+\frac{r_+^2}{L^2_{AdS}}\right)+\frac{Q^2}{r_+^{d-2}}-\frac{(d-1)Q^2}{r^{d-2}}\right]\frac{Tr}{f(r)\sqrt{f(r)-T^2r^2}}\right.\\[10pt] &\left.- \left[-r^{d-2}+r_e^{d-2}\left(1+\frac{r_e^2}{L^2_{AdS}}\right)+\frac{Q^2}{r_e^{d-2}}-\frac{(d-1)Q^2}{r^{d-2}}\right]\frac{Tr}{f_e(r)\sqrt{f_e(r)-T^2r^2}}\right\}. \end{split} \label{deltai}$$ When such a difference is negative, the non-extremal black hole phase has smaller action and is therefore dominant in the ensemble. A numerical evaluation shows that, for $d=4$ and for a range of parameters such that the Euclidean solution for the non-extremal black hole is sensible (i.e. $\tau_0>0$), $\Delta I$ is always negative. Therefore, differently from the AdS-Schwarzschild case[@bhmicrostate], when the non-extremal AdS-RN solution admits a dual CFT description, it is also always dominant in the gravity path integral. 
Lorentzian analysis and braneworld cosmology - details ====================================================== The initial condition for the Lorentzian time evolution is given by the $\tau=0,\pm\beta/2$ slice of the Euclidean geometry, where the brane reaches its minimum radius $r_0$. The time coordinate is analytically continued $\tau\to -it$, with $t$ the Lorentzian time. Further time evolution will be in Lorentzian signature. The brane locus is then given by $$t(r)=\int_{r_0}^rd\hat{r}\frac{T\hat{r}}{f(\hat{r})\sqrt{T^2\hat{r}^2-f(\hat{r})}}. \label{schw}$$ We explained that, in the Lorentzian case, the brane radial coordinate ranges over $r_0^-\le r\le r_0^+\equiv r_0$. Let us now focus on the Friedmann equation (considered in equation (\[friedmain\])) describing the evolution of the brane in terms of its proper time $\lambda$. By defining $L(r)=\ln r$ and $L_+=L(r_+)$, it can be rewritten as $$\dot{L}^2+V(L)=T^2$$ where $$V[L(r)]=\frac{f(r)}{r^2}. \label{poten}$$ In this new coordinate, the motion of the brane can be regarded as that of a particle with energy $T^2$ in the presence of a potential $V(L)$. The potential (\[poten\]) naturally vanishes for $r=r_+$ and $r=r_-$. It has a local maximum and a local minimum at $r_M$ and $r_m$ respectively. For the local minimum $r_-<r_m<r_+$ and $V[L(r_m)]<0$, while for the local maximum $r_M>r_+$ and $V[L(r_M)]>1/L^2_{AdS}=T_{crit}^2$. Additionally $\lim_{r\to\infty}V[L(r)]=T_{crit}^2$, while $\lim_{r\to 0}V[L(r)]=\infty$. The latter behaviour confirms that the brane radius cannot vanish, i.e. the brane undergoes a bounce at a minimum radius $r_0^-$. In contrast to the small black hole case ($r_+\lesssim L_{AdS}$), in the large black hole case ($r_+\gg L_{AdS}$) the minimum of the potential is $V[L(r_m)]\sim 0$ and its local maximum is $V[L(r_M)]\sim T_{crit}^2$, the latter meaning that the value of the potential at the peak is almost indistinguishable from its asymptotic value.
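The stated shape of $V[L(r)]$ is easy to confirm numerically. The sketch below is our own illustration for a small black hole (sample values $d=4$, $r_+=1$, $r_-=0.99$, $L_{AdS}=1$): the potential vanishes at both horizons, dips slightly negative at its minimum between them, and peaks above $T_{crit}^2=1/L_{AdS}^2$ outside the outer horizon before relaxing to $T_{crit}^2$ at large $r$:

```python
d, L, rp, rm = 4, 1.0, 1.0, 0.99

Q2 = rp**(d-2)*rm**(d-2)*(L**2*(rp**(d-2)-rm**(d-2)) + rp**d - rm**d) \
     / (L**2*(rp**(d-2)-rm**(d-2)))

def f(r):   # eq. (00charge)
    return (1 + r**2/L**2 - rp**(d-2)/r**(d-2)*(1 + rp**2/L**2)
            + Q2/r**(d-2)*(1/r**(d-2) - 1/rp**(d-2)))

V = lambda r: f(r)/r**2          # eq. (poten), written as a function of r
Tcrit2 = 1/L**2

# vanishes at both horizons
assert abs(V(rp)) < 1e-9 and abs(V(rm)) < 1e-9

n = 100000
# local minimum between the horizons: V < 0 there
grid = [rm + (rp - rm)*(i + 1)/(n + 1) for i in range(n)]
Vmin = min(V(r) for r in grid)
assert Vmin < 0

# local maximum outside the outer horizon: V > T_crit^2
grid = [rp + (20.0 - rp)*(i + 1)/(n + 1) for i in range(n)]
rM, VM = max(((r, V(r)) for r in grid), key=lambda t: t[1])
assert rM > rp and VM > Tcrit2

# asymptotically V -> T_crit^2
assert abs(V(1e4) - Tcrit2) < 1e-3
```

For this small black hole the minimum is only marginally negative ($V[L(r_m)]\sim -5\cdot 10^{-4}$ here), while repeating the scan with $r_+\gg L_{AdS}$ pushes $V[L(r_m)]\to 0$ and $V[L(r_M)]\to T_{crit}^2$, as described above.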
As an example we report in Figure \[lorpot\] a plot of the potential $V[L(r)]$ as a function of the radius of the brane $r$ in the small and in the large black hole cases. The brane trajectory will be determined by the value of its tension (i.e. the energy of the particle). In particular, analogously to the AdS-Schwarzschild case [@bhmicrostate], we can distinguish in general five different cases: 1. $\mathbf{T^2>V[L(r_M)]}.$ The equation $f(r_0)=T^2r_0^2$ has only one positive solution ($r_0^-$), which is by definition the position of the brane at $\lambda=0$. Therefore, the brane radius decreases from $r=\infty$ to $r=r_0^-$ for negative time and then increases monotonically from $r=r_0^-$ to $r=\infty$ for positive time. 2. $\mathbf{T^2=V[L(r_M)]}.$ The brane has constant radius $r=r_M$. This corresponds to an Einstein static universe. 3. $\mathbf{T^2_{crit}<T^2<V[L(r_M)]}$ **- Large $\mathbf{r}$ branch.** The equation $f(r_0)=T^2r_0^2$ admits three solutions. In particular, if we order such solutions as $r_0^{(3)}>r_0^{(2)}>r_0^{(1)}$, the quantity $\sqrt{T^2r^2-f(r)}$ is real for $r>r_0^{(3)}$ and for $r_0^{(1)}<r<r_0^{(2)}$. Thus, for large $r$, the behaviour is similar to the first case, with the brane trajectory starting from $r=\infty$, shrinking to $r=r_0^{(3)}$ at $\lambda=0$ and expanding again to $r=\infty$. 4. $\mathbf{T^2_{crit}<T^2<V[L(r_M)]}$ **- Small $\mathbf{r}$ branch.** This is the same situation as in the previous case, but with $r_0^{(1)}<r<r_0^{(2)}$. The brane expands from $r=r_0^{(1)}$ to $r=r_0^{(2)}$, reached at $\lambda=0$, and then contracts back to $r=r_0^{(1)}$. 5. $\mathbf{T^2<T^2_{crit}}.$ In this case, as we have already pointed out, the equation $f(r_0)=T^2r_0^2$ has two solutions, $r_0^+$ and $r_0^-$. The brane expands from $r=r_0^-$, emerges from the horizon, reaches its maximum radius $r=r_0^+$ at $\lambda=0$, and shrinks again to $r=r_0^-$. This trajectory is completed in a finite amount of proper time, as we will see.
We remark that, as we have pointed out, our Euclidean analysis is feasible only for $T<T_{crit}$, a condition needed to have a well-defined preparation time $\tau_0$. Therefore only the last situation is of interest for our analysis. Nonetheless, a continuously expanding brane with $T>T_{crit}$ (first and third case) implies, from a braneworld cosmology point of view, a 4-dimensional Friedmann universe with a positive cosmological constant, which at late times undergoes a de Sitter expanding phase. This cosmological scenario is clearly closer to the behaviour of our observed universe than the Big Bounce cosmology of the last case. Therefore, an interesting future development of this work could be its extension to over-critical branes, which probably requires a different class of CFT states in the dual theory. Focusing on the last case, a few final remarks deserve attention. First, the choice of the sign of the tension $T$ determines which vector normal to the brane we are considering [@bound5], i.e. which one of the two sides of the brane we are retaining. The choice $T>0$ corresponds to a Lorentzian geometry which contains the complete right asymptotic region and part of the left one, ending on the ETW brane (see Figure \[penrose\]). Additionally, the total brane proper time needed to complete the trajectory from $r_0^-$ to $r_0^+$ and back is given by $$\lambda_{tot}=2\int_{r_0^-}^{r_0^+}d\hat{r}\frac{1}{\sqrt{T^2\hat{r}^2-f(\hat{r})}} \label{propertot}$$ and is clearly finite. This expression is the one used in Table \[timescales\] in order to quantify the portion of the trajectory of the brane where the adiabatic approximation is reliable. Finally, from the definition of proper time, we find $$\dot{t}=\frac{dt}{d\lambda}=\pm\frac{Tr}{f(r)}.$$ Referring again to Figure \[penrose\], $f(r)$ has a definite (negative) sign in region II ($r_-<r<r_+$).
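The finiteness of (\[propertot\]) can be checked directly, since the endpoint singularities are only inverse square roots. The sketch below is our own illustration (sample values $d=4$, $r_+=1$, $r_-=0.99$, $L_{AdS}=1$, $T=0.5$): the two turning points $r_0^\pm$ are found by bisection and the substitutions $r=r_0^-+u^2$, $r=r_0^+-u^2$ regularize the two halves of the integral:

```python
import math

d, L, rp, rm, T = 4, 1.0, 1.0, 0.99, 0.5     # sample sub-critical tension

Q2 = rp**(d-2)*rm**(d-2)*(L**2*(rp**(d-2)-rm**(d-2)) + rp**d - rm**d) \
     / (L**2*(rp**(d-2)-rm**(d-2)))

def f(r):   # eq. (00charge)
    return (1 + r**2/L**2 - rp**(d-2)/r**(d-2)*(1 + rp**2/L**2)
            + Q2/r**(d-2)*(1/r**(d-2) - 1/rp**(d-2)))

h = lambda r: T**2*r**2 - f(r)   # positive between the two turning points

def bisect(a, b):                # assumes h changes sign once on [a, b]
    for _ in range(200):
        c = 0.5*(a + b)
        if h(a)*h(c) <= 0: b = c
        else: a = c
    return 0.5*(a + b)

r0m = bisect(0.5, rp)    # h(0.5) < 0, h(r_+) = T^2 r_+^2 > 0
r0p = bisect(rp, 10.0)   # h(10) < 0 since T < T_crit

# lambda_tot = 2 int_{r0m}^{r0p} dr / sqrt(h); midpoint rule on each half
# after substituting away the endpoint singularity
def half(rt, sign, c):
    U, N = math.sqrt(abs(c - rt)), 100000
    s = 0.0
    for i in range(N):
        u = (i + 0.5)*U/N
        s += 2*u / math.sqrt(h(rt + sign*u*u)) * (U/N)
    return s

c = 0.5*(r0m + r0p)
lam = 2*(half(r0m, +1, c) + half(r0p, -1, c))
assert r0m < rm and r0p > rp          # r_0^- < r_- and r_0^+ > r_+
assert math.isfinite(lam) and lam > 0
print(lam)
```

Note that the integrand of (\[propertot\]) stays real through region II, where $f<0$ makes $T^2r^2-f$ strictly positive, so a single quadrature covers the whole trajectory.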
This feature implies that the Schwarzschild time is monotonic with respect to the proper time during the brane trajectory in that region. Thus, if in the contraction phase the brane crossed the outer horizon, for instance, from the left asymptotic region I’ (i.e. it crossed the line $r=r_+$, $t=-\infty$), it must cross the inner horizon to the right internal region III (i.e. it must cross the line $r=r_-$, $t=\infty$), completely cutting off the left internal region. As far as we know, this was first noted in ref. [@instability]. The existence of the minimum radius $r_0^-$, where the brane again reverses its motion, suggests that, gluing together more spacetime patches as in Figure 1, we can obtain a cyclical Big Bounce cosmology (with the caveat that gravity is not always localized on the brane during the trajectory). However, it is not clear if the effective semiclassical analysis we used to derive the brane trajectory is reliable when the brane is close to or inside the inner horizon. Indeed, the instability of the Cauchy horizon against even small excitations of bulk fields implies that curvature singularities may appear [@instability], preventing a smooth evolution of the brane trajectory.

Gravity Localization and trapping coefficient method {#gravlocdet}
====================================================

Before clarifying the technical details of the gravity localization analysis, let us qualitatively understand what conditions we expect to be necessary in order to obtain an effective 4-dimensional description of gravity on the ETW brane.

1. The radius of the brane must be much larger than the event horizon size and the AdS radius: $r_b\gg r_+,L_{AdS}$. Since gravity in a 5-dimensional spacetime is unavoidably influenced by the presence of the fifth dimension, a 4-dimensional description of gravity is achievable only in a region where the gravitational effects directed along the fifth dimension and due to the presence of the black hole are weak.
If the brane is too close to the black hole horizon, the gravitational attraction of the black hole is strong and a graviton localized on the brane will certainly “leak” in the fifth dimension, eventually falling into the black hole horizon. On the other hand, if the brane is far enough from the black hole horizon and deep in the asymptotically AdS region ($r_b\gg L_{AdS}$), we recover a setup very similar to the Randall-Sundrum II (RS II) model, with the brane embedded in a region of the spacetime which is nearly AdS. Thus we can expect to recover, at least locally, gravity localized on the brane. It is somewhat surprising that gravity localization is lost if the radius of the brane is too large, as we will see.

2. The time scale of the motion of the brane must be much larger than the time scale of oscillation of a graviton: $t_o\ll 1/H$. This condition is needed to treat the motion of the brane as adiabatic with respect to gravitational perturbations. In other words, under this condition we can consider the brane as effectively static during the propagation of a graviton. As we will see, this assumption turns out to be pivotal in our analysis. Indeed, in the case of a moving brane, it is nontrivial to define a boundary condition for the graviton wavefunction at the position of the brane.

3. $\frac{[r'(t)]^2}{f(r)}\ll f(r)$ or equivalently $\frac{[y'(t)]^2}{f(y)}\ll f(y)$. Looking at the definition of the brane proper time, it is clear that this condition guarantees that the properties of the graviton are interpreted, up to a constant redshift factor $\sqrt{f(r_b)}=\sqrt{f(y_b)}$, in the same way by a bulk observer (from the perspective of which we develop our analysis) and by an observer comoving with the brane.
We remark that, in contrast to the AdS-Schwarzschild case, if the black hole is large, the first condition can be satisfied for part of the brane trajectory while still retaining a positive preparation time, provided that the brane is near-critical and the black hole near-extremal. The presence of the black hole unavoidably alters the RS II model, therefore we expect gravity localization to be only a local phenomenon, valid only for a limited range of time.

TT perturbations and Schrödinger equation
-----------------------------------------

The first step to obtain the Schrödinger equation reported in Section \[gravloc\] is to derive the linearized Einstein equations about the AdS-RN background. Since we are considering only the tensor component of the metric perturbation, the perturbations of the electromagnetic field (which couple only to the scalar and the vector perturbations of the metric [@kodama]) can be neglected. Therefore, the only terms arising from the perturbation of the electromagnetic stress-energy tensor are the ones linear in the metric perturbation and with an unperturbed electromagnetic field. For a TT perturbation the linearized Einstein equations read $$\delta R_{\mu\nu}=\left(\frac{2}{d-1}\Lambda+\frac{(d-2)Q^2}{r^{2d-2}}\right)\delta g_{\mu\nu} \label{pert}$$ with $$\delta R_{\mu\nu}=-\frac{1}{2}\Box_{d+1}\delta g_{\mu\nu}+R_{\rho\mu\sigma\nu}\delta g^{\rho\sigma}-\frac{1}{2}R_{\mu\rho}\delta g^\rho_\nu-\frac{1}{2}R_{\nu\rho}\delta g^\rho_\mu\equiv \Delta_L\delta g_{\mu\nu}$$ where $\Box_{d+1}$ is the ($d+1$)-dimensional covariant d’Alembertian, $R_{\mu\nu\rho\sigma}$ is the Riemann tensor and $\Delta_L$ is the Lichnerowicz operator.
Introducing the dimensionless coordinates discussed in Section \[gravloc\] (which we will use in the rest of this section), using the explicit expressions for the Christoffel symbols and the components of the Riemann and Ricci tensors, and defining the covariant Laplacian $\Delta_{d-1}$ on the unit ($d-1$)-sphere, after a long calculation the spatial components of equation (\[pert\]) (all the other components are identically satisfied) can be recast in the form $$\begin{split} -\frac{1}{f(y)}\partial_{\tilde{t}}^2\delta g_{kl}+f(y)\partial_y^2\delta g_{kl}+\left[f'(y)-\frac{(5-d)f(y)}{y}\right]\partial_y\delta g_{kl}+\frac{1}{y^2}\Delta_{d-1}\delta g_{kl}\\ +2\left[\frac{2f(y)}{y^2}-\frac{d-1}{y^2}\right]\delta g_{kl} =\left[\frac{2d}{\gamma^2}-\frac{2(d-2)q^2}{y^{2d-2}}\right]\delta g_{kl}. \end{split} \label{waveeq2}$$ We can now decompose the metric perturbation in terms of the tensor harmonics on the unit ($d-1$)-sphere (defined in Section \[gravloc\]): $$\delta g_{kl}=\sum_k y^{\frac{5-d}{2}}\phi_k(\tilde{t},y)\mathbb{T}_{kl}^{(k)} \label{decomp}$$ and introduce the tortoise coordinate $dr^*=dy/f(y)$, in terms of which the horizon of the black hole is mapped to $r^*\to -\infty$ and the asymptotic boundary to a finite value $r^*=r^*_\infty$. Equation (\[waveeq2\]) then reduces to a wave equation for each of the modes $\phi_k(\tilde{t},y)$: $$-\partial_{\tilde{t}}^2\phi_k(\tilde{t},y)=-\partial_{r^*}^2\phi_k(\tilde{t},y)+V_k\left[y(r^*)\right]\phi_k(\tilde{t},y) \label{wave}$$ where the potential $V_k\left[y(r^*)\right]$ is given by equation (\[potential\]) and is in accordance with the results of ref. [@kodama]. In the frequency domain, equation (\[wave\]) finally takes the form of the Schrödinger equation (\[schro\]), which encodes the effects of the radial extra dimension on a transverse-traceless graviton.
By finding the expression of the tortoise coordinate $r^*$ as a function of the dimensionless radial coordinate $y$ in the limit of large $y$, it is easy to show that the potential (\[potential\]) diverges at the asymptotic boundary as $V(r^*)\sim 15/[4(r^*_\infty-r^*)^2]$. This behaviour is very similar to the typical one of the RS II potential [@RS1; @RS2; @karchrandall]. On the other hand, at the black hole horizon ($r^*\to -\infty$) the potential vanishes exponentially: $V(r^*)\sim \exp(2k_gr^*)$, where $k_g$ is the surface gravity of the black hole defined above. Since the zero-graviton mode bound on the brane in the RS II model is only marginally bound, and since our potential vanishes exponentially deep into the bulk while the RS II one falls off as a power law, we cannot expect to find a normalizable bound mode here. However, the presence of the brane still allows the existence of a quasi-bound mode, with complex frequency $\omega=\bar{\omega}+i\Gamma/2$, very well localized close to the brane. Its finite lifetime (determined by the imaginary part of the frequency $\Gamma/2$) implies that it will stay on the brane for a finite interval of time, after which it will eventually leak into the bulk and fall into the black hole. Up to this point, we have only studied the perturbed Einstein equations, without taking into account the presence of the ETW brane. To study the evolution of a TT graviton living on the brane, and to understand whether or not it is possible for this graviton to remain bound on the brane at least locally and for a reasonable amount of time, we need to enforce a boundary condition for the graviton wavefunction at the position of the brane. At each instant of time, the brane is effectively a hypersurface at fixed $y$ (i.e. at fixed $r^*$), therefore the previous analysis can be applied to study a gravitational perturbation on the brane.
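The behaviour of the tortoise coordinate can be checked numerically. The sketch below uses an *assumed* dimensionless blackening factor with hypothetical parameters (not the paper's) and verifies that $\int dy/f(y)$ converges toward the asymptotic boundary (finite $r^*_\infty$) while diverging logarithmically toward the outer horizon ($r^*\to-\infty$).

```python
import math

# Sketch (ASSUMED dimensionless blackening factor, hypothetical parameters):
# verify that dr* = dy/f(y) maps the outer horizon to r* -> -infinity while
# the asymptotic boundary sits at a finite r*_infinity.
MU, Q2 = 10.0, 9.0

def f(y):
    return y * y + 1.0 - MU / y**2 + Q2 / y**4

def outer_horizon(lo=1.2, hi=1.5):
    """Bisection for the largest root of f (f(lo) < 0 < f(hi))."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta_rstar(y_a, y_b, n=200000):
    """Midpoint-rule estimate of int_{y_a}^{y_b} dy / f(y)."""
    h = (y_b - y_a) / n
    return sum(h / f(y_a + (i + 0.5) * h) for i in range(n))

y_plus = outer_horizon()
# boundary side: the integral converges, since 1/f ~ 1/y^2 at large y
tail_100 = delta_rstar(2.0, 100.0)
tail_200 = delta_rstar(2.0, 200.0)
# horizon side: logarithmic divergence as y -> y_plus from above
near_1 = delta_rstar(y_plus + 1e-2, 2.0)
near_2 = delta_rstar(y_plus + 1e-4, 2.0)
```

Doubling the upper cutoff barely changes the boundary-side integral, while each decade of approach to the horizon adds a fixed amount $\sim \ln(10)/f'(y_+)$ to the horizon-side integral, as expected for a simple zero of $f$.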
In particular, the Schrödinger equation allows us to understand if the graviton is bound on the brane or if it necessarily leaks into the bulk. If the brane is expanding or contracting, it is not clear how to implement such a boundary condition. However, if the time scale of the perturbation is much smaller than the time scale of the motion of the brane, the linearization of equation (\[brane\]) provides the boundary condition needed. In other words, we must work under an adiabatic approximation for the motion of the brane with respect to the time scale of the perturbation, so that we can approximately consider the ETW brane to be static at $y=y_b$, or $r^*=r^*_b$. Additionally, since all the calculations will be carried out in the bulk coordinates assuming the brane to be static, in order for an observer comoving with the brane to correctly interpret the graviton, we must also verify that the redshift between the bulk coordinates and the comoving coordinates on the brane is compatible with the assumption of a static brane. From the definition of proper time, this means that $y'^2\ll f^2(y)$ must hold. Equation (\[brane\]) can be traced and rewritten as $$K_{ab}=\tilde{T}h_{ab} \label{extrbound}$$ where $\tilde{T}=r_+T$ and the extrinsic curvature is computed using the dimensionless coordinates. For a static brane, the normal dual vector is given by $n_\mu=\delta_{\mu,y}/\sqrt{f(y)}$ and $e^\mu_a=\delta^\mu_a$. Linearizing equation (\[extrbound\]) and using the decomposition (\[decomp\]) of the metric perturbation, we obtain the boundary condition $$\partial_y\psi_{k,\omega}\Big|_{y=y_b}=\left.\frac{4\tilde{T}y-(5-d)\sqrt{f(y)}}{2y\sqrt{f(y)}}\psi_{k,\omega}\right|_{y=y_b}.$$ Using the condition $y'^2\ll f^2(y)$, equation (\[traj\]) guarantees that $f(y)\sim \tilde{T}^2y^2$. The latter result and the definition of the tortoise coordinate finally lead to the boundary condition reported in Section \[gravloc\], i.e.
$$\partial_{r^*}\psi_{k,\omega}\Big|_{r^*=r^*_b}=\left.\frac{(d-1)\sqrt{f\left[y(r^*)\right]}}{2y(r^*)}\psi_{k,\omega}\right|_{r^*=r^*_b}. \label{finalbound}$$ As we have already discussed, imposing this boundary condition is equivalent to adding to the potential (\[potential\]) a negative delta term, which “traps” the graviton, allowing gravity localization. Before reviewing the trapping coefficient method introduced in ref. [@resonances] and giving more details about our numerical results, we remark that in the small black hole case ($\gamma \gtrsim 1$) the potential (\[potential\]) presents a peak for $r^*\sim 0$ before diverging at $r^*=r^*_\infty$ (see Figure \[potsmall\]). This feature disappears for large black holes. If the brane radius were able to reach values considerably larger than the position of the peak, this would imply not only the presence of a quasi-bound zero mode, but also the existence of “overtones” [@resonances] trapped in the resonance cavity between the peak and the brane position. Nonetheless, as we have already shown, in the small black hole case the ratio between the maximum brane radius and the horizon of the black hole cannot be large, implying that no quasi-bound modes exist. Therefore, the small black hole case is not of interest here and we will focus our attention on the $\gamma\ll 1$ case.

![**Potential $\mathbf{V_k[y(r^*)]}$ - Small black hole.** $r_+=0.1$, $r_-=0.099$, $L_{AdS}=1$. The potential diverges for $r^*_\infty=15.3$ and vanishes exponentially at the horizon $r^*\to -\infty$.
In contrast to the large black hole case (see Figure \[fig4\]), the potential exhibits a peak for $r^*\sim 0$.[]{data-label="potsmall"}](s6.pdf){height="3.5cm"}

Trapping coefficient method
---------------------------

The quasi-normal modes, already studied in ref. [@resonances] for the Schwarzschild-AdS spacetime in the presence of a static brane, are modes with a finite lifetime determined by the imaginary part of the frequency $\Gamma/2$ and a purely infalling boundary condition at the black hole horizon, i.e. $\psi_{k,\omega}\sim \exp(i\omega r^*)$ for $r^*\to -\infty$. This finite lifetime can be understood in terms of a probability of tunneling through the potential barrier, from the delta potential at the position of the brane toward the black hole horizon. The absence of the resonant cavity in the potential (\[potential\]) for the large black hole case implies that only one resonant mode, supported by the attractive delta potential, exists, which we will call the quasi-bound mode. Our analysis is based on the trapping coefficient method introduced in ref. [@resonances], which we will review and apply to the AdS-Reissner-Nordström case. Since the potential vanishes approaching the horizon, the wavefunction in that region must take the form of a plane wave: $$\psi_{k,\omega}(r^*)\sim \frac{1}{2}A_h\textrm{e}^{-i\delta(\omega)}\left[\textrm{e}^{-i\omega r^*}+S(\omega)\textrm{e}^{i\omega r^*}\right] \label{horsol}$$ where $S(\omega)=\textrm{e}^{2i\delta(\omega)}$ is the scattering matrix and $\delta(\omega)$ is the scattering phase shift. The black hole horizon is located at $r^*=-\infty$, and we want purely infalling solutions, therefore in that limit $\psi_{k,\omega}$ must be a left-moving plane wave. Since we are imposing two boundary conditions (equation (\[finalbound\]) at the brane and the purely infalling one at the horizon), we expect to find a discrete set of solutions to the Schrödinger equation (the quasi-normal modes), corresponding to a discrete set of frequencies $\omega_n$.
By considering complex frequencies, we can obtain the purely infalling solutions by requiring the scattering matrix to have a pole at the quasi-normal mode frequencies $\omega_n$[^1]. It can be shown [@resonances; @scattering] that the leading order Laurent expansion of $S(\omega)$ is given by: $$S(\omega)\sim \textrm{e}^{2i\delta_0(\omega)}\frac{\omega-\omega_n^*}{\omega-\omega_n}$$ where $\delta_0(\omega)$ is a slowly varying real function of $\omega$. Using the definition $\omega_n=\bar{\omega}_n+i\Gamma_n/2$, the scattering phase shift can be expressed as: $$\delta(\omega)\sim \delta_0(\omega)+\arcsin\left[\frac{\Gamma_n}{\sqrt{4(\omega-\bar{\omega}_n)^2+\Gamma_n^2}}\right]. \label{flipping}$$ For real $\omega$ and if $\Gamma_n$ is small with respect to $\bar{\omega}_n$, $\delta(\omega)$ varies by approximately $\pi$ as $\omega$ is varied across the real part of the resonance frequency, flipping the sign of the wavefunction (\[horsol\]). We can now define the trapping coefficient $$\eta(\omega)\equiv\frac{A_b}{A_h} \label{trapping}$$ where $A_b$ is the magnitude of the wavefunction $\psi_{k,\omega}$ at the brane and $A_h$ its magnitude at the horizon of the black hole (effectively, in a region where the potential is almost vanishing). Intuitively, since we expect the quasi-bound mode to be almost completely localized on the brane due to the presence of the attractive delta potential at $r^*=r_b^*$, the trapping coefficient will present a peak at the frequency of the quasi-bound mode. In the case of a normalizable bound mode (with vanishing imaginary part of the frequency), the wavefunction would clearly vanish at the horizon, giving an infinite trapping coefficient.
For a purely infalling solution we can write in general [@resonances]: $$\psi_{k,\omega}(r^*)=N(\omega)\begin{cases} A_h\Re\left[\textrm{e}^{i\delta(\omega)}\textrm{e}^{i\omega r^*}\right] \hspace{0.5cm} r^*\to-\infty\\[10pt] R(\omega)\Re\left[\textrm{e}^{i\delta(\omega)}\textrm{e}^{i\theta(\omega)}\right]=-R(\omega)\sin\left(\delta(\omega)+\theta(\omega)-\frac{\pi}{2}\right)\equiv A_b\hspace{0.5cm} r^*=r^*_b \end{cases}$$ where $\Re$ indicates the real part and $R(\omega)$ and $\theta(\omega)$ are slowly-varying functions of the frequency. For every frequency $\omega$ we can choose the normalization $N(\omega)$ of the wavefunction such that $A_b=1$, and the trapping coefficient will of course be unchanged. This allows us to set $\psi(r^*_b)=1$ in our numerical analysis, obtaining: $$\psi_{k,\omega}(r^*)=\begin{cases} -\frac{A_h}{R(\omega)\sin\left(\delta(\omega)+\theta(\omega)-\frac{\pi}{2}\right)}\Re\left[\textrm{e}^{i\delta(\omega)}\textrm{e}^{i\omega r^*}\right]=-\frac{1}{\eta(\omega)}\Re\left[\textrm{e}^{i\delta(\omega)}\textrm{e}^{i\omega r^*}\right]\hspace{0.5cm} r^*\to-\infty\\[10pt] 1 \hspace{0.5cm} r^*=r^*_b \end{cases}$$ where the trapping coefficient therefore reads: $$\eta(\omega)=\frac{R(\omega)\sin\left(\delta(\omega)+\theta(\omega)-\frac{\pi}{2}\right)}{A_h}.$$ If the relation $$\delta_0(\omega)+\theta(\omega)\sim\frac{\pi}{2},\frac{3\pi}{2} \label{condlorentzian}$$ holds, the square of the trapping coefficient takes the form: $$\xi(\omega)\equiv\eta^2(\omega)\sim\frac{R^2(\omega)}{A_h^2}\frac{\Gamma_n^2}{4(\omega-\bar{\omega}_n)^2+\Gamma_n^2}. \label{lorentziancurve}$$ This Lorentzian (Breit-Wigner) peak is centred at the real part of the frequency of the quasi-bound mode, while its half-width at half-maximum $\Gamma/2$ gives the imaginary part of the frequency, and therefore the lifetime of the mode. If the condition (\[condlorentzian\]) is not satisfied, the shape of the peak is more complicated.
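The extraction of $\bar{\omega}_n$ and $\Gamma_n$ from a sampled peak of the form (\[lorentziancurve\]) can be sketched as follows, on synthetic data with hypothetical values (the prefactor $R^2/A_h^2$ is treated as a constant): the peak position gives $\bar{\omega}$, and the full width at half maximum gives $\Gamma$.

```python
# Sketch: recover the resonance parameters from a sampled Breit-Wigner peak
# xi(omega) = A * Gamma^2 / (4*(omega - omega_bar)^2 + Gamma^2).  All values
# below are hypothetical, loosely inspired by one row of the results table.
OMEGA_BAR, GAMMA, AMP = 147.5, 16.86, 4.1e4

def xi(omega):
    return AMP * GAMMA**2 / (4.0 * (omega - OMEGA_BAR) ** 2 + GAMMA**2)

def half_max_crossing(lo, hi, target, rising):
    """Bisect for xi(omega) = target on a monotonic stretch of the peak."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (xi(mid) < target) == rising:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# locate the maximum on a grid, then refine the two half-maximum crossings
grid = [100.0 + 0.01 * i for i in range(10000)]
omega_peak = max(grid, key=xi)
half = 0.5 * xi(omega_peak)
left = half_max_crossing(100.0, omega_peak, half, rising=True)
right = half_max_crossing(omega_peak, 200.0, half, rising=False)
gamma_est = right - left  # the FWHM of a Lorentzian equals Gamma
```

The half-width at half-maximum `gamma_est / 2` is then the imaginary part $\Gamma/2$, i.e. the inverse lifetime of the mode.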
Nonetheless, at least for the zero mode of our interest, it has been shown that the condition (\[condlorentzian\]) holds to a good approximation [@resonances], and our numerical analysis confirms this result. We mention here the possibility of refining such an approximation by subtracting from the plot of $\xi(\omega)$ as a function of the frequency a baseline function accounting for the slow variation of $\delta_0(\omega)$ and $\theta(\omega)$ with the frequency. For our purposes, such a procedure (which leads to some difficulties due to the arbitrariness of the choice of the baseline [@resonances]) turns out to be unnecessary. It is therefore possible, in a reasonable approximation, to find the real and imaginary parts of the frequency of the quasi-bound mode (we recall that in the large black hole case we expect only the $n=0$ mode to be present, supported by the negative delta potential at the position of the brane) using the following procedure (the trapping coefficient method):

1. Compute numerically a family of solutions of the Schrödinger equation (\[schro\]), parameterized by real-valued frequencies $\omega$ and with boundary condition (\[finalbound\]), requiring also $\psi_{k,\omega}(r_b^*)=1$;

2. Find numerically the maximum of the wavefunction in the region where the potential is almost vanishing (i.e. near the horizon) for a range of values of $\omega$;

3. Plot the square of the inverse of these maxima as a function of $\omega$: a peak will be present at the real part of the frequency of the (purely infalling) quasi-normal mode;

4. If needed, subtract a baseline function;

5. Fit the data with the Breit-Wigner distribution (\[lorentziancurve\]) to find the real and imaginary parts of the frequency of the quasi-bound mode.
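Steps 1 and 2 can be sketched as follows. The integrator below is *not* the paper's code: it uses an assumed model potential and a hypothetical brane-condition coefficient `c` standing in for (\[finalbound\]), but it illustrates the shooting from the brane (with $\psi=1$ there) down to the flat region near the horizon, where the plane-wave amplitude $A_h$, and hence $\xi=1/A_h^2$, is read off.

```python
import math

# Sketch of steps 1-2 of the trapping coefficient method.  The potential and
# the brane-condition coefficient c are HYPOTHETICAL stand-ins; we integrate
# psi'' = (V - omega^2) psi leftward from the brane at r* = rb with
# psi(rb) = 1, psi'(rb) = c, and read off the plane-wave amplitude A_h in the
# region where V has died away.

def integrate_to_horizon(omega, V, rb, c, r_min=-40.0, h=1e-3):
    """RK4 integration of psi'' = (V(r) - omega^2) psi from rb down to r_min."""
    def rhs(r, y):
        psi, dpsi = y
        return (dpsi, (V(r) - omega * omega) * psi)
    y = (1.0, c)  # normalization psi(rb) = 1 and delta-like brane condition
    r = rb
    while r > r_min:
        k1 = rhs(r, y)
        k2 = rhs(r - 0.5 * h, (y[0] - 0.5 * h * k1[0], y[1] - 0.5 * h * k1[1]))
        k3 = rhs(r - 0.5 * h, (y[0] - 0.5 * h * k2[0], y[1] - 0.5 * h * k2[1]))
        k4 = rhs(r - h, (y[0] - h * k3[0], y[1] - h * k3[1]))
        y = (y[0] - h / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] - h / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        r -= h
    psi, dpsi = y
    # amplitude of psi = A cos(omega r + phase) in the flat region
    return math.sqrt(psi * psi + (dpsi / omega) ** 2)

def xi(omega, V, rb, c):
    return 1.0 / integrate_to_horizon(omega, V, rb, c) ** 2

# model barrier vanishing exponentially toward the horizon (hypothetical)
V_model = lambda r: 25.0 * math.exp(2.0 * r)
xi_example = xi(3.0, V_model, 0.0, 2.0)
```

A scan of `xi` over $\omega$ then produces the peak described in step 3. For $V\equiv 0$ the amplitude is exactly $\sqrt{1+c^2/\omega^2}$, which provides a check of the integrator.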
Numerical results
-----------------

We will find the real and imaginary parts of the frequency and the height of the peak $R/A_h$ for three sizes of the black hole: $r_+=10,100,10000$ (with $L_{AdS}=1$ and $r_-$ such that the black holes are near extremality), corresponding to $\gamma=0.1,0.01,0.0001$ respectively. For each case we will consider different positions $y_b$ of the brane and values of the angular momentum $l$. The results of the fits are reported in Table \[fitresults\]. From our analysis the following picture emerges (we recall that $\gamma=L_{AdS}/r_+$ and $y_b=r_b/r_+$):

- For fixed $\gamma$ and $l$, increasing the distance of the brane $y_b$, the real part of the frequency $\bar{\omega}$ decreases and the peak of the square of the trapping coefficient $\xi$ becomes sharper and higher, indicating that the imaginary part of the frequency $\Gamma/2$ decreases as well and that gravity localization is more efficient. This is evident from the data reported in Table \[timescales\] (where $t_o=1/\bar{\omega}$ and $t_d=2/\Gamma$). However, if the brane is very far, $\bar{\omega}$ approaches zero. When this happens, the peak becomes broader (indicating $\Gamma\sim \bar{\omega}$) and can no longer be approximated by a Lorentzian curve. At this point, the peak is very high, indicating a short-lived (large ratio between imaginary and real part of the frequency) but very well localized quasi-bound state. For an even farther brane, the quasi-bound mode is no longer present (there is no peak, and the trapping coefficient never becomes large). Additionally, for low values of the angular momentum $l$, the real part of the frequency obtained is not in accordance with the one expected from a 4-dimensional perturbation of an Einstein static universe (given by $\omega_{GR}=\sqrt{f(y)\cdot(l+2)l}$) [@resonances], even when the brane is far and the peak is narrow. For higher values of $l$ instead, i.e.
for smaller scale perturbations, the GR expected value and the one obtained numerically are closer. We report here the data showing these behaviors only in the $\gamma=0.01$ case, but we verified that the same analysis applies also for $\gamma=0.1,0.0001$ as well as in the small black hole case. The smaller the black hole, the farther the position of the brane at which gravity localization is lost. We remark that this behaviour contradicts the conclusions of ref. [@resonances], where it is argued that the real part of the frequency approaches a constant value and the RS II normalizable bound zero-graviton mode is recovered in the far brane limit. However, it is worth noting that the loss of localization for the small black hole case, which is the one mainly studied in ref. [@resonances], occurs for a position of the brane much farther than the ones explored in Figure 11 of ref. [@resonances].

- For fixed $l$ and $y_b$, decreasing $\gamma$ (i.e. increasing the size of the black hole), the real part of the frequency grows and the peak becomes broader, indicating a shorter-lived quasi-bound mode. The Lorentzian approximation of the peak is also less precise. At the same time, we recall that the same maximum radius of the brane $r_0$ is reached for a smaller value of the tension $T$ if the black hole is larger.

- Holding $\gamma$ and $y_b$ fixed and increasing the angular momentum $l$, the real part of the frequency grows. In the small black hole case, we verified that the imaginary part of the frequency decreases, exactly as described in ref. [@resonances]. In contrast, in the large black hole case, the imaginary part grows, but much more slowly than the real part. The peaks are therefore narrower and the ratio between the imaginary and the real part decreases.
At the same time, since the imaginary part of the frequency increases, the height of the peaks decreases with increasing $l$, indicating that, even if they are able to undergo a larger number of oscillations before decaying, these smaller-scale modes have a shorter lifetime than the larger-scale ones (even if localization is still very efficient). These characteristics can be observed by comparing the peaks for different values of $l$ in all three cases, but are particularly evident in the very large black hole case $\gamma=0.0001$. We remark that even for a very far brane, which does not support localization of gravity for, e.g., $l=1$, the quasi-bound state is recovered by increasing the angular momentum. For example, for $\gamma=0.01$ and $l=1$ the quasi-bound state is lost for $y_b\sim 66$, but for $l=5$ it is still present when the brane radius is $y_b=99.94$.

- Keeping the size of the black hole and the scale of the perturbation constant while increasing the brane radius, i.e. increasing $l$ and $y_b$ by the same factor while holding $\gamma$ fixed, the peak becomes narrower and gravity localization more efficient: remaining on a sufficiently local scale, if the brane is farther we are in a setup more similar to the RS II scenario.
  $\gamma$   $y_b$   $l$     $r_-$   $R/A_h$             $\bar{\omega}$       $\Gamma$             $\omega_{GR}$
  ---------- ------- ------- ------- ------------------- -------------------- -------------------- -------------------
  0.1        17.34   1       9.99    $68.22$             17.22                0.7354               17.32
  0.1        17.34   5       9.99    $61.41$             58.93                0.9186               59.16
  0.01       17.34   1       99.9    $71.51$             168.2                67.22                173.2
  0.01       34.62   1       99.9    $202.5$             147.5                16.86                173.2
  0.01       60.22   1       99.9    $464.8$             71.70                5.580                173.2
  0.01       63      1       99.9    $499.2$             52.40                5.060                173.2
  0.01       66      1       99.9    528.8               12.56                4.835                173.2
  0.01       66.15   1       99.9    $\sim 519.6$        $\sim 4$             $\sim 5$             173.2
  0.01       17.34   5       99.9    70.74               586.4                68.55                591.6
  0.01       30.00   5       99.9    160.9               585.0                23.09                591.6
  0.01       34.62   5       99.9    $199.5$             583.6                17.28                591.6
  0.01       99.94   5       99.9    $980.9$             530.5                2.081                591.6
  0.01       60.00   10      99.9    447.8               $1.084\cdot 10^3$    5.984                $1.095\cdot 10^3$
  0.01       120.0   20      99.9    $1.229\cdot 10^3$   $2.074\cdot 10^3$    1.587                $1.095\cdot 10^3$
  0.001      17.34   5       999     $\sim 73$           $\sim 4\cdot 10^3$   $\sim 5\cdot 10^3$   $5.916\cdot 10^3$
  0.0001     34.82   100     9900    $203.2$             $4.181\cdot 10^5$    $1.737\cdot 10^5$    $1.010\cdot 10^6$
  0.0001     34.82   1000    9900    $180.9$             $9.955\cdot 10^6$    $2.097\cdot 10^5$    $1.001\cdot 10^7$
  0.0001     34.82   10000   9900    $160.9$             $9.993\cdot 10^7$    $2.552\cdot 10^5$    $1.000\cdot 10^8$

  : **Quasi-bound modes.** Results of the numerical analysis based on the trapping coefficient method. $L_{AdS}=1$. The fitted values for $\gamma=0.01$, $y_b=66.15$, $l=1$ and for $\gamma=0.001$, $y_b=17.34$, $l=5$ are purely indicative.
We remark that for each set of parameters reported above it is possible to choose a tension $T$ for the brane such that the corresponding $y_b$ is, for instance, the turning point of the brane, while the corresponding Euclidean solutions are sensible ($\tau_0>0$) and dominant in the gravity path integral ($\Delta I<0$).[]{data-label="fitresults"}

Time scales and adiabatic approximation
---------------------------------------

As we pointed out in Section \[gravloc\], the three time scales that we must compare in order to verify that the adiabatic approximation we used is reliable are the oscillation time $t_o=1/\bar{\omega}$, the decay time $t_d=2/\Gamma$ and the quantity $$T_H=\frac{y(t)}{y'(t)}=\frac{\tilde{T}y^2}{f(y)\sqrt{\tilde{T}^2y^2-f(y)}},$$ which in comoving coordinates would correspond to the Hubble time of the braneworld cosmology, and which for simplicity we will call the “Hubble time”. If $t_o\ll T_H$ for a given radius of the brane $y=y_b$, the adiabatic approximation holds. We remark that the Hubble time does not depend on the angular momentum $l$. Thus, since the real part of the frequency increases with $l$ and the oscillation time $t_o$ correspondingly decreases, for higher values of $l$ it is easier to satisfy the adiabatic approximation condition. Additionally, as we have explained, the relation $$\frac{[y'(t)]^2}{f^2(y)}=\frac{\tilde{T}^2y^2-f(y)}{\tilde{T}^2y^2}\ll 1 \label{approx}$$ must hold as well at $y=y_b$. We recall that $\tilde{T}=r_+T$. In Section \[gravloc\] we have shown that, for $l=10$, $\gamma=0.01$, $r_+=100$, $r_-=99.9$ and $T=0.999999999$ it is possible to obtain $t_d\gg T_H\gg t_o$ for a large part of the brane trajectory, meaning that the adiabatic approximation is reliable and that it breaks down (and therefore our analysis loses its meaning) before the graviton leaks into the bulk. Additionally, the condition (\[approx\]) is always satisfied when the adiabatic approximation holds.
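These checks can be sketched numerically. With an *assumed* dimensionless blackening factor and a hypothetical tension (not the paper's values), the snippet below evaluates $T_H$ and the velocity ratio $[y'(t)]^2/f^2(y)$, obtained from the trajectory equations, along the allowed trajectory: the ratio stays below one and vanishes at the turning point, where $T_H$ diverges.

```python
import math

# Sketch of the adiabatic-approximation bookkeeping along the brane
# trajectory.  Blackening factor and tension are ASSUMED / hypothetical.
T_TILDE = 0.9

def f(y):
    return y * y + 1.0 - 10.0 / y**2 + 9.0 / y**4

def g(y):  # T~^2 y^2 - f(y), positive along the allowed trajectory
    return T_TILDE**2 * y * y - f(y)

def turning_point(lo=1.5, hi=3.0):
    """Outer turning point y_0^+: bisection for g = 0 (g(lo) > 0 > g(hi))."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hubble_time(y):
    # T_H = y / y'(t) = T~ y^2 / (f(y) sqrt(T~^2 y^2 - f(y)))
    return T_TILDE * y * y / (f(y) * math.sqrt(g(y)))

def velocity_ratio(y):
    # [y'(t)]^2 / f(y)^2 = (T~^2 y^2 - f(y)) / (T~ y)^2, from the trajectory
    return g(y) / (T_TILDE * y) ** 2

y0 = turning_point()
ratio_mid = velocity_ratio(2.0)  # sample point along the allowed trajectory
th_mid = hubble_time(2.0)
```

Near $y_0$ the Hubble time grows without bound while the velocity ratio goes to zero, which is why the adiabatic approximation always holds at the turning point.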
For completeness we report in Figure \[timesc1\] a plot of these quantities as a function of the position of the brane for the data reported in Table \[timescales\]. For the same size and charge of the black hole but with $l=1$ and $T=0.99999993325$ (the maximum radius of the brane is $y_0=r_0/r_+=66.15$), using the results reported in Table \[fitresults\], we find that the adiabatic approximation is essentially never reliable (i.e. $T_H<t_o$) during the trajectory of the brane (see Figure \[timesc2\]). It does hold at the turning point $y_0$, where the Hubble time diverges, but there the quasi-bound mode is extremely short-lived ($\bar{\omega}$ and $\Gamma$ are of the same order). The condition (\[approx\]) is instead satisfied for a large part of the brane trajectory. This problem adds to the discrepancy between the expected 4-dimensional value of the frequency $\omega_{GR}$ and the one obtained numerically. Even if for smaller black holes (for instance in the $\gamma=0.1$ case) close to the turning point the adiabatic approximation can hold also for $l=1$, we can conclude that an effective 4-dimensional description of gravity localized on the brane is obtained more easily, and for a larger amount of brane proper time, when the value of the angular momentum $l$ is higher. This result, together with the observation that our analysis is valid only on a time scale smaller than the Hubble time and that the quasi-bound mode is not stable, confirms our expectation to find gravity localization only locally and for a finite amount of time.

[^1]: In the large black hole case, only one mode, supported by the attractive delta potential, will be present in the spectrum. In the following we will then drop the $n$ index, understood to take the value $n=0$.
--- abstract: 'This paper proposes a distributed alternating mixed discrete-continuous (DAMDC) algorithm to approach the oracle algorithm based on the diffusion strategy for parameter and spectrum estimation over sensor networks. A least mean squares (LMS) type algorithm that obtains the oracle matrix adaptively is developed and compared with the existing sparsity-aware and conventional algorithms. The proposed algorithm exhibits improved performance in terms of mean square deviation and power spectrum estimation accuracy. Numerical results show that the DAMDC algorithm achieves excellent performance.' author: - 'Rodrigo C. de Lamare' title: 'Study of Distributed Spectrum Estimation Using Alternating Mixed Discrete-Continuous Adaptation' --- Distributed processing, spectrum estimation, oracle algorithm, diffusion-LMS, sparsity-aware algorithms.

Introduction
============

Distributed signal processing strategies are very promising tools for solving parameter estimation problems in wireless networks and applications such as sensor networks [@lopes; @cattivelli; @mateos]. These techniques can exploit the spatial diversity available in a network of sensors to obtain increased estimation accuracy and robustness against sensor failures. Another set of tools for enhancing the performance of signal processing algorithms is the exploitation of sparsity, work on which initially dealt with centralized problems [@candes; @gu; @delamarespl1; @delamarespl07; @yilun; @jidf; @fa10; @eksioglu; @angelosante; @kalouptsidis; @saalt; @zhaocheng; @zhaocheng2] and, more recently, has examined distributed techniques [@chouvardas; @lorenzo1; @lorenzo2; @lorenzo3; @arablouei; @liu; @dcg; @dce; @dta_ls; @dcg_iet] in several applications.
A common strategy among the techniques reported so far is the development of adaptive algorithms such as the least mean squares (LMS) [@yilun; @saalt; @lorenzo1; @lorenzo2; @lorenzo3; @arablouei; @dce; @dta_ls] and recursive least-squares (RLS) [@eksioglu; @angelosante; @liu; @dta_ls] using different penalty functions. Such penalty functions perform a regularization that attracts to zero the elements of the parameter vector with small magnitudes. The most well-known and successful penalty functions are the $l_{0}$-norm [@gu; @eksioglu], the $l_{1}$-norm [@yilun] and the log-sum penalty [@candes; @yilun]. The optimal algorithm for processing sparse signals is known as the oracle algorithm [@saalt], which requires an exhaustive search for the location of the non-zero coefficients followed by parameter estimation. With the development and increasing deployment of mobile networks, the frequency spectrum has become a resource that should be exploited in a judicious way to avoid interference. By estimating the power spectrum with spatially distributed sensors this resource can be planned and properly exploited [@lorenzo1; @lorenzo2; @lorenzo4]. Diffusion adaptation strategies incorporating sparsity constraints have been used to solve distributed spectrum estimation problems in [@lorenzo1] and [@lorenzo2]. However, prior work on distributed techniques that approach the oracle algorithm is rather limited, and adaptive techniques that exploit potential sparsity of signals using discrete and continuous variables have not been developed so far. In this work, we propose a sparsity-aware distributed alternating mixed discrete-continuous LMS (DAMDC-LMS) algorithm based on the diffusion adapt-then-combine (ATC) protocol. We consider an alternating optimization strategy with an LMS-type recursion along with a mapping from continuous to discrete variables, which is used to find the actual non-zero values, and another LMS-type recursion that performs continuous adaptation. 
In particular, the proposed DAMDC-LMS algorithm is incorporated into a distributed spectrum estimation strategy and compared with prior art in that application. This paper is organized as follows. Section II describes the system model and the problem statement. Section III presents the proposed DAMDC-LMS algorithm. Section IV details the proposed algorithm for an application to spectrum estimation. Section V presents and discusses the simulation results. Finally, Section VI provides our conclusions. [*Notation*]{}: In this paper, matrices and vectors are designated by boldface upper case letters and boldface lower case letters, respectively. The superscript $(\cdot)^{H}$ denotes the Hermitian operator, $\|\cdot\|_{1}$ refers to the $l_{1}$-norm and $E[\cdot]$ denotes expected value. System Model and Problem Statement ================================== ![Network topology with $N$ nodes.[]{data-label="1"}](fig1.ps) We consider a network that is partially connected and consists of $N$ nodes that exchange information among themselves. Each node $k$ employs a parameter estimator and has its neighborhood described by the set ${\mathcal N}_{k}$, as shown in Fig. \[1\]. The task of parameter estimation is to adjust an $M \times 1$ weight vector $\boldsymbol\omega_{k,i}$ at each node $k$ and time $i$ based on an $M \times 1$ input signal vector $\boldsymbol x_{k,i}$ and ultimately estimate an unknown $M \times 1$ system parameter vector $\boldsymbol\omega_{0}$ [@lopes]. The desired signal $d_{k,i}$ at each time $i$ and node $k$ is drawn from a random process and given by $$\ d_{k,i}=\boldsymbol\omega_{0}^{H}\boldsymbol x_{k,i}+n_{k,i},$$ where $n_{k,i}$ is measurement noise. We consider a distributed estimation problem for a network in which each agent $k$ has access at each time instant to a realization of zero-mean spatial data $\{d_{k,i} , \boldsymbol x_{k,i}\}$ [@lopes; @mateos].
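As an illustrative sketch of this measurement model (all variable names, dimensions, sparsity pattern and noise levels here are hypothetical, not taken from the paper), one realization of $\{d_{k,i}, \boldsymbol x_{k,i}\}$ can be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10                                   # filter length (illustrative)

# sparse unknown system vector w0 with a few non-zero entries
w0 = np.zeros(M, dtype=complex)
w0[[1, 4, 7]] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def measure(w0, sigma_n=0.01):
    """One realization {d, x} of the model d = w0^H x + n."""
    M = len(w0)
    x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    n = sigma_n * (rng.standard_normal() + 1j * rng.standard_normal())
    return np.vdot(w0, x) + n, x         # np.vdot conjugates its first argument

d, x = measure(w0)
```

With `sigma_n = 0` the desired signal reduces exactly to $\boldsymbol\omega_0^H \boldsymbol x_{k,i}$.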
The goal of the network is to minimize the following cost function: $$\label{Eqn4:cost_function} \begin{split} C(\boldsymbol\omega_{k,i}) & =\sum_{k=1}^NE[|d_{k,i}-\hat{d}_{k,i}|^{2}] \\ & =\sum_{k=1}^N E[|d_{k,i} - \boldsymbol\omega_{k,i}^{H}\boldsymbol x_{k,i}|^{2}]. \end{split}$$ By solving this minimization problem, one can obtain the optimum solution for the weight vector at each node. For a network with possibly sparse parameter vectors, the cost function might also involve a penalty function that exploits sparsity. In what follows, we present a novel distributed diffusion technique to approach the oracle algorithm and efficiently solve (\[Eqn4:cost\_function\]) under sparseness conditions. Proposed DAMDC-LMS Algorithm ============================ In this section, we detail the proposed distributed scheme and DAMDC-LMS algorithm using the diffusion ATC strategy. The proposed scheme for each agent $k$ of the network is shown in Fig. \[2\]. The output estimate of the proposed scheme is given by $$\begin{split} \hat{d}_{k,i} & = \boldsymbol\omega_{k,i}^{H}\boldsymbol P_{k,i}\boldsymbol x_{k,i} = \boldsymbol p^{T}_{k,i}\boldsymbol W^{*}_{k,i}\boldsymbol x_{k,i}\\ & =\boldsymbol x^{T}_{k,i}\boldsymbol W^{*}_{k,i}\boldsymbol p_{k,i}=\boldsymbol x^{T}_{k,i}\boldsymbol P_{k,i}\boldsymbol \omega^{*}_{k,i}, \label{relation} \end{split}$$ where the parameter vector ${\boldsymbol \omega}_{k,i}$ is a column vector of $M$ coefficients related to the diagonal matrix $ \boldsymbol W_{k,i} = {\rm diag}(\boldsymbol\omega_{k,i})$. The matrix $\boldsymbol P_{k,i}$ is a square diagonal matrix with $M$ elements that is applied to the input vector $\boldsymbol x_{k,i}$ and aims to mimic the oracle algorithm by identifying the null positions of $\boldsymbol\omega_{0}$.
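The chain of equalities in (\[relation\]) holds because $\boldsymbol P_{k,i}$ and $\boldsymbol W_{k,i}$ are diagonal, so all four forms reduce to $\sum_m p_m \omega_m^* x_m$; a quick numerical check of this (with hypothetical names and dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # omega
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)
p = rng.integers(0, 2, M).astype(float)                    # binary selector

W, P = np.diag(w), np.diag(p)

d1 = w.conj() @ P @ x       # omega^H P x
d2 = p @ W.conj() @ x       # p^T W* x
d3 = x @ W.conj() @ p       # x^T W* p
d4 = x @ P @ w.conj()       # x^T P omega*
```

All four quantities coincide, which is what allows the gradients with respect to $\boldsymbol p_{k,i}$ and $\boldsymbol\omega_{k,i}$ to be taken from whichever form is most convenient.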
In order to obtain recursions for ${\boldsymbol P}_{k,i}$ and ${\boldsymbol \omega}_{k,i}$ we compute the stochastic gradient of the cost function in (\[Eqn4:cost\_function\]) with respect to both parameters, where the optimization of ${\boldsymbol P}_{k,i}$ involves discrete variables and that of ${\boldsymbol \omega}_{k,i}$ deals with continuous variables. In particular, we develop an alternating optimization approach using an LMS type algorithm that consists of a recursion for ${\boldsymbol P}_{k,i}$ and another recursion for ${\boldsymbol \omega}_{k,i}$, employed in an alternating fashion. In order to compute ${\boldsymbol P}_{k,i}$ and ${\boldsymbol \omega}_{k,i}$ we must solve the mixed discrete-continuous non-convex optimization problem: $$\begin{split} {\boldsymbol p}_{k,i}^*, {\boldsymbol \omega}_{k,i}^* & = \arg\min_{{\boldsymbol p}_{k,i} \in {\mathcal I}^{M \times 1},~~ {\boldsymbol \omega}_{k,i}\in {\mathcal C}^{M \times 1}} C({\boldsymbol p}_{k,i}, {\boldsymbol \omega}_{k,i}),\\ & ~{\rm for}~ k = 1, 2, \ldots, N, \label{mdc_problem} \end{split}$$ where $$C({\boldsymbol p}_{k,i}, {\boldsymbol \omega}_{k,i}) =\sum_{k=1}^N E[|d_{k,i}-\boldsymbol p^{T}_{k,i}{\boldsymbol W}^{H}_{k,i}{\boldsymbol x}_{k,i}|^{2}], \label{Eqn5:MSE_1}$$ ${\boldsymbol p}_{k,i}$ contains the elements of the main diagonal of ${\boldsymbol P}_{k,i}$, and ${\mathcal I}^{M \times 1}$ denotes the set of $M$-dimensional binary vectors with values $0$ and $1$. Since the problem in (\[mdc\_problem\]) is NP-hard, we resort to an approach that treats ${\boldsymbol p}_{k,i}$ as a real-valued continuous parameter vector for its computation and then maps ${\boldsymbol p}_{k,i}$ to discrete values. The relations in (\[relation\]) allow us to compute the gradient of the cost function with respect to ${\boldsymbol p}_{k,i}$ and ${\boldsymbol \omega}_{k,i}$ and their diagonal versions ${\boldsymbol P}_{k,i}$ and ${\boldsymbol W}^{H}_{k,i}$, respectively.
The gradient of the cost function with respect to ${\boldsymbol p}_{k,i}$ is given by $$\label{Eqn7:MSE_derivation} \begin{split} \nabla_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i})& =\frac{\partial}{\partial\boldsymbol p_{k,i}}\Big(E|d_{k,i}|^{2}-(\boldsymbol p^{T}_{k,i}\boldsymbol W^{*}_{k,i}E[d^{*}_{k,i}\boldsymbol x_{k,i}])\\ & \quad +\boldsymbol p^{T}_{k,i}\boldsymbol W^{*}_{k,i}E[\boldsymbol x_{k,i}\boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i}\boldsymbol p_{k,i}] \\ & \quad -E[d_{k,i}\boldsymbol x^{H}_{k,i}]\boldsymbol W_{k,i}\boldsymbol p_{k,i} \Big). \end{split}$$ Replacing the expected values with instantaneous values, we obtain $$\label{Eqn8:MSE_derivation2} \begin{split} \hat{\nabla}_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i}) & =\frac{\partial}{\partial\boldsymbol p_{k,i}}\Big(|d_{k,i}|^{2}-\boldsymbol p^{T}_{k,i}\boldsymbol W^{*}_{k,i}d^{*}_{k,i}\boldsymbol x_{k,i}\\ & \quad +\boldsymbol p^{T}_{k,i}\boldsymbol W^{*}_{k,i}\boldsymbol x_{k,i}\boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i}\boldsymbol p_{k,i} \\ & \quad -d_{k,i}\boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i}\boldsymbol p_{k,i}\Big). \end{split}$$ Computing the gradient of the cost function with respect to ${\boldsymbol p}_{k,i}$, we obtain $$\label{Eqn9:MSE_derivation3} \begin{split} \hat{\nabla}_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i}) & = d^{*}_{k,i}\boldsymbol W^{*}_{k,i}{\boldsymbol x}_{k,i}-d_{k,i}\boldsymbol W^{T}_{k,i}\boldsymbol x^{*}_{k,i}\\ & \quad + \boldsymbol W^{*}_{k,i}\boldsymbol x_{k,i}\boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i} \boldsymbol p_{k,i} \\ & \quad + \boldsymbol W^{T}_{k,i}\boldsymbol x^{*}_{k,i}\boldsymbol x^{T}_{k,i}\boldsymbol W^{H}_{k,i}\boldsymbol p_{k,i}. 
\end{split}$$ Grouping common terms, we arrive at $$\label{Eqn10:Grouping} \begin{split} \hat{\nabla}_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i}) & = -\Big(\big(d_{k,i}- \boldsymbol x^{H}_{k,i} \boldsymbol W_{k,i}\boldsymbol p_{k,i}\big)\boldsymbol W^{*}_{k,i}\boldsymbol x_{k,i}\\ & \quad +\big(d_{k,i}- \boldsymbol x^{T}_{k,i}\boldsymbol W^{H}_{k,i}\boldsymbol p_{k,i}\big)\boldsymbol W_{k,i}\boldsymbol x^{*}_{k,i}\Big), \end{split}$$ where $\boldsymbol p_{k,i}$ is a real parameter vector, so that $\boldsymbol p_{k,i}=\boldsymbol p^{*}_{k,i}$ and $ \boldsymbol p^{H}_{k,i}=[\boldsymbol p^{*}_{k,i}]^{T}=\boldsymbol p^{T}_{k,i}$. Since $\boldsymbol W_{k,i}$ is symmetric, i.e., $ \boldsymbol W^{T}_{k,i}=\boldsymbol W_{k,i}$, we have $ \boldsymbol W^{H}_{k,i}=[\boldsymbol W^{*}_{k,i}]^{T}=[\boldsymbol W^{T}_{k,i}]^{*}=\boldsymbol W^{*}_{k,i}$. The terms in (\[Eqn10:Grouping\]) represent the sum of a vector and its conjugate: $$\label{Eqn11:Sum} \begin{split} \hat{\nabla}_{\boldsymbol p_{k,i}} C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i}) & = -\underbrace{\Big(\big(d_{k,i}- \boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i}\boldsymbol p_{k,i}\big)\boldsymbol W^{*}_{k,i}\boldsymbol x_{k,i}}_{A}\\ & \quad +\underbrace{\big(d_{k,i}- \boldsymbol x^{T}_{k,i}\boldsymbol W^{H}_{k,i}\boldsymbol p_{k,i}\big)\boldsymbol W_{k,i}\boldsymbol x^{*}_{k,i}\Big)}_{A^{*}}.
\end{split}$$ Applying the property $A + A^{*}=2\Re(A)$, we have $$\label{Eqn12:Sum_Conj} \hat{\nabla}_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i})=-2\Re(A).$$ The recursion to update the parameter vector ${\boldsymbol p}_{k,i}$ is given by $$\begin{split} \label{Eqn13:P_recursion} \boldsymbol p_{k,i+1} & = \boldsymbol p_{k,i} -\eta\hat{\nabla}_{\boldsymbol p_{k,i}}C(\boldsymbol p_{k,i},\boldsymbol\omega_{k,i})\\ & =\boldsymbol p_{k,i} + 2\eta\Re(e_{p_{k,i}}^{*}\boldsymbol x^{H}_{k,i}\boldsymbol W_{k,i}), \end{split}$$ where the error signal is given by $$\label{Eqn14:error} \ e_{p_{k,i}}=d_{k,i}- \boldsymbol p^{T}_{k,i}\boldsymbol W_{k,i-1}\boldsymbol x_{k,i}.$$ For the update of the parameter vector ${\boldsymbol \omega}_{k,i}$, we can apply well-known adaptive algorithms. By computing the gradient of the cost function with respect to ${\boldsymbol \omega}_{k,i}^*$, we have $$\label{eqn16new} \nabla_{{\boldsymbol \omega}^*_{k,i}} C({\boldsymbol p}_{k,i}, {\boldsymbol \omega}_{k,i}) = -(d_{k,i} - \boldsymbol x^{T}_{k,i}{\boldsymbol P}_{k,i} \boldsymbol\omega^{*}_{k,i-1})^{*}{\boldsymbol P}_{k,i}{\boldsymbol x}_{k,i}.$$ The following LMS type recursion updates the parameter vector ${\boldsymbol \omega}_{k,i}$: $$\label{Eqn17:W_Recursion2} \boldsymbol\omega_{k,i+1}=\boldsymbol\omega_{k,i}+\mu e^{*}_{k,i}\boldsymbol P_{k,i}\boldsymbol x_{k,i},$$ where the error signal is given by $e_{k,i} = d_{k,i}-\boldsymbol x^{T}_{k,i}\boldsymbol P_{k,i}\boldsymbol\omega^{*}_{k,i-1}$.
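One alternating update at a single node, combining the $\boldsymbol p$-recursion (13) and the $\boldsymbol \omega$-recursion (17), might be sketched as follows (step sizes and all names are illustrative, and the continuous-to-discrete mapping of $\boldsymbol p_{k,i}$ is omitted here):

```python
import numpy as np

def damdc_step(d, x, p, w, eta=0.01, mu=0.05):
    """One alternating mixed update: p-recursion, then w-recursion (sketch)."""
    W = np.diag(w)
    # relaxed recursion for the (eventually discrete) selector p
    e_p = d - p @ W @ x                                    # e_p = d - p^T W x
    p = p + 2.0 * eta * np.real(np.conj(e_p) * (x.conj() @ W))
    # continuous LMS recursion for w, using the selector as P = diag(p)
    P = np.diag(p)
    e = d - x @ P @ w.conj()                               # e = d - x^T P w*
    w = w + mu * np.conj(e) * (P @ x)
    return p, w
```

For instance, with $\eta = 0$ and $\boldsymbol p = \boldsymbol 1$ this reduces to a standard complex LMS update of $\boldsymbol\omega_{k,i}$.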
The recursions for ${\boldsymbol p}_{k,i}$ and ${\boldsymbol \omega}_{k,i}$ using the ATC protocol [@lopes; @cattivelli] for $k=1, 2, \ldots, N$ are then given by $$\label{Eqn18:Distributed_RecursionP} \boldsymbol p_{k,i+1}=\boldsymbol p_{k,i} + 2\eta \Re(e_{p_{k,i}}^{*}\boldsymbol x_{k,i}^{H}\boldsymbol W_{k,i}),$$ $$\label{Eqn19:Distributed_Recursion} \boldsymbol\varphi_{k,i+1}=\boldsymbol\omega_{k,i-1}+\mu e^{*}_{k,i}\boldsymbol P_{k,i}\boldsymbol x_{k,i},$$ $$\label{Eqn20:Distributed_Combination} \boldsymbol \omega_{k,i}=\sum_{l\in \mathcal{N}_{k}}a_{lk}\boldsymbol\varphi_{l,i},$$ where (\[Eqn18:Distributed\_RecursionP\]) and (\[Eqn19:Distributed\_Recursion\]) are the adaptation step, and (\[Eqn20:Distributed\_Combination\]) is the combination step of the ATC protocol. The combining coefficients of the latter are represented by $a_{lk}$ and should comply with $$\label{Eqn21:combinig coefficients} \sum_{l \in \mathcal{N}_{k}} a_{lk}=1, \quad \forall k.$$ The strategy adopted in this work for the $a_{lk}$ combiner is the Metropolis rule [@lopes] given by $$\label{Eqn22:Metropolis rule} a_{kl}=\left\{\begin{array}{ll} \frac{1}{\max\{|\mathcal{N}_k|,|\mathcal{N}_l|\}}, & \text{if}~ k\neq l ~\text{are linked},\\ 1 - \sum\limits_{l\in \mathcal{N}_k \setminus \{k\}} a_{kl}, & \text{for}~ k = l. \end{array} \right.$$ In order to compute the discrete vector $\boldsymbol p_{k,i}$, we rely on a simple approach that maps the continuous variables into discrete variables, which is inspired by the likelihood ascent approach adopted for detection problems in wireless communications [@vardhan; @las_li]. The initial value at each node is an all-one vector ($\boldsymbol p_{k,0}= \boldsymbol 1$ or $\boldsymbol P_{k,0}=\boldsymbol I$). The $\boldsymbol \omega_{k,i}$ vector is initialized as an all-zero vector ($\boldsymbol\omega_{k,0}= \boldsymbol 0$ or $\boldsymbol W_{k,0}= \boldsymbol 0$).
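The Metropolis rule (22) depends only on the neighborhood cardinalities; a sketch that builds the whole combining matrix from an adjacency matrix (the convention that $\mathcal{N}_k$ includes node $k$ itself is an assumption of this sketch):

```python
import numpy as np

def metropolis_weights(adj):
    """Combining matrix A with entries a_lk built from the Metropolis rule.

    adj: symmetric boolean adjacency matrix with zero diagonal;
    |N_k| counts the neighbors of node k plus the node itself (assumed).
    """
    N = adj.shape[0]
    deg = adj.sum(axis=1) + 1           # |N_k|
    A = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if l != k and adj[k, l]:    # k and l are linked
                A[l, k] = 1.0 / max(deg[k], deg[l])
        A[k, k] = 1.0 - A[:, k].sum()   # enforce sum_l a_lk = 1
    return A
```

Each column $k$ of the resulting matrix then sums to one, as required by (21).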
After each iteration of (\[Eqn18:Distributed\_RecursionP\]), we obtain discrete values from $\boldsymbol p_{k,i}$ using the following rule for $m = 1, \ldots, M$: $$\label{Eqn23:P rule} p_{k,i+1}^{m}=\left\{\begin{array}{ll} 1, & \text{if}~ p_{k,i}^{m}> \tau, \\ 0, & \text{otherwise}, \end{array} \right.$$ where $\tau$ is a threshold used to determine the positions of the non-zero values of the parameter vector $\boldsymbol p_{k,i}$. The goal is to approach the results of the oracle algorithm, and an appropriate value for $\tau$ can be obtained experimentally. Distributed Spectrum Estimation using the DAMDC-LMS Algorithm ============================================================= We now illustrate the use of DAMDC-LMS in distributed spectrum estimation, which aims to estimate the spectrum of a transmitted signal $s$ using the $N$ nodes of a wireless sensor network [@mateos; @lorenzo1; @lorenzo2]. The power spectral density (PSD) of the signal $s$, denoted by $\Phi_{s}(f)$, is given at each frequency by $$\label{Eqn24:PSD} \ \Phi_{s}(f)=\sum_{m=1}^{M}b_{m}(f)\omega_{0m}=\boldsymbol b_{0}^{T}(f)\boldsymbol\omega_{0},$$ where $\boldsymbol b_{0}(f)=[ b_{1}(f),...,b_{M}(f)]^{T}$ is the vector of basis functions evaluated at frequency $f$, $\boldsymbol\omega_{0}=[\omega_{01},...,\omega_{0M}]^{T}$ is a vector of weighting coefficients representing the transmit power of the signal $s$ over each basis, and $M$ is the number of basis functions. For $M$ sufficiently large, the basis expansion in (\[Eqn24:PSD\]) can approximate the spectrum well. Possible choices for the set of basis functions $\{b_{m}(f)\}_{m=1}^{M}$ include rectangular functions, raised cosines, Gaussian bells and splines [@dcg_iet].
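As a small sketch of the basis expansion (24) with non-overlapping rectangular basis functions on a normalized frequency axis (the band placements and powers below are invented for illustration):

```python
import numpy as np

def rect_basis(f, M):
    """b(f): M non-overlapping rectangular basis functions on [0, 1)."""
    edges = np.linspace(0.0, 1.0, M + 1)
    return ((f >= edges[:-1]) & (f < edges[1:])).astype(float)

M = 50
w0 = np.zeros(M)
w0[[3, 11, 19, 27, 33, 38, 41, 47]] = 0.7e-3    # 8 active bands (illustrative)

def psd(f, w0):
    """Phi_s(f) = b(f)^T w0."""
    return rect_basis(f, len(w0)) @ w0
```

Evaluating `psd` on a frequency grid reproduces a piecewise-constant spectrum with the chosen active bands.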
We denote the channel transfer function between a transmit node conveying the signal $s$ and receive node $k$ at time instant $i$ by $H_{k}(f,i)$, and thus the PSD of the received signal observed by node $k$ can be expressed as $$\label{Eqn25:PSD2} \begin{split} \Phi_{k}(f,i) & =|H_{k}(f,i)|^{2} \Phi_{s}(f)+\upsilon^{2}_{n,k},\\ & =\sum_{m=1}^{M}|H_{k}(f,i)|^{2}b_{m}(f)\omega_{0m}+ \upsilon^{2}_{n,k}, \\ & = \boldsymbol b_{k,i}^{T}(f)\boldsymbol\omega_{0}+ \upsilon^{2}_{n,k}, \end{split}$$ where $\boldsymbol b_{k,i}^{T}(f)=[|H_{k}(f,i)|^{2}b_{m}(f)]^{M}_{m=1}$ and $\upsilon_{n,k}^{2}$ is the noise power of the receiver at node $k$. Following the distributed model, at every iteration $i$ every node $k$ measures the PSD $\Phi_{k}(f,i)$ presented in (\[Eqn25:PSD2\]) over $N_{c}$ frequency samples $f_{j}=f_{min}:(f_{max}-f_{min})/N_{c}:f_{max}$, for $j = 1,..., N_{c}$. The desired signal is given by $$\label{Eqn26:Desired_PSD} \ d_{k,i}(j)=\boldsymbol b_{k,i}^T(f_{j})\boldsymbol\omega_{0}+\upsilon_{n,k}^2 + n_{k,i}(j),$$ where the last term denotes the observation noise with zero mean and variance $\sigma_{n,j}^{2}$. The noise power $\upsilon_{n,k}^{2}$ at the receiver of node $k$ can be estimated with high accuracy in a preliminary step using, e.g., an energy estimator over an idle band, and then subtracted from (\[Eqn26:Desired\_PSD\]). A linear model is obtained from the measurements over $N_{c}$ contiguous channels $$\label{Eqn27:Linear_PSD} \ \boldsymbol d_{k,i}=\boldsymbol B_{k,i}\boldsymbol\omega_{0}+ \boldsymbol n_{k,i},$$ where $\boldsymbol B_{k,i}=[\boldsymbol b_{k,i}^T(f_{j})]_{j=1}^{N_{c}}\in {\mathcal R}^{N_{c}\times M}$, and $\boldsymbol n_{k,i}$ is a zero-mean random vector.
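The resulting linear measurement model (27) can be sketched per node as follows (flat unit channel gains $|H_k(f,i)|^2 = 1$ and all numbers are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

def rect_basis_matrix(freqs, M):
    """Stack the rows b^T(f_j) of M non-overlapping rectangular bases on [0, 1)."""
    edges = np.linspace(0.0, 1.0, M + 1)
    f = freqs[:, None]
    return ((f >= edges[:-1]) & (f < edges[1:])).astype(float)

def node_measurement(w0, Nc=100, sigma_n=1e-3):
    """d_k = B_k w0 + n_k, with |H_k|^2 = 1 absorbed into B_k (assumed)."""
    freqs = (np.arange(Nc) + 0.5) / Nc           # off-edge frequency samples
    B = rect_basis_matrix(freqs, len(w0))
    return B @ w0 + sigma_n * rng.standard_normal(Nc), B

M = 50
w0 = np.zeros(M)
w0[[3, 11, 27]] = 0.7e-3
d, B = node_measurement(w0)
```

Each row of $\boldsymbol B_{k,i}$ selects the single band containing the corresponding frequency sample, so the noiseless measurement is exactly the sampled PSD.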
Then we can introduce the cost function for each agent $k$ described by $$\label{Eqn28:Distributed_CostFunction} \ C(\boldsymbol\omega_{k,i})=E[\|\boldsymbol d_{k,i}-\boldsymbol B_{k,i}\boldsymbol\omega_{k,i}\|^{2}],~ {\rm for}~ k = 1, \ldots, N.$$ Once we have the cost function, the DAMDC-LMS algorithm can be applied by introducing the discrete parameter vector $\boldsymbol p_{k,i}$ in (\[Eqn28:Distributed\_CostFunction\]), which results in $$\label{Eqn29:Distributed_CostFunction2} \ C(\boldsymbol\omega_{k,i},\boldsymbol p_{k,i})=E[\|\boldsymbol d_{k,i}-\boldsymbol B_{k,i}\boldsymbol P_{k,i}\boldsymbol\omega_{k,i}\|^{2}],~ {\rm for}~ k = 1, \ldots, N,$$ where $\boldsymbol P_{k,i}$ is the $M\times M$ diagonal matrix used to exploit sparsity for more accurate spectrum estimation. Introducing the matrix $\boldsymbol P_{k,i}$ for exploiting sparsity in the recursions (\[Eqn18:Distributed\_RecursionP\]), (\[Eqn19:Distributed\_Recursion\]) and (\[Eqn20:Distributed\_Combination\]), we obtain for $k = 1, 2, \ldots, N$: $$\label{Eqn30:P_PSD} {\rm Adaptation} \left\{\begin{array}{ll} \boldsymbol p_{k,i+1}=\boldsymbol p_{k,i} + 2\eta \Re(e_{p_{k,i}}^{*}\boldsymbol B_{k,i}^{H}\boldsymbol W_{k,i-1})\\ p_{k,i+1}^{m}=\left\{\begin{array}{ll} 1, & \text{if}~ p_{k,i}^{m}> \tau, ~{\rm for}~m = 1, \ldots, M,\\ 0, & \text{otherwise}, \end{array} \right.\\ \boldsymbol\varphi_{k,i+1}=\boldsymbol\omega_{k,i-1}+\mu \boldsymbol P_{k,i}\boldsymbol B^{H}_{k,i}\boldsymbol e_{k,i}, \\ \end{array} \right.$$ $$\label{Eqn31:ATC2_PSD} {\rm Combination} \left\{\begin{array}{ll} \boldsymbol \omega_{k,i}=\sum_{l\in \mathcal{N}_{k}}a_{lk}\boldsymbol P_{k,i}\boldsymbol\varphi_{l,i}. \end{array} \right.$$ The positions in $\boldsymbol p _{k,i}$ with ones indicate the information content at each node and sample of the signal.
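A simplified real-valued sketch of one adapt-then-combine pass over the whole network, following (30) and (31) (the thresholding of $\boldsymbol p_{k,i}$ is omitted and $\boldsymbol P_{k,i}$ is kept fixed; the step size and all dimensions are illustrative):

```python
import numpy as np

def atc_iteration(D, B, P, W, A, mu=0.01):
    """One ATC pass: local LMS adaptation at every node, then combination.

    D: (N, Nc) stacked desired vectors, B: (N, Nc, M) measurement matrices,
    P: (N, M) binary selectors, W: (N, M) local estimates,
    A: (N, N) combining matrix whose columns sum to one.
    """
    phi = np.empty_like(W)
    for k in range(W.shape[0]):
        e = D[k] - B[k] @ (P[k] * W[k])           # local residual
        phi[k] = W[k] + mu * P[k] * (B[k].T @ e)  # adaptation step
    return A.T @ phi                              # combination: w_k = sum_l a_lk phi_l
```

With all-ones selectors this reduces to standard diffusion LMS over the network.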
With this approach, we can identify the positions of the non-zero coefficients of the frequency spectrum and achieve performance similar to that of the oracle algorithm, as seen in the following section. Simulation Results ================== In this section, we evaluate the performance of the DAMDC-LMS algorithm for distributed spectrum estimation using sensor networks, where DAMDC-LMS is compared with existing algorithms. The results are shown in terms of the mean square deviation (MSD), power and PSD estimation. We consider a network with $20$ nodes for estimating the unknown spectrum $\boldsymbol\omega_{0}$ and set the threshold to $\tau=1$, which in our studies yielded the best performance for the scenarios under evaluation. Each iteration corresponds to a time instant. The results are averaged over 100 experiments. The nodes scan $100$ frequencies over the frequency axis, which is normalized between $0$ and $1$, and use $B = 50$ non-overlapping rectangular basis functions to model the expansion of the spectrum [@dcg]. The basis functions have amplitudes equal to one. We assume that the unknown spectrum $\boldsymbol\omega_{0}$ is examined over 8 basis functions, leading to a sparsity ratio equal to $S=8/50$. The power transmitted over each basis function is set to $0.7$ mW and the noise variance is set to $0.001$. For distributed spectrum estimation, we have compared the proposed DAMDC-LMS algorithm, the oracle ATC-LMS, the RZA-ATC-LMS [@lorenzo1], the $l_0$-ATC-LMS [@lorenzo1] and the standard ATC-LMS algorithms with the parameters optimized. We first measure the performance of the algorithms in terms of MSD, as shown in Fig. \[msd\]. The results show that DAMDC-LMS outperforms the standard and sparsity-aware algorithms and exhibits performance close to that of the oracle algorithm, provided that the step sizes are appropriately adjusted. In a second example, we assess the PSD estimation performance of the algorithms. Fig.
\[psd\] shows that the DAMDC-LMS algorithm is able to accurately estimate the spectrum consistently with the oracle algorithm. In order to verify the adaptation performance of DAMDC-LMS, in Fig. \[tracking\] we evaluate the behavior of the PSD estimates over an initially busy channel (the $16$th channel in this case) that ceases to be busy after $500$ iterations, by comparing the results achieved by the DAMDC-LMS and the oracle algorithms. We consider the same settings as in the previous example. The transmit power is set to $0.20$ mW. We notice that DAMDC-LMS is able to track the spectrum more effectively than the oracle algorithm due to its rapid learning. Conclusion ========== In this work, we have proposed a distributed sparsity-aware algorithm for spectrum estimation over sensor networks. The proposed DAMDC-LMS algorithm outperforms previously reported algorithms. Simulations have shown that DAMDC-LMS achieves lower MSD values and faster convergence than prior art, with performance close to that of the oracle algorithm. [99]{} C. G. Lopes and A. H. Sayed, “Diffusion least-mean squares over adaptive networks: Formulation and performance analysis," *IEEE Transactions on Signal Processing*, vol. 56, no. 7, pp. 3122-3136, July 2008. F. S. Cattivelli and A. H. Sayed, “Diffusion LMS strategies for distributed estimation," *IEEE Transactions on Signal Processing*, vol. 58, pp. 1035-1048, March 2010. G. Mateos, J. A. Bazerque, and G. B. Giannakis, “Distributed sparse linear regression," *IEEE Transactions on Signal Processing*, vol. 58, no. 10, pp. 5262-5276, Oct 2010. E. J. Candes, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted l1 minimization," *Journal of Fourier Analysis and Applications*, 2008. Y. Gu, J. Jin, and S. Mei, “$L_0$-norm constraint LMS algorithm for sparse system identification," *IEEE Signal Processing Letters*, vol. 16, pp. 774-777, 2009. R. C. de Lamare and R.
Sampaio-Neto, “Adaptive reduced-rank MMSE filtering with interpolated FIR filters and adaptive interpolators", *IEEE Sig. Proc. Letters*, vol. 12, no. 3, 2005, pp. 177 - 180. R. C. de Lamare and R. Sampaio-Neto, “Reduced-rank adaptive filtering based on joint iterative optimization of adaptive filters", *IEEE Signal Process. Lett.*, vol. 14, no. 12, pp. 980-983, Dec. 2007. Y. Chen, Y. Gu, and A. O. Hero, “Sparse LMS for system identification," *Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, April 2009. R. C. de Lamare and R. Sampaio-Neto, “Adaptive Reduced-Rank Processing Based on Joint and Iterative Interpolation, Decimation and Filtering", *IEEE Transactions on Signal Processing*, vol. 57, no. 7, pp. 2503 - 2514, July 2009. R. Fa, R. C. de Lamare, and L. Wang, “Reduced-Rank STAP Schemes for Airborne Radar Based on Switched Joint Interpolation, Decimation and Filtering Algorithm," *IEEE Transactions on Signal Processing*, vol.58, no.8, Aug. 2010, pp.4182-4194. E. M. Eksioglu and A. L. Tanc, “RLS algorithm with convex regularization," *IEEE Signal Processing Letters*, vol. 18, no. 8, pp. 470-473, August 2011. D. Angelosante, J.A Bazerque, and G.B. Giannakis, “Online adaptive estimation of sparse signals: Where RLS meets the l1-norm," *IEEE Transactions on Signal Processing*, vol. 58, no. 7, pp. 3436-3447, July 2010. N. Kalouptsidis, G. Mileounis, B. Babadi, and V. Tarokh, “Adaptive algorithms for sparse system identification," *Signal Processing*, vol. 91, no. 8, pp. 1910-1919, August 2011. Z. Yang, R. C. de Lamare and X. Li, “L1-Regularized STAP Algorithms With a Generalized Sidelobe Canceler Architecture for Airborne Radar," IEEE Transactions on Signal Processing, vol.60, no.2, pp.674-686, Feb. 2012. Z. Yang, R. C. de Lamare and X. Li, “Sparsity-aware space-time adaptive processing algorithms with L1-norm regularisation for airborne radar," IET signal processing, vol. 6, no. 5, pp. 413-423, 2012. R. C. de Lamare and R. 
Sampaio-Neto, “Sparsity-aware adaptive algorithms based on alternating optimization with shrinkage," *IEEE Signal Processing Letters*, vol. 21, no. 2, February 2014. S. Chouvardas, K. Slavakis, Y. Kopsinis, and S. Theodoridis, “A sparsity promoting adaptive algorithm for distributed learning," *IEEE Transactions on Signal Processing*, vol. 60, no. 10, pp. 5412-5425, October 2012. P. Di Lorenzo, S. Barbarossa and A. H. Sayed “Distributed spectrum estimation for small cell networks based on sparse diffusion adaptation," *IEEE Signal Processing Letters*, vol. 20, no. 12, December 2013. P. Di Lorenzo and S. Barbarossa, “Distributed least-mean squares strategies for sparsity-aware estimation over Gaussian Markov random fields," *Proc. IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP)*, May 2014. P. Di Lorenzo and A. H. Sayed, “Sparse distributed learning based on diffusion adaptation," *IEEE Transactions on Signal Processing*, vol. 61, no. 6, March 2013. P. Di Lorenzo, S. Barbarossa, and Ali H. Sayed, “Bio-inspired decentralized radio access based on swarming mechanisms over adaptive networks," *IEEE Transactions on Signal Processing*, Vol. 61, no. 12, pp. 3183-3197, June 2013. R. Arablouei, S. Werner, Y.-F. Huang and K. Dogançay, “Distributed least mean-square estimation with partial diffusion," *IEEE Transactions on Signal Processing*, vol. 62, No. 2, pp. 472-484, January 2014. Z. Liu, Y. Liu and C. Li, “Distributed sparse recursive least-squares over networks," *IEEE Transactions on Signal Processing*, vol. 62, no. 6, pp. 1386-1395, March 2014. S. Xu and R. C. de Lamare, “Distributed conjugate gradient strategies for distributed estimation over sensor networks," *Proc. Sensor Signal Processing for Defense (SSPD)*, September 2012 S. Xu, R. C. de Lamare and H. V. Poor, “Distributed compressed estimation based on compressive sensing," *IEEE Signal Processing Letters*, vol. 22, no. 9, September 2014. S. Xu, R. C. de Lamare and H. V. 
Poor, “Adaptive link selection algorithms for distributed estimation," *EURASIP Journal on Advances in Signal Processing*, 2015. S. Xu, R. C. de Lamare and H. V. Poor, “Distributed estimation over sensor networks based on distributed conjugate gradient strategies," *IET Signal Processing*, 2016. K. Vardhan, S. Mohammed, A. Chockalingam, and B. Rajan, “A low-complexity detector for large MIMO systems and multicarrier CDMA systems," *IEEE Journal on Selected Areas in Communications*, vol. 26, no. 3, pp. 473-485, 2008. P. Li and R. C. Murch, “Multiple output selection-LAS algorithm in large MIMO systems," *IEEE Communications Letters*, vol. 14, no. 5, pp. 399-401, May 2010. O. Axelson, *Iterative Solution Methods*, Cambridge Univ. Press, 1994. G. H. Golub and C. F. Van Loan, *Matrix Computations*, 2nd Ed. Baltimore, MD: Johns Hopkins Univ. Press, 1989. S. Theodoridis, *Machine Learning: a Bayesian and Optimization Perspective*, Academic Press, March 2015. N. A. Lynch, *Distributed Algorithms*, Morgan Kaufmann, 1997. O. Jahromi and P. Aarabi, “Distributed spectrum estimation in sensor networks," *Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing*, vol. 3, pp. 849-852, May 2004.
--- abstract: 'We perform a Dalitz plot analysis of about $100,000$ $\Ds$ decays to $\Kp \Km \pip$ and measure the complex amplitudes of the intermediate resonances which contribute to this decay mode. We also measure the relative branching fractions of $\Ds \to \Kp \Kp \pim$ and $\Ds \to \Kp \Kp \Km$. For this analysis we use a 384 ${\rm fb}^{-1}$ data sample, recorded by the BaBar detector at the PEP-II asymmetric-energy $e^+e^-$ collider running at center-of-mass energies near 10.58 GeV.' title: ' [**Dalitz plot analysis of [$\Ds \to \Kp \Km \pip$]{}**]{} ' --- BABAR-PUB-10/016\ SLAC-PUB-14266\ arXiv:1011.4190 \[hep-ex\]\ Introduction ============ Scalar mesons are still a puzzle in light meson spectroscopy. New claims for the existence of broad states close to threshold, such as $\kappa(800)$ [@Aitala:2002kr] and $f_0(600)$ [@Aitala:2000xt], have reopened the discussion about the composition of the ground state $J^{PC}=0^{++}$ nonet, and about the possibility that states such as the $a_0(980)$ or $f_0(980)$ may be 4-quark states, due to their proximity to the $K \Kbar$ threshold [@Close:2002zu]. This hypothesis can be tested only through accurate measurements of the branching fractions and the couplings to different final states. It is therefore important to have precise information on the structure of the $\pi \pi$ and $K \Kbar$ $\mathcal{S}$-waves. In this context, $\Ds$ mesons can shed light on the structure of the scalar amplitude coupled to $s \bar s$. The $\pi \pi$ $\mathcal{S}$-wave has already been extracted from BaBar data in a Dalitz plot analysis of $\Ds \to \pip \pim \pip$ [@:2008tm]. The understanding of the $K \Kbar$ $\mathcal{S}$-wave is also of great importance for the precise measurement of $CP$-violation in $B_s$ oscillations using $\Bs \to \jpsi \phi$ [@Stone:2008ak; @Xie:2009fs]. This paper focuses on the study of $\Ds$ meson decay to $\Kp\Km\pip$ [@conj].
Dalitz plot analyses of this decay mode have been performed by the E687 and CLEO collaborations using 700 events [@Frabetti:1995sg] and 14400 events [@:2009tr], respectively. The present analysis is performed using about $100,000$ events. The decay $\Ds \to \phi \pip$ is frequently used in particle physics as the reference mode for $\Ds$ decay. Previous measurements of this decay mode did not, however, account for the presence of the $K \Kbar$ $\mathcal{S}$-wave underneath the $\phi$ peak. Therefore, as part of the present analysis, we obtain a precise measurement of the branching fraction $\BR(\Ds \to \phi \pip)$ relative to $\BR(\Ds \to K^+ K^- \pi^+)$. Singly Cabibbo-suppressed (SCS) and doubly Cabibbo-suppressed (DCS) decays play an important role in studies of charmed hadron dynamics. The naive expectations for the rates of SCS and DCS decays are of the order of $\tan^2 \theta_C$ and $\tan^4 \theta_C$, respectively, where $\theta_C$ is the Cabibbo mixing angle. These rates correspond to about 5.3% and 0.28% relative to their Cabibbo-favored (CF) counterparts. Due to the limited statistics in past experiments, branching fraction measurements of DCS decays have been affected by large statistical uncertainties [@Nakamura:2010zzi]. A precise measurement of $\frac{ \BR(\Ds \to K^+ K^+ \pim)}{ \BR(\Ds \to K^+ K^- \pi^+)}$ has recently been performed by the Belle experiment [@Ko:2009tc]. In this paper we study the $\Ds$ decay $$\Ds \to \Kp \Km \pip \label{eq:eq1}$$ and perform a detailed Dalitz plot analysis. We then measure the branching ratios of the SCS decay $$\Ds \to \Kp \Km \Kp \label{eq:eq2}$$ and the DCS decay $$\Ds \to \Kp \Kp \pim \label{eq:eq3}$$ relative to the CF channel (\[eq:eq1\]). The paper is organized as follows. Section \[sec:sec\_det\] briefly describes the BaBar detector, while Sec. \[sec:sec\_ev\_sel\] gives details of event reconstruction. Section \[sec:sec\_eff\] is devoted to the evaluation of the selection efficiency.
Section \[sec:sec\_pwa\] describes a partial wave analysis of the $\Kp \Km$ system, the evaluation of the $\Ds \to \phi \pip$ branching fraction and the $K \Kbar$ $\mathcal{S}$-wave parametrization. Section \[sec:sec\_DP\_method\] deals with the Dalitz plot analysis method and the background description. Results from the Dalitz plot analysis of $\Ds \to K^+ K^- \pi^+$ are given in Sec. \[sec:sec\_DP\]. The measurements of the $\Ds$ SCS and DCS branching fractions are described in Sec. \[sec:sec\_BR\], while Sec. \[sec:sec\_sum\] summarizes the results. The BaBar Detector and Dataset {#sec:sec_det} ========================= The data sample used in this analysis corresponds to an integrated luminosity of 384 ${\rm fb}^{-1}$ recorded with the BaBar detector at the SLAC PEP-II collider, operated at center-of-mass (c.m.) energies near the $\Upsilon(4S)$ resonance. The BaBar detector is described in detail elsewhere [@Aubert:2001tu]. The following is a brief summary of the components important to this analysis. Charged particle tracks are detected, and their momenta measured, by a combination of a cylindrical drift chamber (DCH) and a silicon vertex tracker (SVT), both operating within a 1.5 T solenoidal magnetic field. Photon energies are measured with a CsI(Tl) electromagnetic calorimeter (EMC). Information from a ring-imaging Cherenkov detector (DIRC), and specific energy-loss measurements in the SVT and DCH, are used to identify charged kaon and pion candidates. ![image](fig1.eps){width="\textwidth"} Event Selection and [$\protect\Ds \to \Kp \Km \pip$]{} Reconstruction {#sec:sec_ev_sel} ===================================================================== Events corresponding to the three-body $\Ds \to \Kp \Km \pip$ decay are reconstructed from the data sample, requiring at least three reconstructed charged tracks with net charge $\pm$ 1. We require that the invariant mass of the $K^+K^-\pi^+$ system lie within the mass interval \[1.9-2.05\] ${\rm GeV}/c^{2}$.
Particle identification is applied to the three tracks, and the presence of two kaons is required. The efficiency for a kaon to be identified is 90%, while the rate at which a kaon is misidentified as a pion is 2%. The three tracks are required to originate from a common vertex, and the $\chi^2$ fit probability ($P_1$) must be greater than 0.1%. We also perform a separate kinematic fit in which the $\Ds$ mass is constrained to its known value [@Nakamura:2010zzi]. This latter fit is used only in the Dalitz plot analysis. To help discriminate signal from background, an additional fit is performed, constraining the three tracks to originate from the $e^+ e^-$ luminous region (beam spot). The $\chi^2$ probability of this fit, labeled $P_2$, is expected to be large for most of the background events, in which all tracks originate from the luminous region, and small for the $\Ds$ signal, due to the measurable flight distance of the latter. The decay $$D^*_s(2112)^+ \to \Ds \gamma$$ is used to select a subset of event candidates in order to reduce combinatorial background. The photon is required to deposit an energy of at least 100 $\mev$ in the EMC. We define the variable $$\Delta m = m(\Kp \Km \pip \gamma) - m(\Kp \Km \pip)$$ and require it to be within $\pm 2\sigma_{\Dss}$ of $\Delta m_{\Dss}$, where $\Delta m_{\Dss}=(144.94\pm0.03_{\rm stat})\ \mevcc$ and $\sigma_{\Dss}=(5.53 \pm 0.04_{\rm stat})\ \mevcc$ are obtained from a Gaussian fit to the $\Delta m$ distribution. Each $\Ds$ candidate is characterized by three variables: the c.m. momentum $p^*$ in the $e^+e^-$ rest frame, the difference in probability $P_1 - P_2$, and the signed decay distance $d_{xy} = \frac{{\mathbf d} \cdot {\mathbf p_{xy}}}{|{\mathbf p_{xy}}|}$, where ${\mathbf d}$ is the vector joining the beam spot to the $\Ds$ decay vertex and ${\mathbf p_{xy}}$ is the projection of the $\Ds$ momentum on the $xy$ plane.
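As an illustration, the signed decay distance defined above can be computed directly from the vertex and momentum vectors; the numerical values below are made up for the sake of the example and are not taken from the analysis.

```python
import numpy as np

def signed_decay_distance(vertex, beam_spot, p):
    """Signed projection of the beam-spot-to-vertex vector onto the
    transverse D_s momentum: d_xy = (d . p_xy) / |p_xy|."""
    d = np.asarray(vertex, float) - np.asarray(beam_spot, float)
    p_xy = np.array([p[0], p[1], 0.0])
    return float(np.dot(d, p_xy) / np.linalg.norm(p_xy))

# Illustrative values only (cm and GeV/c). A vertex displaced along the
# momentum direction gives a positive d_xy, as expected for signal;
# a vertex displaced against it gives a negative d_xy.
dxy_sig = signed_decay_distance([0.05, 0.02, 0.1], [0.0, 0.0, 0.0], [2.0, 0.8, 1.0])
dxy_bkg = signed_decay_distance([-0.05, -0.02, 0.0], [0.0, 0.0, 0.0], [2.0, 0.8, 1.0])
```

The sign of $d_{xy}$, rather than only its magnitude, is what carries the discriminating power for candidates produced at the beam spot.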
These three variables are used to discriminate signal from background events: in fact, signal events are expected to be characterized by larger values of $p^*$ [@Aubert:2002ue], due to the jet-like shape of the $e^+e^-\to c \bar c$ events, and larger values of $d_{xy}$ and $P_1-P_2$, due to the measurable flight distance of the $\Ds$ meson. The distributions of these three variables for signal and background events are determined from data and are shown in Fig. \[fig:fig1\]. The background distributions are estimated from events in the $\Ds$ mass-sidebands, while those for the signal region are estimated from the $\Ds$ signal region with sideband subtraction. The normalized probability density functions (PDFs) are then combined in a likelihood-ratio test. A selection is applied to this variable such that the signal-to-background ratio is maximized. The lower sideband, signal, and upper sideband regions are defined as the intervals \[1.911–1.934\] $\gevcc$, \[1.957–1.980\] $\gevcc$, and \[2.003–2.026\] $\gevcc$, respectively, corresponding to the $(-10 \sigma,-6\sigma)$, $(-2 \sigma,2 \sigma)$, and $(6 \sigma, 10 \sigma)$ regions, where $\sigma$ is estimated from the fit of a Gaussian function to the $\Ds$ lineshape. We have examined a number of possible background sources. A small peak due to the decay $D^{*+} \to \pip \Dz$, where $\Dz \to \Kp \Km$, is observed. A Gaussian fit to this $\Kp \Km$ spectrum gives $\sigma_{\Dz \to \Kp\Km}=5.4\ \mevcc$. For events within 3.5$\sigma_{\Dz \to \Kp\Km}$ of the $D^0$ mass, we plot the mass difference $\Delta m(\Kp \Km \pip) = m(\Kp \Km \pip)-m(\Kp \Km)$ and observe a clean $\Dstarp$ signal. We remove events that satisfy $\Delta m(\Kp \Km \pip)< 0.15\ \gevcc$. The surviving events still show a $\Dz \to \Kp \Km$ signal which does not come from this $\Dstarp$ decay. We remove events that satisfy $m(\Kp \Km)>1.85\ \gevcc$. Particle misidentification, in which a pion $\pi_{\rm mis}^+$ is wrongly identified as a kaon, is tested by assigning the pion mass to the $\Kp$ candidate.
In this way we identify the background due to the decay $\Dp \to \Km \pip \pip$ which, for the most part, populates the higher-mass $\Ds \to \Kp \Km \pip$ sideband. However, this background cannot be removed without biasing the $\Ds$ Dalitz plot, and so it is taken into account in the Dalitz plot analysis. ![image](fig2a.eps){width="7.8cm"} ![image](fig2b.eps){width="7.8cm"} We also observe a clean peak in the distribution of the mass difference $m(\Km \pi_{\rm mis}^+ \pip) - m(\Km \pi_{\rm mis}^+)$. Combining $m(K^- \pi_{\rm mis}^+)$ with each of the $\pi^0$ meson candidates in the event, we identify this contamination as due to $D^{*+} \to \pi^+ D^0 (\to K^- \pi^+ \pi^0)$ with a missing $\pi^0$. We remove events that satisfy $m(\Km \pip_{\rm mis} \pip)-m(\Km \pip_{\rm mis})<0.15\ \gevcc$. Finally, we remove the $\Ds$ candidates that share one or two daughters with another $\Ds$ candidate; this reduces the number of candidates by 1.8%, corresponding to 0.9% of events. Two or more candidates in the same event are retained provided they do not share daughter tracks. The resulting $K^+K^-\pi^+$ mass distribution is shown in Fig. \[fig:fig2\](a). This distribution is fitted with a double-Gaussian function for the signal and a linear background. The fit gives a $\Ds$ mass of $(1968.70 \pm 0.02_{\rm stat})\ \mevcc$, $\sigma_1=(4.96 \pm 0.06_{\rm stat})\ \mevcc$, and $\sigma_2/\sigma_1=1.91 \pm 0.06_{\rm stat}$, where $\sigma_1$ ($\sigma_2$) is the standard deviation of the first (second) Gaussian, and the errors are statistical only. The fractions of the two Gaussians are $f_{\sigma_1} = 0.80 \pm 0.02$ and $f_{\sigma_2} = 0.20 \pm 0.02$. The signal region is defined to be within $\pm 2 \sigma_{\Ds}$ of the fitted mass value, where $\sigma_{\Ds}=\sqrt {f_{\sigma_1}\sigma_1^2+f_{\sigma_2}\sigma_2^2}=6.1\ \mevcc$ is the observed mass resolution (the simulated mass resolution is $6\ \mevcc$).
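The effective width of the double-Gaussian signal shape follows directly from the fitted fractions and widths. A minimal numerical check of the quoted $\sigma_{\Ds}$, using only the central values given above:

```python
import math

def effective_sigma(f1, s1, ratio):
    """Effective width of a double-Gaussian, sigma = sqrt(f1*s1^2 + f2*s2^2),
    with s2 = ratio * s1 and f2 = 1 - f1 (widths in MeV/c^2)."""
    s2 = ratio * s1
    f2 = 1.0 - f1
    return math.sqrt(f1 * s1**2 + f2 * s2**2)

# Fitted central values from the text: f1 = 0.80, sigma1 = 4.96 MeV/c^2,
# sigma2/sigma1 = 1.91; the result reproduces the quoted 6.1 MeV/c^2.
sigma_ds = effective_sigma(0.80, 4.96, 1.91)
```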
The number of signal events in this region (Signal), and the corresponding purity (defined as Signal/(Signal+Background)), are given in Table \[tab:table1\].

  $\Ds$ decay mode   Signal            Purity (%)
  ------------------ ----------------- ------------
  $\Kp \Km \pip$     96307 $\pm$ 369   95
  $\Kp \Km \Kp$      748 $\pm$ 60      28
  $\Kp \Kp \pim$     356 $\pm$ 52      23

\[tab:table1\]

For events in the $\Ds \to \Kp\Km\pip$ signal region, we obtain the Dalitz plot shown in Fig. \[fig:fig2\](b). For this distribution, and for the Dalitz plot analysis (Sec. \[sec:sec\_DP\_method\]), we use the track parameters obtained from the $\Ds$ mass-constrained fit, since this yields a unique Dalitz plot boundary. In the $\Kp \Km$ threshold region, a strong $\phi(1020)$ signal is observed, together with a rather broad structure. The $f_0(980)$ and $a_0(980)$ $\mathcal{S}$-wave resonances are, in fact, close to $K^+ K^-$ threshold, and might be expected to contribute in the vicinity of the $\phi(1020)$. A strong $\Kstarzbm$ signal can also be seen in the $K^-\pi^+$ system, but there is no evidence of structure in the $K^+\pi^+$ mass. ![image](fig3.eps){width="\textwidth"} Efficiency {#sec:sec_eff} ========== The selection efficiency for each $\Ds$ decay mode analyzed is determined from a sample of Monte Carlo (MC) events in which the $\Ds$ decay is generated according to phase space (i.e. such that the Dalitz plot is uniformly populated). The generated events are passed through a detector simulation based on the <span style="font-variant:small-caps;">Geant4</span> toolkit [@Agostinelli:2002hh], and subjected to the same reconstruction and event selection procedure as that applied to the data. The distribution of the selected events in each Dalitz plot is then used to determine the reconstruction efficiency. The MC samples used to compute these efficiencies consist of 4.2 $\times 10^6$ generated events for $\Ds \to \Kp \Km \pip$ and $\Ds \to \Kp \Kp \pim$, and 0.7 $\times 10^6$ for $\Ds \to K^+ K^- K^+$.
For $\Ds \to K^+ K^- \pi^+$, the efficiency distribution is fitted to a third-order polynomial in two dimensions using the expression: $$\begin{aligned} &\eta(x,y) = & a_0 + a_1x^\prime + a_3x^{\prime2} + a_4y^{\prime2} + a_5x^\prime y^\prime \nonumber\\ & & + a_6x^{\prime3} +a_7y^{\prime3}\end{aligned}$$ where $x=m^2(K^+ K^-)$, $y=m^2(K^- \pi^+)$, $x^\prime=x-2$, and $y^\prime=y-1.25$. Coefficients consistent with zero have been omitted. We obtain a good description of the efficiency with $\chi^2/NDF=1133/(1147-7)=0.994$ ($NDF=$ Number of Degrees of Freedom). The efficiency is found to be almost uniform in $K^-\pi^+$ and $K^+K^-$ mass, with an average value of $\approx$ 3.3% (Fig. \[fig:fig3\]). ![image](fig4.eps){width="\textwidth"} Partial Wave Analysis of the [$\Kp \Km$]{} and [$\Km \pip$]{} threshold regions {#sec:sec_pwa} =============================================================================== In the $K^+K^-$ threshold region both $a_0(980)$ and $f_0(980)$ can be present, and both resonances have very similar parameters which suffer from large uncertainties. In this section we obtain model-independent information on the $\Kp \Km$ $\mathcal{S}$-wave by performing a partial wave analysis in the $\Kp \Km$ threshold region. Let $N$ be the number of events for a given mass interval $I = [m_{\Kp\Km};m_{\Kp\Km} + {\rm d}m_{\Kp\Km}]$. We write the corresponding angular distribution in terms of the appropriate spherical harmonic functions as $$\frac{ {\rm d} N}{{\rm d}\cos\theta} = 2\pi\sum_{k=0}^L\left<Y^0_k\right>Y^0_k(\cos\theta), \label{eq:sph_harmonics}$$ where $L = 2\ell_{\rm max}$, and $\ell_{\rm max}$ is the maximum orbital angular momentum quantum number required to describe the $\Kp\Km$ system at $m_{\Kp\Km}$ (e.g. $\ell_{\rm max} = 1$ for an $\mathcal{S}$-, $\mathcal{P}$-wave description); $\theta$ is the angle between the $\Kp$ direction in the $\Kp \Km$ rest frame and the prior direction of the $\Kp \Km$ system in the $\Ds$ rest frame. 
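The harmonic functions $Y^0_k$ entering the expansion above are conveniently built from Legendre polynomials, and each coefficient $\left<Y^0_k\right>$ can be estimated directly from the event sample as a sum of $Y^0_k(\cos\theta_n)$ over events. A sketch in Python (the flat toy sample is illustrative, not analysis data):

```python
import numpy as np
from numpy.polynomial import legendre

def y0k(k, costh):
    """Y^0_k(cos theta) = sqrt((2k+1)/(4 pi)) * P_k(cos theta)."""
    coeffs = np.zeros(k + 1)
    coeffs[k] = 1.0
    return np.sqrt((2 * k + 1) / (4 * np.pi)) * legendre.legval(costh, coeffs)

def moments(costh_sample, kmax):
    """Unbinned moment estimators <Y^0_k> = sum_n Y^0_k(cos theta_n)."""
    return [float(np.sum(y0k(k, costh_sample))) for k in range(kmax + 1)]

# Toy check: for an isotropic (flat) cos theta sample only <Y^0_0> is
# appreciably nonzero, approaching N/sqrt(4 pi), while the higher
# moments are consistent with zero.
rng = np.random.default_rng(1)
sample = rng.uniform(-1.0, 1.0, 200_000)
m = moments(sample, 2)
```

In the analysis the same sums are accumulated in bins of $m_{K^+K^-}$, after efficiency correction and sideband subtraction.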
The normalizations are such that $$\int^1_{-1} Y^0_k(\cos\theta) Y^0_j(\cos\theta) {\rm d}\cos\theta = \frac{\delta_{kj}}{2\pi},$$ and it is assumed that the distribution $\frac{{\rm d}N}{{\rm d}\cos\theta}$ has been efficiency-corrected and background-subtracted. Using this orthogonality condition, the coefficients in the expansion are obtained from: $$\left<Y^0_k\right> = \int^1_{-1}Y^0_k(\cos\theta)\frac{{\rm d}N}{{\rm d}\cos \theta} {\rm d}\cos\theta$$ where the integral is given, to a good approximation, by $\sum^N_{n=1}Y^0_k(\cos\theta_n)$, where $\theta_n$ is the value of $\theta$ for the $n$-th event. Figure \[fig:fig4\] shows the $\Kp \Km$ mass spectrum up to $1.5 \gevcc$ weighted by $Y^0_k(\cos\theta)=\sqrt{(2k+1)/4\pi} \ P_k(\cos\theta)$ for $k=0, 1$ and $2$, where $P_k$ is the Legendre polynomial function of order $k$. These distributions are corrected for efficiency and phase space, and background is subtracted using the $\Ds$ sidebands. The number of events $N$ for the mass interval $I$ can be expressed also in terms of the partial-wave amplitudes describing the $K^+K^-$ system. Assuming that only $\mathcal{S}$- and $\mathcal{P}$-wave amplitudes are necessary in this limited region, we can write: $$\frac{{\rm d}N}{{\rm d}\cos\theta} = 2\pi|\mathcal{S} \, Y^0_0(\cos\theta)+\mathcal{P} \, Y^0_1(\cos\theta)|^2. \label{eq:pwa_expansion}$$ By comparing Eq. (\[eq:sph\_harmonics\]) and Eq. (\[eq:pwa\_expansion\]) [@Chung:1997qd], we obtain: $$\begin{aligned} \sqrt{4 \pi} \left<Y^0_0 \right> & = & |\mathcal{S}|^2 + |\mathcal{P}|^2 \nonumber \\ \label{eq:sp2} \sqrt{4 \pi} \left<Y^0_1 \right> & = & 2 |\mathcal{S}| |\mathcal{P}| \cos \phi_{\mathcal{SP}}\\ \sqrt{4 \pi} \left<Y^0_2 \right> & = & \frac{2}{\sqrt 5} |\mathcal{P}|^2 \nonumber\end{aligned}$$ where $\phi_{\mathcal{SP}} = \phi_{\mathcal S} - \phi_{\mathcal P}$ is the phase difference between the $\mathcal{S}$- and $\mathcal{P}$-wave amplitudes. 
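Equations (\[eq:sp2\]) can be inverted bin by bin. A minimal sketch, where made-up amplitudes are used to generate the moments and check the round trip (these values are illustrative, not measured):

```python
import numpy as np

def solve_sp(y00, y01, y02):
    """Invert Eqs. (sp2): given moments <Y^0_0>, <Y^0_1>, <Y^0_2>,
    return (|S|, |P|, phi_SP in radians), or None when the moments
    admit no physical solution (|P|^2 or |S|^2 <= 0, |cos phi| > 1)."""
    p2 = np.sqrt(4 * np.pi) * y02 * np.sqrt(5) / 2.0   # |P|^2
    s2 = np.sqrt(4 * np.pi) * y00 - p2                 # |S|^2
    if p2 <= 0 or s2 <= 0:
        return None
    cosphi = np.sqrt(4 * np.pi) * y01 / (2.0 * np.sqrt(s2 * p2))
    if abs(cosphi) > 1:
        return None
    return np.sqrt(s2), np.sqrt(p2), np.arccos(cosphi)

# Round trip with invented amplitudes |S| = 2, |P| = 3, phi_SP = 1.0 rad:
S, P, phi = 2.0, 3.0, 1.0
y00 = (S**2 + P**2) / np.sqrt(4 * np.pi)
y01 = 2 * S * P * np.cos(phi) / np.sqrt(4 * np.pi)
y02 = (2 / np.sqrt(5)) * P**2 / np.sqrt(4 * np.pi)
sol = solve_sp(y00, y01, y02)
```

The `None` branch corresponds to the unphysical bins discussed below, where $\left<Y^0_2\right>$ goes negative or $|\cos\phi_{\mathcal{SP}}|>1$.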
These equations relate the interference between the $\mathcal{S}$-wave ($f_0(980)$, and/or $a_0(980)$, and/or nonresonant) and the $\mathcal{P}$-wave ($\phi(1020)$) to the prominent structure in $\left<Y^0_1 \right>$ (Fig. \[fig:fig4\](b)). The $\left<Y^0_1 \right>$ distribution shows the same behavior as for $\Ds \to K^+K^- e^+ \nu_e$ decay [@Aubert:2008rs]. The $\left<Y^0_2 \right>$ distribution (Fig. \[fig:fig4\](c)), on the other hand, is consistent with the $\phi(1020)$ lineshape. The above system of equations can be solved in each interval of $K^+K^-$ invariant mass for $|\mathcal{S}|$, $|\mathcal{P}|$, and $\phi_{\mathcal{SP}}$, and the resulting distributions are shown in Fig. \[fig:fig5\]. We observe a threshold enhancement in the $\mathcal{S}$-wave (Fig. \[fig:fig5\](a)), and the expected $\phi(1020)$ Breit-Wigner (BW) in the $\mathcal{P}$-wave (Fig. \[fig:fig5\](b)). We also observe the expected $\mathcal{S}$-$\mathcal{P}$ relative phase motion in the $\phi(1020)$ region (Fig. \[fig:fig5\](c)). ![Squared (a) $\mathcal{S}$- and (b) $\mathcal{P}$-wave amplitudes; (c) the phase difference $\phi_{\mathcal{SP}}$; (d) $\phi_{\mathcal{S}}$ obtained as explained in the text. The curves result from the fit described in the text.[]{data-label="fig:fig5"}](fig5.eps){height="22.15cm"} [$\mathcal{P}$]{}-wave/[$\mathcal{S}$]{}-wave ratio in the [$\phi(1020)$]{} region ---------------------------------------------------------------------------------- The decay mode $\Ds \to \phi(1020) \pip$ is used often as the normalizing mode for $\Ds$ decay branching fractions, typically by selecting a $\Kp \Km$ invariant mass region around the $\phi(1020)$ peak. The observation of a significant $\mathcal{S}$-wave contribution in the threshold region means that this contribution must be taken into account in such a procedure. In this section we estimate the $\mathcal{P}$-wave/$\mathcal{S}$-wave ratio in an almost model-independent way. 
In fact, integrating the distributions of $\sqrt{4\pi} \, pq^{\prime}\left<Y^0_0\right>$ and $\sqrt{5\pi} \, pq^{\prime}\left<Y^0_2\right>$ (Fig. \[fig:fig4\]) in a region around the $\phi(1020)$ peak yields $\int(|\mathcal{S}|^2+|\mathcal{P}|^2)pq^{\prime}{\rm d}m_{K^+K^-}$ and $\int |\mathcal{P}|^2pq^{\prime}{\rm d}m_{K^+K^-}$, respectively, where $p$ is the $K^+$ momentum in the $K^+K^-$ rest frame, and $q^{\prime}$ is the momentum of the bachelor $\pi^+$ in the $D_s^+$ rest frame. The $\mathcal{S}$-$\mathcal{P}$ interference contribution integrates to zero, and we define the $\mathcal{P}$-wave and $\mathcal{S}$-wave fractions as $$\begin{aligned} f_{\mathcal{P}-{\rm wave}} & = &\frac{\int |\mathcal{P}|^2pq^{\prime}{\rm d}m_{K^+K^-}}{\int (|\mathcal{S}|^2+|\mathcal{P}|^2)pq^{\prime}{\rm d}m_{K^+K^-} }\\ f_{\mathcal{S}-{\rm wave}} & = & \frac{\int |\mathcal{S}|^2pq^{\prime}{\rm d}m_{K^+K^-} }{\int (|\mathcal{S}|^2+|\mathcal{P}|^2)pq^{\prime}{\rm d}m_{K^+K^-}} \nonumber\\ & = & 1-f_{\mathcal{P}-{\rm wave}} \, . \end{aligned}$$ The experimental mass resolution is estimated by comparing generated and reconstructed MC events, and is $\simeq 0.5\ \mevcc$ at the $\phi$ mass peak. Table \[tab:table2\] gives the resulting $\mathcal{S}$-wave and $\mathcal{P}$-wave fractions computed for three $\Kp \Km$ mass regions. The last column of Table \[tab:table2\] shows the measurements of the relative overall rate ($\frac{N}{N_{\rm tot}}$), defined as the number of events in the $\Kp \Km$ mass interval over the number of events in the entire Dalitz plot after efficiency-correction and background-subtraction.
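The fractions defined above can be approximated from the binned $|\mathcal{S}|^2$ and $|\mathcal{P}|^2$ distributions with the phase-space weight $pq^{\prime}$ by trapezoidal integration; the inputs below are synthetic (a narrow peak on a flat wave, unit weights), for illustration only:

```python
import numpy as np

def _trapz(y, x):
    """Plain trapezoidal rule (avoids NumPy-version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def wave_fractions(m, s2, p2, p_mom, qprime):
    """P- and S-wave fractions from binned |S|^2, |P|^2 distributions,
    including the p*q' phase-space weight in both integrals."""
    w = p_mom * qprime
    f_p = _trapz(p2 * w, m) / _trapz((s2 + p2) * w, m)
    return f_p, 1.0 - f_p

# Synthetic input: a narrow P-wave peak over a flat S-wave, unit weights.
m = np.linspace(1.00, 1.04, 101)
s2 = np.full_like(m, 1.0)
p2 = 9.0 * np.exp(-0.5 * ((m - 1.02) / 0.002) ** 2)
fp, fs = wave_fractions(m, s2, p2, np.ones_like(m), np.ones_like(m))
```

Widening the integration window increases the flat-wave contribution, which is the behavior seen in Table \[tab:table2\] as the $\phi(1020)$ window is enlarged.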
  $m_{K^+K^-}$ interval ($\mevcc$)   $f_{\mathcal{S}-{\rm wave}}$ (%)   $f_{\mathcal{P}-{\rm wave}}$ (%)   $\frac{N}{N_{\rm tot}}$ (%)
  ---------------------------------- ---------------------------------- ---------------------------------- -----------------------------
  1019.456 $\pm$ 5                   3.5 $\pm$ 1.0                      96.5 $\pm$ 1.0                     29.4 $\pm$ 0.2
  1019.456 $\pm$ 10                  5.6 $\pm$ 0.9                      94.4 $\pm$ 0.9                     35.1 $\pm$ 0.2
  1019.456 $\pm$ 15                  7.9 $\pm$ 0.9                      92.1 $\pm$ 0.9                     37.8 $\pm$ 0.2

  : $\mathcal{S}$-wave and $\mathcal{P}$-wave fractions computed in three $\Kp \Km$ mass ranges around the $\phi(1020)$ peak. Errors are statistical only.

\[tab:table2\]

[$\mathcal{S}$]{}-wave parametrization at the [$K^+K^-$]{} threshold {#sec:sec_pwa_b} -------------------------------------------------------------------- In this section we extract a phenomenological description of the $\mathcal{S}$-wave assuming that it is dominated by the $f_0(980)$ resonance, while the $\mathcal{P}$-wave is described entirely by the $\phi(1020)$ resonance. We also assume that no other contribution is present in this limited region of the Dalitz plot. We therefore perform a simultaneous fit of the three distributions shown in Figs. \[fig:fig5\](a),(b), and (c) using the following model: $$\begin{aligned} \frac{{\rm d} N_{\mathcal{S}^2}}{{\rm d} m_ {K^+K^-}} = & \, |C_{f_0(980)} A_{f_0(980)}|^2\\ \frac{{\rm d} N_{\mathcal{P}^2}}{{\rm d} m_ {K^+K^-}} = & \, |C_\phi A_\phi|^2\\ \phi_{\mathcal{S}\mathcal{P}}(m_{K^+K^-}) = & \, \arg(A_{f_0(980)}e^{i \delta})-\arg(A_\phi) \end{aligned}$$ where $C_\phi$, $C_{f_0(980)}$, and $\delta$ are free parameters and $$A_\phi = \frac{F_r F_D}{m_\phi^2-m^2-im_\phi\Gamma} \times 4 p q \label{eq:amp_phi}$$ is the spin-1 relativistic BW parametrizing the $\phi(1020)$, with $\Gamma$ expressed as: $$\Gamma = \Gamma_r \left(\frac{p}{p_r}\right)^{2J+1} \left(\frac{M_r}{m}\right)F^2_r. \label{eq:gamma_phi}$$ Here $q$ is the momentum of the bachelor $\pi^+$ in the $K^+K^{-}$ rest frame. The parameters in Eqs.
(\[eq:amp\_phi\]) and (\[eq:gamma\_phi\]) are defined in Sec. \[sec:sec\_DP\_method\] below. For $A_{f_0(980)}$ we first tried a coupled-channel BW (Flatté) amplitude [@Flatte:1972rz]. However, we find that this parametrization is insensitive to the coupling to the $\pi\pi$ channel. Therefore we empirically parametrize the $f_0(980)$ with the following function: $$A_{f_0(980)} = \frac{1}{m_0^2-m^2-im_0\Gamma_0\rho_{KK}}$$ where $\rho_{KK}=2p/m$, and obtain the following parameter values: $$\begin{aligned} m_0 = & \ (0.922 \pm 0.003_{\rm stat})\ \gevcc \\ \Gamma_0 = & \ (0.24 \pm 0.08_{\rm stat})\ \gev \end{aligned} \label{eq:f0_val}$$ The errors are statistical only. The fit results are superimposed on the data in Fig. \[fig:fig5\].

  $m_{K^+K^-}$ ($\gevcc$)   $|\mathcal{S}|^2$    $|\mathcal{P}|^2$     $\phi_{\mathcal{S}}$ (deg)
  ------------------------- -------------------- --------------------- ----------------------------
  0.988                     22178 $\pm$ 3120     $-133$ $\pm$ 2283     
  0.992                     18760 $\pm$ 1610     2761 $\pm$ 1313       92 $\pm$ 5
  0.996                     16664 $\pm$ 1264     1043 $\pm$ 971        84 $\pm$ 7
  1.000                     12901 $\pm$ 1058     3209 $\pm$ 882        81 $\pm$ 4
  1.004                     13002 $\pm$ 1029     5901 $\pm$ 915        82 $\pm$ 3
  1.008                     9300 $\pm$ 964       13484 $\pm$ 1020      76 $\pm$ 3
  1.012                     9287 $\pm$ 1117      31615 $\pm$ 1327      80 $\pm$ 2
  1.016                     6829 $\pm$ 1930      157412 $\pm$ 2648     75 $\pm$ 8
  1.020                     11987 $\pm$ 2734     346890 $\pm$ 3794     55 $\pm$ 6
  1.024                     5510 $\pm$ 1513      104892 $\pm$ 2055     86 $\pm$ 5
  1.028                     7565 $\pm$ 952       32239 $\pm$ 1173      75 $\pm$ 2
  1.032                     7596 $\pm$ 768       15899 $\pm$ 861       74 $\pm$ 2
  1.036                     6497 $\pm$ 658       10399 $\pm$ 707       77 $\pm$ 2
  1.040                     5268 $\pm$ 574       7638 $\pm$ 609        72 $\pm$ 3
  1.044                     5467 $\pm$ 540       5474 $\pm$ 540        72 $\pm$ 3
  1.048                     5412 $\pm$ 506       4026 $\pm$ 483        72 $\pm$ 3
  1.052                     5648 $\pm$ 472       2347 $\pm$ 423        71 $\pm$ 3
  1.056                     4288 $\pm$ 442       3056 $\pm$ 421        70 $\pm$ 3
  1.060                     4548 $\pm$ 429       1992 $\pm$ 384        73 $\pm$ 3
  1.064                     4755 $\pm$ 425       1673 $\pm$ 374        70 $\pm$ 4
  1.068                     4508 $\pm$ 393       1074 $\pm$ 334        75 $\pm$ 4
  1.072                     3619 $\pm$ 373       1805 $\pm$ 345        75 $\pm$ 4
  1.076                     4189 $\pm$ 368       840 $\pm$ 312         70 $\pm$ 5
  1.080                     4215 $\pm$ 367       770 $\pm$ 297         71 $\pm$ 5
  1.084                     3508 $\pm$ 345       866 $\pm$ 294         71 $\pm$ 5
  1.088                     3026 $\pm$ 322       929 $\pm$ 285         75 $\pm$ 4
  1.092                     3456 $\pm$ 309       79 $\pm$ 240          37 $\pm$ 90
  1.096                     2903 $\pm$ 300       488 $\pm$ 256         75 $\pm$ 6
  1.100                     2335 $\pm$ 282       885 $\pm$ 248         68 $\pm$ 5
  1.104                     2761 $\pm$ 284       341 $\pm$ 231         57 $\pm$ 10
  1.108                     2293 $\pm$ 273       602 $\pm$ 231         77 $\pm$ 5
  1.112                     1913 $\pm$ 238       269 $\pm$ 186         74 $\pm$ 8
  1.116                     2325 $\pm$ 252       57 $\pm$ 198          
  1.120                     1596 $\pm$ 228       308 $\pm$ 194         78 $\pm$ 7
  1.124                     1707 $\pm$ 224       233 $\pm$ 188         67 $\pm$ 10
  1.128                     1292 $\pm$ 207       270 $\pm$ 176         66 $\pm$ 9
  1.132                     969 $\pm$ 197        586 $\pm$ 172         60 $\pm$ 6
  1.136                     1092 $\pm$ 196       553 $\pm$ 170         67 $\pm$ 6
  1.140                     1180 $\pm$ 193       316 $\pm$ 167         48 $\pm$ 11
  1.144                     1107 $\pm$ 187       354 $\pm$ 170         68 $\pm$ 8
  1.148                     818 $\pm$ 178        521 $\pm$ 164         64 $\pm$ 7

  : $\mathcal{S}$- and $\mathcal{P}$-wave squared amplitudes (in arbitrary units) and $\mathcal{S}$-wave phase. The $\mathcal{S}$-wave phase values corresponding to the masses 0.988 and 1.116 $\gevcc$ are missing because the $\left<Y^0_2 \right>$ distribution (Fig. \[fig:fig4\](c)) goes negative or $|\cos\phi_{\mathcal{SP}}|>1$, so that Eqs. (\[eq:sp2\]) cannot be solved. Quoted uncertainties are statistical only.

\[tab:table3\]

![image](fig6.eps){width="15.3cm"}

In Fig. \[fig:fig5\](c), the $\mathcal{S}$-$\mathcal{P}$ phase difference is plotted twice because of the sign ambiguity associated with the value of $ \phi_{\mathcal{SP}}$ extracted from $\cos \phi_{\mathcal{SP}}$. We can extract the mass-dependent $f_0(980)$ phase by adding the mass-dependent $\phi(1020)$ BW phase to the $\phi_{\mathcal{SP}}$ distributions of Fig. \[fig:fig5\](c). Since the $K^+ K^-$ mass region is significantly above the $f_0(980)$ central mass value of Eq. (\[eq:f0\_val\]), we expect the $\mathcal{S}$-wave phase to vary much more slowly in this region than in the $\phi(1020)$ region. Consequently, we resolve the phase ambiguity of Fig.
\[fig:fig5\](c) by choosing as the physical solution the one which decreases rapidly in the $\phi(1020)$ peak region, since this reflects the rapid forward BW phase motion associated with a narrow resonance. The result is shown in Fig. \[fig:fig5\](d), where we see that the $\mathcal{S}$-wave phase is roughly constant, as would be expected for the tail of a resonance. The slight decrease observed with increasing mass might be due to higher mass contributions to the $\mathcal{S}$-wave amplitude. The values of $|\mathcal{S}|^2$ (arbitrary units) and phase values are reported in Table \[tab:table3\], together with the corresponding values of $|\mathcal{P}|^2$. In Fig. \[fig:fig6\](a) we compare the $\mathcal{S}$-wave profile from this analysis with the $\mathcal{S}$-wave intensity values extracted from Dalitz plot analyses of $\Dz \to \Kzb \Kp \Km$ [@Aubert:2005sm] and $\Dz \to \Kp \Km \piz$ [@Aubert:2007dc]. The four distributions are normalized in the region from threshold up to 1.05 . We observe substantial agreement. As the $a_0(980)$ and $f_0(980)$ mesons couple mainly to the $u \bar u/d \bar d$ and $s \bar s$ systems respectively, the former is favoured in $\Dz \to \Kzb \Kp \Km$ and the latter in $\Ds \to \Kp \Km \pip$. Both resonances can contribute in $\Dz \to \Kp \Km \piz$. We conclude that the $\mathcal{S}$-wave projections in the $K \Kbar$ system for both resonances are consistent in shape. It has been suggested that this feature supports the hypothesis that the $a_0(980)$ and $f_0(980)$ are 4-quark states [@Maiani:2007iw]. We also compare the $\mathcal{S}$-wave profile from this analysis with the $\pip \pim$ $\mathcal{S}$-wave profile extracted from  data in a Dalitz plot analysis of $\Ds \to \pip \pim \pip$ [@:2008tm] (Fig. \[fig:fig6\](b)). The observed agreement supports the argument that only the $f_0(980)$ is present in this limited mass region. 
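For reference, the empirical $f_0(980)$ parametrization of Sec. \[sec:sec\_pwa\_b\] can be evaluated numerically with the fitted central values of $m_0$ and $\Gamma_0$; the mass grid below is arbitrary, and the charged-kaon mass is taken at its PDG value.

```python
import numpy as np

M_K = 0.493677  # charged-kaon mass, GeV/c^2 (PDG value)

def a_f0(m, m0=0.922, gamma0=0.24):
    """Empirical f0(980) amplitude A = 1/(m0^2 - m^2 - i m0 Gamma0 rho_KK),
    with rho_KK = 2p/m and p the kaon momentum in the K+K- rest frame.
    Defaults are the fitted central values quoted in the text (GeV units)."""
    m = np.asarray(m, float)
    p = np.sqrt(np.maximum(m**2 / 4.0 - M_K**2, 0.0))
    rho = 2.0 * p / m
    return 1.0 / (m0**2 - m**2 - 1j * m0 * gamma0 * rho)

# With these parameters |A|^2 falls monotonically above K+K- threshold,
# consistent with the observed S-wave enhancement at threshold.
m_grid = np.linspace(0.99, 1.15, 5)
intensity = np.abs(a_f0(m_grid)) ** 2
```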
Study of the [$K^-\pi^+$]{} [$\mathcal{S}$]{}-wave at threshold {#sec:kpi_swave} --------------------------------------------------------------- We perform a model-independent analysis, similar to that described in the previous sections, to extract the $K \pi$ $\mathcal{S}$-wave behavior as a function of mass in the threshold region up to $1.1 \gevcc$. Figure \[fig:fig7\] shows the $\Km \pip$ mass spectrum in this region, weighted by $Y^0_k(\cos\theta)=\sqrt{(2k+1)/4\pi} P_k(\cos\theta)$, with $k=0, 1$ and $2$, corrected for efficiency, phase space, and with background from the $\Ds$ sidebands subtracted; $\theta$ is the angle between the $K^-$ direction in the $K^- \pi^+$ rest frame and the prior direction of the $K^- \pi^+$ system in the $D^+_s$ rest frame. We observe that $\left<Y^0_0 \right>$ and $\left<Y^0_2 \right>$ show strong $\Kstarzbm$ resonance signals, and that the $\left<Y^0_1 \right>$ moment shows evidence for $\mathcal{S}$-$\mathcal{P}$ interference. ![image](fig7.eps){width="\textwidth"} We use Eqs. (\[eq:sp2\]) to solve for $|\mathcal{S}|$ and $|\mathcal{P}|$. The result for the $\mathcal{S}$-wave is shown in Fig. \[fig:fig7\](d). We observe a small $\mathcal{S}$-wave contribution which does not allow us to measure the expected phase motion relative to that of the $\Kstarzbm$ resonance. Indeed, the fact that $|\mathcal{S}|^2$ goes negative indicates that a model including only $\mathcal{S}$- and $\mathcal{P}$-wave components is not sufficient to describe the $K^-\pi^+$ system. Dalitz Plot formalism {#sec:sec_DP_method} ===================== An unbinned maximum likelihood fit is performed in which the distribution of events in the Dalitz plot is used to determine the relative amplitudes and phases of intermediate resonant and nonresonant states. 
The likelihood function is written as: $$\begin{aligned} \mathcal{L} = \prod_{n=1}^N&\bigg[&f_{\rm sig} \cdot \eta(x,y)\frac{\sum_{i,j} c_i c_j^* A_i(x,y) A_j^*(x,y)}{\sum_{i,j} c_i c_j^* I_{A_i A_j^*}} + \nonumber\\ & &(1-f_{\rm sig})\frac{\sum_{i} k_iB_i(x,y)}{\sum_{i} k_iI_{B_i}}\bigg]\end{aligned}$$ where: - $N$ is the number of events in the signal region; - $x=m^2(K^+ K^-)$ and $y=m^2(K^- \pi^+)$ - $f_{\rm sig}$ is the fraction of signal as a function of the $\Kp \Km \pip$ invariant mass, obtained from the fit to the $\Kp \Km \pip$ mass spectrum (Fig. \[fig:fig2\](a)); - $\eta(x,y)$ is the efficiency, parametrized by a $3^{\rm rd}$ order polynomial (Sec. \[sec:sec\_eff\]); - the $A_i(x,y)$ describe the complex signal amplitude contributions; - the $B_i(x,y)$ describe the background probability density function contributions; - $k_i$ is the magnitude of the $i$-th component for the background. The $k_i$ parameters are obtained by fitting the sideband regions; - $I_{A_i A_j^*}=\int A_i (x,y)A_j^*(x,y) \eta(x,y) {\rm d}x{\rm d}y$ and $I_{B_i}~=~\int B_i(x,y) {\rm d}x{\rm d}y$ are normalization integrals. Numerical integration is performed by means of Gaussian quadrature [@cern]; - $c_i$ is the complex amplitude of the $i$-th component for the signal. The $c_i$ parameters are allowed to vary during the fit process. The phase of each amplitude (i.e. the phase of the corresponding $c_i$) is measured with respect to the $\Kp \Kstarzbm$ amplitude. Following the method described in Ref. 
[@Asner:2003gh], each amplitude $A_i(x,y)$ is represented by the product of a complex BW and a real angular term $T$ depending on the solid angle $\Omega$: $$A(x,y) = BW(m) \times T (\Omega).$$ For a $D_s$ meson decaying into three pseudo-scalar mesons via an intermediate resonance $r$ ($D_s \to r C, r \to AB$), $BW(M_{AB})$ is written as a relativistic BW: $$BW(M_{AB}) = \frac{F_r F_D}{M_r^2 - M_{AB}^2 - i \Gamma_{AB}M_r}$$ where $\Gamma_{AB}$ is a function of the invariant mass of system $AB$ ($M_{AB}$), the momentum $p_{AB}$ of either daughter in the $AB$ rest frame, the spin $J$ of the resonance and the mass $M_r$ and the width $\Gamma_r$ of the resonance. The explicit expression is: $$\Gamma_{AB} = \Gamma_r \left(\frac{p_{AB}}{p_r}\right)^{2J+1} \left(\frac{M_r}{M_{AB}}\right)F^2_r \label{eq:gamma}$$ $$p_{AB} = \frac{\sqrt{\left(M_{AB}^2-M_A^2-M_B^2\right)^2-4M_A^2M_B^2}}{2M_{AB}}. \label{eq:pAB}$$ The form factors $F_r$ and $F_D$ attempt to model the underlying quark structure of the parent particle and the intermediate resonances. We use the Blatt-Weisskopf penetration factors [@blatt] (Table \[tab:table4\]), that depend on a single parameter $R$ representing the meson “radius”. We assume $R_{\Ds}=3 \gev^{-1}$ for the $D_s$ and $R_r=1.5 \gev^{-1}$ for the intermediate resonances; $q_{AB}$ is the momentum of the bachelor $C$ in the $AB$ rest frame: $$q_{AB} = \frac{\sqrt{\left(M_{D_s}^2+M_C^2-M_{AB}^2\right)^2-4M_{D_s}^2M_C^2}}{2M_{AB}}. \label{eq:qAB}$$ $p_r$ and $q_r$ are the values of $p_{AB}$ and $q_{AB}$ when $m_{AB}=m_r$. 
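The relativistic BW with its Blatt-Weisskopf factor can be assembled directly from Eqs. (\[eq:gamma\]) and (\[eq:pAB\]). The sketch below is a simplified spin-1 case: it omits the $D_s$ form factor $F_D$ and uses $\Kstarzbm \to \Km\pip$ parameters close to the values fitted later in the text.

```python
import numpy as np

M_K, M_PI = 0.493677, 0.13957   # GeV/c^2 (PDG values)

def breakup_p(m_ab, m_a, m_b):
    """Daughter momentum in the AB rest frame, Eq. (pAB)."""
    val = (m_ab**2 - m_a**2 - m_b**2) ** 2 - 4 * m_a**2 * m_b**2
    return np.sqrt(np.maximum(val, 0.0)) / (2 * m_ab)

def bw_spin1(m_ab, m_r, gamma_r, m_a, m_b, r_res=1.5):
    """Relativistic spin-1 BW with the resonance Blatt-Weisskopf factor
    F_r (Table 4) and mass-dependent width Gamma_AB (Eq. gamma);
    the D_s form factor F_D is omitted here for brevity."""
    p = breakup_p(m_ab, m_a, m_b)
    p_r = breakup_p(m_r, m_a, m_b)
    f_r = np.sqrt(1 + (r_res * p_r) ** 2) / np.sqrt(1 + (r_res * p) ** 2)
    gamma = gamma_r * (p / p_r) ** 3 * (m_r / m_ab) * f_r**2
    return f_r / (m_r**2 - m_ab**2 - 1j * gamma * m_r)

# K*(892)0 -> K- pi+ with mass 0.8956 GeV and width 45.1 MeV:
m = np.linspace(0.75, 1.05, 601)
amp2 = np.abs(bw_spin1(m, 0.8956, 0.0451, M_K, M_PI)) ** 2
m_peak = m[np.argmax(amp2)]
```

The mass-dependent width shifts the peak of $|BW|^2$ slightly below $M_r$, a small effect for a resonance this narrow.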
![image](fig8.eps){width="\textwidth"} Spin $F_r$ $F_D$ ------ ------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------- 0 $1$ $1$ 1 $\frac{\sqrt{1+(R_r p_r)^2}}{\sqrt{1+(R_r p_{AB})^2}}$ $\frac{\sqrt{1+(R_{\Ds} q_r)^2}}{\sqrt{1+(R_{\Ds} q_{AB})^2}}$ 2 $\frac{\sqrt{9+3(R_r p_r)^2+(R_r p_r)^4}}{\sqrt{9+3(R_r p_{AB})^2+(R_r p_{AB})^4}}$ $\frac{\sqrt{9+3(R_{\Ds} q_r)^2+(R_{\Ds} q_r)^4}}{\sqrt{9+3(R_{\Ds} q_{AB})^2+(R_{\Ds} q_{AB})^4}}$ : Summary of the Blatt-Weisskopf penetration form factors. $q_r$ and $p_r$ are the momenta of the decay particles in the parent rest frame. \[tab:table4\] The angular terms $T (\Omega)$ are described by the following expressions: $$\begin{aligned} \textrm{Spin 0}: T(\Omega) = & 1\\ \textrm{Spin 1}: T(\Omega) = & M^2_{BC}-M^2_{AC} \\ & -\frac{(M^2_{D_s}-M^2_C)(M^2_B-M^2_A)}{M_{AB}^2} \\ \textrm{Spin 2}: T(\Omega) = & a_1^2 - \frac{1}{3}a_2 a_3 \end{aligned}$$ where: $$\begin{aligned} a_1 = & M^2_{BC}-M^2_{AC}+\frac{(M^2_{D_s}-M^2_C)(M^2_A-M^2_B)}{M_{AB}^2}\\ a_2 = & M^2_{AB}-2M^2_{D_s}-2M^2_C+\frac{(M^2_{D_s}-M^2_C)^2}{M_{AB}^2} \\ a_3 = & M^2_{AB}-2M^2_A-2M^2_B+\frac{(M^2_A-M^2_B)^2}{M^2_{AB}}. \end{aligned}$$ Resonances are included in sequence, starting from those immediately visible in the Dalitz plot projections. All allowed resonances from Ref. [@Nakamura:2010zzi] have been tried, and we reject those with amplitudes consistent with zero. The goodness of fit is tested by an adaptive binning $\chi^2$. The efficiency-corrected fractional contribution due to the resonant or nonresonant contribution $i$ is defined as follows: $$f_i = \frac {|c_i|^2 \int |A_i(x,y)|^2 {\rm d}x {\rm d}y} {\int |\sum_j c_j A_j(x,y)|^2 {\rm d}x {\rm d}y}.$$ The $f_i$ do not necessarily add to 1 because of interference effects. 
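The fit-fraction definition above can be evaluated numerically once the amplitudes are known on a grid of points. A toy sketch (one-dimensional, with invented Gaussian "amplitudes") showing how interference lets the fractions fail to add to 1:

```python
import numpy as np

def fit_fractions(coeffs, amps):
    """Fit fractions f_i = |c_i|^2 Int |A_i|^2 / Int |sum_j c_j A_j|^2,
    with the integrals approximated by sums over a uniform grid of points
    (amps: list of arrays A_i evaluated at those points)."""
    amps = [np.asarray(a) for a in amps]
    total = sum(c * a for c, a in zip(coeffs, amps))
    den = np.sum(np.abs(total) ** 2)
    return [abs(c) ** 2 * np.sum(np.abs(a) ** 2) / den
            for c, a in zip(coeffs, amps)]

# Two strongly overlapping toy amplitudes with a relative phase of pi:
# destructive interference depletes the total intensity, so the
# individual fractions sum to well over 100%.
x = np.linspace(0.0, 1.0, 1001)
a1 = np.exp(-0.5 * ((x - 0.45) / 0.1) ** 2).astype(complex)
a2 = np.exp(-0.5 * ((x - 0.55) / 0.1) ** 2).astype(complex)
fracs = fit_fractions([1.0, np.exp(1j * np.pi)], [a1, a2])
```

The same mechanism is behind the 110.2% sum of fractions reported for the best fit below.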
We also define the interference fit fraction between the resonant or nonresonant contributions $k$ and $l$ as: $$f_{kl} = \frac {2 \int \Re[c_kc_l^* A_k(x,y) A_l^*(x,y)] {\rm{d}}x {\rm{d}}y} {\int |\sum_j c_j A_j(x,y)|^2 {\rm d}x {\rm d}y}.$$ Note that $f_{kk}=2f_k$. The error on each $f_i$ and $f_{kl}$ is evaluated by propagating the full covariance matrix obtained from the fit. Background parametrization -------------------------- To parametrize the $\Ds$ background, we use the $\Ds$ sideband regions. An unbinned maximum likelihood fit is performed using the function: $$\mathcal{L} = \prod_{n = 1}^{N_B} \left[ \frac{\sum_{i} k_iB_i}{\sum_{i} k_i I_{B_i}} \right]$$ where $N_B$ is the number of sideband events, the $k_i$ parameters are real coefficients floated in the fit, and the $B_i$ terms represent Breit-Wigner functions that are summed incoherently. The Dalitz plots for the two sidebands show the presence of $\phi(1020)$ and $\Kstarzbm$ (Fig. \[fig:fig8\]). There are further structures, not clearly associated with known resonances, due to reflections of other final states. Since these do not have definite spin, we parametrize the background using an incoherent sum of $\mathcal{S}$-wave Breit-Wigner shapes. Dalitz plot analysis of [$\protect\Ds \to \Kp \Km \pip$]{} {#sec:sec_DP} ======================== Using the method described in Sec. \[sec:sec\_DP\_method\], we perform an unbinned maximum likelihood fit to the $\Ds \to \Kp \Km \pip$ decay channel. The fit is performed in steps, by adding resonances one after the other. Most of the masses and widths of the resonances are taken from Ref. [@Nakamura:2010zzi]. For the $f_0(980)$ we use the phenomenological model described in Sec. \[sec:sec\_pwa\_b\]. The $\Kstarzbm$ amplitude is chosen as the reference amplitude.
  Decay mode                Decay fraction (%)         Amplitude                   Phase (rad)
  ------------------------- -------------------------- --------------------------- ---------------------------
  $\Kstarzbm K^+$           $47.9 \pm 0.5 \pm 0.5$                                 
  $\phi(1020) \, \pi^+$     $41.4 \pm 0.8 \pm 0.5$     $1.15 \pm 0.01 \pm 0.26$    $2.89 \pm 0.02 \pm 0.04$
  $f_0(980) \, \pi^+$       $16.4 \pm 0.7 \pm 2.0$     $2.67 \pm 0.05 \pm 0.20$    $1.56 \pm 0.02 \pm 0.09$
  $\Kbar^*_0(1430)^0 K^+$   $2.4 \pm 0.3 \pm 1.0$      $1.14 \pm 0.06 \pm 0.36$    $2.55 \pm 0.05 \pm 0.22$
  $f_0(1710) \, \pi^+$      $1.1 \pm 0.1 \pm 0.1$      $0.65 \pm 0.02 \pm 0.06$    $1.36 \pm 0.05 \pm 0.20$
  $f_0(1370) \, \pi^+$      $1.1 \pm 0.1 \pm 0.2$      $0.46 \pm 0.03 \pm 0.09$    $-0.45 \pm 0.11 \pm 0.52$
  Sum                       $110.2 \pm 0.6 \pm 2.0$                                
  $\chi^2/NDF$                                                                     

\[tab:table5\]

The decay fractions, amplitudes, and relative phase values for the best fit obtained are summarized in Table \[tab:table5\], where the first error is statistical and the second is systematic. The interference fractions are quoted in Table \[tab:table6\], where the errors are statistical only. We observe the following features. - The decay is dominated by the $\Kstarzbm \Kp$ and $\phi(1020) \pi^+$ amplitudes. - The fit quality is substantially improved by leaving the $\Kstarzbm$ parameters free in the fit.
The fitted parameters are: $$\begin{aligned} m_{\Kstarzbm} = & \, (895.6 \pm 0.2_{\rm stat} \pm 0.3_{\rm sys}) \mevcc \\ \Gamma_{\Kstarzbm} = & \, (45.1 \pm 0.4_{\rm stat} \pm 0.4_{\rm sys}) \mev \label{eq:m_g_892} \end{aligned}$$

![image](fig9.eps){width="14.1cm"}

Interference fractions $f_{kl}$ (%), quoted row by row in the same decay-mode order as Table \[tab:table5\] (the first entry of each row is the corresponding fit fraction):

- $\Kstarzbm K^+$: 47.9 $\pm$ 0.5; -4.36 $\pm$ 0.03; -2.4 $\pm$ 0.2; -0.06 $\pm$ 0.03; 0.08 $\pm$ 0.08
- $\phi(1020) \, \pi^+$: 41.4 $\pm$ 0.8; -0.7 $\pm$ 0.2
- $f_0(980) \, \pi^+$: 16.4 $\pm$ 0.7; 4.1 $\pm$ 0.6; -3.1 $\pm$ 0.2; -4.5 $\pm$ 0.3
- $\Kbar^*_0(1430)^0K^+$: 2.4 $\pm$ 0.3; 0.48 $\pm$ 0.08; -0.7 $\pm$ 0.1
- $f_0(1710) \, \pi^+$: 1.1 $\pm$ 0.1; 0.86 $\pm$ 0.06
- $f_0(1370) \, \pi^+$: 1.1 $\pm$ 0.1

\[tab:table6\]

![image](fig10.eps){height="9.9cm"} ![image](fig11.eps){height="9.9cm"}

We notice that the width is about 3 MeV lower than that in Ref. [@Nakamura:2010zzi]. However, this measurement is consistent with results from other Dalitz plot analyses [@:2009tr].

- The $f_0(1370)$ contribution is also left free in the fit, and we obtain the following parameter values: $$\begin{aligned} m_{f_0(1370)}=& \, (1.22 \pm 0.01_{\rm stat} \pm 0.04_{\rm sys}) \gevcc \\ \Gamma_{f_0(1370)}=& \, (0.21 \pm 0.01_{\rm stat} \pm 0.03_{\rm sys}) \gev \end{aligned}$$ These values are within the broad range of values measured by other experiments [@Nakamura:2010zzi].

- A nonresonant contribution, represented by a constant complex amplitude, was included in the fit function. However, this contribution was found to be consistent with zero, and therefore is excluded from the final fit function.

- In a similar way, contributions from the $K^*_1(1410)$, $f_0(1500)$, $f_2(1270)$, and $f_2'(1525)$ are found to be consistent with zero.
- The replacement of the $K^*_0(1430)$ by the LASS parametrization [@Aston:1987ir] of the entire $K \pi$ $\mathcal{S}$-wave does not improve the fit quality.

- The fit does not require any contribution from the $\kappa(800)$ [@Aitala:2002kr].

The results of the best fit ($\chi^2/NDF=2843/(2305-14)=1.24$) are superimposed on the Dalitz plot projections in Fig. \[fig:fig9\]. Other recent high statistics charm Dalitz plot analyses at  [@delAmoSanchez:2010xz] have shown that a significant contribution to the $\chi^2/NDF$ can arise from imperfections in modelling experimental effects. The normalized fit residuals shown under each distribution (Fig. \[fig:fig9\]) are given by ${\rm Pull}=(N_{\rm data} - N_{\rm fit})/\sqrt{N_{\rm data}}$. The data are well reproduced in all the projections. We observe some disagreement in the $K^-\pi^+$ projection below 0.5 [${\mathrm{\,Ge\kern -0.1em V^2\!/}c^4}$]{}. It may be due to a poor parametrization of the background in this limited mass region. A systematic uncertainty takes such effects into account (Sec. \[sec:syst\]). The absence of a $K \pi$ $\mathcal{S}$-wave amplitude in the $\Km \pip$ low-mass region may also be a source of this disagreement. Another way to test the fit quality is to project the fit results onto the $\left<Y^0_k \right>$ moments, shown in Fig. \[fig:fig10\] for the $\Kp \Km$ system and Fig. \[fig:fig11\] for the $\Km \pip$ system. We observe that the fit results reproduce the data projections for moments up to $k=7$, indicating that the fit describes the details of the Dalitz plot structure very well. The $\Km \pip$ $\left<Y^0_3 \right>$ and $\left<Y^0_5 \right>$ moments show activity in the $\Kstarzbm$ region, which the Dalitz plot analysis relates to interference between the $\Kstarzbm K^+$ and $f_0(1710)\pi^+$ decay amplitudes. This seems to be a reasonable explanation for the failure of the model-independent $K^-\pi^+$ analysis (Sec.
\[sec:kpi\_swave\]), although the fit still does not provide a good description of the $\left<Y^0_3 \right>$ and $\left<Y^0_5 \right>$ moments in this mass region. We check the consistency between the Dalitz plot results and those of the analysis described in Sec. \[sec:sec\_pwa\_b\]. We compute the amplitude and phase of the $f_0(980)$/$\mathcal{S}$-wave relative to the $\phi(1020)$/$\mathcal{P}$-wave and find good agreement.

| Decay fraction (%) | This analysis | E687 | CLEO-c |
|---|---|---|---|
| $\Kstarzbm K^+$ | $47.9 \, \pm \, 0.5 \, \pm \, 0.5$ | $47.8 \, \pm \, 4.6 \, \pm \, 4.0$ | $47.4 \, \pm \, 1.5 \, \pm \, 0.4$ |
| $\phi(1020) \, \pi^+$ | $41.4 \, \pm \, 0.8 \, \pm \, 0.5$ | $39.6 \, \pm \, 3.3 \, \pm \, 4.7$ | $42.2 \, \pm \, 1.6 \, \pm \, 0.3$ |
| $f_0(980) \, \pi^+$ | $16.4 \, \pm \, 0.7 \, \pm \, 2.0$ | $11.0 \, \pm \, 3.5 \, \pm \, 2.6$ | $28.2 \, \pm \, 1.9 \, \pm \, 1.8$ |
| $\Kbar^*_0(1430)^0 K^+$ | $2.4 \, \pm \, 0.3 \, \pm \, 1.0$ | $9.3 \, \pm \, 3.2 \, \pm \, 3.2$ | $3.9 \, \pm \, 0.5 \, \pm \, 0.5$ |
| $f_0(1710) \, \pi^+$ | $1.1 \, \pm \, 0.1 \, \pm \, 0.1$ | $3.4 \, \pm \, 2.3 \, \pm \, 3.5$ | $3.4 \, \pm \, 0.5 \, \pm \, 0.3$ |
| $f_0(1370) \, \pi^+$ | $1.1 \, \pm \, 0.1 \, \pm \, 0.2$ | | $4.3 \, \pm \, 0.6 \, \pm \, 0.5$ |
| Sum | $110.2 \, \pm \, 0.6 \, \pm \, 2.0$ | | $129.5 \, \pm \, 4.4 \, \pm \, 2.0$ |
| $\chi^2/NDF$ | | | |
| Events | | | |

\[tab:tablex\]

Systematic errors {#sec:syst}
-----------------

Systematic errors given in Table \[tab:table5\] and in other quoted results take into account:

- Variation of the $R_r$ and $R_{\Ds}$ constants in the Blatt-Weisskopf penetration factors within the range \[0-3\] GeV$^{-1}$ and \[1-5\] GeV$^{-1}$, respectively.
- Variation of fixed resonance masses and widths within the $\pm 1\sigma$ error range quoted in Ref. [@Nakamura:2010zzi].

- Variation of the efficiency parameters within their $\pm 1\sigma$ uncertainties.

- Variation of the purity parameters within their $\pm 1\sigma$ uncertainties.

- Fits performed with the use of the lower/upper sideband only to parametrize the background.

- Results from fits with alternative sets of signal amplitude contributions that give equivalent Dalitz plot descriptions and similar sums of fractions.

- Fits performed on a sample of $100,000$ events selected by applying a looser likelihood-ratio criterion but selecting a narrower ($\pm 1 \sigma_{\Ds}$) signal region. For this sample the purity is roughly the same as for the nominal sample ($\simeq 94.9\%$).

![image](fig12.eps){width="\textwidth"}

Comparison between Dalitz plot analyses of [$\Ds \to \Kp \Km \pip$]{}
---------------------------------------------------------------------

Table \[tab:tablex\] shows a comparison of the Dalitz plot fit fractions, shown in Table \[tab:table5\], with the results of the analyses performed by the E687 [@Frabetti:1995sg] and CLEO [@:2009tr] collaborations. The E687 model is improved by adding a $f_0(1370)$ amplitude and leaving the $\Kstarzbm$ parameters free in the fit. We find that the $\Kstarzbm$ width (Eq. \[eq:m\_g\_892\]) is about 3 MeV lower than that in Ref. [@Nakamura:2010zzi]. This result is consistent with the width measured by the CLEO-c collaboration ($\Gamma_{\Kstarzbm} = 45.7 \pm 1.1 \mev$). What is new in this analysis is the parametrization of the $\Kp\Km$ $\mathcal{S}$-wave at the $\Kp\Km$ threshold. While E687 and CLEO-c used a coupled-channel BW (Flatté) amplitude [@Flatte:1972rz] to parametrize the $f_0(980)$ resonance, we use the model-independent parametrization described in Section \[sec:sec\_pwa\_b\]. This approach overcomes the uncertainties that affect the coupling constants $g_{\pi\pi}$ and $g_{KK}$ of the $f_0(980)$, and any argument about the presence of an $a_0(980)$ meson decaying to $\Kp\Km$.
The model described in this paper returns a more accurate description of the event distribution on the Dalitz plot ($\chi^2/\nu=1.2$) and smaller $f_0(980)$ and total fit fractions with respect to the CLEO-c result. In addition, the goodness of fit in this analysis is tested by an adaptive binning $\chi^2$, a tool better suited to cases where most of the events are gathered in a limited region of the Dalitz plot. Finally, we observe that the phase of the $\phi(1020)$ amplitude ($166^\circ \pm 1^\circ \pm 2^\circ$) is consistent with the E687 result ($178^\circ\pm20^\circ\pm24^\circ$) but is roughly shifted by $180^\circ$ with respect to the CLEO-c result ($-8^\circ \pm 4^\circ \pm 4^\circ$).

Singly-Cabibbo-Suppressed [$\protect\Ds \to \Kp \Km \Kp$]{} and Doubly-Cabibbo-Suppressed [$\protect\Ds \to \Kp \Kp \pim$]{} decays {#sec:sec_BR}
===================================================================================================================================

In this section we measure the branching ratios of the SCS decay channel (\[eq:eq2\]) and of the DCS decay channel (\[eq:eq3\]) with respect to the CF decay channel (\[eq:eq1\]). The two channels are reconstructed using the method described in Sec. \[sec:sec\_ev\_sel\], with some differences related to the particle identification of the $\Ds$ daughters. For channel (\[eq:eq2\]) we require the identification of three charged kaons, while for channel (\[eq:eq3\]) we require the identification of one pion and two kaons having the same charge. We use both the $\Dss$ identification and the likelihood-ratio to enhance signal with respect to background, as described in Sec. \[sec:sec\_ev\_sel\].
The ratios of branching fractions are computed as: $$\frac{\BR(\Ds \to \Kp \Km \Kp)}{\BR(\Ds \to \Kp \Km \pip)} \kern-0.3em = \kern-0.3em \frac{N_{\Ds \to \Kp \Km \Kp}}{N_{\Ds \to \Kp \Km \pip}}\kern-0.3em \times \kern-0.3em \frac{\epsilon_{\Ds \to \Kp \Km \pip}}{\epsilon_{\Ds \to \Kp \Km \Kp}}$$ and $$\frac{ \BR(\Ds \to K^+ K^+ \pim)}{ \BR(\Ds \to K^+ K^- \pi^+)} \kern-0.3em = \kern-0.3em \frac{N_{\Ds \to K^+ K^+ \pim}}{N_{\Ds \to K^+ K^- \pi^+}} \times \frac{\epsilon_{\Ds \to K^+ K^- \pi^+}}{\epsilon_{\Ds \to K^+ K^+ \pim}}.$$ Here the $N$ values represent the number of signal events for each channel, and the $\epsilon$ values indicate the corresponding detection efficiencies. To compute these efficiencies, we generate signal MC samples having uniform distributions across the Dalitz plots. These MC events are reconstructed as for data events, and the same particle-identification criteria are applied. Each track is weighted by the data-MC discrepancy in particle identification efficiency obtained independently from high statistics control samples. A systematic uncertainty is assigned to the use of this weight. The generated and reconstructed Dalitz plots are divided into $50 \times 50$ cells and the Dalitz plot efficiency is obtained as the ratio of reconstructed to generated content of each cell. In this way the efficiency for each event depends on its location on the Dalitz plot. By varying the likelihood-ratio criterion, the sensitivity $S$ of $\Ds \to K^+ K^- K^+$ is maximized. The sensitivity is defined as $S = N_s/\sqrt{N_s + N_b}$, where $s$ and $b$ indicate signal and background. To reduce systematic uncertainties, we then apply the same likelihood-ratio criterion to the $\Ds \to K^+ K^- \pi^+$ decay. We then repeat this procedure to find an independently optimized selection criterion for the $\Ds \to K^+ K^+ \pim$ to $\Ds \to K^+ K^- \pi^+$ ratio. 
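The per-cell efficiency correction and the sensitivity optimisation described above can be sketched as follows; the efficiency maps, event samples, and cut-efficiency curves below are invented placeholders, not the experiment's actual values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 50x50 per-cell Dalitz-plot efficiency maps (reconstructed/generated)
eff_sig = rng.uniform(0.2, 0.4, size=(50, 50))   # rare (signal) mode
eff_ref = rng.uniform(0.3, 0.5, size=(50, 50))   # reference mode

# Each event is efficiency-corrected according to its own Dalitz-plot cell,
# so the correction depends on the event's location on the plot.
cells_sig = rng.integers(0, 50, size=(300, 2))      # invented event locations
cells_ref = rng.integers(0, 50, size=(100_000, 2))
n_sig = np.sum(1.0 / eff_sig[cells_sig[:, 0], cells_sig[:, 1]])
n_ref = np.sum(1.0 / eff_ref[cells_ref[:, 0], cells_ref[:, 1]])
br_ratio = n_sig / n_ref

# Choose the likelihood-ratio cut that maximises S = Ns / sqrt(Ns + Nb);
# the signal/background cut-efficiency curves are invented placeholders.
cuts = np.linspace(0.0, 0.9, 10)
ns = 300.0 * (1.0 - cuts)          # surviving signal vs. cut value
nb = 500.0 * (1.0 - cuts) ** 3     # background falls faster for harder cuts
S = ns / np.sqrt(ns + nb)
best_cut = cuts[np.argmax(S)]
print(br_ratio, best_cut)
```

With these invented curves the optimum is an intermediate cut: tightening further removes more signal than the background rejection gains back.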
The branching ratio measurements are validated using a fully inclusive $e^+e^-\to c \bar c$ MC simulation incorporating all known charmed meson decay modes. The MC events are subjected to the same reconstruction, event selection, and analysis procedures as for the data. The results are found to be consistent, within statistical uncertainty, with the branching fraction values used in the MC generation.

Study of [$\protect\Ds \to K^+ K^- K^+$]{} {#sec:sec_kkk}
------------------------------------------

The resulting $K^+ K^- K^+$ mass spectrum is shown in Fig. \[fig:fig12\](a). The $\Ds$ yield is obtained by fitting the mass spectrum using a Gaussian function for the signal, and a linear function for the background. The resulting yield is reported in Table \[tab:table1\]. The systematic uncertainties are summarized in Table \[tab:table7\] and are evaluated as follows:

- The effect of MC statistics is evaluated by randomizing each efficiency cell on the Dalitz plot according to its statistical uncertainty.

- The selection made on the $\Dss$ candidate $\Delta m$ is varied to $\pm$2.5$\sigma_{\Dss}$ and $\pm$1.5$\sigma_{\Dss}$.

- For particle identification we make use of high statistics control samples to assign 1% uncertainty to each kaon and 0.5% to each pion.

- The effect of the likelihood-ratio criterion is studied by measuring the branching ratio for different choices.

| Uncertainty | $\frac{ \BR(\Ds \to K^+ K^- K^+)}{ \BR(\Ds \to K^+ K^- \pi^+)}$ |
|---|---|
| MC statistics | 2.6 % |
| $\Delta m$ | 0.3 % |
| Likelihood-ratio | 3.5 % |
| PID | 1.5 % |
| Total | 4.6 % |

: Summary of systematic uncertainties on the measurement of the $\Ds \to K^+ K^- K^+$ branching ratio.
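The first of the systematic checks above (the MC-statistics term) can be propagated with a simple toy resampling of the efficiency map; the map, its per-cell errors, and the event sample below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

eff = rng.uniform(0.2, 0.4, size=(50, 50))     # invented efficiency map
eff_err = 0.02 * np.ones_like(eff)             # invented per-cell stat. error
cells = rng.integers(0, 50, size=(1000, 2))    # Dalitz-plot cell per event

def corrected_yield(eff_map):
    """Efficiency-corrected yield with per-event (per-cell) weights."""
    return np.sum(1.0 / eff_map[cells[:, 0], cells[:, 1]])

# Randomize every efficiency cell within its statistical uncertainty many
# times; the spread of the corrected yield gives the systematic uncertainty.
trials = [corrected_yield(np.clip(eff + rng.normal(0.0, 1.0, eff.shape) * eff_err,
                                  1e-3, None))
          for _ in range(200)]
rel_syst = np.std(trials) / corrected_yield(eff)
print(f"relative MC-statistics systematic: {100.0 * rel_syst:.2f}%")
```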
\[tab:table7\] We measure the following branching ratio: $$\frac{ \BR(\Ds \to K^+ K^- K^+)}{ \BR(\Ds \to K^+ K^- \pi^+)} = (4.0 \pm 0.3_{\rm stat} \pm 0.2_{\rm syst}) \times 10^{-3}.$$

![image](fig13.eps){width="95.10000%"}

A Dalitz plot analysis in the presence of a high level of background is difficult; therefore we can only extract some empirical information about the decay. Since there are two identical kaons in the final state, the Dalitz plot is symmetrized by plotting two combinations per event ($[m^2(K^-K^+_1), m^2(K^-K^+_2)]$ and $[m^2(K^-K^+_2), m^2(K^-K^+_1)]$). The symmetrized Dalitz plot in the $\Ds \to K^+ K^- K^+$ signal region, corrected for efficiency and background-subtracted, is shown in Fig. \[fig:fig12\](b). It shows two bands due to the $\phi(1020)$ and no other structure, indicating a large contribution via $\Ds \to \phi(1020)K^+$. To test the possible presence of the $f_0(980)$, we plot, in Fig. \[fig:fig12\](d), the distribution of the $\left<Y^0_1\right>$ moment; $\theta$ is the angle between the $K^+$ direction in the $K^+ K^-$ rest frame and the direction of the $K^+ K^-$ system in the $D^+_s$ rest frame. We observe the mass dependence characteristic of interference between $\mathcal{S}$- and $\mathcal{P}$-wave amplitudes, and conclude that there is a contribution from the $\Ds \to f_0(980)K^+$ decay, although its branching fraction cannot be determined in the present analysis. An estimate of the $\phi(1020)K^+$ fraction can be obtained from a fit to the $K^+K^-$ mass distribution (Fig. \[fig:fig12\](c)). The mass spectrum is fitted using a relativistic BW for the $\phi(1020)$ signal, and a second-order polynomial for the background.
We obtain: $$\begin{aligned} \frac{\BR(\Ds \to \phi K^+)\cdot\BR(\phi \to K^+ K^-)}{\BR(\Ds \to K^+ K^- K^+ )} & = & \nonumber \\ 0.41 \pm 0.08\kern-0.6em&_{\rm stat}&\kern-0.5em \pm 0.03_{\rm syst}.\end{aligned}$$ The systematic uncertainty includes the contributions due to the $\Delta m$ and likelihood-ratio criteria, the fit model, and the background parametrization.

Study of [$\Ds \to K^+K^+\pi^-$]{}
----------------------------------

Figure \[fig:fig13\](a) shows the $K^+K^+\pi^-$ mass spectrum. A fit with a Gaussian signal function and a linear background function gives the yield presented in Table \[tab:table1\]. To minimize systematic uncertainty, we apply the same likelihood-ratio criteria to the $K^+ K^+ \pi^-$ and $K^+ K^- \pi^+$ final states, and correct for the efficiency evaluated on the Dalitz plot. The resulting branching ratio is: $$\frac{ \BR(\Ds \to K^+ K^+ \pi^-)}{ \BR(\Ds \to K^+ K^- \pi^+)} = (2.3 \pm 0.3_{\rm stat} \pm 0.2_{\rm syst}) \times 10^{-3}.$$ This value is in good agreement with the Belle measurement: $\frac{ \BR(\Ds \to K^+ K^+ \pi^-)}{ \BR(\Ds \to K^+ K^- \pi^+)} =(2.29 \pm 0.28 \pm 0.12) \times 10^{-3}$ [@Ko:2009tc]. Table \[tab:table8\] lists the results of the systematic studies performed for this measurement; these are similar to those used in Sec. \[sec:sec\_kkk\]. The particle identification systematic is not taken into account because the final states differ only in the charge assignments of the daughter tracks.

| Uncertainty | $\frac{ \BR(\Ds \to K^+ K^+ \pi^-)}{ \BR(\Ds \to K^+ K^- \pi^+)}$ |
|---|---|
| MC statistics | 0.04 % |
| $\Delta m$ | 4.7 % |
| Likelihood-ratio | 6.0 % |
| Total | 7.7 % |

: Summary of systematic uncertainties in the measurement of the $\Ds \to K^+ K^+ \pi^-$ relative branching fraction.

\[tab:table8\] The symmetrized Dalitz plot for the signal region, corrected for efficiency and background-subtracted, is shown in Fig. \[fig:fig13\](b).
We observe the presence of a significant $\Kstarzm$ signal, which is more evident in the $\Kp \pim$ mass distribution, shown in Fig. \[fig:fig13\](c). Fitting this distribution using a relativistic $\mathcal{P}$-wave BW signal function and a threshold function, we obtain the following fraction for this contribution: $$\begin{aligned} \frac{\BR(\Ds \to \Kstarzm K^+)\cdot \BR(\Kstarzm \to K^+ \pi^-)}{\BR(\Ds \to K^+ K^+ \pi^- )} = \nonumber \\ 0.47 \pm 0.22_{\rm stat} \pm 0.15_{\rm syst}.\end{aligned}$$ Systematic uncertainty contributions include those from the $\Delta m$ and likelihood-ratio criteria, the fitting model, and the background parametrization. The symmetrized Dalitz plot also shows an excess of events at low $\Kp\Kp$ mass, which may be due to a Bose-Einstein correlation effect [@Goldhaber:1960sf]. We remark, however, that this effect is not visible in $\Ds \to \Kp\Km\Kp$ decay (Fig. \[fig:fig12\](b)).

Conclusions {#sec:sec_sum}
===========

In this paper we perform a high statistics Dalitz plot analysis of $\Ds \to \Kp \Km \pip$, and extract amplitudes and phases for each resonance contributing to this decay mode. We also make a new measurement of the $\mathcal{P}$-wave/$\mathcal{S}$-wave ratio in the $\phi(1020)$ region. The $\Kp \Km$ $\mathcal{S}$-wave is extracted in a quasi-model-independent way, and complements the $\pip \pim$ $\mathcal{S}$-wave measured by this experiment in a previous publication [@:2008tm]. Both measurements can be used to obtain new information on the properties of the $f_0(980)$ state [@Pennington:2007zy]. We also measure the relative and partial branching fractions for the SCS $\Ds \to K^+ K^- K^+$ and DCS $\Ds \to K^+ K^+ \pi^-$ decays with high precision.

Acknowledgments
===============

[99]{} E. M. Aitala [*et al.*]{} \[E791 Collaboration\], Phys. Rev. Lett. [**89**]{}, 121801 (2002). M. Ablikim [*et al.*]{} \[BES Collaboration\], Phys. Lett. B [**633**]{}, 681 (2006). E. M.
Aitala [*et al.*]{} \[E791 Collaboration\], Phys. Rev. Lett.  [**86**]{}, 765 (2001). M. Ablikim [*et al.*]{} \[BES Collaboration\], Phys. Lett.  B [**598**]{}, 149 (2004). See for example F. E. Close and N. A. Tornqvist, J. Phys. G [**28**]{}, R249 (2002). B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev.  D [**79**]{}, 032003 (2009). S. Stone and L. Zhang, Phys. Rev.  D [**79**]{}, 074024 (2009). Y. Xie, P. Clarke, G. Cowan and F. Muheim, JHEP [**0909**]{}, 074 (2009). All references in this paper to an explicit decay mode imply the use of the charge conjugate decay also. P. L. Frabetti [*et al.*]{} \[E687 Collaboration\], Phys. Lett.  B [**351**]{}, 591 (1995). R. E. Mitchell [*et al.*]{} \[CLEO Collaboration\], Phys. Rev.  D [**79**]{}, 072008 (2009). K. Nakamura [*et al.*]{} \[Particle Data Group\], J. Phys. G [**37**]{}, 075021 (2010). B. R. Ko [*et al.*]{} \[Belle Collaboration\], Phys. Rev. Lett.  [**102**]{}, 221802 (2009). B. Aubert [*et al.*]{} \[ Collaboration\], Nucl. Instrum. Meth. Phys. Res., Sect. A [**479**]{}, 1 (2002). B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev.  D [**65**]{}, 091104 (2002). S. Agostinelli [*et al.*]{} \[<span style="font-variant:small-caps;">Geant4</span> Collaboration\], Nucl. Instrum. Meth. Phys. Res., Sect. A [**506**]{}, 250 (2003). S. U. Chung, Phys. Rev.  D [**56**]{}, 7299 (1997). B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev.  D [**78**]{}, 051101 (2008). S. M. Flatté [*et al.*]{}, Phys. Lett.  B [**38**]{}, 232 (1972). B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev.  D [**72**]{}, 052008 (2005). B. Aubert [*et al.*]{} \[ Collaboration\], Phys. Rev.  D [**76**]{}, 011102(R) (2007). L. Maiani, A. D. Polosa and V. Riquer, Phys. Lett.  B [**651**]{}, 129 (2007). K. S. Kölbig, Gaussian Quadrature for Multiple Integrals, CERN Program Library, D110. D. Asner, arXiv:hep-ex/0410014 (2004);\ S. Eidelman [*et al.*]{} \[Particle Data Group\], Phys. Lett.  B [**592**]{}, 664 (2004). J. M. Blatt and V. F. 
Weisskopf, Theoretical Nuclear Physics, John Wiley & Sons, New York, 1952. D. Aston [*et al.*]{} \[LASS Collaboration\], Nucl. Phys.  B [**296**]{}, 493 (1988). P. del Amo Sanchez [*et al.*]{} \[ Collaboration\], Phys. Rev. Lett.  [**105**]{}, 081803 (2010). G. Goldhaber, S. Goldhaber, W. Y. Lee and A. Pais, Phys. Rev.  [**120**]{}, 300 (1960). M. R. Pennington, [*In the Proceedings of 11th International Conference on Meson-Nucleon Physics and the Structure of the Nucleon (MENU 2007), Julich, Germany, 10-14 Sep 2007, pp 106*]{} \[arXiv:0711.1435 \[hep-ph\]\].
--- abstract: 'We propose an efficient Monte Carlo method for the computation of the volumes of high-dimensional bodies with arbitrary shape. We start with a region of known volume within the interior of the manifold and then use the multi-state Bennett acceptance-ratio method to compute the dimensionless free-energy difference between a series of equilibrium simulations performed within this object. The method produces results that are in excellent agreement with thermodynamic integration, as well as a direct estimate of the associated statistical uncertainties. The histogram method also allows us to directly obtain an estimate of the interior radial probability density profile, thus yielding useful insight into the structural properties of such a high dimensional body. We illustrate the method by analysing the effect of structural disorder on the basins of attraction of mechanically stable packings of soft repulsive spheres.' author: - Stefano Martiniani - 'K. Julian Schrenk' - 'Jacob D. Stevenson' - 'David J. Wales' - Daan Frenkel bibliography: - 'mbar\_bv\_method.bib' title: 'Structural analysis of high-dimensional basins of attraction' --- Introduction ============ In science we often face, and occasionally confront, the following question: “Can we estimate the [*a priori*]{} probability of observing a system in a very unlikely state?” An example is: “How likely is a given disordered sphere packing?”, not to mention questions such as “How likely is life, or the existence of a universe like ours?” within the context of dynamical systems and of the multiverse. In a number of cases, where the states correspond to extrema in a high dimensional function, this question can be narrowed down to: “How large is the ‘basin of attraction’ of a given state?”. In such cases, estimating the probability of observing a particular state is equivalent to computing the volume of the (high-dimensional) basin of attraction of this state. 
That simplifies the problem, but not by much [@ball1997elementary; @simonovits2003compute]: analytical approaches are typically limited to highly symmetric (often convex) volumes, whilst ‘brute force’ numerical techniques can deal with more complex shapes, but only in low-dimensional cases. Computing the volume of an arbitrary, high-dimensional body is extremely challenging. For instance, it can be proved that the exact computation of the volume of a convex polytope is an NP-hard problem [@dyer1988complexity; @khachiyan1989problem; @khachiyan1988complexity] and, of course, the problem does not get any easier in the non-convex case. Yet, the importance of such computations is apparent: the volume of the basin of attraction for the extrema of a generic energy landscape, be that of biological molecules [@miller1999energy], an artificial neural network [@sagun2014explorations; @ballard2016energy; @ballard2016landscape], a dynamical system [@wiley2006size; @menck2013basin], or even of a “string theory landscape” (where the minima correspond to different de Sitter vacua [@frazer2011exploring; @greene2013tumbling]), is essential for understanding the systems’ behavior. In high dimensions, simple quadrature and brute-force sampling fail [@Bishop09] and other methods are needed. In statistical mechanics, the problem is equivalent to the calculation of the partition function (or, equivalently, the free energy) of a system, and several techniques have been developed to tackle this problem (see e.g. [@Frenkel02]). The earliest class of techniques to compute partition functions is based on thermodynamic integration (TI) [@Kirkwood35; @gelman1998simulating; @Frenkel02], which exploits the idea that a transformation of the Hamiltonian of the system can transform an unknown partition function into one that is known analytically.
More recent techniques include histogram-based methods (Wang-Landau [@PhysRevLett.86.2050], parametric and non-parametric weighted histogram analysis methods (WHAM) [@Habeck12]) or Nested Sampling [@skilling2004nested; @Martiniani14]. In essence, all these techniques reduce the computation of the partition function to the numerical evaluation of a one-dimensional integral. Among the above methods, Nested Sampling and Wang-Landau are Monte Carlo algorithms in their own right, which produce the (binned) density of states as a by-product. On the other hand, TI can be identified as a particular Umbrella Sampling scheme [@Frenkel02] that outputs multiple sets of equilibrium states; these can be analysed either by numerical quadrature (e.g. see the Einstein crystal method [@Frenkel84]), or by WHAM and the multi-state Bennett acceptance-ratio method (MBAR). All the above methods can be used to compute high-dimensional volumes. However, the choice of the MBAR method [@Shirts2008] is an optimal one. Not only is MBAR non-parametric (no binning is required) and equipped with the lowest-variance known reweighting estimator for free energy calculations, but it also eliminates the need for explicit numerical integration of the density of states, thus reducing to a minimum the number of systematic biases. One reason why brute-force methods are not suited to estimate the volumes of high-dimensional bodies is that for such bodies the volume of the largest inscribed hypersphere quickly becomes negligible compared to the volume of the smallest circumscribed hypersphere – and most of the volume of the circumscribed hypersphere is empty. Hence, using a Monte Carlo ‘rejection method’ to compute the volume of the non-convex body as the fraction of volume contained in a hypersphere [@Sheng2007; @Ashwin12] does not yield accurate results: the largest contribution should come from points that are barely sampled, if at all.
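The failure of brute-force rejection sampling is easy to demonstrate: even for the unit ball inside its circumscribed hypercube the acceptance fraction vanishes extremely fast with dimension. A self-contained sketch (the dimension and sample size are chosen arbitrarily):

```python
import numpy as np
from math import exp, lgamma, log, pi

n, n_samples = 25, 100_000
rng = np.random.default_rng(4)

# Rejection estimate of the unit n-ball volume from its circumscribed
# hypercube [-1, 1]^n: count samples that fall inside the ball.
pts = rng.uniform(-1.0, 1.0, size=(n_samples, n))
hits = int(np.sum(np.sum(pts ** 2, axis=1) <= 1.0))

# Exact acceptance fraction, computed in log space to avoid underflow:
# V_ball / V_cube = pi^(n/2) / Gamma(n/2 + 1) / 2^n
log_frac = 0.5 * n * log(pi) - lgamma(0.5 * n + 1.0) - n * log(2.0)
print(hits, exp(log_frac))   # hits is essentially always 0: fraction ~ 3e-11
```

At $n=25$ one would need on the order of $10^{10}$ samples to record a single hit, so the rejection estimate is zero with overwhelming probability.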
In this Letter we show that MBAR can be used not only to arrive at an accurate estimate of a high-dimensional, non-convex volume, but also to probe the spatial distribution of this volume.

Computing High-Dimensional Volumes
==================================

Our aim is then to measure the volume of an $n$-dimensional connected compact manifold $\Omega \subseteq \mathbb{R}^n$ with boundaries. We require this body to be “well guaranteed”, i.e. to have both an inscribed and a circumscribed hypersphere [@simonovits2003compute]. To explore different parts of the non-convex volume, we use a spherically symmetric bias that favors the sampling of points either towards the center or towards the periphery. We start by performing a series of $K+1$ random walks under different applied bias potentials, similarly to the Einstein-crystal method [@Frenkel84]. We refer to each of the walkers as a “replica” $R_i$. Unlike TI, where biasing is always ‘attractive’ (i.e. it favors larger confinement), in MBAR we are free to choose both attractive and repulsive bias potentials (see SM for details of our implementation). Additionally, MBAR uses the full posterior distribution (hence all moments) rather than just the average log-likelihood computed over the posterior, as in TI. The present method directly yields an estimate for the statistical uncertainty in the results that depends on the full distributions and is sensitive to their degree of overlap, thus making the method more robust to under-sampling. In contrast, TI would require an expensive resampling numerical procedure to achieve the same objective.
The Markov Chain Monte Carlo (MCMC) random walk of replica $i \in [0,K]$ will generate samples with unnormalised probability density $q_i({\mathbf{x}})$, which for a standard Metropolis Monte Carlo walk is $$q_i({\mathbf{x}}) \equiv e^{-\beta_iU_i({\mathbf{x}})}$$ with biasing potential $U_i({\mathbf{x}})$ and inverse temperature $\beta_i$; from now on we assume $\beta_i = 1$ for all walkers $R_i$, without loss of generality. The normalised probability density is then $$p_i({\mathbf{x}}) = Z_i^{-1}q_i({\mathbf{x}})$$ with normalisation constant $$\label{partition_function} Z_i = \int_{\mathbb{R}^n} q_i({\mathbf{x}}) {\mathop{}\!\mathrm{d}}{\mathbf{x}}.$$ We require that the bias potential $U_i({\mathbf{x}})$ can be factorised as $$U_i({\mathbf{x}}) = \mathbf{{\raisebox{0pt}[1ex][1ex]{$\chi$}}}_{\Omega}({\mathbf{x}})u_i({\mathbf{x}})$$ where $u_i$ is the reduced potential function and $\mathbf{{\raisebox{0pt}[1ex][1ex]{$\chi$}}}_{\Omega}({\mathbf{x}})$ is the “oracle” [@simonovits2003compute], such that for all choices of $u_i({\mathbf{x}})$, $$U_i({\mathbf{x}})= \left\{ \begin{array}{l l} u_i({\mathbf{x}}) & \quad \text{if ${\mathbf{x}} \in \Omega$} \\ \infty & \quad \text{if ${\mathbf{x}} \not\in \Omega$} \end{array} \right.$$ We thus have that the normalisation constant in Eq. (\[partition\_function\]) becomes an integral over the manifold $\Omega$ $$\label{partition_function2} Z_i = \int_{\mathbb{R}^n} e^{-U_i({\mathbf{x}})} {\mathop{}\!\mathrm{d}}{\mathbf{x}} = \int_{\Omega} e^{-u_i({\mathbf{x}})} {\mathop{}\!\mathrm{d}}{\mathbf{x}}.$$ If replica $R_M$ is chosen to have bias $u_M=0$, by definition Eq. (\[partition\_function2\]) becomes the volume $V_{\Omega}$. Hence if we can compute the partition function for the reduced potential function $u_M=0$, we can compute the volume $V_{\Omega}$. 
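A single biased replica $R_i$ can be sketched as a Metropolis walk with density $q_i(\mathbf{x})=e^{-U_i(\mathbf{x})}$, where the oracle makes $U_i$ infinite outside $\Omega$. In this toy sketch the manifold is a unit $n$-ball and the harmonic bias strength is an invented value:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k_i = 10, 2.0          # dimension and invented harmonic coupling

def oracle(x):
    """chi_Omega: toy manifold Omega = unit n-ball centred at the origin."""
    return float(np.dot(x, x)) <= 1.0

def u(x):
    """Reduced potential u_i; U_i equals u_i inside Omega, +inf outside."""
    return 0.5 * k_i * np.dot(x, x)

x = np.zeros(n)
radii = []
for _ in range(20_000):
    trial = x + rng.normal(0.0, 0.1, size=n)
    # Metropolis acceptance for q_i = exp(-u_i) inside Omega, 0 outside:
    # proposals rejected by the oracle are always refused.
    if oracle(trial) and rng.random() < np.exp(u(x) - u(trial)):
        x = trial
    radii.append(np.linalg.norm(x))
radii = np.array(radii)
```

The recorded radii are a by-product that already hints at the radial density profile mentioned later in the text.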
The MBAR method [@Shirts2008] is a binless and statistically optimal estimator to compute the difference in dimensionless free energy for multiple sets of equilibrium states (trajectories) $\{ {\mathbf{x}} \}_i$ obtained using different biasing potentials $u_i({\mathbf{x}})$. The difference in dimensionless free energy is defined as $$\Delta \hat{f}_{ij} \equiv \hat{f}_{j} - \hat{f}_{i} = -\ln \left( \frac{Z_j}{Z_i}\right)$$ which can be computed by solving a set of self-consistent equations as described in Ref. [@Shirts2008]. Note that only differences of the dimensionless free energies are meaningful, since the absolute values $\hat{f}_{i}$ are determined only up to an additive constant; the “hat” indicates MBAR estimates for the dimensionless free energies, to be distinguished from the exact (reference) values. Let us define the volume $V_{\omega}=\pi^{n/2}r_{\omega}^{n}/\Gamma(n/2+1)$ of a $n$-ball $\omega \subseteq \Omega$ with radius $r_{\omega}$ centred on ${\mathbf{x}}_0$ and absolute dimensionless free energy $f_{\omega}=-\ln V_{\omega}$. For instance, when the volume of a basin of attraction in a potential energy landscape is to be measured, ${\mathbf{x}}_0$ is chosen to be the minimum energy configuration and $\omega \subseteq \Omega$ the largest $n$-ball centred at ${\mathbf{x}}_0$ that fits in $\Omega$. We also define $\{{\mathbf{x}}\}_{i}$ to be the set of states sampled with biasing potential $u_i$ and $\{{\mathbf{x}}\}_{\omega} = \cup_{i=0}^K \{{\mathbf{x}} : |{\mathbf{x}}-{\mathbf{x}}_0| \leq r_{\omega}\}_i$ to be the set of states re-sampled within $\omega$ with reduced potential $$u_\omega({\mathbf{x}})= \left\{ \begin{array}{l l} 0 & \quad \text{if $|{\mathbf{x}}-{\mathbf{x}}_0| \leq r_{\omega}$} \\ \infty & \quad \text{if $|{\mathbf{x}}-{\mathbf{x}}_0| > r_{\omega}$} \end{array} \right.\ \label{eq:resampling_potential}$$ In other words we augment the set of states with the additional reduced potential $u_{\omega}$.
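To make the procedure concrete, here is a toy one-dimensional sketch: $\Omega$ is the interval $[0,3]$ (so the target "volume" is 3), $\omega=[0,1]$ is the known reference region, and the dimensionless free energies are obtained by iterating the MBAR self-consistent equations directly. The bias strength, sample sizes, and iteration count are invented; in production one would use e.g. the `pymbar` implementation of Ref. [@Shirts2008] rather than this hand-rolled fixed-point loop:

```python
import numpy as np

rng = np.random.default_rng(3)
L, r_w, k = 3.0, 1.0, 8.0          # Omega = [0, L]; omega = [0, r_w]; bias k

# Two sampled replicas: unbiased (u = 0) and harmonically biased, both
# confined to Omega (the truncation plays the role of the oracle).
unbiased = rng.uniform(0.0, L, 5000)
harm = rng.normal(0.0, 1.0 / np.sqrt(k), 40_000)
harm = harm[(harm >= 0.0) & (harm <= L)][:5000]

xs = np.concatenate([unbiased, harm])
# Reduced potential of every pooled sample under every state, including the
# unsampled "resampling" state u_omega (0 inside omega, +inf outside).
u_mat = np.vstack([np.zeros_like(xs),
                   0.5 * k * xs ** 2,
                   np.where(xs <= r_w, 0.0, np.inf)])
N = np.array([5000.0, 5000.0, 0.0])   # no samples drawn from u_omega itself

f = np.zeros(3)
for _ in range(1000):                 # MBAR self-consistent iteration
    denom = np.sum(N[:, None] * np.exp(f[:, None] - u_mat), axis=0)
    f = -np.log(np.sum(np.exp(-u_mat) / denom, axis=1))
    f -= f[0]                         # pin the arbitrary additive constant

# V_Omega = V_omega * exp(-(f_M - f_omega)); here V_omega = r_w = 1
V = r_w * np.exp(-(f[0] - f[2]))
print(V)   # close to the exact value 3
```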
Note that MBAR can compute free energy differences and uncertainties between sets of states not sampled (*viz.* with a different reduced potential function) without any additional iterative solution of the self-consistent estimating equations; see Ref. [@Shirts2008] for details. Computing the free energy difference between the sets of equilibrium states $\{{\mathbf{x}}\}_{\omega}$ and $\{{\mathbf{x}}\}_{M}$, chosen to have reduced potentials $u_{\omega}$ and $u_M=0$, respectively, we find that the absolute free energy for the unbiased set of states $\{{\mathbf{x}}\}_{M}$ is $$f_{M} = f_{\omega} + (\hat{f}_{M} - \hat{f}_{\omega}) \label{eq:free_energy_estimate}$$ where the free energy difference $\hat{f}_{M} - \hat{f}_{\omega}$ is obtained by MBAR with associated uncertainty $\delta\Delta\hat{f}_{M\omega}$. The volume of the manifold is then just $V_{\Omega} = \exp(-f_M)$ with uncertainty $\delta V_{\Omega}=V_{\Omega}\delta\Delta\hat{f}_{M\omega}$. Note that the set of biasing potentials $u_i$ must be chosen so that there is sufficient overlap between each neighbouring pair of $p_i({\mathbf{x}})$. For instance, for the harmonic bias $u_i = k_i|{\mathbf{x}}-{\mathbf{x}}_0|^2/2$ we must choose a set of coupling constants $k_i$ so that all neighbouring replicas have a sufficient probability density overlap. Under an appropriate choice of biasing potential, the present method may also yield information such as the radial posterior probability density function as an easy-to-compute by-product; details are discussed in the SM. Basins of attraction in high dimensions ======================================= We define a basin of attraction as the set of all points that lead to a particular minimum energy configuration by a path of steepest descent on a potential energy surface (PES).
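The final volume estimate and its uncertainty follow from Eq. (\[eq:free\_energy\_estimate\]) by simple arithmetic and linear error propagation. The sketch below is ours, and the numbers in the example are made up purely for illustration:

```python
import math

def basin_volume(f_ref, delta_f_hat, d_delta_f_hat):
    """Combine the analytic reference free energy f_ref = -ln(V_omega) with
    the MBAR estimate of the difference delta_f_hat = f_hat_M - f_hat_omega
    to obtain V_Omega = exp(-f_M), with uncertainty dV = V * d(delta_f)
    by linear error propagation."""
    f_M = f_ref + delta_f_hat
    V = math.exp(-f_M)
    return V, V * d_delta_f_hat

# Illustrative, made-up numbers (not from the paper):
V, dV = basin_volume(f_ref=2.0, delta_f_hat=-5.0, d_delta_f_hat=0.05)
```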
Exploring a basin of attraction is computationally expensive because each call to the oracle function $\chi_{\Omega}({\mathbf{x}})$ requires a full energy minimisation, and equilibrating an MCMC walk on a high dimensional support is difficult [@Xu11; @Asenjo13; @Asenjo14; @Martiniani16]. For this reason little is known about the geometry of these bodies [@Xu10; @Wang12; @Asenjo13; @Martiniani16]. Ashwin et al. [@Ashwin12] defined the basin of attraction as the collection of initial zero-density configurations that evolve to a given jammed packing of soft repulsive disks via a compressive quench. On the basis of ‘brute-force’ calculations on low-dimensional systems, Ashwin et al. suggested that basins of attraction tend to be “branched and threadlike” away from a spherical core region. However, the approach of Ref. [@Ashwin12] breaks down for higher dimensional systems, for which most of the volume of the basin is concentrated at distances from the ‘minimum’ where the overwhelming majority of points do not belong to the basin. The method that we present here allows us to explore precisely those very rarefied regions where most of the ‘mass’ of a basin is concentrated. In general, the representation of *all* high dimensional *convex* bodies should have a hyperbolic form such as the one proposed in the illustration by Ashwin et al., due to the exponential decay in volume of parallel hypersections (slices) away from the median (or equator) [@milman1998surprising]. This holds true even for the simplest convex bodies, such as the hypercube, and the underlying geometry need not be “complicated”, as one would guess at first from the two-dimensional representation.
For the simplest cases of the unit $d$-sphere and the unit $d$-cube it can be shown that most of the volume is contained within $\mathcal{O}(1/d)$ of the boundary and, at the same time, within a slab of width $\mathcal{O}(1/\sqrt{d})$ (for the sphere) and $\mathcal{O}(1)$ (for the cube) about the equator, irrespective of the choice of north pole [@ball1997elementary; @Guruswami2012]. Hence, there is virtually no interior volume. Such phenomena of concentration of measure are ubiquitous in high dimensional geometry and are closely related to the law of large numbers [@Guruswami2012]. As we will show, the results presented by Ashwin et al. are, within the resolution available to their method, qualitatively consistent with those for a simple (unit) hypercube. ![\[fig::q6\] Structural disorder as a function of polydispersity $\eta$ is quantified by the average coordination number $Z$ (grey diamonds) and the $Q_6$ bond orientational order parameter (blue circles); error bars correspond to one standard deviation of the distribution of values per particle. Basin shape is characterized by the asphericity factor $A_d$ (green triangles) and the mean distance of the centre of mass from the minimum (orange squares); error bars correspond to the standard error. Filled and empty markers correspond to packings obtained starting from an fcc and a disordered arrangement, respectively. Dotted lines show the $\eta$ after which, in order, $Z$, $A_d$ and $Q_6$ change from the fcc value.](poly_q12.pdf){width="\columnwidth"} ![\[fig::dos\]Top plot shows the measured basin radial probability density function $h(r)$ (DOS) for packings at different polydispersities. The solid and dashed blue curves correspond to the DOS of a $93$D hypercube, measured from the centre of mass (‘iso-cube’) and from a point in one of the corners. The top inset shows the cumulative distribution function for $h(r)$. The bottom panel shows the logarithm of the ratio of the DOS of the basin and of a $93$D hyperball.
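For the cube, the boundary-concentration statement above can be checked in one line: the fraction of the unit $d$-cube's volume within distance $\epsilon$ of its boundary is exactly $1-(1-2\epsilon)^d$. A short sketch (ours):

```python
def frac_near_boundary(d, eps):
    """Fraction of the unit d-cube's volume within distance eps of its
    boundary: 1 - (1 - 2*eps)**d, valid for 0 <= eps < 1/2."""
    return 1.0 - (1.0 - 2.0 * eps) ** d

# A shell of thickness 1/d already holds most of the volume in high d;
# here for d = 93, the dimensionality of the packings studied below:
shell = frac_near_boundary(93, 1.0 / 93)
```

For $\epsilon = 1/d$ this fraction tends to $1-e^{-2}\approx0.86$ as $d\to\infty$, i.e. essentially all the volume sits in a vanishingly thin boundary shell.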
The bottom inset shows the set of barely distinguishable overlapping curves measured for low polydispersities. Top and bottom plots share the x-axis.](hypercube_logr_comparison.pdf){width="\columnwidth"} Effect of structural disorder on the basins of attraction of jammed sphere packings ----------------------------------------------------------------------------------- We characterise the basins of attraction for 32 hard-core plus soft-shell three-dimensional sphere packings, analogous to the ones described in Ref. [@Martiniani16]. The soft shell interactions are short ranged and purely repulsive; the full functional form of the potential and further technical details are reported in the SM. We systematically introduce structural disorder by preparing packings with (geometrically) increasing particle size polydispersity $\eta$, i.e. the (positive) radii are sampled from a normal distribution $\mathcal{N}(1,\eta)$. For each $\eta$ we prepare $\sim$10 packings at a soft packing fraction $\phi=0.74148$ with a soft to hard-sphere radius ratio of $r_{\mathrm{SS}}/r_{\mathrm{HS}} = 1.12$. The particles are placed initially in an fcc arrangement ${\mathbf{x}}_{\text{fcc}}$ and then relaxed via an energy minimisation to a mechanically stable state ${\mathbf{x}}_{0}$. Thus, for the lowest polydispersities the packings remain in a perfect fcc structure, and with increasing $\eta$ they progressively move away into a disordered glassy state. For the largest polydispersity, for which hard-core overlaps do not allow an initial fcc arrangement, we sample a series of completely random initial states followed by an energy minimisation. Note that even for $\eta \approx 0$, due to the high packing fraction, starting from a completely random set of coordinates an energy minimisation does not lead to the fcc crystal but rather to the closest glassy state (inherent structure).
We are interested in the effect of structural disorder on the shape of the basin of attraction for the soft sphere packings. We determine the amount of structural disorder in the packing by computing the $Q_6$ bond orientational order parameter [@steinhardt1983bond] and the average number of contacts per particle $Z$, shown in Fig. \[fig::q6\]. As the polydispersity of the system is increased, the coordination number $Z$ decays monotonically from the close-packed value of $12$ to a value $Z_{\mathrm{fcc}}>Z>Z_{\mathrm{iso}}$, where $Z_{\mathrm{iso}}=6$ is the average contact number at iso-staticity for a three-dimensional packing of frictionless spheres [@OHern03]. The $Q_6$ order parameter, computed using a solid-angle based nearest-neighbor definition [@vanMeel12], decays from its fcc value well after the contact number has dropped below the close-packed value of $12$. We start characterising the shape of the high dimensional basins of attraction associated with these packings by performing an unconstrained random walk within the basin and performing principal component analysis (PCA) on the trajectory thus obtained [@Bishop09]. PCA yields a set of eigenvectors that span the $d$-dimensional configurational space with associated eigenvalues $\lambda_1,\dots,\lambda_{d}$. If the basin possesses $d$-dimensional spherical symmetry then all the eigenvalues are expected to be equal. A measure of the shape of a random walk is then the asphericity factor [@rudnick1987shapes] $$A_d = \frac{\sum_{i>j}(\lambda_i-\lambda_j)^2}{(d-1)\left(\sum_{i=1}^d \lambda_i\right)^2},$$ which takes the value $0$ for a spherically symmetric random walk and $1$ for a walk that extends only in one dimension. Furthermore, we compute the distance of the centre of mass (*CoM*) position from the minimum energy configuration for the random walk, $|\langle {\mathbf{x}} \rangle - {\mathbf{x}}_0|$. This quantity reveals whether the basin is isotropic around the minimum or not.
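Given the PCA eigenvalues, the asphericity factor is a direct translation of the formula above; a minimal Python sketch (ours):

```python
def asphericity(eigenvalues):
    """Asphericity A_d of a random walk from its PCA eigenvalues:
    A_d = sum_{i>j} (l_i - l_j)^2 / ((d - 1) * (sum_i l_i)^2).
    Equals 0 for a spherically symmetric walk, 1 for a purely 1D walk."""
    d = len(eigenvalues)
    num = sum((eigenvalues[i] - eigenvalues[j]) ** 2
              for i in range(d) for j in range(i))
    den = (d - 1) * sum(eigenvalues) ** 2
    return num / den
```

The two limiting cases stated in the text follow immediately: equal eigenvalues give $A_d=0$, and a single nonzero eigenvalue gives $A_d=1$.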
Both quantities, averaged over all packings, are plotted as a function of polydispersity in Fig. \[fig::q6\] along with the structural order parameters. Interestingly, we observe that for low $\eta$ the basins are, on average, spherically symmetric and isotropic around the minimum. With the onset of structural disorder we observe a marginal increase in asphericity and in the *CoM* distance from the minimum. In order to observe a significant change, however, we need to go to the fully disordered packings at higher polydispersity. With increasing polydispersity, we observe significant changes in the structural order parameters and in the asphericity factor $A_d$ and *CoM* distance from the minimum. The implementation details of the MBAR method that we have used are discussed in the SM. Using this method to compute the volume of the basins of attraction, we find excellent agreement with thermodynamic integration, see Fig. S2. As a natural by-product of the computation we are able to compute the radial probability density function (DOS), shown in Fig. \[fig::dos\] together with the logarithm of the ratio between the measured DOS and that of a $d$-hypersphere. The log-ratio curves clearly show that all basins have a well-defined hyperspherical core region, where the curves are flat around $0$, followed by a series of exponential decays at larger distances from the minimum. For $\eta < 10^{-4}$ the curves are mostly indistinguishable from one another, with most of the probability mass concentrated between $1 < r < 3$, as can be seen from the inset showing the corresponding cumulative distribution function (CDF). For higher polydispersity, the DOS curves have ever longer tails, as is also shown by the systematic shift in the CDF. Importantly, the curves show that a ‘rejection’ method to measure the basin volume will fail.
In this method, the volume of the basin is determined by integrating the fraction of points on a hyper-shell with radius $r$ that fall inside the basin. That fraction is the function shown in the bottom panel of Fig. \[fig::dos\]. The most important contribution to the integral would come from the range of $r$ values where $h(r)$ (top panel of Fig. \[fig::dos\]) has a significant value. As can be seen from the figure, for disordered systems this happens for values of $r$ where the fraction of hyper-sphere points within the basin is extremely small, in the example shown $\mathcal{O}(10^{-30})$. Hence, the dominant part of the integral would come from parts that are never sampled. To interpret our results for the DOS curves, it is useful to compare with the corresponding result for a unit hypercube (see Fig. \[fig::dos\]). In one instance we do so by placing the ‘origin’ of the hypercube at its *CoM*, and in another by placing the origin on one of the $2^d$ corners of the hypercube, to generate a DOS of a system with a very anisometric density distribution. Not surprisingly, moving the origin of the system from the center to the corner of a hypercube has a dramatic effect on the shape of the DOS, which is now much more similar to the curves for large $\eta$, with similar characteristic changes of slope observed for the basins. Again, this agrees with the observation that the *CoM* distance increases with increasing structural disorder. The effect of the basin asphericity, as measured by the asphericity factor $A_d$, is difficult to infer from the DOS alone. We thus observe that the structural isotropy and high degree of rotational symmetry in the crystal, as indicated by the $Q_6$ parameter, is reflected in the isotropy and spherical symmetry of the basin around the minimum, even for relatively large polydispersities when the average contact number has already dropped considerably from the close-packed value.
Similarly, the structural disorder at larger $\eta$ is reflected in the anisotropy and asphericity of the basin. Hence, changes in the basin structure, as indicated by the asphericity factor, the $CoM$ and the density profile, occur before any observable change in $Q_6$ and after the average contact number ($Z \lesssim 9$) has fallen well below the close-packed value of $12$. S.M. acknowledges financial support by the Gates Cambridge Scholarship. K.J.S. acknowledges support by the Swiss National Science Foundation under Grant No. P2EZP2-152188 and No. P300P2-161078. J.D.S. acknowledges support by Marie Curie Grant 275544. D.F. and D.J.W. acknowledge support by EPSRC Programme Grant EP/I001352/1, by EPSRC grant EP/I000844/1 (D.F.) and ERC Advanced Grant RG59508 (D.J.W.).
--- abstract: 'A robust, fast and accurate method for solving the Colebrook-like equations is presented. The algorithm is efficient for the whole range of parameters involved in the Colebrook equation. The computations are not more demanding than simplified approximations, but they are much more accurate. The algorithm is also faster and more robust than the Colebrook solution expressed in terms of the Lambert ${\operatorname{W}}$-function. [Matlab${}^\copyright$]{} and [FORTRAN]{} codes are provided.' author: - Didier Clamond date: 'Laboratoire J.-A. Dieudonné, 06108 Nice cedex 02, France.\' title: Efficient resolution of the Colebrook equation --- Introduction {#secintro} ============ Turbulent fluid flows in pipes and open channels play an important role in, for example, hydraulics, chemical engineering and the transportation of hydrocarbons. These flows induce a significant loss of energy depending on the flow regime and the friction on the rigid boundaries. It is thus important to estimate the dissipation due to turbulence and wall friction. The dissipation models involve a friction coefficient depending on the flow regime (via a Reynolds number) and on the geometry of the pipe or the channel (via an equivalent sand roughness parameter). This friction factor is often given by the well-known Colebrook–White equation, or very similar equations. The Colebrook–White equation estimates the (dimensionless) Darcy–Weisbach friction factor $\lambda$ for fluid flows in filled pipes. In its most classical form, the Colebrook–White equation is $$\begin{aligned} \frac{1}{\sqrt{\lambda}} &=&-\,2\,\log_{10}\!\left(\,\frac{K}{3.7}\,+\, \frac{2.51}{R}\frac{1}{\sqrt{\lambda}}\,\right), \label{col1}\end{aligned}$$ where $R=UD/\nu$ is a (dimensionless) Reynolds number and $K=\epsilon/D$ is a relative (dimensionless) pipe roughness ($U$ the fluid mean velocity in the pipe, $D$ the pipe hydraulic diameter, $\nu$ the fluid viscosity and $\epsilon$ the pipe absolute roughness height).
There exist several variants of the Colebrook equation, e.g. $$\begin{aligned} \frac{1}{\sqrt{\lambda}} &=& 1.74\ -\,2\,\log_{10}\!\left(\,2\,K\,+\, \frac{18.7}{R}\frac{1}{\sqrt{\lambda}}\,\right), \label{col2}\\ \frac{1}{\sqrt{\lambda}} &=& 1.14\ -\,2\,\log_{10}\!\left(\,K\,+\, \frac{9.3}{R}\frac{1}{\sqrt{\lambda}}\,\right). \label{col3}\end{aligned}$$ These variants can be recast into the form (\[col1\]) with small changes in the numerical constants $2.51$ and $3.7$. Indeed, the latter numbers, being obtained by fitting experimental data, are known only with limited accuracy. Thus, the formulae (\[col2\]) and (\[col3\]) are not fundamentally different from (\[col1\]). Similarly, there are variants of the Colebrook equations for open channels, which are very similar to (\[col1\]). Hence, we shall focus on the formula (\[col1\]), but it is trivial to adapt the resolution procedure introduced here to all variants, as demonstrated in this paper. The Colebrook equation is transcendental and thus cannot be solved in terms of elementary functions. Some explicit approximate solutions have thus been proposed [@haa; @rom; @son2]. For instance, the well-known Haaland formula [@haa] reads $$\label{solhaa} \frac{1}{\sqrt{\lambda}}\ \approx\ -1.81\times\log_{10}\!\left[\,\frac{6.9}{R}\ +\ \left(\frac{K}{3.7}\right)^{1.11}\,\right].$$ Haaland’s approximation is explicit but is not as simple as it may look. Indeed, this approximation involves one logarithm only, but also a non-integer power. The computation of the latter requires the evaluation of one exponential and one logarithm, since it is generally computed via the relation $$x^{1.11}\ =\ \exp(1.11\times\ln(x)),$$ where ‘$\ln$’ is the natural (Napierian) logarithm. Hence, the overall evaluation of (\[solhaa\]) requires the computation of three transcendental functions (exponentials and logarithms).
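For concreteness, Haaland's formula (\[solhaa\]) can be compared against a straightforward fixed-point resolution of (\[col1\]). The Python sketch below is ours (the paper itself provides Matlab and FORTRAN codes); the fixed-point loop is only a naive reference solver, not the fast scheme developed later in the paper:

```python
import math

def haaland(R, K):
    """Haaland's explicit approximation (solhaa):
    1/sqrt(lambda) ~ -1.81 * log10(6.9/R + (K/3.7)**1.11)."""
    s = -1.81 * math.log10(6.9 / R + (K / 3.7) ** 1.11)
    return 1.0 / s ** 2

def colebrook_reference(R, K, n_iter=50):
    """Naive reference solver for (col1): fixed-point iteration on
    s = 1/sqrt(lambda), i.e. s <- -2*log10(K/3.7 + 2.51*s/R)."""
    s = 8.0  # any reasonable positive start; the map contracts for physical R, K
    for _ in range(n_iter):
        s = -2.0 * math.log10(K / 3.7 + 2.51 * s / R)
    return 1.0 / s ** 2
```

For typical turbulent parameters (e.g. $R=10^5$, $K=10^{-3}$) the two agree to within a couple of percent, consistent with the accuracy usually quoted for Haaland's formula.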
We present in this paper much more accurate approximations requiring the evaluation of only two or three logarithms, plus some trivial operations ($+,-,\times,\div$). Only quite recently, it was noticed that the Colebrook–White equation (\[col1\]) can be solved in closed form [@kea] using the long-known Lambert W-function [@cor]. However, when the Reynolds number is large, this exact solution in terms of the Lambert function is not convenient for numerical computations due to overflow errors [@son]. To overcome this problem, Sonnad and Goudar [@son; @son2] proposed to combine several approximations depending on the Reynolds number. These approaches are somewhat involved, and it is actually possible to develop a simpler and more efficient strategy, as we demonstrate in this paper. A fast, accurate and robust resolution of the Colebrook equation is, in particular, necessary for intensive scientific computations. For instance, numerical simulations of pipe flows require the computation of the friction coefficient at each grid point and for each time step. For long term simulations of long pipes, the Colebrook equation must therefore be solved a huge number of times and hence a fast algorithm is required. An example of such a demanding code is the program [OLGA]{} [@olga] which is widely used in the oil industry. Although the Colebrook formula itself is not very accurate, its accurate resolution is nonetheless an issue for numerical simulations because a too crude resolution may affect the repeatability of the simulations. Robustness is also important since one understandably wants an algorithm capable of dealing with all the possible values of the physical parameters involved in the model. The method described in the present paper was developed to address all these issues. It is also very simple so it can be used for simple applications as well. The method proposed here aims at giving a definitive answer to the problem of solving numerically the Colebrook-like equations.
The paper is organized as follows. In section \[secgencol\], a general Colebrook-like equation and its solution in terms of the Lambert ${\operatorname{W}}$-function are presented. For the sake of completeness, the Lambert function is briefly described in section \[secWfun\], as well as a standard algorithm used for its computation. A severe drawback of using the Lambert function for solving the Colebrook equation is also pointed out. To overcome this problem, a new function is introduced in section \[secOfun\] and an improved new numerical procedure is described. Though this function introduces a big improvement for the computation of the friction factor, it is still not fully satisfactory for solving the Colebrook equation. The reasons are explained in section \[secpifun\], where a modified function is derived to address the issue. The modified function is subsequently used in section \[secsolcol\] to solve the Colebrook equation efficiently. The accuracy and speed of the new algorithm are tested and compared with Haaland’s approximation. For testing the method and for intensive practical applications, [Matlab${}^\copyright$]{} and [FORTRAN]{} implementations of the algorithm are provided in the appendices. The algorithm is so simple that it can easily be implemented in any other language and adapted to any variant of the Colebrook equation. Generic Colebrook equation and its solution {#secgencol} =========================================== We consider here a generic Colebrook-like equation as $$\label{colgen} \frac{1}{\sqrt{\lambda}}\ =\ c_0\ -\ c_1\,\ln\!\left(\,c_2\,+\, \frac{c_3}{\sqrt{\lambda}}\,\right),$$ where the $c_i$ are given constants such that $c_1c_3>0$. The classical Colebrook–White formula (\[col1\]) is obviously obtained as a special case of (\[colgen\]) with $c_0=0$, $c_1=2/\ln10$, $c_2=K/3.7$ and $c_3=2.51/R$.
The equation (\[colgen\]) has the exact analytical solution $$\label{colgensolW} \frac{1}{\sqrt{\lambda}}\ =\ c_1\left[\,{\operatorname{W}}\!\left(\exp\left(\,{c_0\over c_1}+{c_2\over c_1\/c_3} - \ln(c_1\/c_3)\right)\,\right)\ -\ \frac{c_2}{c_1\/c_3}\,\right],$$ which is real if $c_1c_3>0$ and where ${\operatorname{W}}$ is the principal branch of the Lambert function, often denoted ${\operatorname{W}}_0$ [@cor]. In this paper, only the principal branch of the Lambert function is considered because the other branches correspond to non-physical solutions of the Colebrook equations, so the simplified notation ${\operatorname{W}}$ is not ambiguous. Brief introduction to the Lambert ${\operatorname{W}}$-function {#secWfun} =============================================================== For the sake of completeness, we briefly introduce the Lambert function and its practical computation. Much more detail can be found in [@cor; @ser]. The Lambert ${\operatorname{W}}$-function solves the equation $$\begin{aligned} \label{eqW} y\,\exp(y)\ =\ x \qquad \Longrightarrow \qquad y\ =\ {\operatorname{W}}(x),\end{aligned}$$ where, here, $x$ is real — more precisely $x\geqslant-\exp(-1)$ — and ${\operatorname{W}}(0)=0$. The Lambert function cannot be expressed in terms of elementary functions. An efficient algorithm for its computation is based on Halley’s iterations [@cor] $$\label{halori} y_{j+1}\ =\ y_j\ -\ \frac{y_j\,\exp(y_j)\,-\,x} {(y_j+1)\,\exp(y_j)\,-\, {{\textstyle{1\over2}}}(y_j+2)\,(y_j\,\exp(y_j)-x)/(y_j+1)},$$ provided an initial guess $y_0$. Halley’s method is cubic (c.f. Appendix \[appsoleq\]), meaning that the number of exact digits is (roughly) multiplied by three after each iteration. Today, programs for computing the Lambert function are easily found. For instance, an efficient implementation in [Matlab${}^\copyright$]{} (including complex argument and all the branches) is freely available [@get].
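Halley's iterations (\[halori\]) can be transcribed directly; the sketch below is ours, restricted to real $x \geqslant 0$ with a simple logarithmic starting point (production implementations such as [@get] handle all branches and complex arguments):

```python
import math

def lambert_w(x, n_iter=20):
    """Principal branch W(x) for x >= 0, via Halley's iterations (halori):
    solves y * exp(y) = x."""
    y = math.log1p(x)  # rough initial guess, exact at x = 0
    for _ in range(n_iter):
        ey = math.exp(y)
        f = y * ey - x  # residual of y*exp(y) = x
        y -= f / ((y + 1.0) * ey - 0.5 * (y + 2.0) * f / (y + 1.0))
    return y
```

Note that using this routine to evaluate ${\operatorname{W}}(\exp(x))$ requires forming $\exp(x)$ explicitly, which overflows in double precision for $x \gtrsim 709$; this is precisely the drawback for the Colebrook equation discussed next.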
The Taylor expansion around $\,x=0\,$ of the Lambert function is $$\label{expW0} \mathrm{W}(x)\ =\ \sum_{n=1}^\infty\,\frac{(-n)^{n-1}}{n!}\,x^n, \qquad |x|<\exp(-1).$$ This expansion is of little interest for solving the Colebrook equation because, in this context, the corresponding variable $x$ is necessarily large ($x\gg1$). It is thus more relevant to consider the asymptotic expansion $$\begin{aligned} \label{devWexp} \mathrm{W}(x)\ \sim\ \ln(x)\ -\ \ln(\ln(x)) \qquad \text{as}\quad x\rightarrow\infty.\end{aligned}$$ This expansion reveals that ${\operatorname{W}}$ behaves logarithmically for large $x$, while we must compute ${\operatorname{W}}(\exp(x))$ to solve the Colebrook equation, c.f. relation (\[colgensolW\]). For our applications, $x$ is large and $\exp(x)$ is therefore necessarily huge, to such an extent that the computation of $\exp(x)$ cannot be achieved due to overflow. Even when the intermediate computations can be done, the result can be very inaccurate due to large round-off errors. Therefore, the resolution of the Colebrook equation via the Lambert function [@kea] is not efficient for the whole range of parameters of practical interest [@son]. The $\omega$-function {#secOfun} ===================== To overcome the numerical difficulties related to the Lambert ${\operatorname{W}}$-function, when used for solving the Colebrook–White equation, we introduce here a new function: the $\omega$-function. The $\omega$-function is defined such that it solves the equation $$\begin{aligned} \label{eqom} y\ +\ \ln(y)\ =\ x \qquad \Longrightarrow \qquad y\ =\ \omega(x),\end{aligned}$$ where we consider only real $x$. The $\omega$-function is related to the ${\operatorname{W}}$-function as $$\omega(x)\ =\ {\operatorname{W}}(\exp(x)).$$ Note that the Lambert ${\operatorname{W}}$-function is also sometimes called the Omega function, which should not be confused with the $\omega$-function defined here, where we follow the notation used in [@orc].
In terms of the $\omega$-function, the solution of (\[colgen\]) is of course $$\label{colgensolome} \frac{1}{\sqrt{\lambda}}\ =\ c_1\left[\,\omega\!\left(\,{c_0\over c_1}+{c_2\over c_1\/c_3}-\ln(c_1\/c_3)\right)\ -\ \frac{c_2}{c_1\/c_3}\,\right].$$ For large arguments $\omega(x)$ behaves like $x$, i.e. we have the asymptotic behavior $$\begin{aligned} \label{devOexp} \omega(x)\ \sim\ x\ -\ \ln(x) \qquad \text{as}\quad x\rightarrow\infty,\end{aligned}$$ which is an interesting feature for the application considered in this paper. As noted by Corless [*et al.*]{} [@ser], the equation (\[eqom\]) is in some ways nicer than (\[eqW\]). In particular, its derivatives (with respect to $y$) are simpler, leading thus to algebraically simpler formulae for its numerical resolution. An efficient iterative quartic scheme (c.f. Appendix \[appsoleq\]) is thus $$\label{solwexp} y_{j+1}\ =\ y_j \ -\ \frac{\left(\,1+y_j+{{\textstyle{1\over2}}}\/\epsilon_j\,\right)\epsilon_j\,y_j} {\left(\,1+y_j+\epsilon_j+{{\textstyle{1\over3}}}\/\epsilon_j^{\,2}\,\right)} \qquad \mathrm{for}\quad j\geqslant1,$$ with $$\epsilon_j\ \equiv\ \frac{y_j\,+\,\ln(y_j)\,-\,x}{1\,+\,y_j}, \qquad y_0\ =\ x\ -\ \frac{1}{5}.$$ The computationally costless initial guess ($y_0=x-{{\textstyle{1\over5}}}$) was obtained considering the asymptotic behavior (\[devWexp\]), minus an empirically found correction (the term $-{{\textstyle{1\over5}}}$) to improve somewhat the accuracy of $y_0$ for small $x$ without affecting the accuracy for large $x$. The relative error $e_j$ of the $j$-th iteration, i.e. $$e_j(x)\ \equiv\ \left|\,\frac{y_j(x)\,-\,\omega(x)}{\omega(x)}\,\right|,$$ is displayed on the figure \[figerrome\] for $1\leqslant x\leqslant10^6$ and $j=0,1,2$. (The accuracy of (\[solwexp\]) was measured using the arbitrary precision capability of [Mathematica${}^\copyright$]{}.)
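In Python, the scheme (\[solwexp\]) reads as follows (our transcription of the algorithm just described):

```python
import math

def omega(x, n_iter=2):
    """omega(x) = W(exp(x)) for x >= 1: solves y + ln(y) = x with the quartic
    iterations (solwexp), starting from y0 = x - 1/5.  Note that exp(x) is
    never formed, so there is no overflow for large x."""
    y = x - 0.2  # y0 = x - 1/5
    for _ in range(n_iter):
        eps = (y + math.log(y) - x) / (1.0 + y)
        y -= (1.0 + y + 0.5 * eps) * eps * y / (1.0 + y + eps + eps ** 2 / 3.0)
    return y
```

The default `n_iter=2` matches the observation below that two iterations already reach double precision on $x\in[1,\infty[$; each iteration costs a single logarithm.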
We can see that with $j=2$ we have already reached the maximum accuracy possible when computing in double precision, since $\max(e_2)\approx4\times10^{-17}$ for $x\in[1;\infty[$. We note that the relative error continues to decay monotonically as $x$ increases (even for $x>10^6$) and that there are no overflow problems when computing $y_j$ even for very large $x$ (i.e. $x\gg10^6$). We note also that for $x\gtrapprox5700$ the machine double precision is obtained after one iteration only. The scheme (\[solwexp\]) is quartic, meaning that the number of exact digits is multiplied by four after each iteration (c.f. Appendix \[appsoleq\]). Hence, starting with an initial guess with one correct digit, four digits are exact after one iteration and sixteen after two iterations. That is to say that the machine precision (if working in double precision) is achieved after two iterations only (Fig. \[figerrome\]). Moreover, the scheme (\[solwexp\]) has an algebraic complexity per iteration comparable to the scheme (\[halori\]), i.e. the computational times per iteration are almost identical. However, the iterative quartic scheme (\[solwexp\]) converges faster than the cubic one (\[halori\]), and there are no overflow problems such as those that appear when computing ${\operatorname{W}}(\exp(x))$ for large $x$. This algorithm could therefore be used to compute the solution of the Colebrook–White equation (\[col1\]), but we will use instead an even better one defined in the next section. We note in passing that the iterations (\[solwexp\]) are also efficient for computing the $\omega$-function for any complex $x$, provided some changes in the initial guess $y_0$ depending on $x$.\ [**Remarks:**]{} [*i*]{}- With a more accurate initial guess $y_0$, such as $y_0=x-\ln(x)$, the desired accuracy may be obtained with fewer iterations. However, the computation of such an improved initial guess requires the evaluation of transcendental functions.
Thus, it cannot be significantly faster than the evaluation of $y_1$ with (\[solwexp\]) from the simplest guess $y_0=x-{{\textstyle{1\over5}}}$, and is most likely less accurate. [*ii*]{}- Higher-order iterations are generally more involved per iteration than the low-order ones. Higher-order iterations are thus interesting if the number of iterations is sufficiently reduced so that the total computation needed to achieve the desired accuracy is faster. This is precisely the case here. [*iii*]{}- Intensive tests have convinced us that the choice of the simplest initial guess $y_0=x-{{\textstyle{1\over5}}}$ together with the quartic iterations (\[solwexp\]) is probably the best possible scheme for computing the $\omega$-function in the interval $x\in[1;\infty[$, at least when working in double precision. If improvements can be found, they are thus most likely very minor in terms of robustness, speed and accuracy. The $\varpi$-function {#secpifun} ===================== Solving the Colebrook equation via the $\omega$-function is a big improvement compared to its solution in terms of the Lambert ${\operatorname{W}}$-function. One can check that the numerical resolution of the Colebrook equation via the algorithm (\[solwexp\]) is indeed very efficient when $K=0$, even for very large $R$. However, when $K>0$ the scheme (\[solwexp\]) is not so effective for large $R$, meaning that not all the numerical shortcomings have been addressed by introducing the $\omega$-function. The cause for these numerical problems can be explained as follows. The solution of the Colebrook equation requires the computation of an expression like $\omega(x_1+x_2)-x_1$ where $x_1\gg x_2$ when $R$ is large and $K\neq0$ (but $x_1=0$ if $K=0$), see the relation (\[colsolome\]) below. Assuming $x_2\propto\ln(x_1)$, as is the case here, the asymptotic expansion as $x_1\rightarrow\infty$, i.e.
$$\omega(x_1+x_2)\,-\,x_1\ \sim\ (x_1+x_2-\ln(x_1))\,-\,x_1\ =\ x_2\,-\,\ln(x_1),$$ exhibits the source of the numerical problems. Indeed, when $K>0$ and $R$ is large, we have $x_1\gg x_2$ and $x_1\gg\ln(x_1)$. Therefore $|x_2-\ln(x_1)|/x_1$ can be smaller than the accuracy used in the computation and we thus obtain numerically $x_1+x_2-\ln(x_1)\approx x_1$ due to round-off errors. Hence $\omega(x_1+x_2)-x_1\approx0$ is computed instead of $\omega(x_1+x_2)-x_1\approx x_2-\ln(x_1)$. To overcome this problem we introduce yet another function: the $\varpi$-function. Introducing the change of variable $y=z+x_1$ into the equation (\[eqom\]), the $\varpi$-function is defined such that it solves the equation $$\begin{aligned} \label{eqpi} z\ +\ \ln(x_1+z)\ =\ x_2 \qquad \Longrightarrow \qquad z\ =\ \varpi(x_1\left|\,x_2\right.),\end{aligned}$$ where the $x_i$ are real. The $\varpi$-function is related to the $\omega$- and ${\operatorname{W}}$-functions as $$\varpi(x_1\left|\,x_2\right.)\ =\ \omega(x_1+x_2)\,-\,x_1\ =\ {\operatorname{W}}(\exp(x_1+x_2))\,-\,x_1.$$ In terms of the $\varpi$-function, the solution of (\[colgen\]) is obviously $$\label{colgensolvpi} \frac{1}{\sqrt{\lambda}}\ =\ c_1\,\varpi\!\left(\,{c_2\over c_1\/c_3}\left|\,{c_0\over c_1}-\ln(c_1\/c_3)\right.\right).$$ The $\varpi$-function is nothing more than the $\omega$-function shifted by the quantity $x_1$. This is a very minor analytic modification but a numerically significant improvement when $x_1$ is large. An efficient numerical algorithm for computing the $\varpi$-function is directly derived from the scheme (\[solwexp\]) used for the $\omega$-function.
We thus obtain at once $$\label{solvpi} z_{j+1}\ =\ z_j \ -\ \frac{\left(\,1+x_1+z_j+{{\textstyle{1\over2}}}\/\epsilon_j\,\right)\epsilon_j\,(x_1+z_j)} {\left(\,1+x_1+z_j+\epsilon_j+{{\textstyle{1\over3}}}\/\epsilon_j^{\,2}\,\right)} \qquad \mathrm{for}\quad j\geqslant0,$$ with $$\epsilon_j\ \equiv\ \frac{z_j\,+\,\ln(x_1+z_j)\,-\,x_2}{1\,+\,x_1\,+\,z_j}, \qquad z_0\ =\ x_2\ -\ \frac{1}{5}.$$ If $x_1=0$ the scheme (\[solwexp\]) is recovered. The rate of convergence of (\[solvpi\]) is of course identical to that of the scheme (\[solwexp\]). Thus, the efficiency of (\[solvpi\]) does not need to be re-discussed here (see section \[secOfun\]).

Resolution of the Colebrook–White equation {#secsolcol}
==========================================

We test the new procedure on the particular Colebrook–White equation (\[col1\]). Its general solution is $$\begin{aligned} \frac{1}{\sqrt{\lambda}} &=& \frac{2}{\ell}\left[\,{\operatorname{W}}\left(\exp\left(\frac{\ell\,K\, R}{18.574}\,+\,\ln\!\left(\frac{\ell\,R}{5.02}\right)\right)\right)\ -\ \frac{\ell\,K\, R}{18.574}\,\right]\label{colsollam} \\ &=&\frac{2}{\ell}\left[\,\operatorname{\omega}\!\left(\frac{\ell\,K\, R}{18.574}\,+\,\ln\!\left(\frac{\ell\,R}{5.02}\right)\right)\ -\ \frac{\ell\,K\, R}{18.574}\,\right]\label{colsolome}\\ &=&\frac{2}{\ell}\,\operatorname{\varpi}\!\left(\frac{\ell\,K\, R}{18.574}\,\left|\,\ln\!\left(\frac{\ell\,R}{5.02}\right)\right.\right),\label{colsolvpi}\end{aligned}$$ where $\ell=\ln(10)\approx2.302585093$. All these analytic solutions are mathematically equivalent, but the relation (\[colsolvpi\]) is the most efficient for numerical computations if we use the scheme described in the previous section.

Numerical procedure
-------------------

The solution of the Colebrook–White equation is obtained by computing the $\varpi$-function with $$x_1\ =\ \frac{\ell\,K\,R}{18.574}, \qquad x_2\ =\ \ln\!\left(\,\frac{\ell\,R}{5.02}\,\right),$$ and using the iterative scheme (\[solvpi\]) with $j=0,1,2$.
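For readers who wish to experiment outside [Matlab]{} or [FORTRAN]{}, the scheme (\[solvpi\]) is straightforward to transcribe. The following Python sketch is ours (purely illustrative, not one of the reference implementations given in the appendices); it computes $\varpi(x_1|x_2)$ with the initial guess $z_0=x_2-1/5$ and two quartic iterations:

```python
import math

def varpi(x1, x2, iters=2):
    """Solve z + log(x1 + z) = x2 by the quartic iteration (solvpi).

    Initial guess z0 = x2 - 1/5; two iterations already give near
    machine precision for arguments of practical interest.
    """
    z = x2 - 0.2
    for _ in range(iters):
        # epsilon_j of the scheme, evaluated at the current iterate.
        e = (z + math.log(x1 + z) - x2) / (1.0 + x1 + z)
        # One quartic update of (solvpi).
        z -= ((1.0 + x1 + z + 0.5 * e) * e * (x1 + z)
              / (1.0 + x1 + z + e * (1.0 + e / 3.0)))
    return z
```

By construction `varpi(0.0, x)` returns $\omega(x)$, and for very large $x_1$ the result remains accurate where the naive evaluation $\omega(x_1+x_2)-x_1$ would be destroyed by the round-off cancellation discussed above.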
An approximation of the friction factor is eventually $$\lambda_j\ \approx\ (\,\ell\,/\,2\,z_j\,)^2.$$ This way, the whole computation of $\lambda_j$ requires the evaluation of $j+1$ logarithms only,[^1] i.e. one logarithm per iteration. A [Matlab${}^\copyright$]{} implementation of this algorithm is given in the appendix \[appmat\]. This (vectorized) code was written with clarity in mind, so that one can easily test and modify the program. It is nevertheless fast, accurate and robust, so it can be used in intensive real-world applications developed in [Matlab]{}. A [FORTRAN]{} implementation of this algorithm is given in the appendix \[appfor\]. This program was written with speed in mind, so there are no checks of the input parameters. The code is clear enough that it should be easy to modify and to translate into any programming language.

Accuracy
--------

For the range of Reynolds numbers $10^3\leqslant R\leqslant 10^{13}$ and for four relative roughnesses $K=\{0,10^{-3},10^{-2},10^{-1}\}$, the accuracy of $\lambda_j^{-1/2}$ — obtained from the iterations (\[solvpi\]) with $j=\{0,1,2\}$ — and of Haaland’s approximation $\lambda_\text{H}^{-1/2}$ — given by (\[solhaa\]) — is compared with the exact friction coefficient $\lambda^{-1/2}$. The relative errors are displayed in figure \[figerrcole\]. It appears clearly that $\lambda_2^{-1/2}$ is accurate to machine double precision (at least) for all Reynolds numbers and for all roughnesses (in the whole range of physical interest, and beyond). It also appears that $\lambda_1^{-1/2}$ is more accurate than Haaland’s approximation, especially for large $R$ and $K$. Moreover, the computation of $\lambda_1$ requires the evaluation of only two logarithms, so it is faster than Haaland’s formula. Note that, since other explicit approximations have more or less the same accuracy as Haaland’s formula, $\lambda_1^{-1/2}$ is significantly more accurate than these approximations as well.
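These accuracy claims are easy to spot-check independently: one can chain $x_1$, $x_2$, two iterations of (\[solvpi\]) and $\lambda=(\ell/2z)^2$, and then evaluate the residual of the Colebrook formula directly. The Python sketch below is ours, for verification only (the reference implementations are in the appendices):

```python
import math

def colebrook(R, K=0.0, iters=2):
    """Friction factor from the varpi-based procedure of this paper."""
    ell = math.log(10.0)
    x1 = K * R * ell / 18.574
    x2 = math.log(ell * R / 5.02)
    z = x2 - 0.2                      # initial guess z0 = x2 - 1/5
    for _ in range(iters):            # quartic iterations (solvpi)
        e = (z + math.log(x1 + z) - x2) / (1.0 + x1 + z)
        z -= ((1.0 + x1 + z + 0.5 * e) * e * (x1 + z)
              / (1.0 + x1 + z + e * (1.0 + e / 3.0)))
    return (ell / (2.0 * z)) ** 2     # lambda = (l / 2z)^2
```

For instance, with $R=10^5$ and $K=10^{-3}$ the returned $\lambda$ satisfies $1/\sqrt{\lambda} = -2\log_{10}(K/3.7 + 2.51/(R\sqrt{\lambda}))$ to within round-off.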
Finally, we note that $\lambda_0^{-1/2}$ is too poor an approximation to be of any practical interest.

Speed
-----

Testing the actual speed of an algorithm is a delicate task because the running time depends on many factors independent of the algorithm itself (implementation, system, compiler, hardware, etc.), especially on multi-tasking and multi-user computers. In order to estimate the speed of our scheme as fairly as possible, the following methodology was used. The speeds of the computation of $\lambda_1$ and $\lambda_2$ are compared with that of the Haaland approximation $\lambda_\text{H}$. The [Matlab]{} environment and its built-in [cputime]{} function are used, for simplicity. Two vectors of $N$ components, with $1\leqslant N\leqslant10^5$, are created for $R$ and $K$. The values are chosen randomly in the intervals $10^3\leqslant R\leqslant10^9$ and $0\leqslant K<1$. The computational times are measured several times, the different procedures being called in different orders. For each value of $N$, the respective timings are averaged and divided by the averaged time used by the Haaland approximation (the latter thus having a relative computational time equal to one for all $N$). The results of this test are displayed in figure \[figspecole\]. (The whole procedure was repeated several times and the corresponding graphics were similar.) For small $N$, say $N<2000$, the computations are so fast that the function [cputime]{} cannot measure the times. For larger values of $N$, we can see in figure \[figspecole\] that the computations of $\lambda_1$ are a bit faster than the Haaland formula, while the computations of $\lambda_2$ are a bit slower, on average. This is in agreement with the number of evaluations of transcendental functions needed by each approximation, as mentioned above. These relative times may vary depending on the system, hardware and software, but we believe that the results would not be fundamentally different from the ones obtained here.
The important result is that the procedure presented in this paper is comparable, in terms of speed, to simplified formulae such as the Haaland approximation. The new procedure being much more accurate, it should thus be preferred.

Conclusion
==========

We have introduced a simple, fast, accurate and robust algorithm for solving the Colebrook equation. The formula used is the same for the whole range of the parameters. The accuracy is around machine double precision (about sixteen digits). The present algorithm is more efficient than the solution of the Colebrook equation expressed in terms of the Lambert ${\operatorname{W}}$-function, and than simple approximations such as the Haaland formula. We have also provided routines in [Matlab]{} and [FORTRAN]{} for its practical use. The algorithm is so simple that it can easily be implemented in any other language and can be adapted to any variant of the Colebrook equation. To derive the algorithm, we introduced two special functions: the $\omega$- and $\varpi$-functions. These functions could also be useful in other contexts than the Colebrook equation. The efficient algorithms introduced in this paper for their numerical computation could then be reused, perhaps with some modifications of the initial guesses, especially if high accuracy is needed.

High-order schemes for solving a single nonlinear equation {#appsoleq}
==========================================================

Consider a single nonlinear equation $f(y)=0$, where $f$ is a sufficiently regular given function and $y$ is unknown. This equation can be solved iteratively via the numerical scheme [@hou] $$\label{defhou} y_{j+1}\ =\ y_j\ +\ (p+1)\left[\frac{(1/f)^{(p)}}{(1/f)^{(p+1)}}\right]_{y=y_j},$$ where $p$ is a non-negative integer and $F^{(p)}$ denotes the $p$-th derivative of $F$, with $F^{(0)}=F$.
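As a quick sanity check of (\[defhou\]): taking $p=1$ and expanding the derivatives of $1/f$ yields Halley's method, $y_{j+1}=y_j-2\,f\,f'/(2\,f'^{\,2}-f\,f'')$. The toy Python snippet below (ours, purely illustrative and unrelated to the Colebrook routines) applies it to $f(y)=y^2-2$:

```python
import math

def halley(f, df, d2f, y, iters):
    """Scheme (defhou) with p = 1 (cubic convergence), explicit derivatives."""
    for _ in range(iters):
        fy, dfy, d2fy = f(y), df(y), d2f(y)
        # y <- y - 2 f f' / (2 f'^2 - f f'')
        y -= 2.0 * fy * dfy / (2.0 * dfy ** 2 - fy * d2fy)
    return y

# Toy example: f(y) = y^2 - 2, converging toward sqrt(2) from y0 = 1.
root = halley(lambda y: y * y - 2.0,
              lambda y: 2.0 * y,
              lambda y: 2.0,
              1.0, 4)
```

Starting from $y_0=1$, the iterates $1.4$, $1.4142132\ldots$ illustrate how the number of exact digits is roughly tripled at each step, as stated below for general $p$.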
The scheme (\[defhou\]) is of order $p+2$, meaning that the number of exact digits is roughly multiplied by $p+2$ after each iteration (when the procedure converges, of course). For $p=0$ and $p=1$, one obtains Newton’s and Halley’s schemes, respectively. The scheme (\[solwexp\]) for solving the Colebrook equation is obtained with $p=2$ together with the function $f$ given by the equation (\[eqom\]), plus some elementary algebra. Intensive tests have convinced us that it is most probably the best choice for the problem at hand.

MATLAB code {#appmat}
===========

The [Matlab${}^\copyright$]{} function below is a vectorized implementation of the algorithm described in this paper. This code can also be freely downloaded [@cla]. We hope that the program is sufficiently documented so that one can easily test and modify it.

    function F = colebrook(R,K)
    % F = COLEBROOK(R,K) fast, accurate and robust computation of the
    %     Darcy-Weisbach friction factor according to the Colebrook formula:
    %                             -                       -
    %      1                     |   K        2.51        |
    %  --------- = -2 * Log10    | ----- + -------------  |
    %   sqrt(F)                  |  3.7    R * sqrt(F)    |
    %                             -                       -
    % INPUT:
    %   R : Reynolds' number (should be > 2300).
    %   K : Equivalent sand roughness height divided by the hydraulic
    %       diameter (default K=0).
    %
    % OUTPUT:
    %   F : Friction factor.
    %
    % FORMAT:
    %   R, K and F are either scalars or compatible arrays.
    %
    % ACCURACY:
    %   Around machine precision for all R > 3 and for all 0 <= K,
    %   i.e. in an interval exceeding all values of physical interest.
    %
    % EXAMPLE: F = colebrook([3e3,7e5,1e100],0.01)

    % Check for errors.
    if any(R(:)<=0) == 1,
       error('The Reynolds number must be positive (R>2000).');
    end,
    if nargin == 1, K = 0; end,
    if any(K(:)<0) == 1,
       error('The relative sand roughness must be non-negative.');
    end,

    % Initialization.
    X1 = K .* R * 0.123968186335417556;  % X1 <- K * R * log(10) / 18.574.
    X2 = log(R) - 0.779397488455682028;  % X2 <- log( R * log(10) / 5.02 );

    % Initial guess.
    F = X2 - 0.2;                        % F <- X2 - 1/5;

    % First iteration.
    E = ( log(X1+F) + F - X2 ) ./ ( 1 + X1 + F );
    F = F - (1+X1+F+0.5*E) .* E .* (X1+F) ./ (1+X1+F+E.*(1+E/3));

    % Second iteration (remove the next two lines for moderate accuracy).
    E = ( log(X1+F) + F - X2 ) ./ ( 1 + X1 + F );
    F = F - (1+X1+F+0.5*E) .* E .* (X1+F) ./ (1+X1+F+E.*(1+E/3));

    % Finalized solution.
    F = 1.151292546497022842 ./ F;       % F <- 0.5 * log(10) / F;
    F = F .* F;                          % F <- Friction factor.

FORTRAN code {#appfor}
============

The FORTRAN function below was written with maximum speed in mind, so some trivial arithmetic simplifications were used and there are no checks for errors in the input parameters.

          DOUBLE PRECISION FUNCTION COLEBROOK(R,K)
    C     F = COLEBROOK(R,K) computes the Darcy-Weisbach friction
    C     factor according to the Colebrook-White formula.
    C
    C     R : Reynold's number.
    C     K : Roughness height divided by the hydraulic diameter.
    C     F : Friction factor.
          IMPLICIT NONE
          DOUBLE PRECISION R, K, F, E, X1, X2, T
          PARAMETER ( T = 0.333333333333333333D0 )
    C     Initialization.
          X1 = K * R * 0.123968186335417556D0
          X2 = LOG(R) - 0.779397488455682028D0
    C     Initial guess.
          F = X2 - 0.2D0
    C     First iteration.
          E = (LOG(X1+F)-0.2D0) / (1.0D0+X1+F)
          F = F - (1.0D0+X1+F+0.5D0*E)*E*(X1+F) / (1.0D0+X1+F+E*(1.0D0+E*T))
    C     Second iteration (if needed).
          IF ((X1+X2).LT.(5.7D3)) THEN
             E = (LOG(X1+F)+F-X2) / (1.0D0+X1+F)
             F = F - (1.0D0+X1+F+0.5D0*E)*E*(X1+F) / (1.0D0+X1+F+E*(1.0D0+E*T))
          ENDIF
    C     Finalized solution.
          F = 1.151292546497022842D0 / F
          COLEBROOK = F * F
          RETURN
          END

Note that, depending on the FORTRAN version and on the compiler, the command [LOG]{} may have to be replaced by [DLOG]{} to ensure that the logarithm is computed with double-precision accuracy.

1991\. Dynamic two-fluid model OLGA. Theory and application. [*SPE Prod. Engin.*]{} [**6**]{}, 171–180. 2008\. [colebrook.m]{}. [Matlab]{} Central File Exchange. 1996\. On the Lambert W function. [*Adv. Comput. Math.*]{} [**5**]{}, 329–359. 1997\. A sequence of series for the Lambert W function. [*Proc. Int. Symp. Symb. Alg.
Comp., Maui, Hawaii.*]{} ACM Press, 197–204. 2005\. [lambertw.m]{}. [Matlab]{} Central File Exchange. 1983\. Simple and explicit formulas for the friction factor in turbulent pipe flow. [*J. Fluids Eng.*]{} [**105**]{}, 89–90. 1970\. [*The Numerical Treatment of a Single Nonlinear Equation.*]{} McGraw-Hill. 1998\. Colebrook–White formula for pipe flows. [*J. Hydr. Engrg.*]{} [**124**]{}, [**1**]{}, 96–97. 2002\. Improved explicit equations for estimation of the friction factor in rough and smooth pipes. [*Chem. Eng. J.*]{} [**86**]{}, [**3**]{}, 369–374. 2004\. Constraints for using Lambert W function-based explicit Colebrook–White equation. [*J. Hydr. Engrg.*]{} [**130**]{}, [**9**]{}, 929–931. 2007\. Explicit reformulation of the Colebrook–White equation for turbulent flow friction factor calculation. [*Ind. Eng. Chem. Res.*]{} [**46**]{}, 2593–2600.

[Dotted red line: $e_0$; Dashed blue line: $e_1$; Solid green line: $e_2$.]{}

[Dotted red line: $\lambda_0^{-{1\over2}}$; Dashed blue line: $\lambda_1^{-{1\over2}}$; Solid green line: $\lambda_2^{-{1\over2}}$; Dashed-dotted black line: $\lambda_\text{H}^{-{1\over2}}$ (Haaland’s approximation).]{}

[^1]: The numerical constant $\ln(10)$ is not counted because it can be explicitly given in the program and does not need to be computed each time.
---
abstract: 'We discuss the value of the cosmological constant as recovered from CMB and LSS data and the robustness of the results when general isocurvature initial conditions are allowed for, as opposed to purely adiabatic perturbations. The Bayesian and frequentist statistical approaches are compared. It is shown that pre-WMAP CMB and LSS data tend to be incompatible with a vanishing cosmological constant, regardless of the type of initial conditions and of the statistical approach. The non-adiabatic contribution is constrained to be $\leq 40\%$ ($2\sigma$ c.l.).'
address: 'Département de Physique Théorique, Université de Genève, 24 quai Ernest Ansermet, CH–1211 Genève 4, Switzerland'
author:
- Roberto Trotta
title: The cosmological constant and the paradigm of adiabaticity
---

Cosmic microwave background, cosmological constant, initial conditions 98.70Vc, 98.80Hw, 98.80Cq

Introduction
============

There are now at least 5 completely independent observations which consistently point toward a majority of the energy-density of the Universe being in the form of a “cosmological constant”, ${\Omega_{\Lambda}}$. Those observations are: cosmic microwave background anisotropies (CMB), large scale structure (LSS), supernovae type Ia, strong and weak gravitational lensing. The very nature of this mysterious component remains unknown, and the so called “smallness problem” (i.e. why ${{\mathcal O}}({{\Omega_{\Lambda}}}) \sim 1$ and not ${\Omega_{\Lambda}}\gsim 10^{58}$ as expected from particle physics arguments) is still unsolved. It is therefore important to test the robustness of results indicating a non-vanishing cosmological constant with respect to non-standard physics. One possible extension of the “concordance model” is given by non-adiabatic initial conditions for the cosmological perturbations, [i.e. ]{}isocurvature modes. Another test is the use of a different statistical approach than the usual Bayesian one, namely the frequentist method.
We discuss these points in the next section, and present their application to the cosmological constant problem in section 3. Section 4 is dedicated to our conclusions.

Testing the assumption of adiabaticity
======================================

Statistics
----------

Most of the recent literature on cosmological parameter estimation uses [*Bayesian inference*]{}: the Maximum Likelihood (ML) principle states that the best estimate for the unknown parameters is the one which maximizes the likelihood function. Therefore, in the grid-based method, one usually minimizes the $\chi^2$ over the parameters in which one is not interested. Then one defines $1 {\sigma}$, $2 {\sigma}$ and $3 {\sigma}$ likelihood contours around the best fit point as the locus of models within $\Delta \equiv \chi^2 - \chi^2_{\rm ML} = 2.30$, $6.18$, $11.83$ away from the ML value for the joint likelihood in two parameters, and $\Delta = 1$, $4$, $9$ for the likelihood in only one parameter. Based on Bayes’ theorem, likelihood intervals measure our degree of belief that the particular set of observations used in the analysis is generated by a parameter set belonging to the specified interval [@statistics]. Since Bayesian likelihood contours are drawn with respect to the ML point, if the best fit value of the $\chi^2$ is much lower than what one would expect statistically for Gaussian variables ([i.e. ]{}$\chi^2/F \approx 1$, where $F$ denotes the number of degrees of freedom, dof), Bayesian contours will underestimate the real errors. The grid-based parameter estimation method can however be used for a determination of true exclusion regions ([*frequentist approach*]{}). The Bayesian and frequentist methods can give quite different errors on the parameters, since the meaning of the confidence intervals is different. The frequentist approach answers the question: what is the probability of obtaining the experimental data at hand, if the Universe has some given cosmological parameters?
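The threshold values $\Delta$ quoted above correspond to the usual 68.3%, 95.4% and 99.7% quantiles of the chi-square distribution with one and two degrees of freedom; this is easy to verify numerically. An illustrative Python check (with helper names of our own choosing):

```python
import math

def chi2_cdf_1dof(x):
    # P(chi^2_1 <= x), via the error function.
    return math.erf(math.sqrt(x / 2.0))

def chi2_cdf_2dof(x):
    # P(chi^2_2 <= x): the 2-dof chi-square CDF is 1 - exp(-x/2).
    return 1.0 - math.exp(-x / 2.0)

# One-parameter thresholds Delta = 1, 4, 9 give the 1, 2, 3 sigma levels.
levels_1d = [chi2_cdf_1dof(d) for d in (1.0, 4.0, 9.0)]
# Two-parameter thresholds Delta = 2.30, 6.18, 11.83 give the same levels.
levels_2d = [chi2_cdf_2dof(d) for d in (2.30, 6.18, 11.83)]
```

Both lists evaluate to approximately (0.683, 0.954, 0.997), confirming the quoted $\Delta$ values.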
To the extent to which the $C_\ell$’s can be approximated as Gaussian variables, the quantity $\chi^2$ is distributed according to a chi-square probability distribution with $F = N - M$ dof, where $N$ is the number of independent (uncorrelated) experimental data points and $M$ is the number of fitted parameters. Since the chi-square distribution, $P^{(F)}$, is well known, one can readily estimate [*confidence intervals*]{} by finding the quantile of $P^{(F)}$ for the chosen (1-tail) confidence level. The exclusion regions obtained in this way do not rely on the ML point. On the other hand, they are rigorously correct only if the assumption of Gaussianity holds and the number of dof is precisely known. In general one should keep in mind that frequentist contours are less stringent than likelihood (Bayesian) contours.

Dependence on initial conditions
--------------------------------

CMB anisotropies are sensitive not only to the matter-energy content of the universe, but also to the type of initial conditions (IC) for cosmological perturbations. Initial conditions are set at very early times, and determining them gives precious hints about the type of physical process which produced them. In the context of the inflationary scenario, the type of IC is related to the number of scalar fields in the very early universe and to their masses. For instance, the simplest inflationary model, namely with only one scalar field, predicts adiabatic (AD) initial conditions. In this case, the initial density contrast for all components (baryons, CDM, photons and neutrinos) is the same, up to a constant: $${\frac{\delta \rho_{b}}{\rho_{b}}} = {\frac{\delta \rho_{c}}{\rho_{c}}} = \frac{3}{4}{\frac{\delta \rho_{\gamma}}{\rho_{\gamma}}} = \frac{3}{4} {\frac{\delta \rho_{\nu}}{\rho_{\nu}}} \equiv \Delta_{AD} \qquad \text{(AD).}$$ This excites a cosine oscillatory mode in the photon-baryon fluid, which induces a first peak at $\ell \approx 220$ in the angular power spectrum for a flat universe.
Another possibility is CDM isocurvature initial conditions. Then the total energy-density perturbation vanishes (setting ${\frac{\delta \rho_{b}}{\rho_{b}}}={\frac{\delta \rho_{\nu}}{\rho_{\nu}}}=0$ without loss of generality): $${\frac{\delta \rho_{{{\rm tot}}}}{\rho_{{{\rm tot}}}}} = {\frac{\delta \rho_{c}}{\rho_{c}}} + {\frac{\delta \rho_{\gamma}}{\rho_{\gamma}}} = 0 \qquad \text{(CDM ISO)}$$ and therefore the gravitational potential $\Psi$ is approximately zero as well (“isocurvature”). CDM isocurvature IC excite a sine oscillation, and the resulting first peak in the power spectrum is displaced to $\ell \approx 330$. Generation of isocurvature initial conditions requires the presence of (at least) a second light scalar field during inflation. The observation of the first peak at $\ell = 220.1 \pm 0.8$ [@Page03] has ruled out the possibility of pure CDM isocurvature initial conditions. However, a subdominant isocurvature contribution to the prevalent adiabatic mode cannot be excluded. Besides AD and CDM isocurvature, the complete set of IC for a fluid consisting of photons, neutrinos, baryons and dark matter in general relativity comprises three more modes [@BMT]. These are the baryon isocurvature mode (BI), the neutrino isocurvature density (NID) and neutrino isocurvature velocity (NIV) modes. These five modes are the only regular ones, [i.e. ]{}they do not diverge at early times. The NID mode can be understood as a neutrino entropy mode, while the NIV consists of vanishing density perturbations for all fluids but non-zero velocity perturbations between fluids. The CDM and BI modes are identical, and therefore it suffices to consider only one of them. In the most general scenario, one would expect all four modes to be present with arbitrary initial amplitudes and arbitrary correlations or anti-correlations, with the restriction that their superposition must be a positive quantity.
For simplicity we consider the case where all modes have the same spectral index, $n_{{\rm S}}$. The most general initial conditions are then described by the spectral index $n_{{\rm S}}$ and a positive semi-definite $4 \times 4$ matrix, which amounts to eleven parameters instead of two in the case of pure AD initial conditions. More details can be found in Refs. [@TRD1; @TRD2]. The CMB and matter power spectra for the different types of initial conditions are plotted in Fig. \[fig:power\_spectra\].

The matter power spectrum
-------------------------

![Joint Bayesian likelihood contours for the baryon density $\omega_b$ and the Hubble parameter $h$, using pre-WMAP CMB data only. The tighter contours (shades of green) assume purely AD initial conditions, the wider contours (yellow/shades of red) include general isocurvature IC (from Ref. [@TRD1]).[]{data-label="fig:TRD1"}](both_trd1_b.eps){width="5.5cm"}

Inclusion of general initial conditions in the analysis can lead to very important degeneracies in the IC parameter space, which spoil the accuracy with which other cosmological parameters can be measured by the CMB alone. This has been demonstrated in a striking way for the case of the Hubble parameter and the baryon density in Ref. [@TRD1], [cf]{} Fig. \[fig:TRD1\]. An effective way to break this degeneracy is the inclusion of large scale structure (LSS) data. The key point is that, once the corresponding CMB power spectrum amplitude has been COBE-normalized, the amplitude of the AD matter power spectrum is nearly two orders of magnitude larger than [*any*]{} of the isocurvature contributions ([cf]{} Fig. \[fig:power\_spectra\]). Therefore the matter power spectrum essentially measures the adiabatic part, and is nearly insensitive to isocurvature contributions. The argument holds true for observations of the matter spectrum on all scales, ranging from large scale structure to weak lensing and Lyman-$\alpha$ clouds.
In view of optimally constraining the isocurvature content, it is therefore essential to combine those observations with CMB data, in order to break the strong degeneracy among initial conditions which is present in the CMB power spectrum alone [@Tprep].

![CMB (left) and matter (right) power spectra of the different auto- (odd panels) and cross-correlators (even panels) for the standard $\Lambda$CDM concordance model. The CMB power spectrum is COBE-normalized. The color and line style codes are as follows: in the odd panels, AD: solid/black line, CI: dotted/green line, NID: short-dashed/red line, NIV: long-dashed/blue line; in the even panels, AD: solid/black line (for comparison), $<{{\rm AD}},{{\rm CI}}>$: long-dashed/magenta line, $<{{\rm AD}},{{\rm NID}}>$: dotted/green line, $<{{\rm AD}},{{\rm NIV}}>$: short-dashed/red line, $<{{\rm CI}},{{\rm NID}}>$: dot-short dashed/blue line, $<{{\rm CI}},{{\rm NIV}}>$: dot-long dashed/light-blue line, and $<{{\rm NID}},{{\rm NIV}}>$: dot-short dashed/black line.[]{data-label="fig:power_spectra"}](CMB_AUTOCORR.ps "fig:"){width="3.0cm"} ![](CMB_CROSSCORR.ps "fig:"){width="3.0cm"} ![](PS_AUTOCORR.ps "fig:"){width="3.0cm"} ![](PS_CROSSCORR.ps "fig:"){width="3.0cm"}

The cosmological constant and isocurvature IC
=============================================

We apply the above statistical (Bayesian or frequentist) and physical (general initial conditions, matter power spectrum) considerations to the study of the cosmological constant problem from pre-WMAP data. We outline the method and the main results below (see Ref. [@TRD2] for more details) and comment at the end on the qualitative impact of the new WMAP data on those findings. Our analysis makes use of the COBE, BOOMERanG and Archeops data [@CMBdata], covering the range $3 \leq \ell \leq 1000$ in the CMB power spectrum. For the matter power spectrum, we use the galaxy-galaxy linear power spectrum from the 2dF data [@2dFdata], and we assume that light traces mass up to a (scale-independent) bias factor $b$, over which we maximise. The main focus being on the type of initial conditions, we restrict our analysis to only 3 cosmological parameters: the scalar spectral index, $n_S$, the cosmological constant ${\Omega_{\Lambda}}$ in units of the critical density and the Hubble parameter, $H_0 = 100 \,h { {\;\mathrm{km}^{}} } { {\;\mathrm{s}^{-1}} } { {\;\mathrm{Mpc}^{-1}} }$. We consider flat universes only and neglect gravitational waves. When we set to zero the isocurvature modes, we recover the well-known results for purely AD perturbations. Because of the “geometrical degeneracy”, CMB alone cannot put very tight lower limits on ${\Omega_{\Lambda}}$, even if we allow only for flat universes. The degeneracy can be broken either by putting an external prior on $h$ or via the LSS spectrum, since $P_{{\rm m}}$ is mainly sensitive to the shape parameter $\Gamma \equiv {\Omega}_{{\rm m}}h$.
Combination of CMB and LSS data yields the following likelihood (Bayesian) intervals for ${\Omega_{\Lambda}}$: $${\Omega_{\Lambda}}= 0.70 {_{-0.05}^{+0.05}} \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {_{-0.27}^{+0.15}} \mbox{ at $3 {\sigma}$}.$$ From the Bayesian analysis, one concludes that CMB and LSS together with purely AD initial conditions require a non-zero cosmological constant at very high significance, more than $7 {\sigma}$ for the points in our grid! However, our best fit has a reduced chi-square $\chi^2/F= 0.59$, significantly less than $1$. This leads to artificially tight likelihood regions: the observationally excluded part of parameter space is less extended, and is given by the frequentist analysis. From the frequentist approach, we obtain instead the following confidence intervals: $$0.15 < {\Omega_{\Lambda}}< 0.90 \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {\Omega_{\Lambda}}< 0.92 \mbox{ at $3 {\sigma}$}.$$

![Bayesian (dashed lines) and frequentist (solid, filled) joint $1 {\sigma}$, $2 {\sigma}$, $3 {\sigma}$ contours using pre-WMAP CMB and 2dF data. The left panel assumes purely adiabatic IC, the right panel includes general isocurvature IC.[]{data-label="fig:ADGI"}](AD.eps "fig:"){width="5.5cm"} ![](AM.eps "fig:"){width="5.5cm"}

When we enlarge the space of models by including all possible isocurvature modes, likelihood (Bayesian) and confidence (frequentist) contours widen up along the ${\Omega_{\Lambda}}$–$h$ degeneracy, and this produces a considerable worsening of the likelihood limits.
For general initial conditions we now find (Bayesian, CMB and LSS together): $${\Omega_{\Lambda}}= 0.70 {_{-0.10}^{+0.15}} \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {_{-0.48}^{+0.25}} \mbox{ at $3 {\sigma}$}.$$ Again, the frequentist statistics give less tight bounds: $${\Omega_{\Lambda}}< 0.90 \mbox{ at $1 {\sigma}$ $\quad$ and $\quad$} {\Omega_{\Lambda}}< 0.95 \mbox{ at $3 {\sigma}$},$$ and in particular we cannot place any lower limit on the value of the cosmological constant. A complete discussion can be found in Ref. [@TRD2]. Joint likelihood contours for ${\Omega_{\Lambda}}$, $h$ with AD and general isocurvature initial conditions are plotted in Fig. \[fig:ADGI\] for both statistical approaches. From the frequentist point of view, the region in the ${\Omega_{\Lambda}}, h$ plane which is incompatible with the data at more than $3 {\sigma}$ is nearly independent of the choice of initial conditions (compare the left and right panels of Fig. \[fig:ADGI\]). Enlarging the space of initial conditions thus does not appreciably help in fitting pre-WMAP data with or without a cosmological constant. In Fig. \[fig:AM\_OL0\] we plot the best fit model (which has $\chi^2/F= 0.67$) with general initial conditions and ${\Omega_{\Lambda}}= 0$. As a consequence of the red spectral index ($n_S=0.80$) and of the absence of the early integrated Sachs-Wolfe effect (since ${\Omega_{\Lambda}}=0$), the best fit model has a very low first acoustic peak, even in the presence of isocurvature modes. This is compatible with the BOOMERanG and Archeops data only if the absolute calibration of the experiments is reduced by $28\%$ and $12\%$, respectively. Furthermore, this best fit model has a rather low value of the Hubble parameter, $h=0.35$, which is many sigmas away from the value obtained by the HST Key Project, namely $h=0.72 \pm 0.08$ [@HST].
We conclude that a good fit to the pre-WMAP CMB data combined with LSS measurements can only be obtained at the price of pushing the other parameters hard, even when general initial conditions are allowed for.

![Best fit with general IC and ${\Omega_{\Lambda}}= 0$, combining pre-WMAP CMB (left) and 2dF (right) data. In both panels solid/black is the total spectrum, long-dashed/red the purely AD contribution, short-dashed/green the sum of the pure isocurvature modes, dotted/magenta the sum of the correlators (multiplied by $-1$ in the left panel and in absolute value in the right panel).[]{data-label="fig:AM_OL0"}](AM_OL0_CMB.ps "fig:"){width="5.5cm"} ![](AM_OL0_PS.ps "fig:"){width="5.5cm"}

Finally, in order to constrain deviations from perfect adiabaticity, it is interesting to limit quantitatively the isocurvature contribution. To this end, one can phenomenologically quantify the isocurvature contribution to the CMB power by a parameter $0 \leq \beta \leq 1$, defined in Ref. [@TRD2], so that purely AD IC are characterized by $\beta=0$, while purely isocurvature IC correspond to $\beta=1$. In Fig. \[fig:BETA\_SHADE\] we plot the value of $\beta$ for the best fit models, with the frequentist exclusion regions superimposed. Within $2\sigma$ c.l. (frequentist), the isocurvature contribution to the IC is bounded to be less than $40\%$.

![Isocurvature content $0.0 \leq \beta \leq 1.0$ of best fit models with pre-WMAP CMB and 2dF data. The contours are for $\beta = 0.20, 0.40, 0.60, 0.80$ from the center to the outside.
Shaded regions represent 1 to 3 $\sigma$ c.l.[]{data-label="fig:BETA_SHADE"}](BETA_SHADE_lr.eps){width="5.5cm"} Although a quantitative analysis using the more precise WMAP data has not yet been carried out, some qualitative features of the expected results can be discussed. In particular, the first peak has been measured by WMAP to be 10% higher than in previous observations [@WMAP]. On the other hand, our work indicates that the first peak is strongly suppressed even in the presence of general IC for ${\Omega_{\Lambda}}=0$. Therefore one expects that WMAP data will exclude a vanishing cosmological constant with much higher confidence. In fact, our pre-WMAP best fit ${\Omega_{\Lambda}}=0$ model, when compared to the WMAP data [@WMAP], has $\chi^2_{WMAP}/F \approx 4.4$, and is therefore found to be totally incompatible with the new data. Furthermore, the constraints on non-adiabatic contributions should improve considerably, especially in view of the inclusion of polarization data [@BMTpol]. Conclusions =========== We have shown that the statistical approach (Bayesian or frequentist) can have an important impact on the determination of errors from CMB and LSS data. We found that structure formation data tend to prefer a non-zero cosmological constant even if general isocurvature initial conditions are allowed for. The isocurvature contribution is constrained to be $\leq 40\%$ at $2\sigma$ c.l. (frequentist). Acknowledgments {#aknowledgments .unnumbered} ============== It is a pleasure to thank Alessandro Melchiorri and all the organizers of the workshop. I am also grateful to Alain Riazuelo and Ruth Durrer for a most pleasant collaboration. RT is partially supported by the Schmidheiny Foundation, the Swiss National Science Foundation and the European Network CMBNET. [99]{} G.J. Feldman and R.D. Cousins, Phys. Rev. D [**57**]{}, 3873-3889 (1998); A.G. Frodesen, O. Skjeggestad and H.
Tofte, [*Probability and Statistics in Particle Physics*]{} (Universitetsforlaget, Bergen-Oslo-Tromso 1979); M.G. Kendall and A. Stuart, [*The advanced theory of statistics, Vol. 2*]{}, 4th ed. (High Wycombe, London, 1977). L. Page [[*et al.*]{}]{}, preprint [astro-ph/0302220]{} (2003). M. Bucher, K. Moodley, and N. Turok, Phys. Rev. D [**62**]{}, 083508 (2000). R. Trotta, A. Riazuelo, and R. Durrer, Phys. Rev. Lett. [**87**]{}, 231301 (2001). R. Trotta, A. Riazuelo, and R. Durrer, Phys. Rev. D [**67**]{}, 063520 (2003). R. Trotta [[*et al.*]{}]{}, in preparation. G.F. Smoot [[*et al.*]{}]{}, ApJ [**396**]{}, L1 (1992); C.L. Bennett [[*et al.*]{}]{}, ApJ [**430**]{}, 423 (1994); M. Tegmark and A.J.S. Hamilton, in [*18th Texas Symposium on relativistic astrophysics and cosmology*]{}, edited by A.V. Olinto [[*et al.*]{}]{}, pp. 270 (World Scientific, Singapore, 1997); C.B. Netterfield [[*et al.*]{}]{}, ApJ [**571**]{}, 604 (2002); A. Benoît [[*et al.*]{}]{}, preprint [astro-ph/0210306]{}. M. Tegmark, A. Hamilton and Y. Xu, Mon. Not. R. Astron. Soc. (accepted), preprint [astro-ph/0111575]{} (2001). W. Freedman [[*et al.*]{}]{}, ApJ [**553**]{}, 47 (2001). G. Hinshaw [[*et al.*]{}]{}, preprint [astro-ph/0302217]{} (2003); L. Verde [[*et al.*]{}]{}, preprint [astro-ph/0302218]{} (2003); A. Kogut [[*et al.*]{}]{}, preprint [astro-ph/0302213]{} (2003). M. Bucher, K. Moodley, and N. Turok, Phys. Rev. Lett. [**87**]{}, 191301 (2001); M. Bucher, K. Moodley, and N. Turok, Phys. Rev. D [**66**]{}, 023528 (2002).
--- abstract: 'To study the impact of active galactic nuclei (AGN) feedback on the galactic ISM, we present Magellan long-slit spectroscopy of 12 luminous nearby type 2 AGN ([$L_{\mathrm{bol}}$]{}$\sim 10^{45.0-46.5}$ [erg s$^{-1}$]{}, $z\sim0.1$). These objects are selected from a parent sample of spectroscopically identified AGN to have high [[\[O [III]{}\]]{}$\lambda$5007]{} and *WISE* mid-IR luminosities and extended emission in the SDSS [*r*]{}-band images, suggesting the presence of extended [[\[O [III]{}\]]{}$\lambda$5007]{} emission. We find spatially resolved [\[O [III]{}\]]{} emission (2-35 kpc from the nucleus) in 8 out of 12 of these objects. Combined with samples of higher luminosity type 2 AGN, we confirm that the size of the narrow-line region ([$R_{\mathrm{NLR}}$]{}) scales with the mid-IR luminosity until the relation flattens at $\sim$ 10 kpc. Nine out of 12 objects in our sample have regions with broad [\[O [III]{}\]]{} linewidths ([$w_{80}$]{} $>600$ [km s$^{-1}$]{}), indicating outflows. We define these regions as the kinematically-disturbed region (KDR). The size of the KDR ([$R_{\mathrm{KDR}}$]{}) is typically smaller than [$R_{\mathrm{NLR}}$]{} by a few kpc but also correlates strongly with the AGN mid-IR luminosity. Given the unknown density in the gas, we derive a wide range in the energy efficiency $\eta=\dot{E}/{\ensuremath{L_{\mathrm{bol}}}}=0.01\% - 30\%$. We find no evidence for an AGN luminosity threshold below which outflows are not launched. To explain the sizes, velocity profiles, and high occurrence rates of the outflows in the most luminous AGN, we propose a scenario in which energy-conserving outflows are driven by AGN episodes with $\sim 10^8$-year durations. Within each episode the AGN flickers on shorter timescales, with a cadence of $\sim 10^6$ year active phases separated by $\sim 10^7$ years.' author: - 'Ai-Lei Sun, Jenny E. Greene, Nadia L.
Zakamska' bibliography: - 'local.bib' title: 'Sizes and Kinematics of Extended Narrow-Line Regions in Luminous Obscured AGN Selected By Broadband Images' --- Introduction ============ Feedback from active galactic nuclei (AGN) is a key ingredient in modern models of galaxy evolution [@Silk1998; @Springel2005]. It has been invoked to regulate star formation in massive galaxies [e.g., @Croton2006; @Bower2006], while the tight correlation between the supermassive black hole (SMBH) masses and their host galaxy properties [@Gebhardt2000; @Ferrarese2000; @Sun2013a; @McConnell2013] also suggests that feedback processes enforce the coevolution between SMBHs and galaxies [@DiMatteo2005; @DeBuhr2010; @Somerville2008]. Supporting evidence for AGN feedback comes from observations of AGN outflows in both local and distant AGN. These galactic outflows have a multi-phase structure, ranging from cold molecular [@Feruglio2010; @Sturm2011; @Veilleux2013; @Sun2014; @Cicone2014] to warm atomic and ionized gas [@Alexander2010; @Greene2011; @Maiolino2012; @Davis2012; @Rupke2013; @Cano-Diaz2012], and could be related to nuclear X-ray emitting outflows [@Gofford2013; @Tombesi2015]. While we now have empirical evidence that AGN do host outflows, many questions remain about how these outflows are driven, for example by jets, winds, or radiation pressure, and whether and how the outflow properties depend on the AGN luminosity. The warm ionized component of the outflow ($T\sim 10^4$ K) emits strong forbidden emission lines, in particular [[\[O [III]{}\]]{}$\lambda$5007]{}, which makes it possible to detect and resolve AGN outflows via optical spectroscopy, particularly at low redshift. At redshifts $\lesssim 0.5$, high velocity [\[O [III]{}\]]{} features indicative of outflows are commonly found in luminous AGN using spectroscopic surveys [e.g., @Greene2005; @Mullaney2013; @Zakamska2014; @Woo2016; @Harrison2016].
Spatially resolved studies using long-slit and IFU spectroscopy have also identified a number of extended ionized outflows (few - 10 kpc) in luminous AGN ([$L_{\mathrm{bol}}$]{}$\gtrsim 10^{46}$ [erg s$^{-1}$]{}) at these redshifts [@Greene2012; @Liu2013b; @Liu2013; @Liu2014a; @Harrison2014; @Hainline2014d], particularly among obscured type 2 AGN [@Zakamska2003; @Reyes2008], where the occultation of the active nucleus makes it easier to detect emission lines from the extended ionized nebula. There are other studies that find outflows of much smaller sizes ($\lesssim 2$ kpc) and lower occurrence rates in samples with a wider range of luminosities and diverse types of AGN [e.g., ULIRG/Seyfert, type 1 and 2 AGN, @RodriguezZaurin2013; @Husemann2013; @Husemann2015; @Karouzos2016]. The size of the outflow could indeed be strongly dependent on the luminosity of the AGN [@Karouzos2016]. Furthermore, these studies do not use uniform definitions of size. Some are based on intensities while others are based on kinematics. To understand the discrepancy between these results and to have a comprehensive picture of AGN outflow sizes, it is important to measure the AGN luminosity and to have a quantitative definition of the outflow size that reflects the extent of the kinematically disturbed region (KDR). Compared to spatially resolved spectroscopy, broadband photometry could provide a much more efficient way to search for candidate extended outflows. Since the [[\[O [III]{}\]]{}$\lambda$5007]{} line in obscured luminous AGN is bright and has high enough EW to be detectable in broadband images, optical photometric surveys, such as the Sloan Digital Sky Survey [SDSS; @York2000], have been used to find [[\[O [III]{}\]]{}$\lambda$5007]{} emission in extended narrow-line regions [e.g., @Keel2012; @Schirmer2013; @Davies2015]. However, not all the extended narrow-line regions have disturbed kinematics.
Some luminous AGN are capable of ionizing gas out to tens of kpc from the host galaxy [@Fu2008; @Villar-Martin2010], including gas in small companion galaxies and tidal debris left from a prior galaxy interaction, thus creating extended ionized regions that are kinematically quiescent. For this reason, we also need spectroscopy to confirm the kinematic state of the extended gas and identify outflows. To test if broadband images can help identify extended outflows, in this paper we select a sample of 12 SDSS-identified luminous obscured (type 2) AGN based on their extended emission in the broad band images. We observe them with Magellan IMACS long-slit spectroscopy to measure the extent and kinematic state of the ionized gas. We study the outflow occurrence rate, and constrain the outflow properties, including the sizes, velocities, and energetics, as well as their dependence on the AGN luminosity. In future work we will examine the correspondence between the broadband images and the ionized gas nebula, and evaluate the performance of the extended outflow selection. In Section \[sec:data\], we describe the sample selection and Magellan observations; in Section \[sec:sizes\] we present the Magellan spectra and measure the extents and the kinematics of the ionized gas nebulae, and in Section \[sec:energetics\] we infer the outflow properties, including the energetics, and analyze their dependence on the AGN luminosities. We discuss the outflow occurrence rate and time scales in Section \[sec:timescale\] and summarize in Section \[sec:summary\]. We use an $h=0.7, \Omega_m=0.3, \Omega_{\Lambda}=0.7$ cosmology throughout this paper. We adopt vacuum wavelengths for the analysis, the same as SDSS, but keep the line notations in air wavelengths, e.g. [[\[O [III]{}\]]{}$\lambda$5007]{}. All error bars represent 1-sigma errors. 
Observations and Data Reduction {#sec:data} =============================== Sample Selection {#sec:data:selection} ---------------- We select luminous AGN from the parent sample of SDSS spectroscopically identified AGN [@Mullaney2013] with $z<0.2$ and AGN luminosities above $L_{\rm{bol}} >5\times10^{44}~\rm{erg\/~s^{-1}}$ (Fig. \[fig:selection\]). The AGN bolometric luminosity is inferred from two luminosity indicators – the [\[O [III]{}\]]{} luminosity and the mid-infrared (mid-IR) luminosity (see Sec. \[sec:data:WISE\]). We calculate the [[\[O [III]{}\]]{}$\lambda$5007]{} luminosity as the sum of both kinematic components measured by @Mullaney2013 from the SDSS spectra. To avoid introducing uncertainties[^1], we do not apply the extinction correction to the [[\[O [III]{}\]]{}$\lambda$5007]{} luminosity; this correction would have a median factor of 3 among our sample according to @Mullaney2013. The [[\[O [III]{}\]]{}$\lambda$5007]{} luminosity is converted to the AGN bolometric luminosity with a correction factor of $10^{3.5}$ based on the empirical [$L_{\mathrm{[O{\tiny III}]}}$]{}-[$L_{\mathrm{bol}}$]{} relation of type 1 AGN [Eq. 1, @Liu2009]. The mid-infrared luminosity is from the Wide-field Infrared Survey Explorer [*WISE*; @Wright2010] and the conversion is described in Sec. \[sec:data:WISE\]. The luminosities from the two indicators are correlated with a scatter of 0.5 dex. To maximize the chance of finding extended AGN outflows, we inspected the SDSS images to identify the objects with extended morphology. As the strong [\[O [III]{}\]]{} lines fall in the SDSS [*r*]{}-band, which has a green color in the composite images (Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\]), we look for extended green-colored emission in those images. In total, twelve type 2 AGN (narrow-lines only) and eight type 1 AGN (with nuclear blue continuum and broad Balmer lines) have successful observations with Magellan.
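The selection and conversion above reduce to simple scalings; the following is a minimal sketch of the $10^{3.5}$ [O III]-to-bolometric correction quoted in the text (the function name and the input luminosity are ours, chosen only for illustration):

```python
import numpy as np

# Bolometric correction from the empirical L_[OIII]-L_bol relation of
# type 1 AGN (Liu et al. 2009), as quoted in the text.
OIII_BOL_CORR = 10.0 ** 3.5

def bolometric_from_oiii(l_oiii):
    """AGN bolometric luminosity [erg/s] from the extinction-uncorrected
    [O III] 5007 luminosity [erg/s]."""
    return OIII_BOL_CORR * l_oiii

# Hypothetical example: L_[OIII] = 1e42 erg/s gives L_bol = 10^45.5 erg/s,
# which passes the sample threshold L_bol > 5e44 erg/s.
l_bol = bolometric_from_oiii(1e42)
assert l_bol > 5e44
```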
While the type 1 AGN could be analyzed using methods that handle the nuclear emission [e.g., @Husemann2015], it is beyond the scope of this paper. In this paper, we will focus on the sample of twelve type 2 AGN (Tab. \[tab:sample\]), where the [[\[O [III]{}\]]{}$\lambda$5007]{} line measurement is less affected by the bright nuclei. Magellan Long-Slit Observations ------------------------------- Our sample was observed with the Inamori-Magellan Areal Camera & Spectrograph (IMACS) [@Dressler2011] at the Magellan Baade telescope at Las Campanas on 23-24 June 2014. The seeing was between $0\farcs5$ and $1''$. We used the Centerfield Slit-viewing mode with the 300 lines/mm grating on the f/4 camera. We placed objects on the adjacent $1\farcs0$ and $1\farcs3$ slits[^2], each about $17''$ long and separated by $1''$, to simultaneously cover the central and extended regions of our galaxies. The spectral resolutions are 5.1 and 6.7 ${\mbox{\normalfont\AA}}$ (FWHM) for the two slits respectively, which corresponds to about 260 and 340 [km s$^{-1}$]{} for the [[\[O [III]{}\]]{}$\lambda$5007]{} line measurements. The $0\farcs75$ slit is also used for background subtraction, but not for measurements. The wavelength coverage is 3800 to 9400 ${\mbox{\normalfont\AA}}$ with three CCD chip gaps, each 75 ${\mbox{\normalfont\AA}}$ wide. Each object is observed for 15 to 60 minutes with one to three slit positions, as listed in Table \[tab:sample\]. The slit positions are chosen based on the SDSS image to cover extended [*r*]{}-band emission. For each object, there is at least one slit position along the major axis. The atmospheric dispersion corrector is used. Two flux calibrator stars, Feige 110 and EG 274, and a set of velocity template stars consisting of K to A giants/dwarfs are also observed with the $1\farcs3$ slit.
Data Reduction {#sec:data:reduction} -------------- Basic data reduction, including bias subtraction, flat fielding, wavelength calibration, rectification, and 2-D sky subtraction [@Kelson2003], is performed using the Carnegie Observatories reduction package COSMOS[^3]. Cosmic ray removal using LACosmic[^4] [@VanDokkum2001] is applied before rectification. We found an excess of red continuum background at $\lambda >$ 8200 ${\mbox{\normalfont\AA}}$ that was independent of slit width, which is most likely due to scattered light in the spectrograph. This red background excess can be well-subtracted by a 2-D sky subtraction if there are emission-free regions on both sides of each slit. In cases where one slit is full of galaxy light, we subtract the background by inferring the sky spectrum from the convolved $0\farcs75$ slit and correcting for the red background excess. This excess background does not affect the [[\[O [III]{}\]]{}$\lambda$5007]{} and [H$\beta$]{} line measurements. The flux calibration and atmospheric extinction corrections are performed using PyRAF[^5] version 2.1.7. We use the flux standard stars to determine the sensitivity functions and the atmospheric extinction function. The calibrated fluxes are consistent with the SDSS spectra within a scatter of 20%, taking into account that the SDSS and Magellan apertures are different[^6]. We adopt a fractional uncertainty on the flux calibration of 20%. For the slit positions that have multiple exposures, we align and stack those spectra of the same position together (the total exposure time is listed in Tab. \[tab:sample\]). The wavelength solution is applied after heliocentric-correction and air-to-vacuum conversion using PyAstronomy[^7]. For the emission line measurements we subtract the stellar continuum using a featureless 2-D model for the continuum spectrum.
This model is determined by smoothing and interpolating the line-free part of the stacked 2-D spectra, excluding the contamination from the AGN emission lines and sky lines. This method can operate at the outskirts of the galaxies where the signal-to-noise ratio is low. As the [H$\beta$]{} emission line is affected by the stellar absorption, we correct for this effect using the absorption line profiles obtained from the pPXF stellar population synthesis fits described in Sec. \[sec:data:center\]. The average [H$\beta$]{} absorption correction is 12%. Therefore the dominant uncertainty on [$L_{\mathrm{H{\beta}}}$]{} is the flux calibration uncertainty of 20%. Two systems have no [H$\beta$]{} measurements (SDSS J0141$-$0945 and J2133$-$0712) due to chip gaps and strong sky lines. Position and Velocity References {#sec:data:center} -------------------------------- The position and velocity measurements in this paper are defined relative to the stellar component of the galaxies. The center position is defined as the peak of the stellar continuum light profile (nucleus), which has an uncertainty comparable to one pixel ($0\farcs2$, or 0.3-0.6 kpc in our sample). The systemic velocity of each galaxy is determined using the stellar absorption features of the nuclear spectrum. To focus on the stellar absorption features, the emission lines, sky lines, galactic absorption, and chip gaps in the spectra are masked before the fitting. We fit the absorption lines with single stellar population (SSP) templates from @Bruzual2003 using the stellar kinematics fitting code pPXF [@Cappellari2003]. The templates include 10 solar-metallicity SSP spectra of ages ranging from 5 Myr to 11 Gyr with a second-degree additive and a third-degree multiplicative polynomial. Two aperture sizes, $1''$ and $3''$, are used to extract the nucleus spectra, which give consistent systemic velocities within $\sim 15$ [km s$^{-1}$]{}.
The final systemic redshifts are taken as the average of the two apertures and are listed in Table \[tab:sample\]. We adopt an uncertainty of 15 [km s$^{-1}$]{} on the systemic velocity[^8]. The average stellar velocity dispersion is 200 [km s$^{-1}$]{} and each object has a few absorption lines with signal-to-noise ratios $\gtrsim 10$. While this systemic velocity is not used to measure the [[\[O [III]{}\]]{}$\lambda$5007]{} linewidth [$w_{80}$]{} in this paper (see Sec. \[sec:sizes:spec\]), it is used as a reference to determine the velocity threshold for the high velocity emission (the blue and the red wings beyond $\pm 300$ [km s$^{-1}$]{}, Sec. \[sec:sizes:resolve\]), which is used to measure the extent of the kinematically disturbed region [$R_{\mathrm{KDR}}$]{} together with the [$w_{80}$]{} linewidth profile (Sec. \[sec:sizes:rv\]). Compared to the uncertainties due to the spatial PSF, the uncertainty in the systemic velocity is not the dominant source of error for [$R_{\mathrm{KDR}}$]{}. Our redshifts agree with the SDSS redshifts within 285 [km s$^{-1}$]{} with an average discrepancy of $\langle{} |z-z_{SDSS}| \rangle{} = $95 [km s$^{-1}$]{}, while the latter is fitted to both the emission and the absorption lines. AGN Luminosities from *WISE* {#sec:data:WISE} ---------------------------- In this paper, mid-infrared *WISE* luminosity at rest-frame 15 $\mu$m is used as the primary AGN luminosity indicator. Mid-infrared luminosity traces hot dust heated by the AGN and has been found to correlate with the AGN hard X-ray luminosities [e.g., @Lutz2004; @Matsuta2012], which is presumably an isotropic AGN luminosity indicator (although see below). As mid-IR luminosity is independent of the properties of the narrow-line region and is presumably more robust against dust extinction compared to optical lines, it is commonly used in studies of the AGN narrow line regions and outflows [e.g. @Hainline2014; @Liu2013b; @Liu2013].
The mid-IR *WISE* luminosities correlate with the [[\[O [III]{}\]]{}$\lambda$5007]{} luminosities among type 2 AGN [@Rosario2013; @Zakamska2014], see also Sec. \[sec:data:selection\] and Fig. \[fig:selection\]. We expect the mid-infrared luminosity of our sample to be AGN-dominated as opposed to star-formation dominated. AGN heated dust is much hotter ($T \gtrsim 100$ K) and its emission peaks at shorter wavelengths ($\lambda_{\rm{peak}} \lesssim 30$ $\mu$m) than dust heated by stars ($T \sim 25$ K, $\lambda_{\rm{peak}} \sim 100$ $\mu$m) [@Richards2006; @Kirkpatrick2012; @Zakamska2016; @Sun2014]. In fact, most of our objects have AGN-like *WISE* colors W1 (3.4 $\mu$m) $-$ W2 (4.6 $\mu$m) $>$ 0.8 in Vega magnitude [@Stern2012], indicating AGN-dominated luminosities. The two exceptions are SDSS J1419+0139 and SDSS J0141$-$0945 with only slightly bluer W1 $-$ W2 colors of 0.656 and 0.526, respectively.[^9] Therefore, the mid-infrared luminosities of our sample should be AGN-dominated and not significantly affected by star formation. In the mid-infrared, type 2 AGN are found to be redder and less luminous than their type 1 counterparts with the same [$L_{\mathrm{[O{\tiny III}]}}$]{} luminosities [@Liu2013; @Zakamska2016], indicating that the mid-infrared may not be a perfectly isotropic indicator of the AGN bolometric luminosity. As discussed in Appendix \[sec:append:WISE\], we find that this discrepancy is more severe at shorter wavelengths, e.g., 8 $\mu$m, than at longer wavelengths, e.g., 15 or 22 $\mu$m. To compare with other studies targeting higher redshift type 2 AGN [z $\approx$ 0.5 @Liu2013b; @Liu2014a; @Hainline2014d], where the rest-frame 22 $\mu$m flux is not available, we use the rest-frame 15 $\mu$m luminosity as our AGN luminosity indicator with a bolometric correction of 9 [@Richards2006], see Tab. \[tab:sample\] and \[tab:sample\_liu13\].
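A minimal sketch of how such a rest-frame 15 $\mu$m luminosity can be derived from the three-band *WISE* photometry, assuming a flat-spectrum magnitude-to-flux conversion and a second-order spline in log-log space; the Vega zero points are taken from the ALLWISE Explanatory Supplement, and the function name and example values are ours:

```python
import numpy as np
from scipy.interpolate import interp1d

# WISE Vega zero-point flux densities [Jy] (ALLWISE Explanatory Supplement;
# these numeric values are our assumption, not quoted in the text).
ZP_JY = {"W2": 171.787, "W3": 31.674, "W4": 8.363}
WAVE_UM = {"W2": 4.6, "W3": 12.0, "W4": 22.0}
C_UM_S = 2.998e14  # speed of light [micron/s]

def rest_frame_nu_lnu(mags, z, dl_cm, lam_rest_um=15.0):
    """nu*L_nu [erg/s] at a rest-frame wavelength from W2/W3/W4 Vega
    magnitudes: flat-spectrum mag-to-flux conversion, then a quadratic
    (second-order) spline interpolation in log-log space."""
    bands = ("W2", "W3", "W4")
    lam_obs = np.array([WAVE_UM[b] for b in bands])
    fnu_jy = np.array([ZP_JY[b] * 10 ** (-0.4 * mags[b]) for b in bands])
    lam_target = lam_rest_um * (1.0 + z)   # observed wavelength of rest lam
    logf = interp1d(np.log10(lam_obs), np.log10(fnu_jy),
                    kind="quadratic", fill_value="extrapolate")(np.log10(lam_target))
    fnu_cgs = 10 ** logf * 1e-23           # Jy -> erg s^-1 cm^-2 Hz^-1
    nu_obs = C_UM_S / lam_target           # observed frequency [Hz]
    return 4.0 * np.pi * dl_cm ** 2 * nu_obs * fnu_cgs
```

With the bolometric correction of 9 quoted above, the bolometric luminosity would then follow as `9 * rest_frame_nu_lnu(mags, z, dl_cm, 15.0)`.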
These mid-infrared luminosities at rest-frame 8, 15, and 22 $\mu$m are referred to as [$\nu L_{\nu,8}$]{}, [$\nu L_{\nu,15}$]{}, and [$\nu L_{\nu,22}$]{}. They are interpolated or extrapolated from the *ALLWISE* source catalog 3-band photometry at 4.6 $\mu$m (W2), 12 $\mu$m (W3), and 22 $\mu$m (W4) using a second-order spline in log-log space. We ignore the filter response function and adopt a magnitude to flux density conversion of a flat spectrum, which may lead to a few percent error depending on the source spectral shape [@Cutri2013]. Sizes of the Narrow-Line and the Kinematically Disturbed Regions {#sec:sizes} ================================================================ The goal of this section is to quantify the extent of the AGN influence on the interstellar medium of the host galaxy. To evaluate the ionization state of the gas, it is important to measure the extent of the photo-ionized region, also called the narrow-line region (NLR; its radius [$R_{\mathrm{NLR}}$]{}, see Sec. \[sec:sizes:riso\]). One possibility is to measure the extent of a bright emission line (e.g., [[\[O [III]{}\]]{}$\lambda$5007]{}) down to a fixed surface brightness level. Another is to measure the extent of the AGN-ionized region based on ionization diagnostics, e.g., using the [[\[O [III]{}\]]{}$\lambda$5007]{} to [H$\beta$]{} ratio. These options have been used extensively in long-slit or IFU spectroscopy [e.g., @Bennert2006; @Fraquelli2003; @Greene2011; @Liu2013b; @Hainline2013; @Hainline2014d; @Husemann2014], narrow-band imaging [e.g., @Bennert2002; @Schmitt2003], or even broad-band imaging studies [e.g., @Schirmer2013]. However, to understand whether the energy from the AGN can be coupled kinematically to the interstellar medium or even drive outflows, we need a kinematic measure of the extent of the AGN influence. We define a kinematically disturbed region (KDR), where the ionized gas is kinematically disturbed, and a corresponding radius [$R_{\mathrm{KDR}}$]{} (see Sec. \[sec:sizes:rv\]).
Together, [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} quantify the extent of the AGN influence on its host galaxy through two different channels: photoionization and mechanical feedback, respectively. It is important to investigate how these two radii relate to each other and how they depend on the AGN luminosity. In this section, we present [[\[O [III]{}\]]{}$\lambda$5007]{} spectra of the twelve type 2 AGN (Sec. \[sec:sizes:spec\]), and measure both the narrow-line region radius [$R_{\mathrm{NLR}}$]{} and the kinematically disturbed region radius [$R_{\mathrm{KDR}}$]{} (Sec. \[sec:sizes:riso\] and \[sec:sizes:rv\]). With these radii, we can revisit the narrow-line region size-luminosity relation with our lower luminosity objects, and explore the kinematic size-luminosity relation (Sec. \[sec:sizes:relation\]). The kinematic size [$R_{\mathrm{KDR}}$]{} will also be used to estimate the outflow energetics (Sec. \[sec:energetics:definition\]) and to study the relationship between AGN luminosity and outflow properties (Sec. \[sec:energetics:regress\]). Surface Brightness and Velocity Profiles of [\[O [III]{}\]]{} {#sec:sizes:spec} ------------------------------------------------------------- In the upper left panels of Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\], we show the Magellan slit positions on the SDSS images and the continuum subtracted two-dimensional [[\[O [III]{}\]]{}$\lambda$5007]{} spectra of our objects. The lower left panels show the extracted nuclear spectra of the central $1''$ covering the [H$\beta$]{}, [\[O [III]{}\]]{}$\lambda$4959, and [[\[O [III]{}\]]{}$\lambda$5007]{} lines. Three objects were observed with multiple slit positions (J1055+1102, J1255$-$0339, and J2133$-$0712). For these objects, we choose the slit with the most extended [[\[O [III]{}\]]{}$\lambda$5007]{} emission as the representative slit and derive all the measurements from this slit.
The size variation between different slit orientations is $\lesssim$ 20%. With the two-dimensional spectra, we can measure the [[\[O [III]{}\]]{}$\lambda$5007]{} line surface brightness profile (upper right panels), which is integrated within a velocity range of $-2000$ to $2000$ [km s$^{-1}$]{} to cover the entire line. The signal-to-noise ratios of these surface brightness measurements are all above 10 and can reach $\sim$10$^3$ at the nucleus (middle right panels). We find that the line-emitting gas is mostly AGN-ionized instead of star-formation ionized, with an [[\[O [III]{}\]]{}$\lambda$5007]{} to [H$\beta$]{} line ratio between 3 and 10. The only exception is part of the nuclear region of SDSS J1255$-$0339, where the ratio is close to two. In addition to photoionization, we are also interested in the mechanical feedback that can disturb or accelerate the gas, which can be traced by the emission line profiles. From the two-dimensional spectra, we can measure the line velocity and the line width as a function of position (upper and middle right panels). To avoid the biases introduced by parametric fitting, we calculate the median velocity [$v_{\mathrm{med}}$]{} and the 80 percent linewidth [$w_{80}$]{} in a non-parametric way. We take the cumulative integral of the original spectrum to find its 10th, 50th, and 90th percentile velocities. The integrated spectrum is spline interpolated to avoid discretization. The median velocity [$v_{\mathrm{med}}$]{} is the 50th percentile velocity and [$w_{80}$]{} is the velocity difference between the 10th and 90th percentiles. [$w_{80}$]{} roughly corresponds to the FWHM for Gaussian profiles, but is more sensitive to line wings and therefore suitable to capture high velocity motions [@Liu2013; @Harrison2014]. Both [$v_{\mathrm{med}}$]{} and [$w_{80}$]{} are measured in each $0\farcs6$ bin to achieve a good signal-to-noise ratio.
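The non-parametric measurement described above can be sketched as follows; a toy Gaussian line stands in for the real data, and linear interpolation of the cumulative flux replaces the spline used in the text, for brevity:

```python
import numpy as np

def percentile_velocities(v, flux, percentiles=(10, 50, 90)):
    """Percentile velocities of an emission-line profile from its
    cumulative flux distribution.

    v    : velocity grid [km/s], monotonically increasing
    flux : continuum-subtracted flux density on that grid (non-negative)
    """
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]                      # normalized cumulative flux
    return np.interp(np.asarray(percentiles) / 100.0, cdf, v)

# Toy Gaussian line with sigma = 300 km/s on a 1 km/s grid.
v = np.linspace(-2000.0, 2000.0, 4001)
flux = np.exp(-0.5 * (v / 300.0) ** 2)
v10, v50, v90 = percentile_velocities(v, flux)
v_med = v50            # median velocity (~0 km/s for a symmetric line)
w80 = v90 - v10        # 80% linewidth (~2.56 sigma for a Gaussian)
```

For a Gaussian profile $w_{80} \approx 1.09\,\mathrm{FWHM}$, consistent with the statement above that $w_{80}$ roughly corresponds to the FWHM.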
The [$w_{80}$]{} measurement is used to derive other important quantities in this paper, such as [$R_{\mathrm{KDR}}$]{}. However, it may be biased by the spectral PSF and affected by noise. To quantify these effects, we perform a series of simulations as described in Appendix \[sec:append:sim\_w80\]. Both the PSF biases and the uncertainties due to the noise are $\lesssim$ 10% for lines with [$w_{80}$]{}$>600$ [km s$^{-1}$]{} and an SNR above 30. The noise uncertainties become negligible ($\lesssim 10$ [km s$^{-1}$]{}) for typical lines with SNR $>100$. We correct for the PSF bias and assign the errors on [$w_{80}$]{} according to the best fit results from the simulation (shown as the solid blue dots in the lower right panels of Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\]). This correction has a minimal effect on the results of this paper, except in the case of SDSS J2154+1131, which is ruled out as a kinematically disturbed region after the correction (see Sec. \[sec:sizes:rv\]). The [$w_{80}$]{} measurement is not the major source of uncertainty for [$R_{\mathrm{KDR}}$]{} and its derived quantities. For each object, we calculate [$w_{80,\rm{AVG}}$]{} as the luminosity-weighted quadratic mean of the [$w_{80}$]{} profile, which is tabulated in Tab. \[tab:measurements\]. We assign conservative errors of 20 [km s$^{-1}$]{} on [$w_{80,\rm{AVG}}$]{} to encompass other unaccounted sources of errors. Spatial Resolution and Size Uncertainties {#sec:sizes:resolve} ----------------------------------------- To measure the spatial extent of the ionized region and the kinematically disturbed region, it is important to consider the smearing effect of the spatial PSF, which can exaggerate these size measurements depending on the resolution. This effect is especially important when the nucleus outshines the extended emission.
For type 1 AGN, one way to robustly recover the true size of the extended emission is to measure the point-spread function (PSF) from the broad emission lines and subtract it from the nucleus [e.g., @Husemann2013; @Husemann2015]. After the subtraction, @Husemann2015 reveals that many objects in their type 1 AGN sample still retain their extended high velocity [\[O [III]{}\]]{} nebulae. This technique cannot be easily applied to our type 2 sample. We instead quantify the effect of the spatial PSF with a series of 2-D simulations described in Appendix \[sec:append:sim\_pv\]. The simulations consider a range of source kinematic structures, including a high velocity component and disk rotation. They also adopt empirical spectral and spatial PSFs measured from the data. Based on the simulations, we determine whether the narrow line region or the kinematically disturbed region is spatially resolved, and adopt representative errors on the size measurements. The [[\[O [III]{}\]]{}$\lambda$5007]{} line surface brightness profile is compared to the PSF to determine whether the narrow line region is spatially resolved, see the upper right panels of Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\]. The fiducial PSF is conservatively taken from a flux calibration star with the worst seeing (FWHM = $1\farcs0$), whereas the median seeing for the targets is only $0\farcs7$. Four objects that have surface brightness profiles consistent with the PSF are determined to have unresolved NLRs (SDSS J1419+0139, J2102$-$0647, J2133$-$0712, and J2142+0001, also see Tab. \[tab:measurements\]). Their [$R_{\mathrm{NLR}}$]{} are treated as upper limits. Even when the total [[\[O [III]{}\]]{}$\lambda$5007]{} emission of an object is resolved, its high velocity gas may not be. To determine whether a kinematically disturbed region is resolved, we use the surface brightness profiles of the high velocity gas beyond $\pm$300 [km s$^{-1}$]{}.
These velocity cuts are made with respect to the systemic velocities fitted to the stellar absorption features as described in Sec. \[sec:data:center\], and are higher than the typical galaxy rotation. Four objects are classified as having unresolved KDRs, as these surface brightness profiles are consistent with the PSF (J1419+0139, J2102$-$0647, J2133$-$0712, J2142+0001), and the other five KDRs are considered resolved (J0141$-$0945, J1000+1242, J1010+1413, J1255$-$0339, J2333+0049). However, there can be cases where the light profiles deviate from the PSF because of contamination from galaxy rotation, even if the KDR is compact (see Appendix \[sec:append:sim\_pv\]). SDSS J2154+1131 is one example where the light profiles could be affected by rotation, even though it has intrinsically narrow linewidths [$w_{80}$]{}$<600$ [km s$^{-1}$]{}. We visually inspect the 2-D spectra of these five KDRs and determine that they are resolved and that the broad high velocity surface brightness profiles are not due to rotation. In the end, among the twelve objects, three have no kinematically disturbed regions, four have unresolved ones, and five have resolved ones (see Tab. \[tab:measurements\]). The PSF can also bias the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} size measurements. According to the simulations in Appendix \[sec:append:sim\_pv\], this bias is less than $1\arcsec$, with an average of $\lesssim 0\farcs5$, but the exact amount depends on the structure of the [[\[O [III]{}\]]{}$\lambda$5007]{} line and so cannot be easily corrected. To account for this uncertainty, we assign an error of $0\farcs5$, which is also about half of the PSF FWHM, to the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} measurements. This is the dominant source of error for these sizes. We also incorporate studies of type 2 AGN at higher AGN luminosities [@Liu2013b; @Hainline2014d].
@Hainline2014d determines that 5 out of 30 objects in their sample have unresolved narrow-line regions, while the other 25 are resolved. @Liu2013b determines that all of their narrow-line regions are resolved based on either the surface brightness profile or structures in the velocity fields. We will determine [$R_{\mathrm{KDR}}$]{} for the @Liu2013b sample to extend our luminosity baseline, but we cannot use identical criteria to determine whether the kinematically disturbed regions are resolved. However, structures in the [$w_{80}$]{} profile are seen in all of their objects, and the measured sizes of the kinematically disturbed regions (Sec. \[sec:sizes:rv\]) are all larger than the PSF. Moreover, HST narrow-band images of these objects reveal resolved high velocity dispersion components on several kpc scales (Wylezalek et al. in prep.). We thus treat the 13 kinematically disturbed regions in the @Liu2013b sample as spatially resolved. We do not apply seeing corrections to these objects but adopt size errors on [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} equivalent to half of the PSF FWHM to encompass the potential size bias, which is $0\farcs35$ for @Liu2013b and $0\farcs5$ for @Hainline2014d. This error is larger than the one estimated by @Liu2013b, 5-14%, which does not take the uncertainties due to seeing into account.

Narrow-Line Region Radius [$R_{\mathrm{NLR}}$]{} {#sec:sizes:riso}
------------------------------------------------

The size of the influence of AGN photoionization can be quantified by the narrow-line region radius [$R_{\mathrm{NLR}}$]{}. We adopt a common definition of [$R_{\mathrm{NLR}}$]{} as the semi-major axis of the $10^{-15}~(1+z)^{-4}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ isophote of the [[\[O [III]{}\]]{}$\lambda$5007]{} line [@Liu2013b; @Hainline2013; @Hainline2014d].
This isophote corresponds to a fixed intrinsic surface brightness ($5.1\times10^{39}$ erg s$^{-1}$ kpc$^{-2}$), such that this measurement can be compared across studies and is independent of the redshift or the depth of the observation, provided the observations reach this depth. [$R_{\mathrm{NLR}}$]{} is designed to measure the largest extent of the [\[O [III]{}\]]{} region along its most elongated axis. To match our measurements with the ones from IFU studies [e.g., @Liu2013b], where the semi-major axis can be easily determined, we align our Magellan slits along the semi-major axis of the SDSS $r$-band images; the $r$ band contains the [[\[O [III]{}\]]{}$\lambda$5007]{} line. When multiple slit positions are available, we take the one with the largest [$R_{\mathrm{NLR}}$]{} as the representative slit for all the measurements. @Liu2013b suggests that narrow-line regions are often round, in which case the size [$R_{\mathrm{NLR}}$]{} may not depend strongly on the slit orientation, although some of our objects that have IFU observations show irregular [\[O [III]{}\]]{} morphology [J1000+1242 and J1010+1413, @Harrison2014]. The measured [$R_{\mathrm{NLR}}$]{} are listed in Table \[tab:measurements\] and shown in the upper right panels of Fig. \[fig:spec2100\] and \[fig:spec2101\], etc. The four unresolved objects in our sample are treated as [$R_{\mathrm{NLR}}$]{} upper limits. One object, SDSS J1255$-$0339, has a particularly large [$R_{\mathrm{NLR}}$]{}$=33$ kpc because it has a pair of extended but kinematically cold tidal features (Appendix \[sec:append:objs\]). We incorporate 14 [$R_{\mathrm{NLR}}$]{} measurements of luminous type 2 AGN from @Liu2013b, as well as 20 [$R_{\mathrm{NLR}}$]{} measurements and 5 unresolved upper limits from @Hainline2014d (Fig. \[fig:sizelum\_whichR\]). @Liu2013b uses the Gemini GMOS IFU while @Hainline2014d uses Gemini GMOS long-slit spectroscopy, and both reach similar depths as ours.
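The isophotal definition above reduces to a simple numerical recipe: dim the fixed $10^{-15}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ level by $(1+z)^{-4}$ and take the largest semi-major-axis radius at which the observed surface brightness still exceeds it. The Python sketch below is our own illustration of this recipe; the function and its array inputs are hypothetical, not the measurement pipeline used in this work.

```python
import numpy as np

def r_nlr_kpc(radii_kpc, sb_obs, z, sb_ref=1e-15):
    """Largest semi-major-axis radius where the observed [O III] surface
    brightness (erg/s/cm^2/arcsec^2) exceeds the redshift-dimmed
    isophotal level sb_ref * (1+z)^-4.  Returns None if no point exceeds
    the threshold (treat the size as an upper limit)."""
    threshold = sb_ref * (1.0 + z) ** -4
    above = np.asarray(sb_obs, dtype=float) >= threshold
    if not above.any():
        return None
    return float(np.asarray(radii_kpc, dtype=float)[above].max())

# Toy profile at z = 0.1: the threshold is ~6.8e-16, so the isophotal
# radius falls at the third point.
print(r_nlr_kpc([1.0, 2.0, 3.0, 4.0], [5e-15, 2e-15, 1e-15, 1e-16], z=0.1))  # -> 3.0
```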
Five objects in @Hainline2014d are excluded due to duplication with @Liu2013b (4/10) or *WISE* source confusion (1/10). Kinematically Disturbed Region Radius [$R_{\mathrm{KDR}}$]{} {#sec:sizes:rv} ------------------------------------------------------------ The radius of the kinematically disturbed region [$R_{\mathrm{KDR}}$]{} measures the spatial extent of the high velocity gas. We use two criteria to define the kinematically disturbed region. First, the [[\[O [III]{}\]]{}$\lambda$5007]{} line width [$w_{80}$]{} has to be larger than a threshold of 600 [km s$^{-1}$]{}. This is similar to the criterion used by @Harrison2014 to identify high velocity non-virialized motions. While this value of 600 [km s$^{-1}$]{} is somewhat arbitrary, it is also conservative, since galaxy velocity dispersions rarely exceed 300 [km s$^{-1}$]{}. Typical ellipticals have velocity dispersion $\sigma \sim 200$ [km s$^{-1}$]{} [@Sheth2003], thus their [$w_{80}$]{} should be under 500 [km s$^{-1}$]{} assuming virialized motions with Gaussian profiles. The second criterion is to require the surface brightness of the high velocity gas (the red $>300$ [km s$^{-1}$]{} or the blue $<-300$ [km s$^{-1}$]{} side) to be higher than the isophotal threshold defined in Sec. \[sec:sizes:riso\]. Without this surface brightness threshold, in some cases [$R_{\mathrm{KDR}}$]{} could be severely biased by the spatial PSF when the PSF propagates high line widths to large radii, see Appendix \[sec:append:sim\_pv\]. [$R_{\mathrm{KDR}}$]{} is taken as the largest measured radius where both criteria are met. The resulting [$R_{\mathrm{KDR}}$]{} are tabulated in Table \[tab:measurements\] and plotted in Fig. \[fig:sizelum\_whichR\]. Three objects, SDSS J1055+1102, J1351+0728, and J2154+1131, do not have kinematically disturbed regions as their [$w_{80}$]{} are below 600 [km s$^{-1}$]{}. These are plotted as empty squares in Fig. \[fig:sizelum\_whichR\]. 
The four unresolved objects, SDSS J1419+0139, J2102$-$0647, J2133$-$0712, and J2142+0001, are treated as [$R_{\mathrm{KDR}}$]{} upper limits, and plotted as down-facing triangles. The other five resolved KDRs, J0141$-$0945, J1000+1242, J1010+1413, J1255$-$0339, and J2333+0049, are treated as [$R_{\mathrm{KDR}}$]{} measurements and are plotted as circles. We adopt an error of $0\farcs5$ on [$R_{\mathrm{KDR}}$]{} as discussed in Sec. \[sec:sizes:resolve\]. To increase the sample size, we include the @Liu2013b sample, where the median [$w_{80}$]{} as a function of radius is also measured. Among the fourteen objects, only one (J0842+3625, the empty red square in Fig. \[fig:sizelum\_whichR\]) does not have [$w_{80}$]{} above $600$ [km s$^{-1}$]{}. We calculate the [$R_{\mathrm{KDR}}$]{} of the other thirteen objects based on their [$w_{80}$]{} profiles (empty red squares in Fig. \[fig:sizelum\_whichR\]). We cannot apply the surface brightness requirement to the [$R_{\mathrm{KDR}}$]{} measurements as the high velocity light profiles are not available for their sample. Without the surface brightness requirement, [$R_{\mathrm{KDR}}$]{} can be largely overestimated when there is no spatially extended narrow component, such that the high [$w_{80}$]{} of the broad component is propagated to large radius by the PSF; see Appendix \[sec:append:sim\_pv\]. The observed drop in [$w_{80}$]{} at large radius in the @Liu2013b sample indicates that there is a narrow extended component, which in our simulations makes it very difficult for an unresolved broad component to impact [$w_{80}$]{} at large scales and give strongly biased [$R_{\mathrm{KDR}}$]{} measurements. We adopt an error of $0\farcs35$ on [$R_{\mathrm{KDR}}$]{} (see Sec. \[sec:sizes:resolve\]). These numbers are tabulated in Table \[tab:measurements\_liu13\]. The combination of these two samples covers a wide dynamic range in AGN luminosity from $10^{45}$ to $10^{47}$ [erg s$^{-1}$]{}.
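The two-criterion definition of [$R_{\mathrm{KDR}}$]{} (linewidth above 600 km s$^{-1}$, and at least one high-velocity wing brighter than the isophotal threshold) can be sketched as follows. This is an illustrative reading of the procedure with hypothetical per-radius input arrays, not the actual measurement code.

```python
import numpy as np

def r_kdr(radii, w80, sb_red, sb_blue, sb_threshold, w80_min=600.0):
    """Largest radius where (1) w80 > 600 km/s and (2) at least one
    high-velocity wing (|v| > 300 km/s) is brighter than the isophotal
    threshold.  Returns None when no radius qualifies."""
    radii = np.asarray(radii, dtype=float)
    kinematic = np.asarray(w80, dtype=float) > w80_min
    bright = (np.asarray(sb_red) > sb_threshold) | \
             (np.asarray(sb_blue) > sb_threshold)
    qualifies = kinematic & bright
    if not qualifies.any():
        return None
    return float(radii[qualifies].max())

# w80 stays broad out to 3 kpc, but the high-velocity wings drop below
# the threshold beyond 2 kpc, so R_KDR = 2 kpc.
print(r_kdr([1.0, 2.0, 3.0], [900.0, 700.0, 650.0],
            [5.0, 2.0, 0.1], [4.0, 0.5, 0.05], sb_threshold=1.0))  # -> 2.0
```

The surface brightness criterion is what prevents the PSF from propagating a bright nuclear broad component out to spuriously large radii.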
The Size-Luminosity Relations {#sec:sizes:relation}
-----------------------------

In this section we investigate the relationship between AGN luminosity and our two size measurements: [$R_{\mathrm{NLR}}$]{}, which depends on photoionization, and [$R_{\mathrm{KDR}}$]{}, which is based on kinematics. These two radii typically extend from a few to 15 kpc (with the exception of SDSS J1255$-$0339, where [$R_{\mathrm{NLR}}$]{}$=33$ kpc), corresponding to a light travel time of $\sim10^4$ years. The relation between [$R_{\mathrm{NLR}}$]{} and the AGN luminosity has been studied extensively and there are tentative signs that it flattens at high AGN luminosity [@Hainline2013; @Liu2013b; @Hainline2014d; @Liu2014a]. In Fig. \[fig:sizelum\_whichR\], left, we revisit this relation, supplementing it with our new high quality [$R_{\mathrm{NLR}}$]{} measurements. Our objects populate a lower luminosity range compared to previous studies, allowing us to extend the luminosity baseline. Furthermore, we use a different AGN luminosity indicator – the 15 $\micron$ luminosity [$\nu L_{\nu,15}$]{}, which is arguably less sensitive to the anisotropy of the infrared emission (Appendix \[sec:append:WISE\]). On the right-hand side of Fig. \[fig:sizelum\_whichR\] we show the dependence on AGN luminosity of our derived kinematically disturbed region radius [$R_{\mathrm{KDR}}$]{}, which illustrates the effect of the AGN luminosity on the mechanical feedback operating on galaxy scales. Using only the valid size measurements (circles in Fig. \[fig:sizelum\_whichR\]; the [$R_{\mathrm{NLR}}$]{} outlier, J1255$-$0339, is not included), we find that both radii are positively correlated with the 15 $\micron$ luminosity, with Pearson’s [*r*]{} correlation coefficient above 0.6 and $p$-values below 0.01. An essential property of [$R_{\mathrm{NLR}}$]{} is that it includes any photoionized gas in the vicinity of the galaxy, independent of its origin, including tidal features or illuminated companion galaxies.
As an extreme example of illuminated tidal features, SDSS J1255$-$0339 has a pair of extended tidal tails emitting in [\[O [III]{}\]]{}. They can be seen in the SDSS [*r*]{}-band image and they yield a very large [$R_{\mathrm{NLR}}$]{} measurement. This object is a distinct outlier in the [$R_{\mathrm{NLR}}$]{} - [$\nu L_{\nu,15}$]{} relation (blue cross in the left panel of Fig. \[fig:sizelum\_whichR\]). But in the [$R_{\mathrm{KDR}}$]{} - [$\nu L_{\nu,15}$]{} space, this object follows the trend defined by other AGN because this extended feature has quiescent kinematics. As our new sample can improve the constraints on the low-end slope of the [$R_{\mathrm{NLR}}$]{} size-luminosity relation, and we are interested in quantitatively comparing the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} size-luminosity relations, we fit these two relations with a single power-law (gray lines in Fig. \[fig:sizelum\_whichR\]) and a flattened power-law (black line). To determine whether the flattening of the size-luminosity relations is significant, we use the Bayesian Information Criterion (BIC) to distinguish which model is preferred by the data (Tab. \[tab:whichR\]). Only objects with valid size measurements (circles in Fig. \[fig:sizelum\_whichR\]) rather than limits are included for this analysis. We find that both the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} size-luminosity relations prefer a flattened power law ($\Delta$BIC = 13.8 and 12.6). Both relations saturate at a radius of about 10 kpc. But the saturation for [$R_{\mathrm{KDR}}$]{} occurs at a higher luminosity ([$\nu L_{\nu,15}$]{}$=10^{45.3}$ [erg s$^{-1}$]{}) compared to [$R_{\mathrm{NLR}}$]{} ([$\nu L_{\nu,15}$]{}$=10^{44.8}$ [erg s$^{-1}$]{}). [$R_{\mathrm{KDR}}$]{} are in general lower than [$R_{\mathrm{NLR}}$]{} by $\sim$ 0.5 dex below the saturation luminosity. So using a flux-based measurement can lead to overestimation of the outflow sizes, as suggested by @Karouzos2016.
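For Gaussian errors, the BIC comparison used here amounts to $\mathrm{BIC} = \chi^2 + k\ln n$ for a model with $k$ free parameters fitted to $n$ points, with a lower BIC preferred. The toy example below (synthetic numbers, not the paper's data or fitting machinery) shows how a data set that saturates at high luminosity yields a large positive $\Delta$BIC in favor of the flattened power law:

```python
import numpy as np

def bic(y, y_model, sigma, k):
    """BIC with Gaussian errors: chi^2 + k ln(n).  Lower is preferred."""
    chi2 = np.sum(((y - y_model) / sigma) ** 2)
    return chi2 + k * np.log(len(y))

# Synthetic size-luminosity data that saturates above log L = 45.3.
rng = np.random.default_rng(0)
logL = np.linspace(44.0, 46.5, 20)
logR_true = 0.7 * np.minimum(logL, 45.3) - 31.0
y = logR_true + rng.normal(0.0, 0.05, logL.size)

single = 0.7 * logL - 31.0                  # 2 parameters (slope, norm)
flat = 0.7 * np.minimum(logL, 45.3) - 31.0  # 3 parameters (+ break point)
dBIC = bic(y, single, 0.05, k=2) - bic(y, flat, 0.05, k=3)
print(dBIC > 0)  # the flattened power law is strongly preferred
```

For brevity the model parameters are fixed at their generating values rather than fitted; a real comparison would maximize the likelihood of each model first.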
Therefore, we confirm the findings of @Liu2013b and @Hainline2013 [@Hainline2014d] that, beyond a luminosity of about [$L_{\mathrm{bol}}$]{}$>10^{46}$ [erg s$^{-1}$]{}, the [$R_{\mathrm{NLR}}$]{} - [$\nu L_{\nu,15}$]{} relation flattens such that [$R_{\mathrm{NLR}}$]{} is a constant ($\sim10$ kpc) with respect to the AGN luminosity. One possible explanation of the observed limit to the narrow-line region size is a change in the ionization state of the gas at large radii. For example, as the density of the gas drops, the clouds can transition from an ionization-bounded to a matter-bounded state, such that the O$^{2+}$ ions become ionized to O$^{3+}$ [@Liu2013b]. Our measured slope of the [$R_{\mathrm{NLR}}$]{}-[$\nu L_{\nu,15}$]{} relation at the low luminosity end (0.72) as fitted by the flattened power law is steeper than the value of 0.47 found by @Hainline2013, likely because our objects provide a better sampling of the relationship at lower luminosities. However, the flattening of [$R_{\mathrm{KDR}}$]{} can also partly be due to the drop in the [[\[O [III]{}\]]{}$\lambda$5007]{} intensity at $\sim$10 kpc, as the [$R_{\mathrm{KDR}}$]{} measurement is also limited by the surface brightness of [[\[O [III]{}\]]{}$\lambda$5007]{}. If we use only objects where [$w_{80}$]{} drops below 600 [km s$^{-1}$]{} at large radii, such that [$R_{\mathrm{KDR}}$]{} marks the edge of the high velocity gas, not just the [[\[O [III]{}\]]{}$\lambda$5007]{} luminous gas, then the confidence for the flattened power law preference becomes lower ($\Delta$BIC = 5.0). More observations are needed to confirm whether the sizes of the kinematically disturbed regions indeed saturate at 10 kpc. Nonetheless, the size of the AGN-disturbed region [$R_{\mathrm{KDR}}$]{} seems to scale with the AGN luminosity until a high luminosity of [$\nu L_{\nu,15}$]{}$\sim10^{45}$ or [$L_{\mathrm{bol}}$]{}$\sim10^{46}$ [erg s$^{-1}$]{}.
The slope of the scaling at the low luminosity end is not well constrained and requires a larger sample. It is possible that the size of the kinematically disturbed region continues to decrease in lower luminosity AGN. For example, NGC 1068, a local type 2 AGN at a lower AGN luminosity of [$L_{\mathrm{bol}}$]{}$\sim 10^{44-45}$ [erg s$^{-1}$]{} (@Goulding2010 [@Alonso-Herrero2011; @Garcia-Burillo2014] and references therein), hosts ionized outflows with deprojected velocities as fast as $\sim$ 1300 [km s$^{-1}$]{}, but with a much smaller outflow size on the scale of $\sim$ 200 pc [@Cecil1990; @Crenshaw2000]. In summary, we find that both [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} correlate with the AGN luminosity and both saturate at about 10 kpc at high AGN luminosities. [$R_{\mathrm{KDR}}$]{} is in general lower than [$R_{\mathrm{NLR}}$]{} by $\sim$ 0.5 dex before the saturation and saturates at a higher luminosity ([$\nu L_{\nu,15}$]{}$=10^{45.3}$ [erg s$^{-1}$]{} versus $10^{44.8}$ [erg s$^{-1}$]{}). [$R_{\mathrm{KDR}}$]{} is also less affected by the presence of tidal tails or companions. Outflow Properties and Energetics {#sec:energetics} ================================= The large [[\[O [III]{}\]]{}$\lambda$5007]{} linewidths ([$w_{80}$]{} $= 600-1500$ [km s$^{-1}$]{}) commonly seen in our sample suggest that many of these systems have high velocity non-virialized gas motions, likely outflows. While AGN outflows on galactic scales are thought to be an important agent to regulate star-formation, their size distributions, energy efficiencies, and dependence on the AGN luminosity are not well-understood. Therefore, it is important to measure the properties of these outflows, including size, velocity, and energetics, and study their dependence on the AGN luminosity. In Sec. \[sec:energetics:vprof\], we discuss kinematic models to explain the observed [[\[O [III]{}\]]{}$\lambda$5007]{} velocity profiles. In Sec. 
\[sec:energetics:definition\], we define and calculate the outflow properties including the sizes, velocities, and time scales. We also use two methods to estimate the outflow kinetic power. In Sec. \[sec:energetics:regress\], we study the correlation between the outflow properties and the AGN bolometric luminosities and discuss the outflow energy efficiency. We find that the outflow size, velocity, and energy are correlated with the AGN luminosity. Although the actual outflow efficiency cannot be constrained with high accuracy (we estimate $\eta=\dot{E}/{\ensuremath{L_{\mathrm{bol}}}}=0.01\% - 30\%$), our results are consistent with a hypothesis that the energy efficiencies of AGN outflows are roughly constant for AGN in the luminosity range of $10^{45-47}$ [erg s$^{-1}$]{}. Velocity Profiles and a Kinetic Model of the Outflow {#sec:energetics:vprof} ---------------------------------------------------- With only three exceptions, the objects in our sample have gas velocities ([$w_{80}$]{}$=600-1500$ [km s$^{-1}$]{}) faster than virialized motions in a typical galactic potential ([$w_{80}$]{}$<500$ [km s$^{-1}$]{}, see Sec. \[sec:sizes:rv\]). In this section, we discuss possible outflow models that can explain the observed linewidth and its profile. @King2005 and @King2011 found that for an energy-conserving (no radiative loss) spherical outflow propagating in a galaxy with an isothermal potential and gas distribution, the outflow’s shock front expands at a constant velocity, which for black holes on the $M-\sigma$ relation accreting at their Eddington rate is of order 1000 [km s$^{-1}$]{}. At the same time, gas at large radii that has not yet been shocked and accelerated by the outflow should remain at its original velocity. Therefore, there should be a sharp drop in the gas velocity profile corresponding to the shock front. 
The resolved KDRs in our sample show [$w_{80}$]{} profiles that are consistently high ([$w_{80}$]{}$\sim 600-1500$ [km s$^{-1}$]{}) within the central few kpc (Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\]). Those that still have high signal-to-noise ratio measurements at large radii (SDSS J1000+1242, J1010+1413, J1255$-$0339) do show a sudden velocity drop at radii of 5-10 kpc. Such a high linewidth plateau followed by a sudden drop in [$w_{80}$]{} is also commonly seen in other studies of type 2 AGN [e.g. @Greene2011; @Liu2013; @Karouzos2016]. @Liu2013 suggests that flat [$w_{80}$]{} profiles correspond to a constant outflow velocity $v\sim$ [$w_{80}$]{}$/1.3=460-1100$ [km s$^{-1}$]{}, if the outflow is spherical/quasi-spherical with a power-law intensity profile. Such spherical morphology is seen in their IFU spectroscopic data, but whether the outflows in our objects are spherical cannot be verified with our long-slit data alone[^10]. This plateau-shaped velocity profile is broadly consistent with the prediction of the @King2011 model, and the kinematically disturbed region radius [$R_{\mathrm{KDR}}$]{}, defined based on the line width threshold of 600 [km s$^{-1}$]{}, is able to capture the location of the velocity drop. We adopt this constant-velocity spherical outflow model as a simplified framework to interpret our observations and use the linewidth [$w_{80,\rm{AVG}}$]{} as a measure of the outflow velocity and [$R_{\mathrm{KDR}}$]{} as the outflow size.

Outflow Properties Definition {#sec:energetics:definition}
-----------------------------

In this section, we define the outflow properties, including the radius, velocity, dynamical time scale, and energetics. As discussed in Sec. \[sec:energetics:vprof\], we use [$R_{\mathrm{KDR}}$]{} – the radius of the kinematically disturbed region where [$w_{80}$]{}$> 600$ [km s$^{-1}$]{} – as the radius of the outflow.
The errors in [$R_{\mathrm{KDR}}$]{} are taken to be half of the seeing FWHM, which is $0\farcs5$ for our sample and $0\farcs35$ for the @Liu2013b sample (see Sec. \[sec:sizes:resolve\]). Following @Liu2013, the outflow velocity $v$ is taken as $v=$[$w_{80,\rm{AVG}}$]{}/1.3, where the factor of 1.3 is the projection correction for quasi-spherical outflows. As described in Sec. \[sec:sizes:spec\], for our objects, [$w_{80,\rm{AVG}}$]{} is represented by the luminosity-weighted quadratic mean of [$w_{80}$]{} (spectral-PSF corrected, see Appendix \[sec:append:sim\_w80\]), and a conservative error of 20 [km s$^{-1}$]{} is assumed. The [$w_{80,\rm{AVG}}$]{} for the @Liu2013b sample is the [$w_{80}$]{} measured from their SDSS fiber spectrum [Column 7, Table 1 @Liu2013b]. These are not corrected for the spectral resolution, so we adopt conservative errors of 75 [km s$^{-1}$]{}, corresponding to half of the SDSS spectral FWHM. We then derive the outflow dynamical time scale as [$t_{\mathrm{dyn}}$]{}=[$R_{\mathrm{KDR}}$]{}$/v$. All of these quantities and their errors are tabulated in Table \[tab:measurements\]. As discussed in @Greene2012, measuring the mass of the outflow can be challenging, and it is the largest source of uncertainty in estimating the energetics. As the emissivity scales with density squared, strong emission lines, such as [[\[O [III]{}\]]{}$\lambda$5007]{} and [H$\beta$]{}, trace only the densest ionized gas clouds. These clouds occupy only a small fraction of the total volume ($\sim10^{-2}$), and there can be a large amount of diffuse ionized gas unaccounted for. Parts of the outflows could even be in different phases, such as molecular or hot plasma, that are not traced with these lines. We adopt two methods to bracket the range of possible kinetic power of the outflow. Assumptions about gas densities are made for both methods using reasonable values for type 2 AGN.
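The quantities defined so far chain together simply: $w_{80,\rm{AVG}}$ is the luminosity-weighted quadratic mean of the $w_{80}$ profile, $v = w_{80,\rm{AVG}}/1.3$, and $t_{\mathrm{dyn}} = R_{\mathrm{KDR}}/v$. A minimal sketch of this chain (unit constants and function names are ours, for illustration only):

```python
import numpy as np

KM_PER_KPC = 3.0857e16   # km per kpc
S_PER_YR = 3.156e7       # seconds per year

def w80_avg(w80_profile, lum_profile):
    """Luminosity-weighted quadratic mean of a w80(r) profile [km/s]."""
    w = np.asarray(w80_profile, dtype=float)
    L = np.asarray(lum_profile, dtype=float)
    return float(np.sqrt(np.sum(L * w ** 2) / np.sum(L)))

def outflow_velocity(w80_avg_kms):
    """Deprojected velocity for a quasi-spherical outflow (factor 1.3)."""
    return w80_avg_kms / 1.3

def t_dyn_yr(r_kdr_kpc, w80_avg_kms):
    """Dynamical time t_dyn = R_KDR / v, in years."""
    v = outflow_velocity(w80_avg_kms)  # km/s
    return r_kdr_kpc * KM_PER_KPC / v / S_PER_YR

# A 10 kpc KDR with w80_AVG ~ 1300 km/s (v ~ 1000 km/s) gives
# t_dyn ~ 10^7 yr, the timescale discussed in the occurrence-rate section.
print(f"{t_dyn_yr(10.0, 1300.0):.1e} yr")
```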
While the exact values of energy depend on these assumptions, these measurements are only order-of-magnitude estimates and we focus on trends in outflow properties with AGN luminosity, which do not depend as strongly on these assumptions. For the first method, we estimate the mass of the dense ionized gas from the [H$\beta$]{} luminosity assuming case B recombination[^11]. We follow @Osterbrock2006 and use the case B recombination equation at $10^4$ K $$M_\mathrm{H} = 6.8 \times 10^8~L_{\mathrm{H\beta,43}}~n^{-1}_{e,100}~{\ensuremath{M_{\odot}}},$$ where $L_{\mathrm{H\beta,43}}$ is the [H$\beta$]{} luminosity in units of $10^{43}$ erg s$^{-1}$, and $n_{e,100}$ is the electron density in units of 100 cm$^{-3}$. The electron densities inferred from the [\[S [II]{}\]]{} $\lambda6716/\lambda6731$ ratios are typically a few hundred cm$^{-3}$ for AGN outflows [e.g., @Nesvadba2006; @Villar-Martin2008; @Greene2011], while much higher densities up to $10^5$ cm$^{-3}$ have been measured using other diagnostics [e.g., @Holt2010]. Such measurements are likely biased to the densest gas clumps of small volume-filling factors in the outflow [@Greene2011]. Studies of extended AGN scattered light, which is not biased to the dense gas, infer much lower densities $<$ 1 cm$^{-3}$ [@Zakamska2006]. For the purpose of an order-of-magnitude estimation, we assume an electron density of 100 cm$^{-3}$, which represents the dense clumps in the outflow. [$L_{\mathrm{H{\beta}}}$]{} is measured from the Magellan slits assuming the [H$\beta$]{} surface brightness profile is azimuthally symmetric. We then calculate the kinetic power as $$\dot{E}_{\mathrm{kin}} = \frac{\frac{1}{2} M_{\mathrm{H}} v^2}{t_{\mathrm{dyn}}} = \frac{1}{2} M_{\mathrm{H}} R_{\mathrm{KDR}}^{-1} v^3,$$ where [$t_{\mathrm{dyn}}$]{} is the dynamical time scale, [$R_{\mathrm{KDR}}$]{} is the size of the kinematically disturbed region, and $v=$[$w_{80,\rm{AVG}}$]{}/1.3 is the deprojected velocity of the outflow.
Here $M_{\mathrm{H}}$ includes the mass of all the velocity components, not just the high velocity parts, so $\frac{1}{2} M_{\mathrm{H}} v^2$ represents the total kinetic energy of the dense ionized gas. We assume the kinetic energy of the outflow dominates the total kinetic energy such that $\dot{E}_{\mathrm{kin, outflow}} \sim \dot{E}_{\mathrm{kin, total}}$. This value can still be an underestimate of the true kinetic energy of the outflow as the [H$\beta$]{} emission line traces only the densest ionized gas. The second method is similar to the Sedov-Taylor solution for a supernova remnant where a spherical bubble is expanding into a medium of constant density. This method is motivated by observations of such organized outflows in similar type 2 AGN, e.g. SDSS J1356+1026 [@Greene2012]. We adopt a simple definition of [$\dot{E}_{\mathrm{ST}}$]{} as $$\dot{E}_{\mathrm{ST}} = \frac{1}{2} \dot{M} v^2 = 2 \pi \rho_0 R_{\mathrm{KDR}}^2 v^3,$$ where $$\dot{M}= 4 \pi \rho_0 R_{\mathrm{KDR}}^2 v$$ is the rate at which ambient gas enters the outflow, $\rho_0$ is the ambient gas density, [$R_{\mathrm{KDR}}$]{} is the size of the kinematically disturbed region, and $v=$[$w_{80,\rm{AVG}}$]{}/1.3 is the deprojected velocity of the outflow. The ambient gas density $\rho_0$ is assumed to be a constant $\rho_0=m_p\times(0.5~\mathrm{cm}^{-3})$. Such a density is supported by scattering measurements of type 2 AGN by @Zakamska2006. This definition is within 20% of the Sedov-Taylor solution described in eq. 39.9 of @Draine2011, and eq. 7.56 of @Dyson1980, and about 30% lower than the one adopted by @Nesvadba2006 and @Greene2012. This method likely overestimates the kinetic power, as it assumes that all of the ambient gas is entrained in the outflow. Indeed, the resulting [$\dot{E}_{\mathrm{ST}}$]{} is higher than [$\dot{E}_{\mathrm{kin}}$]{} by 1 to 3 orders of magnitude (Sec. \[sec:energetics:regress\]).
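Under the stated assumptions ($n_e = 100$ cm$^{-3}$ for the case B mass, $\rho_0 = m_p \times 0.5$ cm$^{-3}$ for the Sedov-Taylor-like estimate), the two brackets follow directly from the formulas above. The sketch below transcribes the two estimators in cgs units; the example numbers are illustrative, not a specific object in the sample:

```python
import math

M_SUN_G = 1.989e33    # g
M_P_G = 1.6726e-24    # proton mass, g
KPC_CM = 3.0857e21    # cm per kpc

def mass_H_msun(L_hbeta, n_e=100.0):
    """Case B dense ionized gas mass (Osterbrock):
    M_H = 6.8e8 * L_{Hbeta,43} * n_{e,100}^{-1}, in solar masses."""
    return 6.8e8 * (L_hbeta / 1e43) / (n_e / 100.0)

def e_dot_kin(M_H_msun, R_kpc, v_kms):
    """Kinetic power (1/2) M_H v^3 / R_KDR in erg/s (lower bracket)."""
    return 0.5 * M_H_msun * M_SUN_G * (v_kms * 1e5) ** 3 / (R_kpc * KPC_CM)

def e_dot_st(R_kpc, v_kms, n0=0.5):
    """Sedov-Taylor-like power 2 pi rho0 R^2 v^3 in erg/s (upper bracket)."""
    rho0 = n0 * M_P_G
    return 2.0 * math.pi * rho0 * (R_kpc * KPC_CM) ** 2 * (v_kms * 1e5) ** 3

# Illustrative object: L_Hbeta = 1e42 erg/s, R_KDR = 10 kpc, v = 1000 km/s.
M_H = mass_H_msun(1e42)  # ~6.8e7 M_sun of dense ionized gas
lo, hi = e_dot_kin(M_H, 10.0, 1000.0), e_dot_st(10.0, 1000.0)
print(f"{lo:.1e} - {hi:.1e} erg/s")  # the two estimates differ by ~3 dex
```

For these fiducial inputs the gap between the two estimators is about three orders of magnitude, consistent with the range quoted in the text.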
All of these quantities – [$R_{\mathrm{KDR}}$]{}, [$w_{80,\rm{AVG}}$]{}, [$t_{\mathrm{dyn}}$]{}, [$\dot{E}_{\mathrm{kin}}$]{}, and [$\dot{E}_{\mathrm{ST}}$]{} – as well as their errors are tabulated in Tables \[tab:measurements\] and \[tab:measurements\_liu13\]. The errors on [$R_{\mathrm{KDR}}$]{} and [$w_{80,\rm{AVG}}$]{} are $0\farcs5$ \[$0\farcs35$\] and 20 [km s$^{-1}$]{} \[75 [km s$^{-1}$]{}\] for our sample \[the @Liu2013b sample\]. The errors on [$t_{\mathrm{dyn}}$]{}, [$\dot{E}_{\mathrm{kin}}$]{}, and [$\dot{E}_{\mathrm{ST}}$]{} are propagated from the input quantities, where the errors on [$L_{\mathrm{H{\beta}}}$]{} and the gas densities, $n_{e,100}$ and $\rho_0$, are assumed to be 20% and 50%, respectively. The size upper limits for unresolved objects are also propagated to the derived quantities. The absolute values of [$\dot{E}_{\mathrm{kin}}$]{} and [$\dot{E}_{\mathrm{ST}}$]{} should be taken as order-of-magnitude estimates and are only used to bracket the true value of the outflow kinetic power.

Relation between the Outflow Properties and the AGN Luminosities {#sec:energetics:regress}
----------------------------------------------------------------

In this section, we investigate how outflow size, velocity, dynamical time-scale, and energy correlate with AGN luminosity. We adopt the 15 $\micron$ luminosity [$\nu L_{\nu,15}$]{} as the AGN luminosity indicator, as discussed in Sec. \[sec:data:WISE\]. The outflow size, velocity, dynamical time scale, and energetics are defined in Sec. \[sec:energetics:definition\]. The relations between these outflow quantities ($y$) and the AGN luminosity indicator [$\nu L_{\nu,15}$]{} are quantified by a single power law, $$\begin{aligned} \log(y) = \alpha + \beta \times \log(\nu L_{\nu, 15}). \end{aligned}$$ We adopt a Bayesian linear regression approach developed by @Kelly2007 using a Markov chain Monte Carlo sampling method, which accounts for the measurement errors, intrinsic scatter, and upper or lower limits.
The measurement errors are as summarized in Sec. \[sec:energetics:definition\]. We assume that there is no error in the AGN luminosity indicator [$\nu L_{\nu,15}$]{}. The intrinsic scatter is fitted as a hidden variable. The upper and lower limits are included in the fits as censored data. Three objects in our sample and one object from @Liu2013b have no kinematically disturbed region. They are only used for the [$w_{80}$]{} - luminosity relation. Two objects in our sample and one object in the @Liu2013b sample have no [H$\beta$]{} measurement and are unavailable for the [$\dot{E}_{\mathrm{kin}}$]{} relation; see Tables \[tab:measurements\] and \[tab:measurements\_liu13\] for details. To assess the statistical significance of the correlations, we calculate the Pearson’s $r$ correlation coefficient and its $p$-value using only the valid measurements (solid circles in Fig. \[fig:kimlum\_regress\]). The results are shown in Fig. \[fig:kimlum\_regress\] and tabulated in Table \[tab:regress\]. We find that the outflow radius [$R_{\mathrm{KDR}}$]{} correlates strongly with the AGN luminosity, with a Pearson’s $r$ $p$-value of $3\times10^{-4}$ and a power-law index of $0.60 ^{+0.13}_{-0.13}$. The correlations of the [$w_{80,\rm{AVG}}$]{} - luminosity and the [$t_{\mathrm{dyn}}$]{} - luminosity relations are not as strong, with $p$-values of only about $10^{-2}$ and power-law indices of $0.17^{+0.06}_{-0.07}$ and $0.52 ^{+0.16}_{-0.16}$, respectively. The Sedov-Taylor power estimate [$\dot{E}_{\mathrm{ST}}$]{} also correlates with the AGN luminosity, with a power-law index of $1.76 ^{+0.31}_{-0.31}$, but the kinetic power estimate [$\dot{E}_{\mathrm{kin}}$]{} shows no strong correlation with the luminosity. The quoted errors represent 1-$\sigma$ uncertainties. We compare the two energy estimates in Fig. \[fig:kimlum\_energetics\].
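The significance check is a standard Pearson correlation on the logarithmic quantities. The snippet below illustrates it on synthetic data with a slope of 0.6 and 0.2 dex scatter (mimicking, not reproducing, the [$R_{\mathrm{KDR}}$]{}-luminosity relation); for $n = 25$ points, a $t$-statistic well above $\sim$3 corresponds to $p \ll 0.01$:

```python
import numpy as np

# Synthetic log-luminosities and log-radii; slope and scatter are
# illustrative values only.
rng = np.random.default_rng(1)
log_L = rng.uniform(44.0, 46.5, 25)                      # log nu L_nu,15
log_R = 0.6 * (log_L - 45.0) + rng.normal(0.0, 0.2, 25)  # log R_KDR

r = np.corrcoef(log_L, log_R)[0, 1]
# t-statistic for H0 "no correlation"; |t| >> 3 implies p << 0.01 here.
t = r * np.sqrt(len(log_L) - 2) / np.sqrt(1.0 - r ** 2)
print(f"r = {r:.2f}, t = {t:.1f}")
```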
The [$\dot{E}_{\mathrm{ST}}$]{} are typically 1 to 3 orders of magnitude higher than [$\dot{E}_{\mathrm{kin}}$]{}, meaning that we cannot constrain the outflow energetics precisely. These two methods bracket a very large range of feedback energy efficiency, $\eta=\dot{E}/{\ensuremath{L_{\mathrm{bol}}}}=0.01\% - 30\%$, reflecting large uncertainties in the outflowing mass. The dependence of this energy efficiency on the AGN luminosity also cannot be constrained precisely. Our data do not rule out the scenario where $\eta$ is a constant within the luminosity range of [$L_{\mathrm{bol}}$]{}$\sim 10^{45-47} $ [erg s$^{-1}$]{}. It is possible that most AGN in this luminosity range are capable of driving outflows with energy proportional to their AGN luminosity. We find that the outflow properties, including the radius and velocity, correlate and increase with the AGN bolometric luminosity. AGN outflows appear to be a common phenomenon within the luminosity range of $L_{\nu,~\rm{15 \micron}} \sim 10^{44-46}$ [erg s$^{-1}$]{} or ${\ensuremath{L_{\mathrm{bol}}}}\sim 10^{45-47} $ [erg s$^{-1}$]{}. If there is a critical luminosity threshold for AGN feedback, below which outflows cannot be driven, it must occur at yet lower AGN luminosities.

Outflow Occurrence Rates and Timescales {#sec:timescale}
=======================================

In this section we discuss the occurrence rates and the sizes of the extended ionized outflows in luminous type 2 AGN and the implications for characteristic timescales and variability of accretion. Kpc-scale ionized outflows are found to be common among luminous type 2 AGN. If we focus on objects with [$L_{\mathrm{bol}}$]{} $\sim10^{46}$ [erg s$^{-1}$]{}, 13 of the 14 objects in the combined @Liu2013b plus our sample host 10-kpc scale extended outflows based on our kinematic requirement. This gives a high occurrence rate of extended outflows of $\gtrsim 90\%$.
At a lower luminosity of [$L_{\mathrm{bol}}$]{}$\sim10^{45-46}$ [erg s$^{-1}$]{}, a high fraction of these objects also host outflows (9/12), but the typical sizes of these outflows are smaller, $\sim 1-3$ kpc. Using the Gemini GMOS IFU to study luminous type 2 AGN ([$L_{\mathrm{bol}}$]{}$=10^{45-46}$ [erg s$^{-1}$]{}) at $z=0.1-0.2$, @Harrison2014 also find that all 16 of their AGN have outflows extending $> 6$ kpc. The @Liu2013b sample is selected purely based on [\[O [III]{}\]]{} luminosity, but the @Harrison2014 sample could be biased by their high [[\[O [III]{}\]]{}$\lambda$5007]{} line width selection. Likewise, our broadband image selection could potentially bias our sample. But the luminous type 2 AGN ([$L_{\mathrm{bol}}$]{}$>10^{46}$ [erg s$^{-1}$]{}) in the parent @Mullaney2013 sample also have a high fraction (59%)[^12] of objects with high linewidths ([$w_{80}$]{} $> 600$ [km s$^{-1}$]{}), likely indicating outflows. While it is a concern that beam smearing could lead to an overestimation of the outflow sizes, the occurrence rate of extended outflows is still high after such effects are taken into account. After subtracting the unresolved nuclear component, @Husemann2015 still recover high line widths ([$w_{80}$]{}$>600$ [km s$^{-1}$]{}) in the extended nebulae in seven out of twelve ($60\%$) type 1 AGN from @Liu2014a. In the $z\sim 0.5$ sample of @Liu2013b, where the effect can be most severe, if we conservatively take out all four objects that could be considered as being marginally resolved[^13] we still arrive at an occurrence rate of 60%. Therefore, while most type 2 AGN studies suggest a high extended outflow occurrence rate of $\sim 90$% among luminous AGN (${\ensuremath{L_{\mathrm{bol}}}}\gtrsim 10^{46}$ [erg s$^{-1}$]{}), we can place a conservative lower limit of $60$% accounting for beam-smearing effects. 
To maintain such a high occurrence rate, each AGN outflow episode must be much longer than the outflow dynamical timescale, to reduce the probability of catching undersized outflows as they grow. As it takes [$t_{\mathrm{dyn}}$]{}$\sim 10^7$ years (Sec. \[sec:energetics:regress\] and Fig. \[fig:kimlum\_regress\]) to inflate a 10 kpc-scale bubble with an observed velocity of $\sim 1000$ [km s$^{-1}$]{}, these extended outflows must have been launched at least $\sim 10^7$ yr in the past. If $80\%$ of the luminous AGN were active $\sim 10^7$ yr ago, the entire outflow episode must last for $\gtrsim 5\times10^7$ years. It seems unlikely that the AGN stay luminous ([$L_{\mathrm{bol}}$]{}$> 10^{46}$ [erg s$^{-1}$]{}) throughout the entire $\sim 10^8$ yr episode, as this timescale is very similar to the total growth time of a massive black hole, $\sim 10^{7-8}$ yr [e.g., @Soltan1982; @Martini2001; @Yu2002 inferred from quasar clustering and black hole mass density]. Also, with this constant energy supply, the outflow would continue to expand at a velocity of $\sim 1000$ [km s$^{-1}$]{} and eventually reach a size of $\sim$ 100 kpc in $10^8$ years, if the outflow is described by the energy-conserving model of @King2011. However, most systems in our sample and the @Liu2013b sample with good signal-to-noise ratios at large radii do not show signs of extended outflows beyond $\sim 10$ kpc, but rather have clear velocity drops on these scales. Instead, we suggest it is far more natural that the AGN flickers on and off throughout this $\sim 10^8$ yr episode. In an analytical model by @King2011 of an energy-conserving outflow expanding in an isothermal potential of $\sigma = 200$ [km s$^{-1}$]{}, when the AGN is accreting close to its Eddington rate, the outflow will expand at a constant velocity of $\sim 2000$ [km s$^{-1}$]{}. 
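The dynamical-time estimate quoted above is just $t_{\mathrm{dyn}} = R/v$; a quick order-of-magnitude check with the numbers from the text (10 kpc, 1000 km s$^{-1}$):

```python
# Order-of-magnitude check of t_dyn = R / v for the outflow bubbles.
KPC_IN_KM = 3.086e16   # kilometres per kiloparsec
SEC_IN_YR = 3.156e7    # seconds per year

R_km = 10 * KPC_IN_KM  # 10 kpc bubble radius
v_kms = 1000.0         # observed outflow velocity [km/s]

t_dyn_yr = R_km / v_kms / SEC_IN_YR
print(f"t_dyn ~ {t_dyn_yr:.1e} yr")   # ~1e7 yr, as quoted in the text
```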
If the AGN is shut off after $10^6$ years, the outflow will still continue to expand due to its internal thermal energy, but it will slowly decelerate, until $\sim 10^7$ years later the velocity drops below, say, 300 [km s$^{-1}$]{} and the outflow stalls. At this point the outflow has reached a size of $\sim$ 10 kpc, as calculated by @King2011. Therefore, to maintain the high observed duty cycle of outflows with sizes of a few to 10 kpc and velocities of about $1000$ [km s$^{-1}$]{}, there should be several AGN bursts, each $\sim 10^6$ years long with $\sim 10^7$-year intervals between them, so that when we observe a high luminosity AGN, it often lights up the extended bubble driven by the previous AGN burst. Each AGN burst may even be shorter [e.g., $10^5$ years, @Schawinski2015] and more frequent, as long as it supplies enough energy to sustain extended outflows throughout the episode. There are other reasons to favor such an AGN flickering model. Theoretically, flickering is expected due to the episodic nature of gas cooling and feedback [@Novak2011]. We have posited an AGN cadence of $\sim 10^6$-year bursts with $10^7$-year intervals to explain one particular system with multi-scale ionized and molecular outflows [@Sun2014]. AGN variability on timescales $\lesssim 10^{7}$ years has also been proposed to statistically tie star formation and AGN activity, in a model that can successfully reproduce observed AGN luminosity functions [@Hickox2014]. Therefore, short-term AGN variability ($\lesssim 10^7$ yr) over a long-term episode ($\sim 10^8$ yr) appears to be a feasible scenario to explain the sizes and the occurrence rate of extended outflows. If the type 2 AGN studies [this paper, @Liu2013; @Harrison2014] underestimate the impact of seeing and the occurrence rate is actually 60% or lower, long outflow episodes with $\sim 10^8$-year duration would no longer be required. 
Furthermore, flickering may be in conflict with the energy requirements inferred from SZ observations of luminous AGN [@Crichton2016]. Finally, we note that these objects are all selected by virtue of their high [\[O [III]{}\]]{} luminosities, so we may be biased toward objects in an outflow-dominated phase. In summary, we estimate that extended (few - 10 kpc) ionized outflows are present in $> 60\%$ and possibly $90\%$ of all luminous type 2 AGN ([$L_{\mathrm{bol}}$]{}$> 10^{46}$ [erg s$^{-1}$]{}). Given that the outflow formation times are $\sim 10^7$ years, such a high occurrence rate implies a long duration for each outflow episode of $\sim 10^8$ years. It is unlikely that the AGN maintains a high luminosity ([$L_{\mathrm{bol}}$]{}$>10^{46}$ [erg s$^{-1}$]{}) throughout this $10^8$-year episode. Instead, our observations suggest that AGN flicker on a shorter time scale ($\lesssim 10^7$ years) and spend only $\sim$ 10% of their time in such a high luminosity state, while still maintaining a high occurrence rate of extended outflows. Summary {#sec:summary} ======= We observe twelve luminous ([$L_{\mathrm{bol}}$]{}$\sim10^{45.0-46.5}$ [erg s$^{-1}$]{}) nearby ($z\sim0.1$) type 2 (obscured) AGN with the Magellan IMACS long-slit spectrograph to study their ionized outflow properties using primarily the [[\[O [III]{}\]]{}$\lambda$5007]{} line. These objects are selected from a parent sample of $\sim 24~000$ $z<0.4$ spectroscopically identified AGN from SDSS [@Mullaney2013] to have high [\[O [III]{}\]]{} and *WISE* mid-IR luminosities as well as extended emission in SDSS images signaling extended ionized nebulae. To increase the sample size for statistical and correlation analysis, we include two external samples from @Liu2013b and @Hainline2014d of luminous type 2 AGN to cover AGN luminosities from [$L_{\mathrm{bol}}$]{}$=10^{45}$ to $10^{47}$ [erg s$^{-1}$]{}. The AGN luminosities in this paper are inferred from the *WISE* mid-IR luminosity at rest-frame 15 \micron. 
The main results are as follows: \(i) The radii of the narrow-line regions [$R_{\mathrm{NLR}}$]{}, as defined by the [[\[O [III]{}\]]{}$\lambda$5007]{} isophotal radius, are 2 - 16 kpc in our sample. The exceptions are four unresolved objects and one that has a particularly large [$R_{\mathrm{NLR}}$]{} of 33 kpc, which is most likely an ionized tidal feature. We find that [$R_{\mathrm{NLR}}$]{} increases with the AGN luminosity at low AGN luminosities but flattens beyond a radius of $\sim$ 10 kpc, possibly due to a change in the ionization state (Sec. \[sec:sizes:relation\]; Fig. \[fig:sizelum\_whichR\]). [$R_{\mathrm{NLR}}$]{} is sensitive to the presence of gas at large radii such as extended tidal features. \(ii) A large fraction (9/12) of our objects have high [[\[O [III]{}\]]{}$\lambda$5007]{} line-widths ([$w_{80}$]{}$>600$ [km s$^{-1}$]{}) indicating disturbed motions that are most likely outflows, five of which are spatially resolved. To quantify the size of these outflows, we define [$R_{\mathrm{KDR}}$]{} as the radius of the kinematically disturbed region where the [[\[O [III]{}\]]{}$\lambda$5007]{} line-width [$w_{80}$]{} is higher than $600$ [km s$^{-1}$]{} and the high velocity component ($|v| > 300$ [km s$^{-1}$]{}) is brighter than an isophotal threshold (see Sec. \[sec:sizes:rv\]). The resolved [$R_{\mathrm{KDR}}$]{} are between 2 and 8 kpc and are typically smaller than [$R_{\mathrm{NLR}}$]{} by a few kpc. [$R_{\mathrm{KDR}}$]{} correlates strongly with the AGN luminosity. It is possible that the [$R_{\mathrm{KDR}}$]{}-[$\nu L_{\nu,15}$]{} relation also follows a flattened power-law that saturates at about 10 kpc at higher luminosity, but more observations are needed to confirm this. The best-fit power-law index of the [$R_{\mathrm{KDR}}$]{}-[$\nu L_{\nu,15}$]{} relation is $0.60 ^{+0.13}_{-0.13}$ assuming a single power law. \(iii) Both the velocities and the dynamical time scales of the outflows show correlations with AGN luminosity (Sec. 
\[sec:energetics:regress\], Fig. \[fig:kimlum\_regress\]). The outflow velocities range from a few hundred to 1500 [km s$^{-1}$]{} and scale with luminosity with a shallow power-law index of $0.17 ^{+0.06}_{-0.07}$ and a large scatter. The dynamical time-scales are about [$t_{\mathrm{dyn}}$]{}$\sim 10^{6.5-7}$ years and have a steeper scaling with luminosity, with an index of $0.52 ^{+0.16}_{-0.16}$. \(iv) The outflow masses and energetics are uncertain due to the unknown clumping factor of the [\[O [III]{}\]]{} emitting gas. We use two methods, which provide upper and lower limits, to constrain the energetics and the energy efficiency. The constraint on the efficiency is loose ($\eta=\dot{E}/{\ensuremath{L_{\mathrm{bol}}}}=0.01\% - 30\%$) and there is no evidence that the outflow energy efficiency depends on the AGN luminosity (Sec. \[sec:energetics:regress\], Fig. \[fig:kimlum\_energetics\]). \(v) There are three objects in our sample that have a high [[\[O [III]{}\]]{}$\lambda$5007]{} linewidth plateau of [$w_{80}$]{}$\sim 600-1500$ [km s$^{-1}$]{} followed by a sudden linewidth drop at a few kpc (SDSS J1000+1242, J1010+1413, and J1255-0339). Such a [$w_{80}$]{} profile is consistent with a constant-velocity outflow. The location of the velocity drop, which is captured by [$R_{\mathrm{KDR}}$]{}, could correspond to the edge of the outflow where the shock fronts encounter the undisturbed galactic medium (Sec. \[sec:energetics:vprof\]). \(vi) The occurrence rate of extended outflows is high among luminous type 2 AGN ($> 60\%$, [$L_{\mathrm{bol}}$]{}$\gtrsim 10^{46}$ [erg s$^{-1}$]{}). Given the outflow dynamical time scales of $\sim 10^7$ years, to have such a high occurrence rate, each outflow episode should last for $\sim 10^8$ years. While the AGN is unlikely to remain at a high luminosity the entire time, the AGN could flicker on shorter time scales. For example, it could have several $\sim 10^6$-year-long bursts with $\sim 10^7$-year intervals between them. 
If the outflows are energy-conserving, each burst may drive a kpc-scale outflow that lasts for $\sim 10^7$ years [@King2011], reproducing the high occurrence rate (Sec. \[sec:timescale\]). In this paper, we find that extended ionized outflows are common among luminous type 2 AGN, with their sizes positively correlated with the AGN luminosities. It is important to extend these measurements to lower luminosity AGN to test whether this relation continues. On the other hand, the extended outflows identified in this paper (e.g., SDSS J1000+1242, SDSS J1010+1413) provide good candidates for multi-wavelength follow-up, e.g. in the sub-millimeter and X-ray, that can probe the other relevant phases of the outflow (e.g., cold molecular and hot plasma) and provide a more complete picture of the feedback processes. This work also confirms that optical broadband images can help identify extended ionized nebulae. It is important to explore the potential of broadband imaging selection to find extended outflows in large imaging surveys, e.g., SDSS, HSC, or in the future LSST. Such a technique could help us explore the demographics of the most energetic AGN feedback systems. A.-L. Sun is grateful to A. Dressler, D. Kelson, and E. Villanueva for assistance with the Magellan data reduction. A.-L. Sun thanks G. Liu and D. Wylezalek for communicating their research results. J.E. Greene acknowledges funding from the National Science Foundation under Grant No. AAG: \#1310405. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This research made use of Astropy, a community-developed core Python package for Astronomy[^14]. 
This work also made use of the PyAstronomy package. PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA. Individual Objects {#sec:append:objs} ================== The Magellan spectroscopic data for all of our sources are displayed in this appendix (Fig. \[fig:spec2200\] to \[fig:spec3005\]), except for the two objects SDSS J1000+1242 and J1010+1413, which are shown in Fig. \[fig:spec2100\] and \[fig:spec2101\], as they are described in the main text. The majority of the objects have not been studied in detail in the literature, except for SDSS J1000+1242, SDSS J1010+1413, and SDSS J1419+0139, described below. Another object, SDSS J1255$-$0339, is also discussed here for its abnormally extended narrow line region. This system was observed by @Harrison2014 with the Gemini GMOS IFU. Their IFU observation reveals regions of broad line widths (up to [$w_{80}$]{} of 850 [km s$^{-1}$]{}) with a kinematic size of 14 kpc, roughly consistent with our observation. As this broad component shows a clear velocity gradient, they suggest that it is a pair of bi-polar super-bubbles. The more extended narrow-line component is also partly seen in this IFU data, but is limited to the central $3\arcsec-4\arcsec$ due to its small field-of-view. Our observation confirms that this component extends to about $10\arcsec$, roughly the same as in the SDSS optical image. Like J1000+1242, this system was observed by @Harrison2014 with the Gemini GMOS IFU, which reveals a very broad [\[O [III]{}\]]{} component of [$w_{80}$]{}=1450 [km s$^{-1}$]{}, an unambiguous sign of high velocity outflows. The size of the outflow was not constrained by @Harrison2014 due to the limited field-of-view, but is measured in our Magellan data to have a radius of [$R_{\mathrm{KDR}}$]{} = 8 kpc. Our Magellan slit is placed along the minor axis of the galaxy to capture the two bright green blobs in the SDSS image, which signal [\[O [III]{}\]]{} emission. 
@Harrison2014 observed the inner parts of these two features and found narrow [\[O [III]{}\]]{} emission separated by $\sim 350$ [km s$^{-1}$]{} in velocity. Our Magellan spectra confirm that these narrow emission clouds extend to $\sim$ 16 kpc each from the nucleus. They could be galactic medium being passively illuminated by ionization cones, or parts of bipolar outflows. While @Harrison2014 selected targets based on broad [\[O [III]{}\]]{} line widths and ours are based on the [\[O [III]{}\]]{} extent, it is interesting that both samples pick up the two powerful outflows J1000+1242 and J1010+1413. Possibly both the high velocity and the extended [\[O [III]{}\]]{} are results of powerful AGN feedback. This object has received little attention in the literature, but it has a spectacular pair of extended green spiral features of size about 60 kpc in the SDSS image, most likely tidal tails. Our Magellan spectra reveal narrow [\[O [III]{}\]]{} of width [$w_{80}$]{}$\lesssim$ 300 [km s$^{-1}$]{} all along these features, making it the most extended narrow line region in the sample. These tidal features are likely ionized by the central AGN, as the [\[O [III]{}\]]{} to [H$\beta$]{} ratios are about 10. The system’s high infrared luminosity (classified as a ULIRG by @KilerciEser2014) and complex nuclear morphology also suggest that it may be in the late stages of a merger. This target was observed by @McElroy2014 with AAT’s SPIRAL IFU, which reveals a spatially resolved [\[O [III]{}\]]{} emitting region with a moderate maximum line width [$w_{80}$]{} of 529 [km s$^{-1}$]{}, consistent with our observations. Its SDSS image reveals an extended tidal tail indicating merger activity. 
Slit Widths of the Magellan IMACS Centerfield Slit-viewing Spectrograph {#sec:append:slit} ======================================================================= We inspect the slit widths of the Magellan IMACS Centerfield Slit-viewing Spectrograph, and find that the widest of its five slits, referred to as the $1\farcs5$ slit in the IMACS User Manual, has an actual slit width of $1\farcs3$. This result is confirmed by comparing the line widths of the calibrating arc lamp observed through these five slits. As shown in Fig. \[fig:slitwidths\], the line widths of the first four slits follow the relation $$w_l^2 = w_0^2 + r W_s^2,$$ where $w_l$ is the observed arc line width, and $W_s$ is the slit width – 0.25, 0.50, 0.75, and 1.0 arcsec for the first four slits. The intercept $w_0^2$ and the slope $r$ are determined by a linear regression over these four slits. However, the fifth slit has an arc line width narrower than expected if the slit width were $1\farcs5$. It is instead consistent with a slit width of $1\farcs3$. *WISE* Luminosities of Type 1 and Type 2 AGN {#sec:append:WISE} ============================================ *WISE* mid-IR luminosities have been used to determine the AGN bolometric luminosities. However, type 2 AGN in general have redder *WISE* colors compared to their type 1 counterparts [@Yan2013; @Liu2013; @Zakamska2016], such that the inferred bolometric luminosities for type 2 AGN can be underestimated compared to type 1 at shorter mid-IR wavelengths. Therefore, one should be cautious when using the mid-IR to compare the luminosities between type 1 and type 2 AGN. We investigate the difference in *WISE* mid-IR luminosities between type 1 and type 2 AGN at three different wavelengths – rest-frame 8, 15, and 22 \micron – using the sample of SDSS spectroscopically selected luminous AGN from @Mullaney2013. We use the luminous AGN at redshifts $0<z<0.2$ that have [[\[O [III]{}\]]{}$\lambda$5007]{} luminosities above $L_{\rm{[OIII]}}>5\times10^{41}$ [erg s$^{-1}$]{}, similar to our Magellan sample. 
365 of these objects are type 1 and 546 are type 2. As shown on the lower right of Fig. \[fig:KStest\], the type 1 and type 2 AGN have similar $L_{\rm{[OIII]}}$ distributions that are indistinguishable by a KS test with a high $p$-value of 0.41. As shown in Fig. \[fig:KStest\], at fixed [\[O [III]{}\]]{} luminosities, we find that the 8 \micron luminosities of the type 1 AGN are higher than those of the type 2 AGN by 0.2 dex. This difference is statistically significant with a KS-test $p$-value of $4\times10^{-10}$. This discrepancy is much smaller at 15 \micron (0.07 dex, $p$-value of 0.02), and negligible at 22 \micron (0.002 dex, $p$-value of 0.67). At a fixed X-ray luminosity, such a discrepancy has also been found between type 1 and type 2 AGN [@Burtscher2015]. These tests suggest that at a given intrinsic luminosity, the mid-IR luminosity of an AGN depends on its spectral type. Such an effect is especially severe at shorter wavelengths, e.g. 8 \micron, and grows less significant at longer wavelengths, e.g. 15 - 22 \micron. With a sample of both type 1 and type 2 AGN, @Liu2014a find a flattening at the high luminosity end of the [\[O [III]{}\]]{} nebula size - 8 \micron luminosity relation. However, they suspect that the flattening is an artifact caused by the higher mid-IR luminosity of type 1 AGN. We revisit this relation with a larger sample of objects from this paper, @Liu2013b, @Liu2014a, and @Hainline2014d. We also include the eight type 1 AGN observed in the same Magellan run as in this paper. As shown in Fig. \[fig:sizelum\_whichWISE\], we find that the type 1 and type 2 AGN follow different nebula size - 8 \micron luminosity relations, such that adding luminous type 1 AGN to a sample of type 2 AGN can indeed result in or exaggerate the apparent flattening of the relation. 
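The two-sample KS comparison used above can be sketched as follows. The samples here are synthetic Gaussians (only the 0.2 dex offset and the sample sizes 365/546 are taken from the text; the 0.4 dex width is an assumed value for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Synthetic log-L(8um) distributions at fixed L_[OIII]:
# type 1 offset by the +0.2 dex reported in the text; width assumed.
type2 = rng.normal(44.5, 0.4, size=546)
type1 = rng.normal(44.7, 0.4, size=365)

# Two-sample Kolmogorov-Smirnov test of the two distributions
stat, p = ks_2samp(type1, type2)
print(f"KS statistic = {stat:.3f}, p = {p:.1e}")
```

With samples this large, even a 0.2 dex shift yields a very small $p$-value, consistent with the highly significant difference reported at 8 \micron.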
However, if we use longer mid-IR wavelengths, say, 15 \micron (right panel), where the effect is less significant, the separation between the size - luminosity relations of the type 1 and type 2 AGN becomes smaller, and the flattening becomes less obvious. Therefore, combining type 1 and type 2 AGN samples to study their nebula size - mid-IR luminosity relations can be misleading, especially at shorter wavelengths such as 8 \micron. To use mid-IR luminosities as an AGN luminosity indicator, longer wavelengths, such as 15 \micron, can be more robust against variations in AGN spectral types. Simulations of Bias and Uncertainty in [$w_{80}$]{} {#sec:append:sim_w80} =================================================== The [$w_{80}$]{} measurement on a 1-D line spectrum could be affected by the instrumental spectral PSF and the noise. To quantify the biases and the uncertainties in [$w_{80}$]{} due to these effects, we perform a series of 1-D simulations. We simulate the 1-D spectrum of the [[\[O [III]{}\]]{}$\lambda$5007]{} line with double Gaussian profiles with a range of line widths ($\sigma_{narrow}=$ 100 - 500 [km s$^{-1}$]{}), flux ratios ($F_{broad}/F_{narrow}= 0.1-10$), and width ratios ($\sigma_{broad}/\sigma_{narrow}= 1.5-3$). These simulated lines are convolved with the empirical spectral PSF measured from the arc frames. Gaussian noise is then inserted into the convolved double Gaussian line profiles. We measure the [$w_{80}$]{} of the original spectrum ($w_{80, \mathrm{model}}$), the one convolved with the PSF ($w_{80, \mathrm{convl}}$), and the one with noise ($w_{80, \mathrm{noise}}$), using the same method as described in Sec. \[sec:sizes:spec\]. As shown on the left panel of Fig. \[fig:sim\_w80\], the bias of [$w_{80}$]{} due to the PSF is not a strong function of the detailed line shape but depends only on the line width. 
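For reference, $w_{80}$ is the velocity width enclosing the central 80% of the line flux (the difference between the 90th and 10th flux percentiles of the profile). A minimal sketch, verified on a single Gaussian for which $w_{80} = 2.563\,\sigma$ analytically (this is an illustrative implementation, not the paper's pipeline):

```python
import numpy as np

def w80(v, flux):
    """Velocity width enclosing the central 80% of the line flux
    (difference between the 90th and 10th percentiles of the
    cumulative flux distribution)."""
    cdf = np.cumsum(flux)
    cdf = cdf / cdf[-1]
    v10 = np.interp(0.10, cdf, v)
    v90 = np.interp(0.90, cdf, v)
    return v90 - v10

# Sanity check on a single Gaussian: w80 = 2.563 * sigma analytically.
v = np.linspace(-3000, 3000, 6001)   # velocity grid [km/s], 1 km/s pixels
sigma = 300.0
flux = np.exp(-0.5 * (v / sigma) ** 2)
width = w80(v, flux)
print(width)   # ≈ 2.563 * 300 ≈ 769 km/s
```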
This relation is well fitted by the quadratic mean function $$w_{80, \mathrm{convl}}^2 = w_{80, \mathrm{model}}^2 + w_{80, \mathrm{inst}}^2,$$ where $w_{80, \mathrm{inst}}$ is the constant instrumental resolution, which is 243 [km s$^{-1}$]{} for the $1\farcs0$ slit and 282 [km s$^{-1}$]{} for the $1\farcs3$ slit. The random noise introduces random uncertainties and a bias to the [$w_{80}$]{} measurements. The bias is negligible for SNR $> 10$ but can be significant for low signal-to-noise data. We define $$w_{80, \mathrm{err}} = \langle w_{80, \mathrm{noise}} - w_{80, \mathrm{convl}}\rangle_{\mathrm{RMS}}$$ to encompass both effects. We find that $w_{80, \mathrm{err}}$ depends on both the signal-to-noise ratio and the width of the line, which can be fitted by a 2-dimensional 3rd-order polynomial function (right panel of Fig. \[fig:sim\_w80\]), and used to assign uncertainties to our [$w_{80}$]{} measurements. According to these results, for a typical line with a measured [$w_{80}$]{} of 600 [km s$^{-1}$]{} and a peak signal-to-noise ratio of 30, both the bias and the random uncertainty on [$w_{80}$]{} are about 10% (60 [km s$^{-1}$]{}). For wider lines or higher signal-to-noise ratios the correction and the noise level are even lower. We apply this spectral PSF correction and assign the errors for the [$w_{80}$]{} measurements using the best-fit functions described above. The corrected [$w_{80}$]{} profiles and their errors are shown in Fig. \[fig:spec2100\], \[fig:spec2101\], and Appendix \[sec:append:objs\]. These corrections do not affect our conclusions. Simulations for Size Biases due to the Spectral and Spatial PSF {#sec:append:sim_pv} =============================================================== The finite spatial and spectral resolution could lead to overestimation of the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} measurements, or lead us to tag an object as resolved when it is not. 
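Inverting the quadratic-mean relation above gives the instrumental correction directly; a minimal sketch using the 243 km s$^{-1}$ resolution quoted for the narrower slit:

```python
import numpy as np

def correct_w80(w80_obs, w80_inst):
    """Remove the instrumental broadening in quadrature, inverting
    w80_convl^2 = w80_model^2 + w80_inst^2 for w80_model."""
    return np.sqrt(w80_obs**2 - w80_inst**2)

# Example with the values quoted in the text: a measured w80 of
# 600 km/s and the 243 km/s resolution of the narrower slit.
corrected = correct_w80(600.0, 243.0)
print(f"{corrected:.1f} km/s")   # ≈ 548.6 km/s, a ~9% correction
```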
To quantify this effect, we perform a series of 2-D spectrum ($pv$-diagram) simulations, see Fig. \[fig:sim\_pv\]. The components of the galaxies are modeled as 2-D Gaussians, but the spectral and spatial PSFs are empirically measured from the data rather than being Gaussian functions. A flux calibration star with a seeing of FWHM $= 1\farcs0$ is used to measure the spatial PSF. We use the results of these simulations to determine the criteria for whether the narrow line region or the kinematically disturbed region is spatially resolved and to estimate any bias in the size measurements. To cover the wide variety of kinematic structures measured in our sources, the simulated $pv$-diagram consists of four components: a narrow nuclear component, a blue-shifted broad nuclear component ($-200$ [km s$^{-1}$]{}), and a pair of narrow rotating components on the blue and red sides ($\pm200$ [km s$^{-1}$]{}; to represent typical edge-on galaxies). Each component is represented by a 2-D Gaussian. The velocity widths of the narrow and broad components are fixed to be $\sigma=$ 100 [km s$^{-1}$]{} and 600 [km s$^{-1}$]{}, respectively. The rotating components have symmetric spatial offsets from the nucleus. The rotating and the broad nuclear components, when used, have fluxes of 20% and 50% of the narrow nuclear component, respectively. The sizes of all the components and the spatial offsets of the rotating components can take a range of values from $0\farcs1$ to $5\arcsec$. We then convolve this simulated $pv$-diagram with the empirical spatial and spectral PSFs, and compare the changes in the total light profiles, the red and blue wing light profiles, the [$w_{80}$]{} profiles, and the measured [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{}. We find that the narrow nuclear and the rotating components alone cannot produce [$w_{80}$]{} $>$ 600 [km s$^{-1}$]{}. So the [$w_{80}$]{} = 600 [km s$^{-1}$]{} cut is a good discriminant for the presence of the broad component, independent of its size. 
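The simulation setup described above can be sketched as follows. Note the simplifications: the paper convolves with empirical PSFs measured from the data, whereas this sketch uses Gaussian stand-ins, and all component parameters below are single representative values, not the full grid explored in the appendix:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# pv-diagram grid: rows = position (0.1 arcsec px), cols = velocity (10 km/s px)
v = np.linspace(-2000, 2000, 401)
x = np.linspace(-5, 5, 101)
V, X = np.meshgrid(v, x)   # shape (101, 401)

def gauss2d(v0, x0, sv, sx, amp):
    """2-D Gaussian component in (velocity, position) space."""
    return amp * np.exp(-0.5 * (((V - v0) / sv) ** 2 + ((X - x0) / sx) ** 2))

# Narrow nuclear + blueshifted broad nuclear + a pair of rotating components,
# with the velocity widths and relative fluxes quoted in the text.
pv = (gauss2d(0, 0.0, 100, 0.3, 1.0)
      + gauss2d(-200, 0.0, 600, 0.3, 0.5)
      + gauss2d(200, 1.0, 100, 0.3, 0.2)
      + gauss2d(-200, -1.0, 100, 0.3, 0.2))

# Convolve with Gaussian stand-ins for the PSFs:
# seeing FWHM ~ 1 arcsec -> sigma ~ 0.42 arcsec; spectral sigma ~ 95 km/s.
pv_obs = gaussian_filter(pv, sigma=(0.42 / 0.1, 95.0 / 10.0))
print(pv_obs.shape)   # (101, 401)
```

The convolution preserves the total flux, so comparing `pv` and `pv_obs` isolates the pure resolution (smearing) effect on the light profiles and size measurements.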
Compact objects with sizes $\sigma \lesssim 0\farcs3$ have light profiles consistent with the PSF, independent of their velocity structure. So the total light profile is a good indicator of whether the narrow line region is resolved, see panel (a.) of Fig. \[fig:sim\_pv\]. For the kinematically disturbed region, we find that when the broad component is compact ($\sigma \lesssim 0\farcs3$), the core of its blue ($v<-300$ [km s$^{-1}$]{}) and red ($v>300$ [km s$^{-1}$]{}) wing light profiles are consistent with the PSF, even if the narrow components are extended or have rotation. But the extended rotation features can affect the wing light profiles at a fainter level ($<10^{-1}$ of the core) and make them deviate from the PSF, see panel (b.) of Fig. \[fig:sim\_pv\]. This could be the reason why SDSS J2154+1131 appears to have a resolved broad component from its red wing light profile while its [$w_{80}$]{} is low. So for a kinematically disturbed region to be determined as unambiguously resolved, the main core of its red or blue wing light profiles must deviate from the PSF or mismatch each other. For the sizes, we mimic the methods described above to measure [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} for the simulated data. For [$R_{\mathrm{NLR}}$]{}, we adopt an isophotal threshold a factor of 10 lower than the peak intensity, which is comparable to the real measurements. The PSF convolved [$R_{\mathrm{NLR}}$]{} is always about $1\arcsec$ for compact unresolved objects ($\sigma \lesssim 0\farcs3$), so it is important to treat the [$R_{\mathrm{NLR}}$]{} measurements of those objects as upper limits. For resolved objects, the bias in [$R_{\mathrm{NLR}}$]{} due to the PSF is between $0\arcsec$ and $0\farcs5$, and becomes negligible for large objects with [$R_{\mathrm{NLR}}$]{}$>4\arcsec$, but the level of bias depends on the detailed shape of the light profile and cannot be easily corrected. 
For the [$R_{\mathrm{KDR}}$]{} measurements here, we also correct for the [$w_{80}$]{} bias according to Appendix \[sec:append:sim\_w80\] before measuring the [$R_{\mathrm{KDR}}$]{}. The [$R_{\mathrm{KDR}}$]{} of an unresolved ($\sigma \lesssim 0\farcs3$) kinematically disturbed region is over-estimated, so it is also important to treat those numbers as upper limits. In general, the [$R_{\mathrm{KDR}}$]{} of resolved objects can also be over-estimated by up to $1\arcsec$ with an average of $\lesssim 0\farcs5$ (which corresponds to $1/2\times$ the seeing FWHM) for both the $1\farcs0$ and the $1\farcs3$ slit, see panel (c.) of Fig. \[fig:sim\_pv\]. This amount also depends on the object and cannot be easily corrected. The only exception is when the size of the broad component is comparable to or larger than the narrow component, in which case the same broad line shape is propagated to large radii by the PSF, making the [$w_{80}$]{}$>$600 [km s$^{-1}$]{} region unrealistically large, see panel (d.) of Fig. \[fig:sim\_pv\]. But this issue can be resolved by adding a surface brightness constraint to the kinematically disturbed region. The bias in the [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{} sizes due to the PSF, which is estimated to be $\sim 0\farcs5$, should dominate over the noise as the main uncertainty on [$R_{\mathrm{NLR}}$]{} and [$R_{\mathrm{KDR}}$]{}. But the exact amount of the bias depends on the structure of the 2-D spectrum and thus cannot be easily quantified. We do not apply a PSF correction but assign a $\pm 0\farcs5$ error to our size measurements to encompass this uncertainty. [^1]: The correlation between the [[\[O [III]{}\]]{}$\lambda$5007]{} luminosity and the mid-infrared 15 \micron luminosity (see Sec. \[sec:data:WISE\]) disappears after the extinction correction, indicating that significant uncertainties could be introduced. 
[^2]: This widest slit, referred to in the IMACS User Manual as the $1\farcs5$ slit, was confirmed to have an actual slit width of $1\farcs3$; see Appendix \[sec:append:slit\]. [^3]: http://code.obs.carnegiescience.edu/cosmos [^4]: http://www.astro.yale.edu/dokkum/lacosmic/ [^5]: http://www.stsci.edu/institute/software$\_$hardware/pyraf [^6]: Because the SDSS fibers (3$\arcsec$) are wider than the Magellan slits ($1\arcsec$ or $1\farcs3$), the SDSS fluxes are higher than the Magellan fluxes by a factor of 1.7. [^7]: https://github.com/sczesla/PyAstronomy [^8]: We run a Monte Carlo simulation and find that the root-mean-square uncertainty on the systemic velocity is $\sim$ 15 [km s$^{-1}$]{} for a Gaussian line with a dispersion of $\sigma=200$ [km s$^{-1}$]{} and a signal-to-noise ratio of 10. [^9]: The mid-infrared luminosities of these two objects are likely to be AGN-dominated as well. Their blue W1 $-$ W2 colors can come from the Rayleigh–Jeans tail of the old stellar population, while their W3 $-$ W4 colors are redder. Their high rest-frame 24 \micron luminosities require much higher star formation rates ($\sim$ 90 and 166 [$M_{\odot}$]{}/yr [@Rieke2009]) than typically seen in luminous type 2 AGN [$< 18$ [$M_{\odot}$]{}/yr @Zakamska2016]. In the case of SDSS J1419+0139, it is also higher than the star formation rate of 55 [$M_{\odot}$]{}/yr inferred from the IRAS 60 and 100 \micron luminosities [@Kennicutt1998; @Solomon1997], which should be taken as an upper limit because AGN heated dust could also contribute to the 60 and 100 \micron fluxes. [^10]: Some objects show irregular morphologies of the emission lines in the SDSS images (e.g., J1000+1242 and J1010+1413), but those irregular morphologies could be due to the extended narrow emission and do not necessarily reflect the morphology of the outflow. [^11]: As SDSS J0141$-$0945 and J2133$-$0712 do not have [H$\beta$]{} measurements from the Magellan spectra, their $M_\mathrm{H}$ and [$\dot{E}_{\mathrm{kin}}$]{} estimates are not available. 
[^12]: To measure this fraction, we take the double Gaussian fits from @Mullaney2013 and measure [$w_{80}$]{} from the profiles. The fraction of [[\[O [III]{}\]]{}$\lambda$5007]{} [$w_{80}$]{} $> 600$ [km s$^{-1}$]{} objects is a strong function of the luminosity cut, which is 18% (38%) for [$L_{\mathrm{bol}}$]{}$>10^{44}$ ($10^{45}$) [erg s$^{-1}$]{}. [^13]: SDSS J0841+2042 and J1039+4512 have [[\[O [III]{}\]]{}$\lambda$5007]{} surface brightness profiles close to the PSF; SDSS J0149$-$0048, J0841+2014, and J0210$-$1001 have flat [$w_{80}$]{} profiles that could be dominated by the nuclear component. [^14]: http://www.astropy.org
--- abstract: 'Consider a square matrix with independent and identically distributed entries of zero mean and unit variance. It is well known that if the entries have a finite fourth moment, then, in high dimension, with high probability, the spectral radius is close to the square root of the dimension. We conjecture that this holds true under the sole assumption of zero mean and unit variance, in other words that there are no outliers in the circular law. In this work we establish the conjecture in the case of symmetrically distributed entries with a finite moment of order larger than two. The proof uses the method of moments combined with a novel truncation technique for cycle weights that might be of independent interest.' address: - 'CNRS & Université de Toulouse, France' - 'Università Roma Tre, Italy' - 'Université Paris-Dauphine, France' - 'University of Alberta, Canada' author: - 'Ch. Bordenave' - 'P. Caputo' - 'D. Chafaï' - 'K. Tikhomirov' date: 'Summer 2016, compiled ' title: On the spectral radius of a random matrix --- [^1] Introduction ============ Let $X_N$ denote the random $N\times N$ matrix $(X_{i,j})_{i,j=1,\dots,N}$, where $X_{i,j}$ are independent copies of a given complex valued random variable ${\mathbf x}$ with mean zero and unit variance: $$\label{ass} {\mathbb{E}}[{\mathbf x}]=0\quad\text{and}\quad{\mathbb{E}}\big[|{\mathbf x}|^2\big]=1.$$ Let $\rho(X_N)$ denote the spectral radius of $X_N$: $$\label{eq:rad} \rho(X_N):=\max\Big\{|\lambda|:\text{$\lambda$ eigenvalue of $X_N$}\Big\}\,.$$ The well known circular law states that, in probability, the empirical distribution of the eigenvalues of $N^{-1/2}X_N$ weakly converges to the uniform law on the unit disc of the complex plane [@tao-vu-cirlaw-bis; @around]. In particular, it follows that with high probability $$\label{circ} \rho(X_N){\geqslant}(1-\delta)\sqrt N \,,$$ for any $\delta>0$ and large enough $N$. 
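The behavior $\rho(X_N)\approx\sqrt N$ is easy to observe numerically. The sketch below is not part of the paper: it assumes only `numpy`, and the size $N=500$ and the $\pm 1$ entry law are arbitrary illustrative choices of an entry distribution with zero mean, unit variance and all moments finite.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# iid entries with mean 0 and variance 1 (all moments finite)
X = rng.choice([-1.0, 1.0], size=(N, N))
# spectral radius: largest eigenvalue modulus
rho = np.max(np.abs(np.linalg.eigvals(X)))
ratio = rho / np.sqrt(N)
# already at N = 500 the ratio is close to 1
assert 0.8 < ratio < 1.25
```

Repeating the experiment over several seeds and increasing $N$ shows the ratio concentrating ever more tightly around $1$, in line with the circular law picture.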
Here and below we say that a sequence of events holds with high probability if their probabilities converge to one. The corresponding upper bound on $\rho(X_N)$ has been established by Bai and Yin [@baiyin] under a finite fourth moment assumption: if ${\mathbb{E}}[|{\mathbf x}|^4]<\infty$, then with high probability $\rho(X_N){\leqslant}(1+\delta)\sqrt N$, for any $\delta>0$ and large enough $N$; see also Geman and Hwang [@Geman-Hwang] and Geman [@Geman] for an independent proof under stronger assumptions. Together with , this says that if $ {\mathbb{E}}[|{\mathbf x}|^4]<\infty$ then, in probability, $\rho(X_N)/\sqrt N\to 1$, as $N\to\infty$. We refer to [@Geman; @baiyin] and references therein for related estimates and more background and applications concerning the spectral radius of a random matrix. Surprisingly, there seems to be little or no discussion at all in the literature – even in the recent works [@tao] and [@borcap] – about the necessity of the fourth moment assumption for the behavior $\rho(X_N)\sim\sqrt{N}$. We propose the following conjecture, which is illustrated by Figure \[fig1\]. \[conju\] The convergence in probability $$\label{conju1} \lim_{N\to\infty}\frac{\rho(X_N)}{\sqrt N}=1,$$ holds under the sole assumptions . Another way to put this is to say that there are no outliers in the circular law. This phenomenon reveals a striking contrast between eigenvalues and singular values of $X_N$, the latter exhibiting Poisson distributed outliers in absence of a fourth moment, see for instance [@Soshnikov; @Auffingeretal]. A tentative heuristic explanation of this phenomenon may proceed as follows. Suppose ${\mathbf x}$ has a heavy tail of index $\alpha$, that is $ {\mathbb{P}}(|{\mathbf x}|>t)\sim t^{-\alpha}$, as $t\to\infty$. If $\alpha\in(2,4)$, then with high probability in the matrix $X=X_N$ there are elements $X_{i,j}$ with $|X_{i,j}|>N^{\beta}$, for any $1/2<\beta<2/\alpha$. 
Any such element is sufficient to produce a singular value diverging as fast as $N^\beta$. On the other hand, to create a large eigenvalue, a single large entry is not sufficient. Roughly speaking one rather needs at least one sequence of indices $i_1,i_2,\ldots,i_{k+1}$ with $i_1=i_{k+1}$ with a large product $\prod_j|X_{i_j,i_{j+1}}|$, i.e. one cycle with a large weight if we view the matrix as an adjacency matrix of an oriented and weighted graph. It is not difficult to see that the sparse matrix consisting of all entries $X_{i,j}$ with $|X_{i,j}|>N^{\beta}$ is acyclic with high probability, as long as $\alpha\beta>1$. Somewhat similar phenomena should be expected for heavy tails with index $\alpha\in(0,2)$. As shown in [@heavygirko], in that case the circular law must be replaced by a new limiting law $\mu_\alpha$ in the complex plane. More precisely, the empirical distribution of the eigenvalues of $X/N^{1/\alpha}$ tends weakly as $N\to\infty$ to a rotationally invariant light tailed law $\mu_\alpha$, while the empirical distribution of the singular values of $X/N^{1/\alpha}$ tends weakly as $N\to\infty$ to a heavy tailed law $\nu_\alpha$. By the above reasoning, no significant outliers should appear in the spectrum. The precise analogue of in this case is however less obvious since the support of $\mu_\alpha$ is unbounded. From the tail of $\mu_\alpha$, one might expect that the spectral radius is of order $N^{1/\alpha} (\log N)^{1/\alpha +o(1)}$ while typical eigenvalues are of order $N^{1/\alpha}$. In this paper we prove that the conjectured behavior holds if ${\mathbf x}$ is symmetric and has a finite moment of order $2+{\varepsilon}$ for an arbitrary ${\varepsilon}>0$. We say that ${\mathbf x}$ is symmetric if the law of ${\mathbf x}$ coincides with the law of $-{\mathbf x}$. \[main\*\] Suppose that ${\mathbf x}$ is symmetric and that ${\mathbb{E}}\left[|{\mathbf x}|^{2}\right]=1$. 
Suppose further that ${\mathbb{E}}\left[|{\mathbf x}|^{2+{\varepsilon}}\right]<\infty $ for some ${\varepsilon}>0$. Then, in probability, $$\label{main1} \lim_{N\to\infty}\frac{\rho(X_N)}{\sqrt N}=1.$$ In view of , to prove the theorem one only needs to establish the upper bound $\rho(X_N){\leqslant}(1+\delta)\sqrt N$ with high probability, for every $\delta>0$. We shall prove the following stronger non-asymptotic estimate, covering variables ${\mathbf x}$ whose law may depend on $N$. \[main\] For any ${\varepsilon},\delta>0$ and $B>0$, there exists a constant $C=C({\varepsilon},\delta,B)>0$ such that for any $N\in{\mathbb{N}}$, for any symmetric complex random variable ${\mathbf x}$ with ${\mathbb{E}}\left[|{\mathbf x}|^{2}\right]{\leqslant}1$ and ${\mathbb{E}}\left[|{\mathbf x}|^{2+{\varepsilon}}\right]{\leqslant}B$, we have $$\label{main3} {\mathbb{P}}\big(\rho(X_N){\geqslant}(1+\delta)\sqrt N\big){\leqslant}\frac{C}{(\log N)^2}.$$ The rest of this note is concerned with the proof of Theorem \[main\]. We finish this introduction with a brief overview of the main arguments involved. Overview of the proof --------------------- The proof of Theorem \[main\] combines the classical method of moments with a novel cycle weight truncation technique. For lightness of notation, we write $X$ instead of $X_N$. The starting point is a standard general bound on $\rho(X)$ in terms of the trace of a product of powers of $X$ and $X^*$. Let $\|X\|$ denote the operator norm of $X$, that is the maximal eigenvalue of $\sqrt{X^*X}$, which is also the largest singular value of $X$. Recall the Weyl inequality $\rho(X){\leqslant}\|X\|$. 
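Both the Weyl inequality and the eigenvalue/singular-value contrast described above can be observed numerically. The following sketch is illustrative only (it assumes `numpy`; the tail index $\alpha=2.5\in(2,4)$ and the size $N=400$ are arbitrary choices): single large entries inflate the largest singular value well beyond $\sqrt N$, while the spectral radius stays comparable to $\sqrt N$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha = 400, 2.5                              # tail index in (2, 4): no finite 4th moment
mag = rng.pareto(alpha, size=(N, N)) + 1.0       # classical Pareto: P(|x| > t) ~ t^(-alpha)
X = rng.choice([-1.0, 1.0], size=(N, N)) * mag   # symmetric signs
X /= np.sqrt(alpha / (alpha - 2.0))              # normalize to unit variance

s_max = np.linalg.svd(X, compute_uv=False)[0]    # operator norm ||X||
rho = np.max(np.abs(np.linalg.eigvals(X)))       # spectral radius

assert rho <= s_max * (1 + 1e-9)                 # Weyl inequality: rho(X) <= ||X||
# a single entry of order N^(2/alpha) >> sqrt(N) already forces s_max >> sqrt(N)
assert s_max > 1.1 * np.sqrt(N)
```

Here `s_max` is typically several times $\sqrt N$, driven by the largest matrix entry, whereas `rho` remains of order $\sqrt N$, which is exactly the phenomenon behind the conjecture.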
For any integer $m{\geqslant}1$ one has $$\rho(X)=\rho(X^m)^{1/m}{\leqslant}\|X^m\|^{1/m} \quad\text{and}\quad \|X^m\|^2{\leqslant}\mathrm{Tr}((X^*)^mX^m).$$ It follows that for any integer $k{\geqslant}2$, setting $m=k-1$, $$\label{main4} \rho(X_N)^{2k-2}{\leqslant}\mathrm{Tr}((X^*)^{k-1}X^{k-1}) = \sum_{i,j}[X^{k-1}]_{i,j}[(X^*)^{k-1}]_{j,i}.$$ Expanding the summands in one obtains $$\label{main5} \rho(X_N)^{2k-2}{\leqslant}\sum_{i,j}\sum_{P_1,P_2:i\mapsto j} w(P_1)\bar w(P_2),$$ where the internal sum ranges over all paths $P_1$ and $P_2$ of length $k-1$ from $i$ to $j$, the weight $w(P)$ of a path $(i_1,\dots,i_{k})$ is defined by $$\label{main6} w(P):=\prod_{\ell=1}^{k-1}X_{i_{\ell},i_{\ell+1}}\,,$$ and $\bar w(P)$ denotes the complex conjugate of $w(P)$. So far we have not used any specific form of the matrix entries. As a warm up, it may be instructive to analyze the following simple special case. Assume that $X_{i,j}$ has the distribution $$\label{ex1} {\mathbf x}= \begin{cases} \pm q^{-\tfrac{1-{\varepsilon}}2} & \text{with probability $\frac{q}{2}$},\\ 0 & \text{with probability $1-q$}, \end{cases}$$ where $q=q_N\in(0,1]$ is a parameter that may depend on $N$, while ${\varepsilon}\in(0,1)$ is a fixed small constant. If $q_N\equiv 1$, then we have a uniformly random $\pm1$ matrix, while if $q_N\to 0$, $N\to\infty$ one has a matrix that may serve as a toy model for the sparse matrices from the heuristic discussion given above. Notice that the assumptions of Theorem \[main\] are satisfied with the same parameter ${\varepsilon}$ and with $B=1$, since $${\mathbb{E}}[|{\mathbf x}|^{2}]=q^{{\varepsilon}}\quad\text{and}\quad{\mathbb{E}}[|{\mathbf x}|^{2+{\varepsilon}}]{\leqslant}q^{{\varepsilon}/2}.$$ We can now take expectation in . Using the symmetry of ${\mathbf x}$ we may restrict the sum over paths $P_1,P_2$ satisfying the constraint that in the union $P_1\cup P_2$ each directed edge $(i_\ell,i_{\ell+1})$ appears an even number of times. 
We say that $P_1\cup P_2$ is even. In this case ${\mathbb{E}}[w(P_1)\bar w(P_2)]= q^{-(1-{\varepsilon})(k-1)}q^{n}$, where $n$ is the number of edges in $P_1\cup P_2$ without counting multiplicities. Let $P$ denote the closed path obtained as follows: start at $i$, follow $P_1$, then add the edge $(j,i)$, then follow $P_2$, then end with the edge $(j,i)$ again. Thus, $P$ is an even closed path of length $2k$. Notice that $${\mathbb{E}}[w(P_1)\bar w(P_2)] {\leqslant}q^{-{\varepsilon}}{\mathbb{E}}[w(P)].$$ Since the map $(P_1,P_2)\mapsto P$ is injective we have obtained $$\label{main50} {\mathbb{E}}[\rho(X_N)^{2k-2}]{\leqslant}q^{-{\varepsilon}}\sum_{P}{\mathbb{E}}[w(P)],$$ where the sum ranges over all even closed paths of length $2k$. Observe that $${\mathbb{E}}[w(P)]{\leqslant}q^{-(1-{\varepsilon})k}q^{\ell},$$ where $\ell$ is the number of distinct vertices in $P$. Therefore, letting ${\mathcal{N}}(k,\ell)$ denote the number of even closed paths of length $2k$ with $\ell$ vertices, is bounded above by $$\label{main7} \sum_{\ell=1}^k {\mathcal{N}}(k,\ell)q^{-{\varepsilon}}q^{-(1-{\varepsilon})k}q^{\ell}.$$ Combinatorial estimates to be derived below, see Lemma \[paths and graphs\] and Lemma \[graphs counting\], imply that ${\mathcal{N}}(k,\ell){\leqslant}k^2(4k)^{6(k-\ell)}N^\ell$. Putting all together we have found $$\label{main8} {\mathbb{E}}[\rho(X_N)^{2k-2}]{\leqslant}k^2 N^{k}\sum_{\ell=1}^k a(k,N,q)^{k-\ell}$$ where $a(k,N,q)=(4k)^{6}(Nq^{(1-{\varepsilon})})^{-1}$. We choose $k\sim (\log N)^2$. Suppose that $q{\geqslant}N^{-1-{\varepsilon}}$. Then $Nq^{(1-{\varepsilon})}{\geqslant}N^{{\varepsilon}^2}$ and therefore $a(k,N,q){\leqslant}1$ if $N$ is large enough. 
It follows that ${\mathbb{E}}[\rho(X_N)^{2k-2}]{\leqslant}k^3 N^k$, and by Markov’s inequality, for all fixed $\delta>0$: $$\begin{aligned} \label{main30} {\mathbb{P}}\left(\rho(X_N) {\geqslant}(1+\delta)\sqrt N\right)&{\leqslant}(1+\delta)^{-2k+2}N^{-k+1}{\mathbb{E}}[\rho(X_N)^{2k-2}]\nonumber \\ & {\leqslant}(1+\delta)^{-2k+2}k^3N.\end{aligned}$$ Since $k\sim (\log N)^2$ this vanishes faster than $N^{-\gamma}$ for any $\gamma>0$. On the other hand, if $q{\leqslant}N^{-1-{\varepsilon}}$, then a different, simpler argument can be used. Indeed, since an acyclic matrix is nilpotent, it follows that if $\rho(X_N)>0$ then there must exist a cycle with nonzero entries from the matrix $X$. The probability of a given such cycle is $q^{\ell}$ where $\ell$ is the number of vertices of the cycle. Estimating by $N^\ell$ the number of cycles with $\ell$ vertices one has $$\label{main80} {\mathbb{P}}[\rho(X_N)>0]{\leqslant}\sum_{\ell=1}^\infty (qN)^\ell.$$ Thus, if $q{\leqslant}N^{-1-{\varepsilon}}$, then ${\mathbb{P}}[\rho(X_N)>0]{\leqslant}2qN{\leqslant}2N^{-{\varepsilon}}$. This concludes the proof of in the special case of the model . The given argument displays, albeit in a strongly simplified form, some of the main features of the proof of Theorem \[main\]: the role of symmetry, the role of combinatorics, and the fact that cycles with too high weights have to be ruled out with a separate probabilistic estimate. The latter point requires a much more careful handling in the general case. Since it represents the main technical novelty of this work, let us briefly illustrate the main idea here. Consider the collection ${\mathcal{C}}_m$ of all possible oriented cycles with $m$ edges of the form $C=(i_1,\dots,i_{m+1})$ with $i_j\in\{1,\dots,N\}$, and with no repeated vertex except for $i_1=i_{m+1}$. Let $\nu_m$ denote the uniform distribution over the set ${\mathcal{C}}_m$. 
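The fact used in the sparse regime above, namely that an acyclic matrix is nilpotent and hence has zero spectral radius, can be checked directly. A strictly upper-triangular matrix is the canonical acyclic example (sketch assuming `numpy`; the size $N=50$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
# strictly upper-triangular matrix: the associated weighted digraph has no cycles
X = np.triu(rng.standard_normal((N, N)), k=1)

rho = np.max(np.abs(np.linalg.eigvals(X)))
assert rho < 1e-8                                # acyclic => every eigenvalue vanishes
assert not np.linalg.matrix_power(X, N).any()    # nilpotent: X^N = 0 exactly
```

The same conclusion holds for any matrix whose support, viewed as a digraph, contains no cycle, which is precisely the event exploited when $q\leqslant N^{-1-\varepsilon}$.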
Given the matrix $X_N$, we look at the weight $|w(C)|^{2t}$ corresponding to the cycle $C$ repeated $2t$ times, where $w(C)$ is defined in . Since one can restrict to even closed paths, and each such path can be decomposed into cycles that are repeated an even number of times, it is crucial to estimate the empirical averages $$\nu_m[|w(C)|^{2t}] = \frac1{|{\mathcal{C}}_m|}\sum_{C\sim{\mathcal{C}}_m}|w(C)|^{2t},$$ where the sum runs over all cycles with $m$ edges and $|{\mathcal{C}}_m|$ denotes the total number of them. Broadly speaking, we will define an event ${\mathcal{E}}_k$ by requiring that $$\label{mainid} \nu_m[|w(C)|^{2}]{\leqslant}k^2\,,\quad \text{and} \quad\nu_m[|w(C)|^{2+{\varepsilon}}]{\leqslant}k^2 B^m,$$ for all $m{\leqslant}k$, where as before $k\sim(\log N)^2$. The assumptions of Theorem \[main\] ensure that ${\mathcal{E}}_k$ has large probability by a first moment argument. Thus, in computing the expected values of $w(P)$ we may now condition on the event ${\mathcal{E}}_k$. Actually, on the event ${\mathcal{E}}_k$ we will be able to estimate deterministically the quantities $\nu_m\left[|w(C)|^{2t}\right]$. To see this, observe that if $$w_{\max}(m) := \max_{C\sim{\mathcal{C}}_m}|w(C)|$$ denotes the maximum weight for a cycle with $m$ edges, then $$w_{\max}^2{\leqslant}\Big(\sum_{C\sim{\mathcal{C}}_m}|w(C)|^{2+{\varepsilon}}\Big)^{\frac1{1+{\varepsilon}/2}}.$$ If ${\varepsilon}$ is small enough, on the event ${\mathcal{E}}_k$, from one has $w_{\max}^2{\leqslant}(|{\mathcal{C}}_m| k^2B^m)^{1-{\varepsilon}/4}$. Since $|{\mathcal{C}}_m|{\leqslant}N^m$, a simple iteration proves that for any $t{\geqslant}1$: $$\label{mainid2} \nu_m[|w(C)|^{2t}]{\leqslant}(k^2N^mB^m)^{t(1-{\varepsilon}/4)} {\leqslant}N^{mt(1-{\varepsilon}/8)},$$ for all $N$ large enough. The bound turns out to be sufficient to handle all paths $P$ of the form of a cycle $C\sim{\mathcal{C}}_m$ repeated $2t$ times, for all $m{\leqslant}k$. 
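The step from $w_{\max}^{2+\varepsilon}\leqslant\sum_{C}|w(C)|^{2+\varepsilon}$ to the bound on $w_{\max}^2$ is deterministic: raising both sides to the power $2/(2+\varepsilon)=1/(1+\varepsilon/2)$ preserves the inequality. A quick sanity check with stand-in weights (assuming `numpy`; the weights below are arbitrary and only illustrate the inequality, not actual cycle weights):

```python
import numpy as np

rng = np.random.default_rng(4)
eps = 0.5
w = np.abs(rng.standard_normal(1000)) + 1e-6   # stand-in for the cycle weights |w(C)|
w_max = w.max()
S = np.sum(w ** (2 + eps))                     # plays the role of sum_C |w(C)|^{2+eps}
# w_max^{2+eps} <= S, hence w_max^2 <= S^{1/(1+eps/2)}
assert w_max ** 2 <= S ** (1.0 / (1.0 + eps / 2.0)) * (1 + 1e-12)
```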
To control more general even closed paths $P$ one needs a more careful analysis involving the estimate of larger empirical averages corresponding to various distinct cycles at the same time. We refer to Section \[statistics\] below for the details. The combinatorial estimates are worked out in Section \[combi\]. Finally, in Section \[mainproof\] we complete the proof of Theorem \[main\]. Counting paths and digraphs {#combi} =========================== We first introduce the basic graph theoretic terminology and then prove some combinatorial estimates. Multi digraphs and even digraphs -------------------------------- For each natural $N$, $[N]$ denotes the set $\{1,2,\dots,N\}$. A directed graph, or simply digraph, on $[N]$, is a pair $G =(V,E)$, where $V\subset[N]$ is the set of vertices and $E\subset [N]\times[N]$ is the set of directed edges. We also consider multisets $E$, where a directed edge $e\in E$ appears with its own multiplicity $n_e\in{\mathbb{N}}$. In this case we say that $G=(V,E)$ is a *multi digraph*. Given a vertex $v$ of a multi digraph, the out-degree $\deg_+(v)$ is the number of edges of the form $(v,j)\in E$, counting multiplicities. Similarly, the in-degree $\deg_-(v)$ is the number of edges of the form $(j,v)\in E$, counting multiplicities. Notice that each loop of the form $(v,v)$ is counted once both in $ \deg_+(v)$ and $\deg_-(v)$. 
*(Figure \[fig:fig1\]: two multi digraphs on four vertices; the TikZ source of the diagram is omitted.)* Given natural $m$, a *path* of length $m$ is a sequence $(i_1,\dots,i_{m+1})\in[N]^{m+1}$. The path $P$ is *closed* if the first and the last vertex coincide. Each path $P=(i_1,\dots,i_{m+1})$ naturally generates a multi digraph $G_P=(V,E)$, where $V=\{i_1,\dots,i_{m+1}\}$ and $E$ contains the edge $(i,j)$ with multiplicity $n$ if and only if the path $P$ contains exactly $n$ times the adjacent pair $(i,j)$. Notice that in general there is more than one path generating the same multi digraph. If the path $P$ is closed, then $G_P$ is strongly connected, that is, for any $u,v\in V$ one can travel from $u$ to $v$ by following edges from $E$. A closed path without repeated vertices except for the first and last vertices is called a [ *cycle*]{}. A loop $(i,i)$ is considered a cycle of length $1$. A multi digraph will be called a *double cycle* if it is obtained by repeating a given cycle twice. In particular, a double cycle is not allowed to have loops unless its vertex set consists of just one vertex. We say that $P$ is an *even path* if it is closed and every adjacent pair $(i,j)$ is repeated in $P$ an even number of times. 
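The definitions of the multi digraph $G_P$ and of an even path can be made concrete in a few lines (a toy example using only the Python standard library; the particular path is an arbitrary choice):

```python
from collections import Counter

# A closed path in which every directed edge appears an even number of times:
# the cycle 1 -> 2 -> 3 -> 1 traversed twice.
P = [1, 2, 3, 1, 2, 3, 1]
edges = Counter(zip(P, P[1:]))                  # the multi digraph G_P with multiplicities
assert all(n % 2 == 0 for n in edges.values())  # so P is an even path

deg_out, deg_in = Counter(), Counter()
for (i, j), n in edges.items():
    deg_out[i] += n
    deg_in[j] += n
# every vertex has equal, even in- and out-degrees
assert all(deg_out[v] == deg_in[v] == 2 for v in deg_out)
```

The degree parity observed here is precisely the structural property that characterizes even digraphs.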
A multi digraph is called an *even digraph* if it is generated by an even path; see Figure \[fig:fig1\] for an example. Thus, an even digraph is always strongly connected. The following lemma can be proved by adapting the classical theorems of Euler and Veblen. \[veblen\] For a strongly connected multi digraph $G$, the following are equivalent: 1) $G$ is an even digraph; 2) $\deg_+ (v) = \deg_-(v)$ is even for every vertex $v$; 3) $G$ can be partitioned into a collection of double cycles. Equivalence classes and rooted digraphs --------------------------------------- Two multi digraphs $G=(V,E)$ and $G'=(V',E')$ are called *isomorphic* if there is a bijection $f:V\to V'$ such that $(i,j)\in E$ if and only if $(f(i),f(j))\in E'$ and the multiplicities of the corresponding edges coincide. The associated equivalence classes are regarded as unlabeled multi digraphs. Given an unlabeled multi digraph ${\mathcal{U}}$, we will write $G\sim {\mathcal{U}}$ for any multi digraph $G$ belonging to the class ${\mathcal{U}}$. An edge-rooted multi digraph $G = (V,E,\rho)$, or simply a *rooted digraph*, is defined as a multi digraph with a distinguished directed edge $\rho \in E$. The definition of equivalence classes is extended to rooted digraphs as follows. Two rooted digraphs $G=(V,E,\rho)$ and $G'=(V',E',\rho')$ are called [isomorphic]{} if there is a bijection $f:V\to V'$ such that $(i,j)\in E$ if and only if $(f(i),f(j))\in E'$, multiplicities of corresponding edges coincide, and $f(\rho) = \rho'$. With minor abuse of notation we will use the same terminology as above, and write $G\sim {\mathcal{U}}$ for rooted digraphs $G$ belonging to the equivalence class ${\mathcal{U}}$. Counting -------- We turn to an estimate on the number of paths generating a given even digraph. Let $G=(V,E)$ be an even digraph with $|E|=2k$ edges. Unless otherwise specified, multiplicities are always included in the edge count $|E|$. 
By Lemma \[veblen\] every vertex $v$ has even in- and out-degrees satisfying $$\label{eqdeg} \deg_+(v)=\deg_-(v). $$ Thus $G$ has at most $k$ vertices. Moreover, since the number of edges in $G$ is $2k$, we have $$\label{eqdeg2} \sum\limits_{v\in V}\deg_+ (v)=\sum\limits_{v\in V}\deg_- (v) = 2k.$$ \[paths and graphs\] Let $G=(V,E)$ be an even digraph with $|E|=2k$ and $|V|=\ell$. The number of paths generating $G$ does not exceed $$\ell (4k-4\ell )!$$ There are $\ell$ possibilities for the starting points of the path. The path is then characterized by the order in which neighboring vertices are visited. At each vertex $v$, there are $\deg_+(v)$ visits, and at most $\deg_+(v) /2$ out-neighbors. If $\deg_+(v) = 2$, there is only one possible choice for the next neighbor. If $\deg_+ (v) {\geqslant}4$, then there are at most $\deg_+(v)!$ possible choices considering all visits to the vertex $v$. Hence, the number of paths generating $G$ is bounded by $$\textstyle{\ell\, \prod_{v : \,\deg_+ (v) {\geqslant}4} ( \deg_+(v) ! ) {\leqslant}\ell \left( \sum_{v : \,\deg_+ (v) {\geqslant}4} \deg_+(v) \right) !}$$ where we have used that the product of factorials does not exceed the factorial of the sum. Now, let $q$ be the number of vertices $v$ such that $\deg_+ (v) {\geqslant}4$. From , we have $$\label{eq:boundqo} \textstyle{\sum_{v : \, \deg_+ (v) {\geqslant}4 } \deg_+(v) + 2 ( \ell - q ) = 2 k. }$$ Estimating the sum in from below by $4q$ one has $ 4 q + 2 ( \ell - q) {\leqslant}2 k. $ Hence, $$\label{eq:boundq} q {\leqslant}k - \ell.$$ Using in one finds $$\label{eq:summj} \textstyle{ \sum_{v : \,\deg_+ (v) {\geqslant}4 } \deg_+(v) {\leqslant}4 k -4 \ell. }$$ For integers $1 {\leqslant}\ell {\leqslant}\min\{ k, N\}$, let ${\mathcal{G}}_N (k,\ell) $ be the set of rooted even digraphs $G=(V,E)$ with $V\subset[N]$ such that $|V|=\ell$ and $|E|=2k$. 
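For a double cycle the bound of Lemma \[paths and graphs\] is attained: with $\ell=k=3$ it gives $\ell\,(4k-4\ell)!=3$, one generating path per starting vertex, since the traversal from each vertex is forced. A brute-force check (illustrative, Python standard library only):

```python
from collections import Counter
from itertools import product
from math import factorial

# Target: the double 3-cycle on (1,2,3), so |E| = 2k = 6 and |V| = l = 3.
target = Counter({(1, 2): 2, (2, 3): 2, (3, 1): 2})
k, l = 3, 3

count = 0
for p in product([1, 2, 3], repeat=6):
    P = list(p) + [p[0]]                     # closed path of length 6
    if Counter(zip(P, P[1:])) == target:     # does P generate the target digraph?
        count += 1

assert count == 3                            # one generating path per starting vertex
assert count <= l * factorial(4 * k - 4 * l) # the lemma's bound l * (4k - 4l)!
```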
\[graphs counting\] For any $k,N\in{\mathbb{N}}$, $1{\leqslant}\ell{\leqslant}\min\{k,N\}$, the cardinality of ${\mathcal{G}}_N (k,\ell) $ satisfies $$\label{canon} |{\mathcal{G}}_N(k,\ell)| {\leqslant}N^\ell k ^{2 (k - \ell)+1}.$$ We first choose $\ell$ vertices among $N$. There are $$\binom{N}{\ell}{\leqslant}\frac{N^\ell}{\ell!}$$ choices. Without loss of generality we assume that the set of vertices is given by $\{ 1, \ldots , \ell \}$. Next, we assign an admissible degree to each vertex of $\{ 1, \ldots , \ell \}$. Let $m(j)\in{\mathbb{N}}$ be defined as $m(j)=\deg_\pm(j)/2$. In view of and , one has $m(j){\geqslant}1$ and $\sum_{j=1}^\ell m(j)=k$. Thus there are $$\binom{k-1}{\ell-1}{\leqslant}k^{k-\ell}$$ choices for the vector $(m(1),\dots,m(\ell))$. Next, we need to count the number of multi digraphs with the given degree sequence. To this end, we may use the configuration model. Namely, we think of every vertex $j$ as having $m(j)$ heads and $m(j)$ tails. Altogether, there will be $k$ heads and $k$ tails. Each head is thought of as a pair of loose out-edges (without an assigned out-neighbor) while each tail is thought of as a pair of loose in-edges (without an assigned in-neighbor). The number of multi digraphs with the given degree sequence is bounded by the number of bipartite matchings of heads and tails, which gives $k!$ possible choices. Thus, using $k!/\ell!{\leqslant}k^{k-\ell}$, we see that the total number of even multi digraphs with $\ell$ vertices and $2k$ edges is bounded above by $$N^\ell k^{2(k-\ell)}.$$ It remains to choose the root edge. Since there are at most $k$ choices, the proof is complete. Statistics of even digraphs =========================== Every edge $(i,j)\in[N]\times[N]$ is given the random weight $X_{i,j}$, where $X_{i,j}$ are independent copies of a random variable ${\mathbf x}$ satisfying the assumptions of Theorem \[main\]. 
The weight of an even digraph $G=(V,E)$ is defined as $$\begin{gathered} \label{pg} p(G) := \prod_{(i,j) \in E} |X_{i,j} |^{ n_{i,j} }, \end{gathered}$$ where each edge $(i,j)\in E$ has multiplicity $n_{i,j}{\geqslant}2$. Note that in this formula we interpret “$(i,j)\in E$” without taking into account the multiplicity in the multiset $E$. Given an unlabeled even digraph ${\mathcal{U}}$, consider the equivalence class of even digraphs $\{G:\,G\sim {\mathcal{U}}\}$. We are interested in estimating $$\begin{gathered} \label{statistico} {\mathcal{S}}_h({\mathcal{U}}):=\frac{2^h|\{G\sim {\mathcal{U}}:\, p(G){\geqslant}2^h\}|}{|\{G:\,G\sim {\mathcal{U}}\}|},\end{gathered}$$ for $h=0,1,2,\dots$ Moreover, we define $$\begin{gathered} \label{statistics} {\mathcal{S}}({\mathcal{U}}):=\max(1,\max_{h\in\{0,1,2,\dots\}}{\mathcal{S}}_h({\mathcal{U}}))\,.\end{gathered}$$ We refer to ${\mathcal{S}}({\mathcal{U}})$ as the *statistics* of the unlabeled even digraph ${\mathcal{U}}$. We extend the above definitions to rooted even digraphs as follows. The weight of a rooted even digraph $G = (V,E,\rho)$ is defined by $$\begin{gathered} \label{prg} p_r(G)= \prod_{(i,j) \in E} |X_{i,j} |^{ n_{i,j} -2\mathbf{1}_{(i,j)=\rho}}.\end{gathered}$$ Note that $$p_r(V,E,\rho) = |X_\rho|^{-2}p(V,E)$$ is well defined even if $X_\rho=0$, since the root edge $\rho$ satisfies $\rho\in E$ and thus $n_\rho{\geqslant}2$. If ${\mathcal{U}}$ is an unlabeled rooted even digraph, that is an equivalence class of rooted even digraphs, then ${\mathcal{S}}_h({\mathcal{U}})$ and ${\mathcal{S}}({\mathcal{U}})$ are defined as in and , provided $p(G)$ is replaced by $p_r(G)$ in that expression. Estimates for the statistics ${\mathcal{S}}({\mathcal{U}})$ will be derived from a basic estimate for double cycles. Let ${\mathcal{C}}_m$ be the unlabeled double cycle with $2m$ edges. Similarly, ${\mathcal{C}}^\star_m$ will denote the unlabeled rooted double cycle with $2m$ edges. 
From the assumptions of Theorem \[main\], for any double cycle $C\sim {\mathcal{C}}_m$ we have $$\label{eqb1} \mathbb{E}[ p(C)]{\leqslant}1\,,\quad \mathbb{E}[p(C)^{1+\varepsilon/2}]{\leqslant}B^m.$$ Note that the same bounds apply for any rooted double cycle $C \sim {\mathcal{C}}_m^\star$, with the weights $p(C)$ replaced by $p_r(C)$. \[cycle stats\] For any $k{\geqslant}1$, define the event $${\mathcal{A}}_{k}:= {\mathcal{A}}^1_{k}\cap{\mathcal{A}}^2_{k}\cap{\mathcal{A}}^3_{k}$$ where $$\begin{aligned} {\mathcal{A}}^1_{k}& :=\bigcap_{m=1}^k\bigg\{\sum\limits_{h=0}^\infty{\mathcal{S}}_h({\mathcal{C}}_m){\leqslant}k^2\bigg\},\\ {\mathcal{A}}^2_{k}& :=\bigcap_{m=1}^k\bigg\{ \sum\limits_{h=0}^\infty{\mathcal{S}}_h({\mathcal{C}}^\star_m){\leqslant}k^2\bigg\},\\ {\mathcal{A}}^3_{k}& :=\bigcap_{m=1}^k\bigg\{\sum\limits_{h=0}^\infty 2^{h\varepsilon/2}{\mathcal{S}}_h({\mathcal{C}}_m){\leqslant}k^2 B^m\bigg\}. \end{aligned}$$ Then $${\mathbb{P}}({\mathcal{A}}_{k}){\geqslant}1-\frac{6}{k}.$$ For any $a{\geqslant}0$ one has $$\label{asz} \frac12\sum\limits_{h=0}^\infty 2^h\mathbf{1}_{a{\geqslant}2^h} {\leqslant}a {\leqslant}1 + 2\sum\limits_{h=0}^\infty 2^h\mathbf{1}_{a{\geqslant}2^h}.$$ Take any $C\sim{\mathcal{C}}_m$ with $m{\leqslant}k$. The first inequality in yields $$\label{cycles0} \frac12\sum\limits_{h=0}^\infty 2^h\mathbf{1}_{p(C){\geqslant}2^h}{\leqslant}p(C).$$ Taking the expectation, implies $$\sum\limits_{h=0}^\infty 2^h{\mathbb{P}}(p(C){\geqslant}2^h){\leqslant}2. $$ On the other hand, by symmetry any $C\sim{\mathcal{C}}_m$ satisfies $$\label{cycles10} 2^h{\mathbb{P}}(p(C){\geqslant}2^h) = {\mathbb{E}}[{\mathcal{S}}_h({\mathcal{C}}_m)].$$ Hence, from Markov’s inequality and a union bound over $1 {\leqslant}m{\leqslant}k$, one has $$\label{cycles1} {\mathbb{P}}({\mathcal{A}}^1_{k}){\geqslant}1-\frac{2}{k}.$$ 
Next, as in one shows that $$\label{cycles00} p(C)^{1+{\varepsilon}/2}{\geqslant}\frac12\sum\limits_{h=0}^\infty 2^{h(1+{\varepsilon}/2)}\mathbf{1}_{p(C){\geqslant}2^h}.$$ Then and imply $$\sum\limits_{h=0}^\infty2^{h\varepsilon/2} {\mathbb{E}}[{\mathcal{S}}_h({\mathcal{C}}_m)] =\sum\limits_{h=0}^\infty2^{h+h\varepsilon/2}{\mathbb{P}}(p(C){\geqslant}2^h) {\leqslant}2\, {\mathbb{E}}\left[ p(C)^{1+{\varepsilon}/2}\right] {\leqslant}2B^m.$$ Therefore, from Markov’s inequality and a union bound over $1 {\leqslant}m{\leqslant}k$, $$\label{cycles11} {\mathbb{P}}({\mathcal{A}}^3_{k}){\geqslant}1-\frac{2}{k}.$$ Finally, we observe that the same argument leading to can be repeated for rooted cycles, with no modifications. It follows that $$\label{cycles011} {\mathbb{P}}({\mathcal{A}}^2_{k}){\geqslant}1-\frac{2}{k}.$$ From - and the union bound over $i=1,2,3$, it follows that $${\mathbb{P}}({\mathcal{A}}_{k}){\geqslant}1-\frac{6}{k}.$$ In the remainder of this section, on the event ${\mathcal{A}}_k$, we will deterministically upper bound the statistics of any unlabeled rooted even digraph; see Proposition \[exp prop\] below. The proof will use the following induction statement. \[induction\] Fix integers $1{\leqslant}r{\leqslant}m{\leqslant}k\ll \sqrt N$. Let ${\mathcal{U}}'$ be an unlabeled rooted even digraph with at most $k$ vertices and assume that $\;{\mathcal{U}}'$ can be decomposed as $\;{\mathcal{U}}'={\mathcal{U}}\cup {\mathcal{C}}_m$ for some unlabeled rooted even digraph ${\mathcal{U}}$ and a double cycle ${\mathcal{C}}_m$ of length $2m$ having $r$ common vertices with ${\mathcal{U}}$. Suppose that ${\mathcal{A}}_k$ holds. Then 1) $\mathcal{S}({\mathcal{U}}'){\leqslant}3ek^2 N^r \mathcal{S}({\mathcal{U}})$; 2) If $m\log B{\leqslant}\frac{\varepsilon}{4}r\log N$, then $\mathcal{S}({\mathcal{U}}'){\leqslant}5e k^2N^{r(1-\varepsilon/8)} \mathcal{S}({\mathcal{U}})$. 
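The dyadic decomposition used repeatedly in this proof, $\frac12\sum_{h\geqslant 0} 2^h\mathbf{1}_{a\geqslant 2^h}\leqslant a\leqslant 1+2\sum_{h\geqslant 0} 2^h\mathbf{1}_{a\geqslant 2^h}$, is elementary and can be sanity-checked numerically (illustrative snippet, Python standard library only; the sample values of $a$ are arbitrary):

```python
def dyadic_sum(a, hmax=64):
    """sum over h >= 0 of 2^h * 1{a >= 2^h} (truncated at hmax)."""
    return sum(2.0 ** h for h in range(hmax) if a >= 2.0 ** h)

# (1/2) * sum <= a <= 1 + 2 * sum, for every a >= 0
for a in [0.0, 0.7, 1.0, 2.5, 17.3, 1000.0]:
    s = dyadic_sum(a)
    assert 0.5 * s <= a <= 1.0 + 2.0 * s
```

The lower bound holds because the geometric sum up to $2^{\lfloor\log_2 a\rfloor}$ is less than $2a$; the upper bound because $a$ is at most twice the largest surviving term (or at most $1$ when the sum is empty).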
Fix an even rooted digraph $G'\sim {\mathcal{U}}'$ and denote by $C\sim {\mathcal{C}}_m$ and $G\sim {\mathcal{U}}$, respectively, the double cycle with $2m$ edges and the even rooted digraph isomorphic to ${\mathcal{U}}$ so that $G'=G\cup C$. Further, let $\pi$ be a uniform random permutation of $[N]$, which we assume to be defined on a different probability space. Any permutation induces a mapping on rooted digraphs via vertex relabeling, so that the rooted digraph $\pi[G']$ is uniformly distributed on the set $\{H:\,H\sim {\mathcal{U}}'\}$. Hence we may write $$\begin{gathered} \label{id1} {\mathcal{S}}_h({\mathcal{U}}')=2^h{\mathbb{P}}_\pi(p_r(\pi[G']){\geqslant}2^h),\;\;h=0,1,\dots \end{gathered}$$ where ${\mathbb{P}}_\pi$ denotes the probability w.r.t. the random permutation $\pi$. For any $a,b{\geqslant}0$, $$\begin{aligned} \mathbf{1}_{ab{\geqslant}2^h} & = \mathbf{1}_{ab{\geqslant}2^h}\bigg(\sum_{\ell=1}^h\mathbf{1}_{2^{\ell-1}{\leqslant}a<2^\ell} +\mathbf{1}_{a<1} +\mathbf{1}_{a{\geqslant}2^h}\bigg)\\ & {\leqslant}\sum_{\ell=1}^h\mathbf{1}_{b{\geqslant}2^{h-\ell}; \, a{\geqslant}2^{\ell-1}} + \mathbf{1}_{b{\geqslant}2^h} + \mathbf{1}_{a{\geqslant}2^h}. \end{aligned}$$ Using this and $p_r(\pi[G'])=p_r(\pi[G])\,p(\pi[C])$, one may estimate $$\begin{aligned} \label{estima1} {\mathbb{P}}_\pi\left(p_r(\pi[G']){\geqslant}2^h\right) &{\leqslant}\sum\limits_{\ell=1}^h{\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell};\,p(\pi[C]){\geqslant}2^{\ell-1}\right)\nonumber\\ &\qquad +\,{\mathbb{P}}_\pi\left(p(\pi[C]){\geqslant}2^{h}\right)+{\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h}\right). \end{aligned}$$ Let us condition on a fixed realization $R$ of $\pi$ restricted to the vertices $V$ of $G$. Thus, ${\mathbb{P}}_\pi(\cdot\,|\,R)$ represents a uniform average over all permutations that agree with the given $R$ on $V$. We write $C'\sim(C;R)$ for any digraph $C'$ that has the form $C'=\pi[C]$ for some $\pi$ that agrees with $R$ on $V$. 
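The indicator decomposition above is deterministic, valid for all $a,b\geqslant 0$ and integer $h\geqslant 0$, and can be checked exhaustively on random inputs (illustrative snippet, Python standard library only):

```python
import random

def lhs(a, b, h):
    # the indicator 1{ab >= 2^h}
    return 1 if a * b >= 2 ** h else 0

def rhs(a, b, h):
    # sum over l of 1{b >= 2^{h-l}, a >= 2^{l-1}}, plus the two boundary terms
    s = sum(1 for l in range(1, h + 1)
            if b >= 2 ** (h - l) and a >= 2 ** (l - 1))
    return s + (1 if b >= 2 ** h else 0) + (1 if a >= 2 ** h else 0)

random.seed(0)
for _ in range(2000):
    a, b = random.uniform(0, 50), random.uniform(0, 50)
    h = random.randrange(0, 12)
    assert lhs(a, b, h) <= rhs(a, b, h)
```

The point of the decomposition is to localize the size of $a$ in dyadic blocks so that the two factors can then be decoupled via the conditional estimate that follows.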
Since $C$ has $m-r$ free vertices (those which do not fall into $V$), and we can pick them among $N-|V|$ available vertices, the cardinality of $\{C'\sim(C;R)\}$ is at least $$(N-|V|)(N-|V|-1)\cdots(N-|V|-(m-r-1)) {\geqslant}(N-k)^{(m-r)},$$ where we use that the total number of vertices satisfies $|V|+(m-r) {\leqslant}k$. Since the number of double cycles of length $2m$ is $\binom{N}{m}(m-1)!{\leqslant}N^m$, we can write for any $\tau>0$: $$\begin{aligned} {\mathbb{P}}_\pi( p(\pi[C]){\geqslant}\tau\,|\,R)& = \frac{|\{C'\sim(C;R):\,p(C'){\geqslant}\tau\}|}{|\{C'\sim(C;R)\}|}\\ & {\leqslant}(N-k)^{r-m}|\{C'\sim{\mathcal{C}}_m:\,p(C'){\geqslant}\tau\}| \\&{\leqslant}(N-k)^{r-m}N^m{\mathbb{P}}_\pi(p(\pi[C]){\geqslant}\tau) {\leqslant}eN^r{\mathbb{P}}_\pi(p(\pi[C]){\geqslant}\tau),\end{aligned}$$ where we use $ r{\leqslant}m{\leqslant}k\ll \sqrt N$ to bound $(1-\tfrac{k}N)^{r-m}{\leqslant}e$. Since the above estimate is uniform over the realization $R$, for any $\ell=1,2,\dots,h$ we have $$\begin{aligned} &{\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell};\;p(\pi[C]){\geqslant}2^{\ell-1}\right)\\ &\qquad {\leqslant}{\mathbb{P}}_\pi\left( p_r(\pi[G]){\geqslant}2^{h-\ell}\right)\,\sup\limits_R\,{\mathbb{P}}_\pi\left(p(\pi[C]){\geqslant}2^{\ell -1}\,|\,R\right) \\ &\qquad {\leqslant}eN^r{\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell}\right){\mathbb{P}}_\pi\left(p(\pi[C]){\geqslant}2^{\ell-1}\right). \end{aligned}$$ Using the definition of ${\mathcal{S}}({\mathcal{U}})$ and the identity applied to $G$ and $C$ we obtain, for all $\ell=1,\dots,h$: $$\begin{aligned} \label{esta1} {\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell};\;p(\pi[C]){\geqslant}2^{\ell-1}\right) {\leqslant}eN^r2^{1-h}{\mathcal{S}}({\mathcal{U}}){\mathcal{S}}_{\ell-1}({\mathcal{C}}_m).
\end{aligned}$$ From one has $$\begin{aligned} {\mathbb{P}}_\pi(p(\pi[G']){\geqslant}2^h)&{\leqslant}eN^r2^{1-h}{\mathcal{S}}({\mathcal{U}})\sum\limits_{\ell=0}^{h-1}{\mathcal{S}}_{\ell}({\mathcal{C}}_m)+ 2^{-h}{\mathcal{S}}_h({\mathcal{C}}_m) + 2^{-h}{\mathcal{S}}({\mathcal{U}}). \end{aligned}$$ Since ${\mathcal{S}}({\mathcal{U}}){\geqslant}1$, on the event ${\mathcal{A}}_k$ of Lemma \[cycle stats\] one can estimate $$2^h{\mathbb{P}}_\pi(p(\pi[G']){\geqslant}2^h){\leqslant}2eN^r{\mathcal{S}}({\mathcal{U}})\sum\limits_{\ell=0}^{\infty} {\mathcal{S}}_{\ell}({\mathcal{C}}_m) +{\mathcal{S}}({\mathcal{U}}) {\leqslant}3ek^2N^r{\mathcal{S}}({\mathcal{U}}).$$ Taking the supremum over $h$, the above relation proves the first assertion of the lemma. Let us prove the second assertion. On the event ${\mathcal{A}}_k$ of Lemma \[cycle stats\], for any $T\in{\mathbb{N}}$, $$\sum_{\ell=T}^\infty{\mathcal{S}}_{\ell}({\mathcal{C}}_m){\leqslant}2^{-{\varepsilon}T/2} k^2B^m.$$ Fix $T= \lceil\log_2(N^{r(1-\varepsilon/8)})\rceil$. 
If $m\log B{\leqslant}\frac{\varepsilon}{4}r\log N$, then $$\sum_{\ell=T}^\infty{\mathcal{S}}_{\ell}({\mathcal{C}}_m) {\leqslant}k^2N^{-\varepsilon r/8}.$$ Estimating as in for all $\ell{\geqslant}T+1$, we obtain $$\sum_{\ell=T+1}^h{\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell};\;p(\pi[C]){\geqslant}2^{\ell-1}\right) {\leqslant}2^{-h+1}ek^2{\mathcal{S}}({\mathcal{U}})N^{r(1-\varepsilon/8)}.$$ On the other hand, using ${\mathbb{P}}_\pi\left(p_r(\pi[G]){\geqslant}2^{h-\ell}\right){\leqslant}2^{-h+\ell}{\mathcal{S}}({\mathcal{U}})$, we find $$\sum_{\ell=1}^T{\mathbb{P}}_\pi(p_r(\pi[G]){\geqslant}2^{h-\ell};\;p(\pi[C]){\geqslant}2^{\ell-1}) {\leqslant}2^{-h}{\mathcal{S}}({\mathcal{U}})2^{T+1} {\leqslant}2^{-h+2}{\mathcal{S}}({\mathcal{U}})N^{r(1-\varepsilon/8)}.$$ From it follows that $$\begin{aligned} {\mathbb{P}}_\pi(p(\pi[G']){\geqslant}2^h) &{\leqslant}2^{-h+2}ek^2{\mathcal{S}}({\mathcal{U}})N^{r(1-\varepsilon/8)} +2^{-h}{\mathcal{S}}_h({\mathcal{C}}_m)+2^{-h}{\mathcal{S}}({\mathcal{U}}). \end{aligned}$$ On the event ${\mathcal{A}}_k$ one has ${\mathcal{S}}_h({\mathcal{C}}_m){\leqslant}k^2{\leqslant}k^2{\mathcal{S}}({\mathcal{U}})$, and therefore $$\begin{aligned} 2^h{\mathbb{P}}_\pi(p(\pi[G']){\geqslant}2^h)&{\leqslant}5ek^2 N^{r(1-\varepsilon/8)} \mathcal{S}({\mathcal{U}}). \end{aligned}$$ Taking the supremum over $h$, we obtain the second assertion of the lemma. We turn to the main statement of this section. \[exp prop\] Suppose $N^{\varepsilon/16}{\geqslant}5ek^2$, and let ${\mathcal{U}}$ be an unlabeled rooted even digraph with $2k$ edges and $x$ vertices.
Define $$y_x:=\max\left(0,k-x-\frac{4k\log B}{\varepsilon\log N}\right).$$ Then, on the event ${\mathcal{A}}_k$ we have $$\mathcal{S}({\mathcal{U}}){\leqslant}N^{k-x}N^{-\varepsilon y_x/16}k^2\bigl(3ek^2\bigr)^{\frac{4k\log B}{\varepsilon\log N}}.$$ By Lemma \[veblen\] we may represent ${\mathcal{U}}$ as the union of double cycles $C_1,\dots,C_q$, such that: 1) $C_1$ is rooted; 2) for all $i\in[q]$, $C_i$ has $2m_i$ edges; 3) for $i{\geqslant}2$, $C_i$ has $r_i{\geqslant}1$ common vertices with $\cup_{j=1}^{i-1}C_j$. Define the rooted even digraphs $U_i=\bigcup_{j=1}^i C_j$, $i=1,2,\dots,q$. Let ${\mathcal{U}}_i$ denote the associated equivalence classes. Let $J$ be the set of indices $i{\geqslant}2$ such that $$m_i\log B{\leqslant}\frac{\varepsilon}{4}r_i\log N.$$ Since $m_i>\frac{\varepsilon\log N}{4\log B}$ for any $i{\geqslant}2$, $i\notin J$, using $\sum_{i{\geqslant}2} m_i{\leqslant}k$ we see that $$\label{eqrq} \left|\{2,\dots,q\}\setminus J\right| {\leqslant}\frac{4k\log B}{\varepsilon\log N}.$$ Since ${\mathcal{U}}_1$ is a rooted double cycle with at most $2k$ edges, and we are assuming the validity of the event ${\mathcal{A}}_k$, by Lemma \[cycle stats\] we have $\mathcal{S}({\mathcal{U}}_1){\leqslant}k^2$. Moreover, by Lemma \[induction\], one has $$\begin{gathered} {\mathcal{S}}({\mathcal{U}}_i){\leqslant}3ek^2\mathcal{S}({\mathcal{U}}_{i-1})N^{r_i},\qquad i\in\{2,\dots,q\}\setminus J\\ {\mathcal{S}}({\mathcal{U}}_i){\leqslant}5ek^2\mathcal{S}({\mathcal{U}}_{i-1})N^{r_i-r_i\varepsilon/8}{\leqslant}\mathcal{S}({\mathcal{U}}_{i-1})N^{r_i-r_i\varepsilon/16},\qquad i\in J, \end{gathered}$$ where we used the assumption $5ek^2{\leqslant}N^{\varepsilon/16}$. Next, observe that $$\sum\limits_{i=2}^q r_i=k-x.$$ Thus, combining the above estimates one has $${\mathcal{S}}({\mathcal{U}}){\leqslant}N^{k-x}N^{-\varepsilon y'/16}k^2\bigl(3ek^2\bigr)^{\frac{4k\log B}{\varepsilon\log N}},$$ where $y'=\sum\limits_{i\in J}r_i$. 
Note that $$\sum\limits_{i\notin J }r_i{\leqslant}\sum\limits_{i\notin J }\frac{4m_i\log B}{\varepsilon\log N} {\leqslant}\frac{4k\log B}{\varepsilon\log N},$$ implying that $y'{\geqslant}k-x-\frac{4k\log B}{\varepsilon\log N}$. The proof is complete. Proof of Theorem \[main\] {#mainproof} ========================= Let ${\mathcal{B}}$ denote the event that $|X_{ij}|{\leqslant}N^{2}$ for all $(i,j) \in [N]\times [N]$. An application of Markov’s inequality and the assumption ${\mathbb{E}}[| X_{ij}|^2]{\leqslant}1$ shows that ${\mathbb{P}}({\mathcal{B}}){\geqslant}1-1/N^2$. Thus, if we define ${\mathcal{E}}_k:={\mathcal{A}}_k\cap {\mathcal{B}}$, where ${\mathcal{A}}_k$ is the event from Lemma \[cycle stats\], then $$\begin{aligned} \label{eqb20} {\mathbb{P}}({\mathcal{E}}_k){\geqslant}1 - N^{-2} - 6k^{-1}.\end{aligned}$$ We will eventually choose $k\sim (\log N)^2$. Therefore, thanks to , to prove the theorem it will be sufficient to prove the conditional statement $$\label{main33} {\mathbb{P}}\left(\rho(X_N){\geqslant}(1+\delta)\sqrt N\mid {\mathcal{E}}_k\right){\leqslant}C(\log N)^{-2}.$$ To prove this, we estimate the conditional moments ${\mathbb{E}}[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k]$. From the expansion in one has $$\label{main55} {\mathbb{E}}[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k] {\leqslant}\sum_{i,j}\sum_{P_1,P_2:i\mapsto j} {\mathbb{E}}[w(P_1)\bar w(P_2)\mid{\mathcal{E}}_k]\,,$$ where the internal sum ranges over all paths $P_1$ and $P_2$ of length $k-1$ from $i$ to $j$, the weight $w(P)$ of a path is defined by , and $\bar w(P)$ denotes the complex conjugate of $w(P)$. Notice that since $|X_{i,j}|{\leqslant}N^2$ on the event ${\mathcal{E}}_k$, all expected values appearing above are well defined. By the symmetry assumption we can replace the variables $X_{i,j}$ by $$X'_{i,j}=\theta_{i,j}X_{i,j}$$ where $\theta_{i,j}\in\{-1,+1\}$ are symmetric i.i.d. random variables, independent from the $\{X_{i,j}\}$.
Conditioning on ${\mathcal{E}}_k$ the entries $X'_{ij}$ are no longer independent. However, since ${\mathcal{E}}_k$ is measurable with respect to the absolute values $\{|X_{ij}|\}$, the signs $\theta_{i,j}$ are still symmetric and i.i.d. after conditioning on ${\mathcal{E}}_k$. It follows that $${\mathbb{E}}\left[w(P_1)\bar w(P_2)\mid{\mathcal{E}}_k\right]=0,$$ whenever there is an edge with odd multiplicity in $P_1\cup P_2$. Thus, in we may restrict to $P_1,P_2$ such that each edge in $P_1\cup P_2$ has even multiplicity. Let $P$ denote the closed path obtained as follows: start at $i$, follow $P_1$, then add the edge $(j,i)$, then follow $P_2$, then end with the edge $(j,i)$ again. Thus, $P$ is an even closed path of length $2k$. Note that according to our definition , if $G_P$ is the rooted even digraph generated by the path $P$, with root at the edge $(j,i)$, then $$|w(P_1)\bar w(P_2)| =p_r(G_P).$$ Since the map $(P_1,P_2)\mapsto P$ is injective, allows us to estimate $$\label{main501} {\mathbb{E}}\left[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k\right] {\leqslant}\sum_{P}{\mathbb{E}}\left[p_r(G_P)\mid {\mathcal{E}}_k\right],$$ where the sum ranges over all even closed paths $P=(i_1,\dots,i_{2k+1})$ of length $2k$ and $G_P$ is defined as the rooted even digraph generated by the path $P$, with root at the edge $(i_k,i_{k+1})$. By Lemma \[paths and graphs\], the sum in can be further estimated by $$\label{paths to graphs} k \sum_{x=1}^k (4k)^{4(k-x)}\!\!\! \sum_{G\in \mathcal{G}_N(k,x)}\!\!{\mathbb{E}}\left[p_r(G)\mid {\mathcal{E}}_k\right],$$ where we used $x (4k-4x)!{\leqslant}k (4k)^{4(k-x)}$, and $\mathcal{G}_N(k,x)$ denotes the set of all rooted even digraphs with $2k$ edges and $x$ vertices. Below we estimate $\sum_{G\in \mathcal{G}_N(k,x)}p_r(G)$ deterministically on the set ${\mathcal{E}}_k$. 
Using the second inequality in one has, for any $G\in \mathcal{G}_N(k,x)$: $$p_r(G){\leqslant}1 + 2\sum_{h=0}^{\infty}2^{h}\mathbf{1}_{p_r(G){\geqslant}2^h}.$$ Since on the event ${\mathcal{E}}_k$ all entries satisfy $|X_{i,j}|{\leqslant}N^2$, it follows that $p_r(G){\leqslant}N^{4k-4}$. Therefore the above sum can be truncated at $$H:=\lfloor4k\log_2 N\rfloor.$$ Let ${\mathcal{U}}$ be a given equivalence class of rooted even digraphs with $x$ vertices and $2k$ edges. Summing over all $G\sim {\mathcal{U}}$, and recalling , $$\sum_{G\sim{\mathcal{U}}}p_r(G){\leqslant}\Big(1+2\sum_{h=0}^H{\mathcal{S}}_h({\mathcal{U}})\Big)\left|\{G\sim{\mathcal{U}}\}\right|{\leqslant}3H{\mathcal{S}}({\mathcal{U}})\left|\{G\sim{\mathcal{U}}\}\right|.$$ From Proposition \[exp prop\], on the event ${\mathcal{E}}_k$ we can then estimate $$\sum_{G\sim{\mathcal{U}}}p_r(G){\leqslant}3H N^{k-x}N^{-\varepsilon y_x/16}k^2\bigl(3ek^2\bigr)^{\frac{4k\log B}{\varepsilon\log N}}\left|\{G\sim{\mathcal{U}}\}\right|,$$ where $y_x=\max\bigl(0,k-x-\frac{4k\log B}{\varepsilon\log N}\bigr)$. Summing over all equivalence classes ${\mathcal{U}}$ of rooted even digraphs with $x$ vertices and $2k$ edges, on the event ${\mathcal{E}}_k$ one obtains $$\label{thstg} \sum_{G\in \mathcal{G}_N(k,x)}p_r(G) {\leqslant}3H N^{k-x}N^{-\varepsilon y_x/16}k^2\bigl(3ek^2\bigr)^{\frac{4k\log B}{\varepsilon\log N}}\left|{\mathcal{G}}_N(k,x)\right|.$$ Going back to , using , and Lemma \[graphs counting\] to estimate $\left|{\mathcal{G}}_N(k,x)\right|$, one finds $$\label{maina} {\mathbb{E}}\left[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k\right] {\leqslant}3Hk^4N^{k}\bigl(3ek^2\bigr)^{\frac{4k\log B}{\varepsilon\log N}} \sum_{x=1}^k (4k)^{6(k-x)}N^{-\varepsilon y_x/16}.$$ Fix $k\sim(\log N)^2$.
If $x{\leqslant}k-\frac{8k\log B}{\varepsilon\log N}$, then $y_x{\geqslant}(k-x)/2$ and therefore $$(4k)^{6(k-x)} N^{-\varepsilon y_x/16} {\leqslant}(4k)^{6(k-x)} N^{-\varepsilon (k-x)/32} {\leqslant}1,$$ provided that $N$ is sufficiently large. It follows that $$\sum_{x=1}^k (4k)^{6(k-x)}N^{-\varepsilon y_x/16} {\leqslant}k + \tfrac{8k\log B}{\varepsilon\log N}(4k)^{\frac{48k\log B}{\varepsilon\log N}}.$$ From , for large enough $N$ and $k\sim (\log N)^2$, one has $$\label{mainas} {\mathbb{E}}\left[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k\right]{\leqslant}N^k (\log N)^{C\log N},$$ where $C=C({\varepsilon},B)>0$ is a constant depending only on ${\varepsilon},B$. The proof of is concluded by using Markov’s inequality: for any $\delta>0$, $$\begin{aligned} {\mathbb{P}}(\rho(X_N){\geqslant}(1+\delta)\sqrt N\mid {\mathcal{E}}_k) &{\leqslant}(1+\delta)^{-2k+2} N^{-k+1} {\mathbb{E}}[\rho(X_N)^{2k-2}\mid{\mathcal{E}}_k] \\ &{\leqslant}(1+\delta)^{-2k+2}N (\log N)^{C\log N}. \end{aligned}$$ Since $k\sim (\log N)^2$, for fixed $\delta>0$, the expression above is $\mathcal{O}(N^{-\gamma})$ for any $\gamma>0$. This ends the proof of Theorem \[main\].

[^1]: Partial support: A\*MIDEX project ANR-11-IDEX-0001-02 funded by the “Investissements d’Avenir” French Government program, managed by the French National Research Agency (ANR)
--- abstract: 'Using complex Langevin dynamics we examine the phase structure of complex unitary matrix models and compare the numerical results with analytic results found at large $N$. The actions we consider are manifestly complex, and thus the dominant contribution to the path integral comes from the space of complexified gauge field configurations. For this reason, the eigenvalues of the unitary matrix lie off the unit circle and venture out into the complex plane. One example of a complex unitary matrix model, with the Polyakov line as the unitary matrix, is an effective description of QCD at finite density and temperature with $N$ colors and $N_f$ quark flavors defined on the manifold $S^1 \times S^3$. A distinct feature of this model, the occurrence of a series of Gross-Witten-Wadia transitions, as a function of the quark chemical potential, is reproduced using complex Langevin simulations. We simulate several other observables, including Polyakov lines and the quark number density, for large $N$ and $N_f$, and find excellent agreement with the analytic results.' author: - Pallab Basu - Kasi Jaswin - Anosh Joseph bibliography: - 'qcds.bib' title: Complex Langevin Dynamics in Large $N$ Unitary Matrix Models --- Introduction {#sec:intro} ============ A nonperturbative study of the phase structure of QCD at finite temperature and nonzero baryon chemical potential still remains an outstanding problem [@Muroya:2003qs; @deForcrand:2010ys]. This is due to the fact that the fermion determinant becomes complex and the theory has a sign problem. The standard methods to study the theory, lattice QCD algorithms based on importance sampling, fail to produce reliable simulations. There have been recent developments in tackling this problem. One method is the use of complex Langevin dynamics with stochastic quantization [@Klauder:1983sp; @Parisi:1984cs].
This method is not based on importance sampling but instead on a stochastic exploration of an enlarged (complexified) field configuration space. Another recently proposed method is the Lefschetz thimble method [@Cristoforetti:2012su; @Fujii:2013sra; @DiRenzo:2015foa; @Tanizaki:2015rda; @Fujii:2015vha; @Alexandru:2015xva], which is also based on complexification of the original real field variables. The complex Langevin method was proposed in the early 1980s by Klauder [@Klauder:1983nn; @Klauder:1983zm; @Klauder:1983sp] and Parisi [@Parisi:1984cs]. Though it became popular at first, certain problems were found soon after. The first was the problem of runaways, where the simulations would not converge, and the second was the problem of convergence to a wrong limit. In recent years the complex Langevin method has been revived, sometimes with cases of impressive success [@Berges:2005yt; @Berges:2006xc; @Berges:2007nr; @Bloch:2017sex; @Aarts:2008rr; @Pehlevan:2007eq]. It has been shown recently that complex Langevin simulations produce seemingly correct answers, even when the fermion sign problem is severe, for one-, three- and four-dimensional field theories with nonzero chemical potential [@Aarts:2008wh; @Aarts:2009hn; @Aarts:2010gr; @Aarts:2011zn]. There have also been studies of supersymmetric matrix models based on complex Langevin dynamics. See Refs. [@Ito:2016efb; @Ito:2016hlj; @Anagnostopoulos:2017gos]. In this paper, we consider a large $N$ unitary matrix model at low temperature with a finite quark chemical potential and quark mass. This model is obtained from the one-loop formulation of QCD on $S^1 \times S^3$ at finite temperature with finite quark chemical potential $\mu$, quark mass $m$, and with $N$ colors and $N_f$ quark flavors. After integrating out the quark and gauge degrees of freedom we obtain the model of our interest – a conventional unitary matrix model with a complex action.
The unitary matrix $U$ in this model is the holonomy (Wilson loop) of the gauge field around the thermal time circle in Euclidean space. We can use the expectation value of the trace of the Polyakov line in the fundamental representation as an order parameter for the phase transitions. It is zero in the confined phase and non-zero in the deconfined phase. The model is interesting as it exhibits a rich thermal phase structure. When the chemical potential passes one of the quark energy levels there is a third order Gross-Witten-Wadia (GWW) transition from a confined to a deconfined phase and back again. This model also exhibits another interesting feature known as the [*Silver Blaze*]{} behavior. When the quark mass is nonvanishing the bulk observables of the model are nearly zero until the onset transition to the deconfined phase, which occurs when the chemical potential reaches the value of the lightest quark mass. In the matrix model with complex action, the dominant contributions to the functional integral come from complexified gauge field configurations. Due to this reason, the saddle point eigenvalues of the unitary matrix $U$ lie off the unit circle, on a contour in the complex plane. The eigenvalues of $U$ can be written as $\exp(i\theta_i)$ with $\theta_i$ the angle variables and $i = 1, \cdots, N$. We can make a change of variables such that the functional integral reduces to an integral over $\{ \theta_i \}$. At large $N$, the functional integral is dominated by a single saddle point, but since the action is complex this saddle point configuration lies out in the complex plane, where the $\theta_i$ are no longer real. As a consequence, the Polyakov line and the inverse Polyakov line are not equal, that is, $\langle P \rangle \neq \langle P^{-1} \rangle$. Through complex Langevin simulations we indeed confirm this behavior. In fact the behavior of the inverse Polyakov line precedes that of the Polyakov line as a function of chemical potential.
This feature was observed analytically in an earlier work by Hands [*et al.*]{} in Ref. [@Hands:2010zp]. In this paper, we examine this large $N$ unitary matrix model using complex Langevin simulations. It is possible to generate representative field configurations by integrating a stochastic differential equation, known as the complex Langevin equation. The drift terms arising from the complex action force the field variables to evolve in an extended (complexified) field space, in which the large regions where the observables are plagued by phase fluctuations are avoided [@Aarts:2008rr]. When $N$ is large, we can consider the gauge field, corresponding to the angles of the Polyakov line, as a distribution on a contour. From the equation of motion, the saddle point distribution of the Polyakov line eigenvalues can be calculated analytically and plotted by mapping the angles from an arc on the unit circle to a contour over the same range of angles in the complex plane [@Hands:2010zp]. The theory is said to be in a confined phase when the contour on which the Polyakov line eigenvalues are distributed is closed. The contour opens up in between quark energy level transitions giving rise to a deconfined phase in the theory. The third derivative of the grand potential is discontinuous at each energy level crossing. These are characteristic features of a third order, GWW transition [@Gross:1980he; @Wadia:1980www; @Wadia:1980cp]. This paper is organized as follows. In Sec. \[sec:cLd\] we give a brief outline of the complex Langevin dynamics and stochastic quantization. In Sec. \[sec:ab-model\] we discuss a simple yet nontrivial matrix model called the ab-Model, which is a complexified version of the Gross-Witten-Wadia (GWW) model. This model has two phases, confined and deconfined, and it exhibits a third-order phase transition. In Sec. 
\[sec:gt-to-u-m-model\] we discuss another interesting large $N$ unitary matrix model, which arises in the one-loop formulation of QCD on compact spaces. This model possesses a tower of quark energy levels due to compactification and is defined for positive and negative chemical potential values. We then focus on a truncated cousin of this model - a single quark energy level matrix model with positive chemical potential. This model also has a complex action and captures the physics we are interested in without loss of generality. We can define a transition parameter (which is a function of the temperature and chemical potential) in this model, and as we change this parameter, the model exhibits confinement/deconfinement phase transitions. We show the eigenvalue distributions corresponding to the confined (closed) and deconfined (gapped) phases of the theory using complex Langevin simulations. We also simulate the behaviors of Polyakov lines and fermion number density as a function of the transition parameter. We simulate the model for a range of temperatures and chemical potentials to study its phase structure. We also show the phase diagram of the model, at low temperature, on the $(\mu, \beta)$ plane, in the vicinity where a quark energy level equals the chemical potential. We then simulate the model at large quark mass and show that the bulk observables exhibit the Silver Blaze behavior – the observables are roughly zero until the onset transition to the deconfined phase, which occurs when the chemical potential equals the quark mass. We then move on to discuss the single-level model with a simple nontrivial gauge interaction turned on. We study the behavior of observables as a function of the interaction parameter. We see that the model prefers to stay in the confined phase as the interaction strength is increased. In Sec. \[sec:conclusions\] we provide conclusions and discussions. In Appendix.
\[sec:qcd-finite-cp\] we use complex Langevin dynamics to simulate QCD on $S^1 \times S^3$ at finite chemical potential and low temperature. We are able to reproduce the series of GWW transitions, as a function of the chemical potential, as described in Ref. [@Hands:2010zp]. Our simulations also reproduce the level structure feature of the bulk observables - fermion number density, pressure and energy - of the model. In Appendix \[sec:appendix\_c\] we investigate the reliability of the complex Langevin method by studying the probability distribution for the magnitude of the drift term and the Langevin runtime history of the unitarity norm. We note that the probability distribution for the magnitude of the drift term falls off (possibly) with a power law even though the simulations show excellent agreement with analytical results. We think that these diagnostics need further investigation, and we leave this for future work. Complex Langevin Dynamics {#sec:cLd} ========================= The central idea of stochastic quantization is that expectation values of observables are obtained as equilibrium values of a stochastic process [@Parisi:1980ys; @Damgaard:1987rr]. In order to achieve this we evolve the system in a fictitious time $\tau$, subject to a stochastic noise. That is, the system evolves according to Langevin dynamics. When the action is complex it is still possible to consider Langevin dynamics. The force (gradient of the action) becomes complex in this case, making the fields also complex during the evolution. In this work we make use of complex Langevin dynamics with stochastic quantization to study large $N$ unitary matrix models with complex actions. They exhibit a sign problem due to the fact that the action is complex. Standard Monte Carlo methods fail to produce the correct equilibrium distributions of these models.
We can use the discretized complex Langevin equation with the Euler method (a first-order algorithm) to find the equilibrium field distributions of these models. We note that in unitary models with real action the domain of the angular variables $\theta_i$, with $i = 1, \cdots, N$, is $[0, 2\pi)$. After complexification the domain becomes a strip: $[0, 2\pi)$ along the real direction and $(-\infty, \infty)$ along the imaginary direction. The complexified eigenvalues $e^{i \theta_i}$ of $U$ thus have the whole complex plane as their range. Let us take $\theta_i(\tau)$ as the complexified angle variables of the gauge link $U(\tau)$ at a Langevin time $\tau$. (From now on, unless otherwise specified, we take $\theta_i$ to be complex.) We have the discrete Langevin evolution equation $$\label{eq:lang-1} \theta_i(\tau + \Delta\tau) = \theta_i(\tau) - \frac{\partial S}{\partial \theta_i(\tau)}\, \Delta\tau + \sqrt{\Delta\tau}~ \eta_i(\tau),$$ where $\Delta \tau$ is the Langevin time step, and $\eta_i(\tau)$ is a Gaussian random variable satisfying the conditions $$\label{eq:randoms} \langle \eta_i(\tau) \rangle = 0, \quad \langle \eta_i(\tau)\, \eta_j(\tau') \rangle = 2\, \delta_{ij}\, \delta_{\tau \tau'}.$$ If the action $S$ is of the order $N^2$, then strictly at infinite $N$ the fluctuation term in Eq. (\[eq:lang-1\]) could be safely dropped. Moreover, to reduce excursions in the imaginary directions of the field configurations, which would spoil the validity of the method, we should use real Gaussian random variables [@Aarts:2009uq; @Aarts:2011ax; @Aarts:2013uza]. We also need to impose the $SU(N)$ constraint on the complexified angular variables after each Langevin time step. That is, we need $$\label{eq:SU-N-constraint} \sum_{i=1}^N \theta_i(\tau) = 0.$$ This can be easily implemented by subtracting the average value $\theta_{\rm av}(\tau)$ from each $\theta_i(\tau)$ variable, i.e. $$\label{eq:SU-N-step} \theta_i \to \theta_i - \frac{1}{N} \sum_{i=1}^N \theta_i(\tau).$$ Note that this condition is implemented in a holomorphic way. That is, both the real and imaginary parts of $\theta_{\rm av}(\tau)$ are subtracted.
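As a minimal, exactly solvable illustration of the update rule in Eq. (\[eq:lang-1\]), consider a single degree of freedom with the complex Gaussian action $S = \sigma x^2/2$, ${\rm Re}\,\sigma > 0$, for which $\langle x^2 \rangle = 1/\sigma$. This toy model is not part of the paper; the value of $\sigma$, the step size and the run length below are illustrative choices, and, as discussed above, the noise is real:

```python
import numpy as np

# One-variable complex Langevin test: S = sigma x^2 / 2 with Re(sigma) > 0,
# so the time average of x^2 can be checked against the exact value 1/sigma.
rng = np.random.default_rng(0)
sigma = 1.0 + 1.0j                                  # illustrative complex coupling
dtau, nsteps, burn = 0.01, 200_000, 20_000
eta = np.sqrt(2.0) * rng.standard_normal(nsteps)    # real noise with <eta^2> = 2
x, samples = 0.0 + 0.0j, []
for n in range(nsteps):
    # Euler step: x -> x - (dS/dx) dtau + sqrt(dtau) eta, with dS/dx = sigma x
    x = x - sigma * x * dtau + np.sqrt(dtau) * eta[n]
    if n >= burn:
        samples.append(x * x)
x2 = np.mean(samples)    # should be close to 1/sigma = 0.5 - 0.5i
```

The complexified $x$ wanders off the real axis, yet the time average of $x^2$ reproduces the analytically continued Gaussian moment; this is the same mechanism at work in the matrix models studied below.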
Ideally, one should eliminate one variable (say $\theta_1$) using the constraint Eq. and stochastically quantize the remaining variables. To proceed we need to justify that our method of imposing the constraint after each time step leads to the same result. A set of stochastic flow equations involving the gradient of the action like the one given in Eq. is invariant under the orthogonal transformation of variables $$\tilde \theta_i = \sum_j O_{ij}\, \theta_j,$$ where $O$ is an orthogonal matrix. In terms of the transformed variables, the set of equations is $$\begin{aligned} d\tilde \theta_i &= -\sum_j O_{ij} \left[\frac{\partial S}{\partial \theta_j(\tau)}\right] d \tau + \sqrt{d \tau}~ \sum_j O_{ij} \eta_j(\tau) \\ &=-\sum_k \sum_j O_{ij} O_{kj} \left[\frac{\partial S}{\partial \tilde \theta_k(\tau)}\right] d \tau+ \sqrt{d \tau}~ \sum_j O_{ij} \eta_j(\tau) \\ &= -\left[\frac{\partial S}{\partial \tilde \theta_i(\tau)}\right] d \tau+ \sqrt{d \tau}~ \tilde \eta_i(\tau),\end{aligned}$$ where we have used the orthogonality of the matrix $O$. Orthogonality also guarantees that the new random variables $\tilde \eta_i$ satisfy the condition Eq. . Now, we can always choose an $O$ such that $\tilde \theta_1 =\frac{1}{\sqrt{N}} \sum_i {\theta_i}$. In terms of the transformed variables it is easy to understand why our method works. The constraint Eq. is now rewritten simply as $\tilde \theta_1 = 0$. If we start with a set of variables which already satisfies this constraint then a valid Langevin time evolution step may be performed by simply discarding any evolution in $\tilde \theta_1$. This is precisely our method of imposing the constraint after each time step, rewritten in terms of the new variables. To emphasize, one can straightforwardly argue that in terms of the old variables, this step is the same as Eq. . Our argument works for any arbitrary linear constraint.
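The covariance of the drift term under an orthogonal change of variables, which the computation above relies on, can be checked directly for a quadratic action $S = \frac{1}{2}\theta^T A \theta$, whose gradient is $A\theta$; the matrices below are random illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
A = A + A.T                                          # symmetric: S = (1/2) theta^T A theta
O, _ = np.linalg.qr(rng.standard_normal((n, n)))     # random orthogonal matrix
theta = rng.standard_normal(n)

grad = A @ theta                                     # dS/dtheta in the old variables
# in the rotated variables theta~ = O theta the action matrix is O A O^T,
# and the gradient transforms covariantly: grad~ = O grad
grad_rot = (O @ A @ O.T) @ (O @ theta)
```

Since the constraint direction can always be rotated onto a single coordinate, this covariance is exactly what allows the mean subtraction to be read as "discard the evolution of $\tilde\theta_1$".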
We note that there also exists another complementary method in which one could implement complex Langevin dynamics directly on the matrix variables $U(\tau)$. In this case the evolution equation takes the form $$U(\tau + \Delta\tau) = R(\tau)\, U(\tau),$$ where the matrix $R$ is a stochastic unitary matrix. We note that this method can be used for studying similar models in higher spacetime dimensions. In this paper, we use the first method described above where the link field $U$ is diagonalized and the $SU(N)$ constraint has been imposed. We note that the complexification of the dynamical variables in the theory can change the Langevin evolution drastically. There can be unstable directions on the complexified field configuration space and the Langevin evolution can converge to wrong limits. One should be aware that the numerical integration must be performed carefully when the Langevin trajectory makes a large excursion into imaginary directions. One could, in principle, use a small step size but it still has two problems: $(i)$ it does not solve instabilities in all directions and $(ii)$ it will result in a slow evolution, which can be computationally very inefficient. In order to take care of both of these problems we follow the algorithm given by Aarts [*et al.*]{} in Ref. [@Aarts:2009dg]. We consider an adaptive step size in the discretized complex Langevin equations. We compute the absolute value of the maximum drift, $K_{\max}$, at a given Langevin time $\tau$, $$K_{\max}(\tau) \equiv \max_i \left| \frac{\partial S}{\partial \theta_i(\tau)} \right|,$$ and the stepsize for the next evolution step is taken to be $$\Delta\tau = \frac{\gamma}{K_{\max}(\tau)},$$ where $\gamma$ is a number chosen according to the model we want to simulate. In our simulations we typically take $\gamma$ to be ${\cal O}(1)$. ab-Model {#sec:ab-model} ======== To demonstrate the effectiveness of Complex Langevin Dynamics, we begin by studying a simple, yet nontrivial model – a complexified version of Gross-Witten-Wadia (GWW) Model [@Wadia:1980cp; @Gross:1980he; @Wadia:1980www; @Buividovich:2015oju]. We refer to our model as *ab-Model*.
It has two phases, confined and deconfined, exhibiting a third-order phase transition. The action is given by $$S = N \left(a {{\rm Tr\;}}U + b {{\rm Tr\;}}U^{\dagger}\right), \label{abmodel}$$ where $a,b \in \mathbb{C} $, $U$ is an element of $SU(N)$, and when $a=b$ it becomes the Gross-Witten-Wadia model. Before proceeding further let us make a few generic comments. A linear term in ${{\rm Tr\;}}U$ breaks the center symmetry. Furthermore, the above action (or other polynomial generalizations of it) is complex. If $a \neq b$, then the $\mathbb{Z}_2$ symmetry $U \rightarrow U^{\dagger}$ is broken. This implies $\langle {{\rm Tr\;}}U \rangle \neq \langle {{\rm Tr\;}}U^{\dagger} \rangle$. One may ask what this means in terms of manifestly gauge invariant operators. It means that the contributions from baryons and anti-baryons are different. Another related observation is that one may naively expand Eq. (\[abmodel\]) in a series $$\begin{aligned} Z = \int DU e^{-S} = \int DU \left( 1 + N ab {{\rm Tr\;}}U {{\rm Tr\;}}U^{\dagger} + N^2 (ab)^2 ({{\rm Tr\;}}U {{\rm Tr\;}}U^{\dagger})^2 \cdots \right) + \\ \nonumber \left( N^N a^N {{\rm Tr\;}}U^N + N^N b^N {{\rm Tr\;}}U^{\dagger N} \right) + \cdots.\end{aligned}$$ Here we have separated the “mesonic” and “baryonic” contributions. Due to the center symmetry only a center symmetry invariant combination of ${{\rm Tr\;}}U$ and ${{\rm Tr\;}}U^{\dagger}$ contributes. By a mesonic contribution we mean a product of traces for which the powers of all occurrences of the unitary matrix and its inverse sum to zero. For a baryonic operator, the sum is zero only modulo $N$, i.e., it is a nonzero integral multiple of $N$. If baryonic contributions are neglected then Eq. (\[abmodel\]) is equivalent to a model with parameters $a = b = \sqrt{ab}$. We will later see that for center symmetry invariant operators, this equivalence actually holds in the ungapped phase.
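The statement that only center-symmetric combinations of ${{\rm Tr\;}}U$ and ${{\rm Tr\;}}U^{\dagger}$ survive the group average can be illustrated numerically. The sketch below samples Haar-random $U(N)$ matrices via the QR decomposition with Mezzadri's phase fix; using $U(N)$ instead of $SU(N)$ is an assumption made for simplicity, sufficient for the mesonic moments shown, since $\langle {{\rm Tr\;}}U \rangle = 0$ while $\langle {{\rm Tr\;}}U \, {{\rm Tr\;}}U^{\dagger} \rangle = 1$:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    # QR-based Haar sampling; the diagonal phases of R must be divided out
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

tr = np.array([np.trace(haar_unitary(4)) for _ in range(20000)])
m1 = tr.mean()                  # <Tr U>            -> 0  (not center invariant)
m2 = np.mean(np.abs(tr) ** 2)   # <Tr U Tr U^dag>   -> 1  (center invariant)
```

In the $SU(N)$ case the baryonic terms with net winding a multiple of $N$ would additionally survive, which is exactly the difference discussed above.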
Expressing the action in the diagonal gauge, the effective action becomes $$S_{\rm eff} = S_{\rm Vdm} + i N {\cal M} \sum_{i=1}^N \theta_i + N \left(a \sum_{i=1}^N e^{i\theta_i} + b \sum_{i=1}^N e^{-i\theta_i} \right),$$ where the first term is the Vandermonde piece $$S_{\rm Vdm} = \sum_{i,j = 1,\, i \neq j}^N - \ln \left( \sin^2 \left( \frac{\theta_i - \theta_j}{2} \right) \right),$$ and $\mathcal{M}$ is the Lagrange multiplier which ensures that $\det(U) = 1$. At large $N$, the theory is dominated by the saddle-point equation $$\frac{\partial S_{\rm eff}}{\partial \theta_i} = 0,$$ which gives the equation of motion $$i \mathcal{M} + i \left( ae^{i\theta_i} - be^{-i\theta_i} \right) = \frac{2}{N} \sum_{j \neq i} \cot \left( \frac{\theta_i - \theta_j}{2} \right).$$ On substituting $z_i = e^{i\theta_i}$ the equation of motion becomes $$i \mathcal{M} + i az_i - i \left( \frac{b}{z_i} \right) = \frac{2i}{N} \sum_{j \neq i} \left( \frac{z_i + z_j}{z_i - z_j} \right),$$ and $\mathcal{M}$ is given by $$\mathcal{M} = \frac{1}{N} \sum_{i=1}^N \left( \frac{b}{z_i} - az_i \right).$$ In the saddle point, $\mathcal{M}$ may have a nonzero value and can be thought of as an effective baryon number. In the $N \rightarrow \infty$ limit, we can replace the summation by an integral over a nondecreasing function $$\frac{1}{N} \sum_{i=1}^N \rightarrow \frac{1}{2\pi} \int_{-\pi}^{\pi} ds,$$ and performing a change of variables from $s$ to the complex variable $z(s)$ \[eq:change-var1\] $$i\, \frac{ds}{dz} = \rho(z),$$ the equation of motion becomes \[eq:eqofmotion\] $$\mathcal{M} + a z - \frac{b}{z} = P \int_C \frac{dw}{2\pi i}\, \rho(w) \left( \frac{z + w}{z - w} \right),$$ and $P$ implies we are taking the principal value of the integral. Ungapped Phase -------------- In the GWW model, it is known that for small potential, i.e., $a < 0.5$, the theory is in an ungapped phase. Assuming a similar picture also holds for the *ab-model*, we solve it by taking an ansatz for $\rho(z)$ in the ungapped phase as $$\rho(z) = A_0 + \frac{A_1}{z} + \frac{A_2}{z^2} + \cdots,$$ then $$P \int_C \frac{dw}{2\pi i}\, \rho(w) \left( \frac{z + w}{z - w} \right) = -A_0 z + \frac{A_2}{z} + \cdots.$$ Comparing with the left hand side of Eq. (\[eq:eqofmotion\]) we have $$A_0 = -a \quad {\rm and} \quad A_2 = -b.$$ Therefore $\rho$ becomes $$\rho(z) = - a + \frac{A_1}{z} - \frac{b}{z^2} + \cdots.$$ We also find \[eq:lagrange\_multiplier\] $$\mathcal{M} = \int_C \frac{dz}{2\pi i}\, \rho(z) \left( \frac{b}{z} - az \right) = 0,$$ which indicates that the theory is in an ungapped phase. Demanding normalization of $\rho(z)$, $$\int_C \frac{dz}{2\pi i}\, \rho(z) = 1,$$ we fix $A_1 = 1$. Therefore, \[eq:ungapped\_density\] $$\rho(z) = - a + \frac{1}{z} - \frac{b}{z^2}.$$ We can solve for the contour, where $\rho(z)$ is positive definite, by integrating Eq. \[eq:change-var1\]: \[eq:change-var2\] $$is = \ln(z) - az + \frac{b}{z} + c.$$
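Given $a$ and $b$, this last relation already determines the eigenvalue contour numerically: writing $z = r e^{i\theta}$ and demanding that $s$ be real forces the real part of the right-hand side to vanish, which fixes $r(\theta)$ by a one-dimensional root-find at each $\theta$. A minimal sketch (taking $c = 0$, which the $\det(U)=1$ condition enforces; the bracketing loop is a heuristic of ours):

```python
import numpy as np
from scipy.optimize import brentq

def contour_radius(theta, a, b):
    """Radius r(theta) of the ungapped eigenvalue contour, from
    Re[ln z - a z + b/z] = 0 with z = r e^{i theta} and c = 0."""
    phi1, phi2 = np.angle(a), np.angle(b)
    f = lambda r: (np.log(r) - abs(a) * r * np.cos(theta + phi1)
                   + (abs(b) / r) * np.cos(theta - phi2))
    lo, hi = 0.1, 1.0
    while f(lo) * f(hi) > 0.0 and hi < 100.0:   # expand bracket heuristically
        hi *= 1.5
    return brentq(f, lo, hi)                    # refine with Brent's method
```

Sweeping $\theta$ over $[-\pi, \pi]$ then traces out the analytical contour that the Langevin eigenvalue clouds are compared against below.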
Since $s$ is purely real, and assuming that $$z = r(\theta)\, e^{i\theta}, \quad a = |a|e^{i \phi_1} \quad {\rm and} \quad b = |b|e^{i\phi_2},$$ the above equation is satisfied only if the real part of the right hand side is zero. That is, $$\ln(r(\theta)) - |a|\, r(\theta) \cos(\theta + \phi_1) + \frac{|b|}{r(\theta)} \cos(\theta - \phi_2) + {\rm Re}(c) = 0.$$ To fix $c$, we invoke the condition that $\det(U) =1$, i.e., $\sum_{i=1}^N \theta_i =0$, which translates to $$\int_C \frac{dz}{2\pi i}\, \rho(z) \ln(z) = 0,$$ where the branch-cuts are taken from $z = 0$ to the point $z(\pm \pi) $. Replacing $\ln(z)$ using Eq. \[eq:change-var2\], the above equation becomes $$\begin{aligned} && \int_C \frac{dz}{2\pi i} \left(is + az - \frac{b}{z} - c \right) \rho(z) = 0\\ && - c + \int_C \frac{dz}{2\pi i}\, \rho(z)\, is = 0\\ && - c + \frac{i}{2\pi} \int_{-\pi}^{\pi} s\, ds = 0\\ && c = 0.\end{aligned}$$ Hence the contour is obtained by solving the transcendental equation \[eq:ungapped\_contour\] $$\ln(r(\theta)) - |a|\, r(\theta) \cos(\theta + \phi_1) + \frac{|b|}{r(\theta)} \cos(\theta - \phi_2) = 0.$$ Now we can compare the distribution of eigenvalues from complex Langevin dynamics with the analytic result for any $(a, b)$ combination. In Fig. \[fig:ab-model\_a0p35\_b0p2\_no\_noise\] we show the analytical result and the data obtained through complex Langevin simulations without noise for parameters $a = 0.35$, $b = 0.2$ and $N = 100$. In Fig. \[fig:ab-model\_a0p35\_b0p2\_with\_noise\] we show the result with Gaussian noise turned on. We see an excellent agreement between the analytical and numerical results. ![The distribution of eigenvalues of the $ab$-model with parameters $a = 0.35$, $b = 0.2$ and $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations without noise. We used a fixed Langevin step size $\Delta \tau = 0.00001$ and evolved the system for $45000$ steps. The dashed unit circle is a guide to the eye. []{data-label="fig:ab-model_a0p35_b0p2_no_noise"}](FIGS/ap35bp2.pdf) ![The distribution of eigenvalues of the $ab$-model with parameters $a = 0.35$, $b = 0.2$ and $N = 100$. The solid curve is the analytical result.
The data are obtained through complex Langevin simulations with fixed Langevin step size $\Delta \tau = 0.00001$, thermalization steps $N_{\rm therm} = 45000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $250$ steps. The dashed unit circle is a guide to the eye. []{data-label="fig:ab-model_a0p35_b0p2_with_noise"}](FIGS/ap35bp2wnoise.pdf) We also note that the complex Langevin simulations show excellent agreement with analytical results when the parameters are also complex. In Fig. \[fig:ab-model\_a0p2i0p2\_bm0p1i0p1\_no\_noise\] we show the analytical result and the data obtained through complex Langevin simulations without noise for parameters $a = 0.2 + i 0.2$, $b = -0.1 + i 0.1$ and $N = 100$. In Fig. \[fig:ab-model\_a0p2i0p2\_bm0p1i0p1\_with\_noise\] we show the result with Gaussian noise turned on. ![The distribution of eigenvalues of the $ab$-model with parameters $a = 0.2 + i 0.2$, $b = -0.1 + i 0.1$ and $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations without noise. We used a fixed Langevin step size $\Delta \tau = 0.00001$ and evolved the system for $45000$ steps. The dashed unit circle is a guide to the eye.[]{data-label="fig:ab-model_a0p2i0p2_bm0p1i0p1_no_noise"}](FIGS/abimag_wout_noise.pdf) ![The distribution of eigenvalues of the $ab$-model with parameters $a = 0.2 + i 0.2$, $b = -0.1 + i 0.1$ and $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations with fixed Langevin step size $\Delta \tau = 0.00001$, thermalization steps $N_{\rm therm} = 45000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $250$ steps. The dashed unit circle is a guide to the eye. []{data-label="fig:ab-model_a0p2i0p2_bm0p1i0p1_with_noise"}](FIGS/abimag_with_noise.pdf) Gapped Phase ------------ In the gapped phase, similar to the GWW model, the eigenvalues lie on an open contour $C$.
To study this phase, we employ resolvent/spectral-curve method used in Ref. [@Hands:2010zp], and reviewed in Ref. [@Marino:2004eq]. The resolvent is defined as (z) = - \_j ( ). At large $N$ limit, $\omega(z)$ is analytic everywhere in the complex plane, except along a square-root branch cut running along $C$, and expressed as \[eq:resolventdefn\] (z) = - \_C (z’) . For a given potential $V(z)$, the equation of motion (similar to Eq. ) z V’(z) = P \_C (z’) can be expressed in terms of $\omega(z)$ using the Plemelj formulae z V’(z) = ,  z C, where $z \pm \epsilon$ lies on either side of the branch cut and $\epsilon \rightarrow 0$ limit is taken. We can also express $\rho(z)$ as the discontinuity of $\omega(z)$ across the cut $C$ as \[eq:density\_gapped\] z (z) = . The expectation value of any function $G(z)$ can be found as \[eq:average\_value\] \_C (z) G(z) = \_ (z) G(z). For $ab$-model \[eq:eqofmotiongapped\] (z) = - -az + + f(z)  , where $\tilde{z}, \tilde{z}^*$ are the end points of branch cut $C$ and $f(z)$ is an unknown function, which remains to be fixed. Since $\omega(z)$ has to be regular over the entire plane except along $C$ and the origin we can fix the form of $f(z)$ as f(z) = c + . Therefore $\omega(z)$ becomes (substituting $\tilde{z} = R e^{i \phi}$) (z) = - - az + + ( c + ). Normalization of $\rho(z)$, from Eq. , translates to \_[|z| 0]{} (z) = 1 and \_[|z| ]{} (z) = -1. This fixes $f(z)$ as f(z) = a - . We also get two more relations between $R$, $\mathcal{M}$ and $\cos(\phi)$ \[eq:cond1\] aR + = 1+ , and \[eq:cond2\] a () R + = 1 - . To fix the three unknowns completely, we need a third equation, which comes from invoking the $\det(U) = 1$ condition, from Eq. \[eq:gapped\_log\_integral\] \_ (z) (z) = 0, where $\tilde{C}$ is a contour encircling the branch cut $C$, and the branch cut of $\ln(z)$ ranges from $(-\infty, 0)$. Deforming the contour Fig. \[fig:original\_contour\] to the one in Fig. 
\[fig:deformed\_contour\] and evaluating in $\epsilon \rightarrow 0$ and $\Gamma \rightarrow \infty$ limits, we find that the divergences arising from the cutoffs $\Gamma$ and $\epsilon$ cancel separately and we arrive at the following condition \[eq:cond3\] &&(a R - )\ &&= (a R + ) ( ) (R). Now for a given $a, b$ we can numerically solve the Eqs. , , and for $R$, $\mathcal{M}$ and $\cos(\phi)$, and hence fix $\omega(z)$ completely. Also from Eq. we can fix $\rho(z)$ (z) = ( - ) . From Eq. , we can numerically compute $\mathcal{M}$, both in ungapped and gapped phases, and compare it against analytical results. Choosing $b = 2.0 a$ and varying $a$ from $0$ to $1.2$, we find that it matches very well both in ungapped and gapped regimes – see Fig. \[fig:value-of-M-at-a-2a\]. (Gap opening point can be found from Fig. \[fig:phase\_space\].) ![The value of $\mathcal{M}$ at $(a, 2a)$ for the $ab$-model with $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations with adaptive step size $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 35000$, generation steps $N_{\rm gen} = 250000$ and with measurements performed with an interval of $500$ steps. []{data-label="fig:value-of-M-at-a-2a"}](FIGS/rho2M.pdf) Similarly we compare other observables, $\big \langle {{\rm Tr\;}}(U) \big \rangle $ and $\big \langle {{\rm Tr\;}}(U^{-1}) \big \rangle $. Analytically $ \big \langle {{\rm Tr\;}}(U) \big \rangle $ is given by, [[Tr]{}]{}(U) = ( - a - ) z = -b &\ \_ z = ( ) (a (- 1)R\^2 - 2b ) & ![The value of ${{\rm Tr\;}}(U)$ at $(a, 2a)$ for the $ab$-model with $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations with adaptive step size $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 35000$, generation steps $N_{\rm gen} = 250000$ and with measurements performed with an interval of $500$ steps. 
[]{data-label="fig:tr-U"}](FIGS/rho2U.pdf) and $\big \langle {{\rm Tr\;}}(U^{-1}) \big \rangle $ is given by [[Tr]{}]{}(U\^[-1]{}) = ( - a - ) = -a &\ \_ = ( ) ( - 2 a ) & ![The value of ${{\rm Tr\;}}(U^{-1})$ at $(a, 2a)$ for the $ab$-model with $N = 100$. The solid curve is the analytical result. The data are obtained through complex Langevin simulations with adaptive step size $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 35000$, generation steps $N_{\rm gen} = 250000$ and with measurements performed with an interval of $500$ steps. []{data-label="fig:tr-U-inv"}](FIGS/rho2Uinv.pdf) In Fig. \[fig:tr-U\] and Fig. \[fig:tr-U-inv\] we show the observables $\big \langle {{\rm Tr\;}}(U) \big \rangle $ and $\big \langle {{\rm Tr\;}}(U^{-1}) \big \rangle$, respectively. We see that the analytical and numerical results show excellent agreement. Phase transition of $ab$-Model ------------------------------ The eigenvalue density Eq. \[eq:ungapped\_density\] on the contour Eq. \[eq:ungapped\_contour\] is proportional to $ds$, which in terms of $r(\theta)$ is given by $$\begin{aligned} ds &=& \frac{ds}{d\theta}\, d\theta \\ &=& \left[\, 1 - \frac{d}{d\theta} \left( |a|\, r(\theta) \sin(\theta + \phi_1) \right) - \frac{d}{d\theta} \left( \frac{|b|}{r(\theta)} \sin(\theta - \phi_2) \right) \right] d\theta,\end{aligned}$$ which is not positive definite for all $(a, b)$ combinations. Positivity fails when the function inside the brackets, $\left[ \dots \right]$, becomes negative. Restricting to $a, b \in \mathbb{R}$, the condition simplifies, since the gap opens about $\theta = 0$: \[eq:gap\_opening\_condition\] $$1 - ar(0) - \frac{b}{r(0)} \leq 0, \quad {\rm i.e.,} \quad r(0) \geq e^{\,1 - 2b/r(0)}.$$ From Eq. \[eq:ungapped\_contour\], $r(0)$ is given by \[eq:gap\_opening\_contour\] $$r(0) = e^{\,a r(0) - b/r(0)}.$$ The phase diagram of the model is shown in Fig. \[fig:phase\_space\]. ![Phase diagram of the $ab$-model in the positive $ab$-plane. The solid red line indicates that the phase diagram is symmetric under the exchange of $a$ and $b$.[]{data-label="fig:phase_space"}](FIGS/Phase_Space.pdf) It would be interesting to know how quantities change across the gap-opening transition, and also the order of the phase transition. To study that, we first restrict ourselves to a special case, $b = 0$, in our model. Then from Eqs.
and the gap opens at $a = \frac{1}{e}$, $R = e$; since the ungapped phase has no branch cuts in the eigenvalue distribution, $\phi$ should start from zero at the gap-opening point. The conditions Eqs. and simplify to \[eq:acond1\] aR () = 1 and Eq. to \[eq:acond2\] ( ) ( ) = ( ) ( ). The observable $\big \langle {{\rm Tr\;}}(U) \big \rangle $ becomes [[Tr]{}]{}(U) = 0 &\ ( ) & Since the first derivative of the free energy $F[a]$ is the expectation value of ${{\rm Tr\;}}(U)$ &=&\ &=& [[Tr]{}]{}(U)(a [[Tr]{}]{}(U))\ &=& [[Tr]{}]{}(U) , we find that it is continuous across the gap. Upon expanding about $$a = \frac{1}{e} + \delta a, \quad \cos(\phi) = 1 - 2\delta p \quad {\rm and} \quad R = e + \delta R,$$ the variation of $\delta \big \langle {{\rm Tr\;}}(U) \big \rangle $ is given by $$\delta \big\langle {{\rm Tr\;}}(U) \big\rangle = - e\, \delta p.$$ From Eqs. and we get $$\delta p = e\, \delta a + \cdots,$$ and \[eq:dpdRrel\] $$\delta p\, \ln(\delta p) = \cdots.$$ Eliminating $\delta R$ from the above two equations we get the equation $$\delta p \left(1 - \ln(\delta p)\right) = e\, \delta a.$$ To invert the above equation let us substitute $\delta p \rightarrow e^{k}$. Then we have $$(k-1)\, e^{(k - 1)} = - \delta a.$$ The above equation is of the form $x e^{x} = y$, which can be inverted to express $x$ as a function of $y$; the inverse is known as the Lambert-W function [@lambertW]. (It is often expressed as $W_{c}(y)$.) This function is in general a multivalued complex function, where $c \in \mathbf{Z}$ chooses each branch. Since $\delta a > 0$ and $\delta p \in \mathbf{R}$ we have two real-valued branches: $W_{0}(y)$ (the principal branch) and $W_{-1}(y)$. Therefore, $$\delta p = e^{W_0(-\delta a) + 1} \ \ {\rm or} \ \ e^{W_{-1}(-\delta a) + 1}.$$ For small values of $\delta a$ we know that $$\begin{aligned} \lim_{\delta a \rightarrow 0} W_{0}(-\delta a) &= 0,\\ \lim_{\delta a \rightarrow 0} W_{-1}(-\delta a) &\approx\ln (\delta a).\end{aligned}$$ Therefore, $\delta p$ will vanish as $\delta a \rightarrow 0$ only if we choose the second branch, i.e., $\delta p = e^{W_{-1}(-\delta a) + 1}$. Hence $$\delta \big\langle {{\rm Tr\;}}(U) \big\rangle = -e^{W_{-1}(-\delta a) + 2}.$$
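The branch selection can be checked numerically; scipy exposes the Lambert-W branches as `scipy.special.lambertw(y, k)` (the helper name `delta_p` is ours):

```python
import numpy as np
from scipy.special import lambertw

def delta_p(delta_a, branch=-1):
    """delta_p = exp(W_k(-delta_a) + 1); only the k = -1 branch gives a
    delta_p that vanishes as delta_a -> 0."""
    w = lambertw(-delta_a, k=branch)
    return float(np.real(np.exp(w + 1.0)))
```

For small $\delta a$ the $k=0$ branch tends to the constant $e$, while the $k=-1$ branch vanishes, in line with the choice made above.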
Now the second derivative of the free energy, $\frac{\partial^2 F}{\partial a^2}$, goes to zero as $\delta a \rightarrow 0$ and is continuous across the gap. However, the third derivative, $\frac{\partial^3 F}{\partial a^3}$, diverges as $\delta a \rightarrow 0$. Hence the model has a third-order phase transition. It can also be shown that similar arguments hold in the generic case $b \neq 0$. Thus we conclude that the $ab$-model displays a third-order phase transition. Gauge Theory to Unitary Matrix Model {#sec:gt-to-u-m-model} ==================================== A unitary matrix model arises in a one-loop formulation of QCD \[and analogous $SU(N)$ gauge theories\] on compact spaces (often $S^1 \times S^3$). This was originally derived in Refs. [@Sundborg:1999ue; @Aharony:2003sx; @AlvarezGaume:2005fv; @AlvarezGaume:2006jg] for theories with more general matter content. The one-loop effective action of QCD on $S^1 \times S^3$ with inverse temperature $\beta$, chemical potential $\mu$ and quark mass $m$ has the following form [@Hands:2010zp], with the thermal Polyakov line as the unitary matrix variable \[eq:qcd-s1-s3-action\] $$\begin{aligned} S &=& -\sum_{n=1}^{\infty} \frac{1}{n}\, z_b(n\beta)\, {{\rm Tr\;}}U^{n}\, {{\rm Tr\;}}U^{\dagger n} \\ \nn && + \sum_{n=1}^{\infty} \frac{(-1)^n}{n}\, N_f\, z_f(n\beta, mR) \left( e^{n\beta\mu}\, {{\rm Tr\;}}U^{n} + e^{-n\beta\mu}\, {{\rm Tr\;}}U^{\dagger n} \right),\end{aligned}$$ where $R$ is the radius of $S^3$ and $N_f$ is the number of flavors of fundamental fermions. The quadratic term in the Polyakov loop is the contribution from the adjoint fields and the linear terms are the contribution from the fundamental matter fields. Here, we have taken the adjoint contribution to be bosonic and the contribution from the fundamental fields to be fermionic. Note that in the free theory the effective action is determined in terms of the single-particle (bosonic and fermionic) partition functions $$z_b(\beta) = 2 \sum_{l=1}^{\infty} l(l+2)\, e^{-\beta(l+1)/R},$$ and $$z_f(\beta, mR) = 2 \sum_{l=1}^{\infty} l(l+1)\, e^{-\beta \epsilon_l}, \quad \epsilon_l = \frac{1}{R}\sqrt{\left(l+\frac{1}{2}\right)^2 + (mR)^2}.$$ Also note that we will be using the dimensionless variables $\beta/R$, $\mu R$ and $mR$ in numerical simulations.
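Both single-particle sums converge rapidly for any $\beta/R > 0$ and can simply be truncated in a numerical implementation. A sketch in dimensionless variables (the fermionic energies $\epsilon_l R = \sqrt{(l+1/2)^2 + (mR)^2}$ are taken from Ref. [@Hands:2010zp]; the function names are ours):

```python
import numpy as np

def z_b(beta_over_R, lmax=2000):
    """Bosonic single-particle partition function on S^3 (truncated sum):
    z_b = 2 sum_l l(l+2) exp(-(beta/R)(l+1))."""
    l = np.arange(1, lmax + 1)
    return 2.0 * np.sum(l * (l + 2) * np.exp(-beta_over_R * (l + 1)))

def z_f(beta_over_R, mR=0.0, lmax=2000):
    """Fermionic single-particle partition function (truncated sum),
    assuming energies eps_l R = sqrt((l + 1/2)^2 + (mR)^2):
    z_f = 2 sum_l l(l+1) exp(-(beta/R) eps_l R)."""
    l = np.arange(1, lmax + 1)
    eps = np.sqrt((l + 0.5) ** 2 + mR ** 2)
    return 2.0 * np.sum(l * (l + 1) * np.exp(-beta_over_R * eps))
```

The exponential suppression in $l$ makes a truncation at a few thousand modes more than sufficient at the temperatures simulated here.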
An analogous action for the simpler $0+1$-dimensional case would be $$z_b = 0, \quad {\rm and} \quad z_f = 2 e^{-\beta m},$$ where the parameter $m$ is the mass of the fundamental fermions. In the low temperature limit, $\beta \to \infty$, we have $z_b(\infty) = 0$ and so the gluonic contribution is negligible. Thus the action is $$S = S_{\rm Vdm} + S_f,$$ where $S_{\rm Vdm}$ is the Vandermonde piece of the action and $S_f$ is the fundamental fermionic contribution. The fermionic part can be resummed into a logarithm \[eq:action-multi-level\] $$S[U] = - \sum_{l=1}^{\infty} \sigma_l\, {{\rm Tr\;}}\ln \left( 1 + \xi_l\, U \right),$$ where $$\begin{aligned} \sigma_l &=& 2l (l+1)\, N_f,\\ \xi_l &=& e^{\beta(\mu - \epsilon_l)}.\end{aligned}$$ Observables ----------- We would like to simulate the action given in Eq. \[eq:action-multi-level\] using the complex Langevin method. We can study several interesting observables in this model. We briefly describe them below. 1. Polyakov line $P$ and inverse Polyakov line $P^{-1}$: These are the most natural set of observables to study the confined/deconfined phases of the theory. 2. Fermion number $f_N$: It gives the number of fermions minus the number of anti-fermions in a given volume $$f_N = \frac{1}{\beta} \left( \frac{\partial \ln Z}{\partial \mu} \right).$$ In the model we study here we have a single chemical potential $\mu$. In general there can be a chemical potential for each fermion flavor. The quark number susceptibility $\chi_f$ measures the response of the fermion number density to infinitesimal changes in the chemical potential, $$\chi_f = \frac{\partial f_N}{\partial \mu}.$$ This observable follows the behavior of the Polyakov line. Thus, it also serves as an indicator of confinement-deconfinement transitions for nonzero chemical potential. 3. Pressure $p$: $$p = \frac{1}{\beta V_3} \ln Z,$$ with $V_3$ denoting the spatial volume. 4. Energy $E$: It can be constructed from the pressure and the fermion number density $$E = - p V_3 + \mu\, f_N.$$ It is also possible to compute the chiral condensate and the average phase, though we will not compute them in this work.
The chiral condensate $\langle {{\overline{\psi}}}\psi \rangle$ is given by $$\langle {{\overline{\psi}}}\psi \rangle = - \lim_{m \rightarrow 0} \left( \frac{1}{\beta V_3} \frac{\partial \ln Z}{\partial m} \right),$$ and the average phase $\langle e^{i\phi}\rangle_{pq}$ has the form $$\langle e^{i\phi}\rangle_{pq} = \frac{Z}{Z_{pq}},$$ where $pq$ refers to the phase-quenched theory. Single Level Model with Positive Chemical Potential {#sec:single-level-model} --------------------------------------------------- We can truncate the action given in Eq. (\[eq:action-multi-level\]) in a double scaling limit: $$\begin{aligned} \beta \rightarrow \infty, \\ \nn \mu \rightarrow \epsilon_0, \\ \nn \exp(\beta(\mu-\epsilon_l))=\xi,\end{aligned}$$ where $\epsilon_0$ is a fixed quark energy level and we call $\xi$ the transition parameter. Only the contribution from a single level survives here and the action takes the form \[eq:act-single-level\] $$S[U] = - \sigma\, {{\rm Tr\;}}\ln \left( 1 + \xi\, U \right).$$ The effective action on the complexified angle variables includes the Vandermonde piece and a Lagrange multiplier. In the large $N$ limit, the integral over the angles is dominated by a saddle point obtained by solving the equation of motion that follows from the effective action involving Eq. (\[eq:act-single-level\]) $$\frac{\partial S_{\rm eff}}{\partial \theta_i} = i N {{\cal N}}- \frac{i \sigma \xi\, e^{i\theta_i}}{1 + \xi\, e^{i\theta_i}} - \sum_{j (\neq i)}^N \cot \left( \frac{\theta_i - \theta_j}{2} \right).$$ Here also the action is not hermitian, giving rise to the [*sign problem*]{} in the presence of a chemical potential. As a result the saddle point configuration will lie out in the complex plane. If we define $z_i = \exp(i \theta_i)$ then in the presence of the non-real potential the $z_i$ will move off the unit circle in the $z$-plane. We can explore the nature of the eigenvalue distribution in the complex plane for various values of the transition parameter $\xi$. We find that when $\xi$ is either very small or very large, the potential vanishes and so we expect the $\{z_i\}$ to be uniformly distributed around the unit circle. Thus, when $\mu$ varies from $\mu \ll \epsilon$ to $\mu \gg \epsilon$ the quark energy level becomes occupied and the effective fermion number jumps by $\sigma$. In Ref.
[@Hands:2010zp] the authors provide a detailed description of this transition. Let us look at the various regimes of $\xi$ and see how they affect the eigenvalue distribution, following the analytical study given in Ref. [@Hands:2010zp]. 1. [*The small $\xi$ confined phase*]{} In the small $\xi$ confining phase the effective fermion number vanishes, ${{\cal N}}= 0$, and the Polyakov line expectation values are P = 0,  P\^[-1]{} = . Thus we have $P \neq P^{-1}$, as a result of the complex action. As $\xi$ is increased the contour of the eigenvalue distribution opens into an arc, just as in the matrix model solved by Gross and Witten [@Gross:1980he] and Wadia [@Wadia:1980www; @Wadia:1980cp]. The line of phase transitions in the $(\mu, T)$ plane corresponds to the straight line = - T . Note that this approximation is valid only in the low temperature ($\beta \to \infty$) limit. 2. [*The large $\xi$ confined phase*]{} In this phase the effective fermion number is [[N]{}]{}= , indicating that the level is now occupied. The Polyakov line expectation values are P = ,  P\^[-1]{} = 0. Comparing with the previous case, the behavior of $P$ and $P^{-1}$ swaps over under the replacement $\xi \to \xi^{-1}$. The large $\xi$ confined phase persists until the value = \_2 = . For smaller values of $\xi$ the contour of the eigenvalue distribution is not closed and the phase does not exist. The points of transition $\xi = \xi_1$ and $\xi = \xi_2$ satisfy $\xi_1 \xi_2 = 1$. In the $(\mu, T)$ plane the boundary lies along the straight line = + T , again valid in the low temperature limit. 3. [*The deconfined phase*]{} In the region $\xi_1 \leq \xi \leq \xi_2$, experience with the GWW matrix model suggests that the eigenvalue distribution exhibits the shape of an open contour. In this regime we get a condition = . This equation determines ${{\cal N}}$ as a function of $\xi$.
From the above equation it follows that across the transitions at $\xi = \xi_1$ and $\xi = \xi_2$, the fermion number density ${{\cal N}}$ and its first derivative $\partial {{\cal N}}/ \partial \mu$ are continuous; however, higher derivatives are discontinuous. Since ${{\cal N}}$ is the effective fermion number, the first derivative of the grand potential, it follows that the transitions are third order, just as in the original GWW model. For a single winding, the Polyakov lines are P = ,  P\^[-1]{} = . Using complex Langevin dynamics we have simulated the single level matrix model given by the action in Eq. . In Fig. \[fig:eigs-nc500-nf500-m0-b30\_noise\] we show the eigenvalue distributions of the Polyakov line in the confined and deconfined phases as a function of the logarithm of the transition parameter, $\log \xi$, for the $SU(N)$ case with $N = N_f = 500$ and quark mass $m = 0$. We see that the eigenvalue distributions start with a closed contour (confined phase), pass through an open contour (deconfined phase) and again go into a closed contour. (This figure can be compared with Fig. 12 in Sec. 4.1 of Ref. [@Hands:2010zp], where it was obtained through analytical methods.) ![The eigenvalue distributions in the confined and deconfined phases as a function of $\log \xi$ for the single level matrix model with positive chemical potential. (See Eq. for the form of the action.) Here $N = N_f = 500$ and quark mass $m = 0$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 18000$, generation steps $N_{\rm gen} = 2000$ and with measurements performed with an interval of $100$ steps. The solid unit circles are guides to the eye.[]{data-label="fig:eigs-nc500-nf500-m0-b30_noise"}](FIGS/su500_eigs_lg_xi.pdf){width="5.5in"} In Fig. \[fig:fN\_N500\] we provide the (normalized) effective fermion number $\langle f_N \rangle$, and in Fig.
\[fig:P\_inv\_P\_N500\] the Polyakov line expectation value $\langle P \rangle$ and the inverse Polyakov line expectation value $\langle P^{-1} \rangle$ across a pair of GWW transitions from the small $\xi$ confined phase through the deconfined phase to the large $\xi$ confined phase. The transitions from confined/deconfined phases occur when either $\langle P \rangle$ or $\langle P^{-1} \rangle$ vanish. The parameters used are: $N = N_f = 3~{\rm and}~500$ and quark mass $m = 0$. The simulations show excellent agreement with the analytical results in the large $N$. ![The (normalized) effective fermion number $\langle f_N \rangle$ across the pair of GWW transitions from the small $\xi$ confined phase through the deconfined phase to the large $\xi$ confined phase for the single level model with positive chemical potential. (See Eq. for the form of the action.) The solid curve is the analytical result ($N = \infty$). The data points are obtained through complex Langevin simulations. We used adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 10000$ and measurements are performed with an interval of $100$ steps. We show simulation data for quark mass $m = 0$ and for $N = N_f = 500$ and $N = N_f = 3$.[]{data-label="fig:fN_N500"}](FIGS/lgxi_fN.pdf){width="4.5in"} ![The Polyakov line $\langle P \rangle$ and inverse Polyakov line $\langle P^{-1} \rangle$ across the pair of GWW transitions from the small $\xi$ confined phase through the deconfined phase to the large $\xi$ confined phase for the single level model with positive chemical potential. (See Eq. for the form of the action.) The transitions from confined/deconfined phases occur when either $\langle P \rangle$ or $\langle P^{-1} \rangle$ vanish. The solid and dotted curves are the analytical results ($N = \infty$) for $\langle P \rangle$ and $\langle P^{-1} \rangle$, respectively. 
The data points are obtained through complex Langevin simulations. We used adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 10000$ and measurements are performed with an interval of $100$ steps. We show simulation data for quark mass $m = 0$ and for $N = N_f = 500$ and $N = N_f = 3$.[]{data-label="fig:P_inv_P_N500"}](FIGS/lgxi_P_inv_P.pdf){width="4.5in"} In Figs. \[fig:single-level-p-invp\] and \[fig:single-level-fn\] we show the Polyakov lines and fermion number density for a range of simulation parameters of the single level matrix model (see Eq. for the form of the action): $\beta = \{10, 15, \cdots, 100\}$ and $\mu = \{3.0, 3.025, 3.05, \cdots, 4.0\}$. The quark energy level of the model is fixed to the third level $\epsilon \equiv \epsilon_{(l=3)} = 3.5$. The Polyakov loops peak around $\mu = 3.5$ in this model. In Fig. \[fig:single-level-p-invp\] we show the behavior of Polyakov and inverse Polyakov loops for $\beta = \{25, 50, 75, 100\}$. It is clear that the widths of the Polyakov loops decrease as the temperature is reduced (large $\beta$) and the behavior of inverse Polyakov line precedes that of the Polyakov line as a function of $\mu$. In Fig. \[fig:runtime-hist-b50-75\] we show the Langevin evolution history of the Polyakov loop observable in this model for $\beta = 50, 75$ and with $\mu = 3.0, 3.3, 3.5$ for each $\beta$ value. We note that the observables saturate to their equilibrium values rather quickly in this model. In Fig. \[fig:single-level-fn\] we show the behavior of the (normalized) fermion number density $\langle f_N \rangle$ as a function of chemical potential and inverse temperature. The transition in fermion number becomes sharper as the temperature is decreased (high $\beta$). The model is in a deconfined phase when $ 0 < \langle f_N \rangle < 1$. 
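The Langevin histories shown in these figures are driven by the drift of the effective single-level action. A minimal sketch of that drift on the complexified angles (our function names; the overall normalization of the Vandermonde term follows the $i \neq j$ double-sum convention and may differ between references):

```python
import numpy as np

def drift(theta, sigma, xi, calN=0.0):
    """K_i = -dS_eff/dtheta_i for S_eff = S_Vdm + i N calN sum_i theta_i
    - sigma sum_i ln(1 + xi e^{i theta_i}); theta may be complex."""
    theta = np.asarray(theta, dtype=complex)
    N = theta.size
    z = np.exp(1j * theta)
    fermi = 1j * sigma * xi * z / (1.0 + xi * z)   # from the fermionic term
    diff = theta[:, None] - theta[None, :]
    np.fill_diagonal(diff, 1.0)                    # dummy entry on the diagonal
    cot = 1.0 / np.tan(diff / 2.0)
    np.fill_diagonal(cot, 0.0)                     # exclude j = i
    vdm = 2.0 * np.sum(cot, axis=1)                # Vandermonde repulsion
    return -1j * N * calN + fermi + vdm
```

For a uniform eigenvalue distribution on the unit circle and vanishing fermionic term, the Vandermonde contributions cancel pairwise and the drift vanishes, as expected for the free confined configuration.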
![Polyakov line $\langle P \rangle$ and inverse Polyakov line $\langle P^{-1} \rangle$ as a function of chemical potential for single level matrix model with quark energy level $\epsilon \equiv \epsilon_{(l=3)} = 3.5$ and quark mass $m = 0$. (See Eq. for the form of the action.) Here $N = N_f = 500$ and $\beta = 25, 50, 75, 100$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.000005$, thermalization steps $N_{\rm therm} = 5000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $50$ steps.[]{data-label="fig:single-level-p-invp"}](FIGS/single_level_su500_mu_p_invp.pdf){width="5.0in"} ![The (normalized) fermion number density $\langle f_N \rangle$ as a function of chemical potential $\mu$ and inverse temperature $\beta$ for single level matrix model with quark energy level $\epsilon \equiv \epsilon_{(l=3)} = 3.5$ and quark mass $m = 0$. (See Eq. for the form of the action.) Here $N = N_f = 500$. The model is in a deconfined phase when $0 < \langle f_N \rangle < 1$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.000005$, thermalization steps $N_{\rm therm} = 5000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $50$ steps.[]{data-label="fig:single-level-fn"}](FIGS/single_model_fn_phase_su500_mu_beta.pdf){width="5.0in"} When the quark mass is non-vanishing in QCD, the expectation values of bulk observables such as the fermion number density, Polyakov lines and energy, exhibit the ‘Silver Blaze’ behavior. The bulk observables are nearly zero until onset [@Cohen:2003kd] to a deconfinement transition, which occurs when the chemical potential increases to the value of the lightest quark mass. We simulate the model given by the action in Eq. to see this phenomenon. In this model the onset occurs at $\mu = m$. The Polyakov line is given in Fig. 
\[fig:Silver-Blaze\] (Left) as a function of chemical potential for large quark mass, near the onset $\mu = m = 25$ for $N = N_f = 500$ and $\beta = 25$ (low $T$). In the large $m$ limit, similar to the $m = 0$ case, the behavior of inverse Polyakov line $\langle P^{-1} \rangle$ precedes that of $\langle P \rangle$ as a function of $\mu$. The transition in $\mu$ occurs around onset at $m$. In Fig. \[fig:Silver-Blaze\] (Right) we show the effective fermion number density as a function of chemical potential. As we can see in the figures the bulk observables are close to zero until the onset transition at $\mu = m$. The observables rise smoothly from the onset and as $\mu$ is increased further from $m$ the observables behave as they would for $m = 0$. This is reflected in the oscillations that appear in the observables at larger $\mu$. The oscillations in the Polyakov and inverse Polyakov loops are clearly visible. In order to see the prominent nature of oscillations in the fermion number density one has to normalize this observable by its Stefan-Boltzmann value. (See Ref. [@Hands:2010zp] for a discussion on this.) ![The Silver Blaze behavior of observables $\langle P \rangle$ and $\langle P^{-1} \rangle$, and $\langle f_N \rangle$ at non-zero quark mass $m$ for the model given by the action in Eq. . (Left) Polyakov line $\langle P \rangle$ and inverse Polyakov line $\langle P^{-1} \rangle$ and (Right) fermion number $\langle f_N \rangle$ as a function of chemical potential for large quark mass near onset at $\mu = m = 25$ (marked by the solid vertical lines in the figures). Here $N = N_f = 500$ and $\beta = 25$ (low $T$). 
The data are obtained through complex Langevin simulations with an adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 5000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $50$ steps.[]{data-label="fig:Silver-Blaze"}](FIGS/b25m25su500_mu_P_inv_P_fN.pdf){width="5.0in"} Single Level Model with $U$ and $U^\dagger$ {#sec:single-level-model-pm-mu} ------------------------------------------- In this section we consider the phase diagram of the model given by the following action \[eq:act-xi1\_xi2\] S\[U\] = - , where $$\begin{aligned} \xi_1 &= e^{\beta(\mu - \epsilon)}, \\ \xi_2 &= e^{\beta(-\mu - \epsilon)}.\end{aligned}$$ Such a model naturally arises from $0+1$-dimensional gauge theory with a fundamental fermion. In Fig. \[fig:two-fuga-fn\_high\_T\] we provide the phase diagram of this model on the $(\mu, \beta)$ plane for the level $l=1$. (Corresponding to quark energy level $\epsilon = 1.5$ and $\sigma = 4$.) From the behavior of the expectation value of the fermion number density we see that the phase transition from confined to deconfined phase is smooth on the $(\mu, \beta)$ plane even at high temperature ($0.1 \leq \beta \leq 2.0$). ![The (normalized) fermion number density $\langle f_N \rangle$ as a function of chemical potential $\mu$ and inverse temperature $\beta$ for the matrix model given by the action in Eq. . The model has fixed quark energy level $\epsilon \equiv \epsilon_{(l=1)} = 1.5$, quark mass $m = 0$ and $N = N_f = 100$. The model is in a deconfined phase when $ 0 < \langle f_N \rangle < 1$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. 
[]{data-label="fig:two-fuga-fn_high_T"}](FIGS/U_Udag_xi1_xi2_model_fn_mu_beta_phase_su100.pdf){width="4.5in"} Single Level Model with Interaction {#sec:single-level-model-int} ----------------------------------- It would be interesting to consider the single-level matrix model with a nontrivial interaction turned on. We take a Polyakov line interaction term of the form $$S_{\rm int}[U] = g\, ({\rm Tr}\, U)\, ({\rm Tr}\, U^{-1}).$$ Here $g$ denotes a coupling parameter. Thus we have \[eq:act-single-level-int\] S\[U\] = - ( 1 + e\^[(- )]{} U ) + S\_[int]{}\[U\]. Here also we take the quark energy level to be fixed at $\epsilon \equiv \epsilon_{(l=3)} = 3.5$. The action is again not Hermitian, giving rise to the sign problem in the presence of a chemical potential. In Figs. \[fig:single-level-fn-int\] and \[fig:single-level-poly-int\] we plot the fermion number density and the Polyakov lines of the interacting model for various values of the coupling $g = 0, 5, 20, 100$. It is evident that the confinement/deconfinement transition becomes sharper as the interaction strength is increased. The behavior of the Polyakov lines shows that the model is in a confined phase for most of the values of the chemical potential. ![The (normalized) fermion number density $\langle f_N \rangle$ as a function of chemical potential $\mu$ for the interacting single-level matrix model, given by the action in Eq. , with couplings $g = 0, 5, 20$ and $100$. The quark energy level is taken as $\epsilon \equiv \epsilon_{(l=3)} = 3.5$ and the quark mass is $m = 0$. Here $N = N_f = 500$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.000005$, thermalization steps $N_{\rm therm} = 5000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed with an interval of $50$ steps. The model is in a deconfined phase when $0 < \langle f_N \rangle < 1$.
The data show that the phase transition becomes sharper as the interaction strength $g$ is increased.[]{data-label="fig:single-level-fn-int"}](FIGS/b30_g_0_5_20_100_fN.pdf){width="4.5in"} ![The Polyakov line and inverse Polyakov line across a pair of GWW transitions for the interacting single-level matrix model, given by the action in Eq. , with a fixed quark energy level $\epsilon \equiv \epsilon_{(l=3)} = 3.5$, quark mass $m = 0$ and $N = N_f = 500$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.000005$, thermalization steps $N_{\rm therm} = 5000$, generation steps $N_{\rm gen} = 5000$ and with measurements performed after every $50$ steps. The solid lines are guides to the eye. The plots indicate that the model prefers to stay in a confined phase as the interaction strength $g$ is increased.[]{data-label="fig:single-level-poly-int"}](FIGS/int_su500_mu_p_invp.pdf){width="4.5in"} Conclusions and Discussions {#sec:conclusions} =========================== In this work we have successfully used complex Langevin dynamics with stochastic quantization to simulate the thermodynamics of large $N$ unitary matrix models with complex actions. We started with a simple matrix model called the $ab$-model and investigated its phase structure analytically and numerically. The numerical simulations show an excellent match with analytical results. We also studied a model obtained from the effective theory of QCD on $S^1 \times S^3$ at low temperature and finite quark chemical potential. At zero quark mass and low temperature our simulations showed a series of GWW confinement-deconfinement phase transitions as a function of the chemical potential. The phases are characterized by the distribution of eigenvalues of the Polyakov line in the complex plane.
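The characterization of the phases by the Polyakov-line eigenvalue distribution can be illustrated with a toy computation (not from the paper's simulation code): a uniform distribution of eigenvalue angles on the unit circle gives $P \approx 0$ (confined), while a clustered distribution gives $|P| > 0$ (deconfined). The Gaussian angle spread of $0.5$ below is an arbitrary illustrative choice.

```python
# Toy sketch: Polyakov line P = Tr(U)/N from eigenvalue angles theta_k.
# Uniform angles (confined phase) -> P ~ 0; clustered angles
# (deconfined phase) -> |P| > 0.
import numpy as np

def polyakov_line(theta):
    # average of e^{i theta_k} over the eigenvalue angles
    return np.exp(1j * theta).mean()

rng = np.random.default_rng(7)
N = 200000
theta_confined = rng.uniform(-np.pi, np.pi, N)   # uniform on the circle
theta_deconfined = rng.normal(0.0, 0.5, N)       # clustered near theta = 0

print(abs(polyakov_line(theta_confined)))    # ~ 1/sqrt(N), close to 0
print(abs(polyakov_line(theta_deconfined)))  # ~ exp(-0.5**2/2) ~ 0.88
```

For a clustered Gaussian spread $\sigma$ the expected value is $e^{-\sigma^2/2}$, which is why the deconfined sample gives $|P| \approx 0.88$ here.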
In the large quark mass regime we were also able to observe the Silver Blaze behavior in that the bulk observables are roughly zero until the onset transition to the deconfined phase, which occurs at $\mu = m$. We also simulated the model with a simple nontrivial Polyakov loop interaction turned on. The model prefers to stay in the confined phase as the interaction strength is increased. We also note that each confinement-deconfinement transition in the Polyakov loop is associated with a quark energy level transition. It is interesting to note that non-monotonic behavior of Polyakov loops has been observed in lattice simulations of QCD with gauge group $SU(2)$ near its saturation density in Ref. [@Hands:2010gd]. We successfully applied complex Langevin dynamics to QCD on $S^1 \times S^3$ with finite chemical potential and computed several bulk observables. We provide our simulation results in Appendix \[sec:qcd-finite-cp\]. There are several interesting future directions. One could consider complex Langevin simulations of the model with several quark flavors with masses $m_f$ and different chemical potentials $\mu_f$. One could also add other types of nontrivial interaction terms into the model and look for cross-over transitions on the $(\mu, \beta)$ plane [@Basu:2008uc]. It would also be interesting to see if there exists an AdS/CFT type gravitational dual of the models we studied here. One could ask whether the infinite sequence of GWW transitions that we observe in the matrix model can be seen in the dual gravitational description. We gratefully acknowledge support from the International Centre for Theoretical Sciences (ICTS-TIFR), the Infosys Foundation and the Indo-French Centre for the Promotion of Advanced Research (IFCPAR/CEFIPRA). We thank Spenta Wadia, Takehiro Azuma, Jun Nishimura and Andrei Alexandru for a careful reading of the manuscript and providing valuable suggestions.
We also thank Gautam Mandal, Antonio Gonzalez-Arroyo and Shiraz Minwalla for valuable comments and discussions. We also thank the organizers of the ICTS program “Nonperturbative and Numerical Approaches to Quantum Gravity, String Theory and Holography”, 2018, where this work was presented. PB thanks the TIFR theory group for inviting him to present this work as a part of the Quantum Spacetime Seminars. QCD on $S^1 \times S^3$ at Finite Chemical Potential {#sec:qcd-finite-cp} ==================================================== In this section we discuss the results obtained through complex Langevin simulations of QCD on $S^1 \times S^3$ with finite chemical potential, zero quark mass and at low temperature, given by the action in Eq. (\[eq:action-multi-level\]). Fermion number $\langle f_N \rangle$ ------------------------------------ In Fig. \[fig:fN-nc3\_30-nf3\_30-m0-b30\] we show $\langle f_N \rangle$ as a function of $\mu$ at low temperature for $m = 0$. The presence of an occupation level structure is evident. The transitions occur when $\epsilon_l - \mu$ changes sign, that is, when $\mu$ passes a quark energy level. It is interesting to compare with the results obtained in Ref. [@Hands:2010zp]. We also note that in Ref. [@Banerjee:2010kc] Banerjee and Chandrasekharan observed the same level structure in the particle number in the nonlinear $O(2)$ sigma model. The fermion number can be used as an order parameter of the confinement-deconfinement transitions in the large $N$ theory. The first and second derivatives of the grand potential, $\langle f_N \rangle$ and $\langle \partial f_N/\partial \mu \rangle$, are continuous functions of the chemical potential, but the third derivative $\langle \partial^2 f_N/\partial \mu^2 \rangle$ is discontinuous. This indicates that the transitions are of third order, i.e. of the GWW type. ![Expectation values of the effective fermion number $\langle f_N \rangle$ as a function of the quark chemical potential for QCD on $S^1 \times S^3$.
(See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $m = 0$, inverse temperature $\beta = 30$, $N = N_f = 3$ (Left) and $N = N_f = 30$ (Right). The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid lines are to guide the eye.[]{data-label="fig:fN-nc3_30-nf3_30-m0-b30"}](FIGS/su3_mu_fN.pdf){width="99.00000%"} ![Expectation values of the effective fermion number $\langle f_N \rangle$ as a function of the quark chemical potential for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $m = 0$, inverse temperature $\beta = 30$, $N = N_f = 3$ (Left) and $N = N_f = 30$ (Right). The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid lines are to guide the eye.[]{data-label="fig:fN-nc3_30-nf3_30-m0-b30"}](FIGS/su30_mu_fN.pdf){width="99.00000%"} Polyakov Lines $\langle P \rangle$ and $\langle P^{-1} \rangle$ --------------------------------------------------------------- When the chemical potential is zero the Polyakov line $\langle P \rangle$ and the conjugate Polyakov line $\langle P^{-1} \rangle$ coincide; this is no longer the case for non-zero chemical potential. In Fig. \[fig:fN-nc3\_30-nf3\_30-m0-b30\] we show $\langle P \rangle$ and $\langle P^{-1} \rangle$ as a function of $\mu$. Each spike in $\langle P \rangle$ and $\langle P^{-1} \rangle$ corresponds to a level transition in $\langle f_N \rangle$.
They exhibit similar behavior as a function of $\mu$; however, the behavior of $\langle P^{-1} \rangle$ always precedes that of $\langle P \rangle$ at the start and finish of each level transition. We note that the lines peak at $\mu = 1.5, 2.5, \cdots$. We also note that the widths of the deconfined regions increase as $\mu$ is increased. ![Expectation values of the Polyakov line $\langle P \rangle$ and inverse Polyakov line $\langle P^{-1} \rangle$ as a function of the quark chemical potential $\mu$ for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $m = 0$, inverse temperature $\beta = 30$, $N = N_f = 3$ (Left) and $N = N_f = 30$ (Right). The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid lines are to guide the eye.[]{data-label="fig:fN-nc3_30-nf3_30-m0-b30"}](FIGS/su3_mu_P_inv_P.pdf){width="99.00000%"} ![Expectation values of the Polyakov line $\langle P \rangle$ and inverse Polyakov line $\langle P^{-1} \rangle$ as a function of the quark chemical potential $\mu$ for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $m = 0$, inverse temperature $\beta = 30$, $N = N_f = 3$ (Left) and $N = N_f = 30$ (Right). The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid lines are to guide the eye.[]{data-label="fig:fN-nc3_30-nf3_30-m0-b30"}](FIGS/su30_mu_P_inv_P.pdf){width="99.00000%"} Pressure $\langle p \rangle$ and Energy $\langle E \rangle$ ----------------------------------------------------------- In Figs.
\[fig:p-E-nc3-nf3-m0-b30\] and \[fig:p-E-nc30-nf30-m0-b30\] we provide the pressure (multiplied by the 4-volume) $\langle p \rangle$ and the energy $\langle E \rangle = - \langle p \rangle + \mu \langle f_N \rangle$. We note that the pressure exhibits a level structure. The energy levels are not horizontal: the factor $\mu$ in front of the fermion number causes the levels to rise linearly with $\mu$. ![(Left) Pressure $\langle p \rangle$ and (Right) energy $\langle E \rangle$ as a function of the quark chemical potential for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $N = N_f = 3$, $m = 0$ and inverse temperature $\beta = 30$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed every $100$ steps. The solid lines are guides to the eye.[]{data-label="fig:p-E-nc3-nf3-m0-b30"}](FIGS/su3_mu_E_p.pdf){width="5.0in"} ![(Left) Pressure $\langle p \rangle$ and (Right) energy $\langle E \rangle$ as a function of the quark chemical potential for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $N = N_f = 30$, $m = 0$ and inverse temperature $\beta = 30$. The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid lines are guides to the eye.[]{data-label="fig:p-E-nc30-nf30-m0-b30"}](FIGS/su30_mu_E_p.pdf){width="5.0in"} In Fig. \[fig:eigs-nc30-nf30-m0-b30\_noise\] we show the eigenvalue distributions in the confined and deconfined phases for $N = N_f = 30$ and various values of the quark chemical potential $\mu$.
![The eigenvalue distributions in the confined, deconfined and again confined phases as a function of the quark chemical potential for QCD on $S^1 \times S^3$. (See Eq. (\[eq:action-multi-level\]) for the form of the action.) Here $N = N_f = 30$, $m = 0$ and inverse temperature $\beta = 30$ (low $T$). The data are obtained through complex Langevin simulations with adaptive Langevin step sizes $\Delta \tau \leq 0.00005$, thermalization steps $N_{\rm therm} = 10000$, generation steps $N_{\rm gen} = 50000$ and with measurements performed with an interval of $100$ steps. The solid unit circles are guides to the eye.[]{data-label="fig:eigs-nc30-nf30-m0-b30_noise"}](FIGS/su30_eigs_mu_4p1_to_mu4p9.pdf){width="5.0in"} Reliability of Complex Langevin Dynamics {#sec:appendix_c} ======================================== We would like to justify the use of complex Langevin dynamics for the matrix models we simulated in this work. In Refs. [@Nagata:2016vkn; @Nagata:2018net] the authors suggested a possible criterion to determine the correct convergence of the complex Langevin method – the probability distribution of the magnitude of the drift term should fall off exponentially or faster. This criterion can, in general, be violated if the complexified fields develop large imaginary parts (the [*excursion problem*]{}). In Fig. \[fig:prob-dist-histogram\] we show the probability distributions $P(u)$ of the magnitude $u$ of the drift term for the single level $SU(N)$ matrix model. However, in our case the plots hint that the probability distribution falls off like a power law in $u$, even though we have excellent agreement with analytical results. Figs. \[fig:fN\_N500\] and \[fig:P\_inv\_P\_N500\] show excellent agreement between simulation and analytical data in this model. We also observed a similar fall-off behavior in the $ab$-model. We think this needs further investigation and we save it for future work.
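The basic complex Langevin update referred to above can be illustrated on a one-variable toy model (this is a minimal sketch, not the paper's code): for the Gaussian action $S(z) = \sigma z^2/2$ with complex $\sigma$, the drift is $-\sigma z$, the exact result is $\langle z^2 \rangle = 1/\sigma$, and the drift magnitude $u = |\sigma z|$ is Gaussian-distributed, i.e. falls off faster than exponentially. The values of $\sigma$, the step size, and the step counts are illustrative choices.

```python
# Toy one-variable complex Langevin run for S(z) = sigma z^2 / 2.
# Exact result: <z^2> = 1/sigma. Real noise, complexified variable z.
import numpy as np

rng = np.random.default_rng(42)
sigma = 1.0 + 1.0j          # complex action parameter, Re(sigma) > 0
eps = 0.01                  # Langevin step size
n_therm, n_gen = 5000, 200000

z = 0.0 + 0.0j
samples = []
for step in range(n_therm + n_gen):
    drift = -sigma * z                                   # -dS/dz
    z = z + eps * drift + np.sqrt(2 * eps) * rng.standard_normal()
    if step >= n_therm:
        samples.append(z * z)

est = np.mean(samples)
print(est)   # exact value is 1/sigma = 0.5 - 0.5j
```

For this linear drift one can verify analytically that the stationary distribution reproduces $\langle z^2 \rangle = 1/\sigma$ exactly; deviations here are statistical plus an $O(\epsilon)$ step-size bias.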
It is desirable to have a well localized distribution of the dynamical variables of the theory in the complexified field configuration space. A convenient measure of the size of the distribution in the imaginary directions of the field variables is the unitarity norm [@Sexty:2013ica], defined as $$W \equiv {\rm Tr}\left[ \left( U U^\dagger - 1 \right)^2 \right] \geq 0,$$ with equality attained when the fields take values in $SU(N)$. In Fig. \[fig:unitarity-norm\] we show the unitarity norm as a function of Langevin time for the single level $SU(N)$ matrix model with $N = N_f = 500$, quark mass $m = 0$ and inverse temperature $\beta = 30$ (low $T$). We see that the unitarity norm remains bounded in the simulations. In Fig. \[fig:poly-evolution\] we show the Langevin evolution of the Polyakov line observable for the same set of parameters. ![The unitarity norm $W$ against the Langevin time for the single level $SU(N)$ matrix model with $N = N_f = 500$ and quark mass $m = 0$. (See Eq. (\[eq:act-single-level\]) for the form of the action.) The plots are for $\log \xi = -3.5, -1.5, 1.5$ and $2.5$. We used adaptive Langevin step sizes $\Delta \tau \leq 0.00005$ in the simulations.[]{data-label="fig:unitarity-norm"}](FIGS/lgxi_unorm_runtime_hist.pdf){width="6.0in"} ![The Polyakov loop $\langle P \rangle$ against the Langevin time for the single level $SU(N)$ matrix model with $N = N_f = 500$ and quark mass $m=0$. (See Eq. (\[eq:act-single-level\]) for the form of the action.) The plots are for $\log \xi = -3.5, -1.5, 1.5$ and $2.5$. We used adaptive Langevin step sizes $\Delta \tau \leq 0.00005$ in the simulations. The bottom four plots show the thermalizations of the observables shown in the top four plots.[]{data-label="fig:poly-evolution"}](FIGS/lgxi_poly_runtime_hist.pdf){width="7.0in"}
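The unitarity norm is straightforward to evaluate numerically. The sketch below (an illustration, not the paper's code) normalizes $W$ by $1/N$ for convenience, and compares a strictly unitary matrix $e^{iH}$ with a "complexified" one that has drifted into $GL(N,\mathbb{C})$:

```python
# Unitarity norm W = Tr[(U U^dagger - 1)^2] / N (the 1/N normalization
# is a convenience choice here); W = 0 iff U is unitary.
import numpy as np

def unitarity_norm(U):
    N = U.shape[0]
    D = U @ U.conj().T - np.eye(N)   # D is Hermitian, so Tr(D @ D) >= 0
    return np.real(np.trace(D @ D)) / N

rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N))
H = (A + A.T) / 2                    # real symmetric (Hermitian) generator
w, V = np.linalg.eigh(H)

U_unitary = V @ np.diag(np.exp(1j * w)) @ V.conj().T            # exp(iH)
U_complexified = V @ np.diag(np.exp((1j + 0.05) * w)) @ V.conj().T

print(unitarity_norm(U_unitary))       # ~ 0 (machine precision)
print(unitarity_norm(U_complexified))  # clearly positive
```

Monitoring this quantity along the Langevin trajectory, as in Fig. \[fig:unitarity-norm\], flags excursions away from the $SU(N)$ submanifold.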
--- address: - 'A. F. Ioffe Physical-Technical Institute, Russian Academy of Sciences, 194021 St. Petersburg, Russia' - 'Escuela de Fisica de la UAZ, Apartado Postal C-580, 98060 Zacatecas, Zac., Mexico' author: - 'L. I. Korovin, I. G. Lang' - 'S. T. Pavlov[@byline1]' title: Influence of a polaron dispersion and excitonic effect on a magnetopolaron energy spectrum in a quantum well --- Introduction. ============= Let us consider the energy spectrum of the electronic excitations in a QW in a strong magnetic field directed perpendicularly to the QW plane. Since the system is homogeneous in the QW plane (the $xy$ plane), any excitation (an exciton or an electron-hole pair) can be characterized by a quasi-momentum $\hbar{\bf {\cal K}}_\perp$ in the QW plane, if the wave functions are chosen properly. Such functions have been obtained earlier for excitons [@1] and magnetopolaron-hole pairs [@2]. The quasi-momentum $\hbar{\bf {\cal K}}_\perp$ may serve as a quantum number because the excitation, consisting of an electron and a hole, is neutral. (Recall that in a strong magnetic field there are no single-electron (hole) states for which ${\bf {\cal K}}_\perp$ would be a quantum number.) Our aim is to determine, in principle, how the magnetopolaron theory changes when the LO phonon dispersion and the excitonic effect are taken into account, i.e. the Coulomb interaction between the electron, which forms the polaron, and the hole, which interacts weakly with LO phonons. If the excitations are created in a QW by light, the condition $\bf {\cal K}_\perp=\bf {\kappa}_\perp$ must be satisfied, where $\bf {\kappa}_\perp$ is the in-plane projection of the light wave vector. One obtains ${\bf \cal K}_\perp=0$ if ${\bf \kappa}_\perp=0$. It is obvious that only a discrete energy spectrum is possible at ${\bf \cal K}_\perp=0$ in the case of finite motion.
In particular, this means that at normal light incidence neither the LO phonon dispersion nor the Coulomb interaction between the electron and the hole results in a broadening of the energy levels into bands. The factors mentioned above may only shift the discrete energy levels and change the corresponding inverse lifetimes $\gamma$. Our results confirm this general statement. We have shown that the excitonic effect leads to a dependence of the energy of the magnetopolaron-hole pair on the quasi-momentum $\hbar{\bf {\cal K}}_\perp$ of the joint motion, which can be detected in experiments with an oblique light incidence on the QW plane. The magnetopolaron effect (the Johnson-Larsen effect) has been discovered in (see also the reviews ). The magnetopolarons are created in 3D systems as well as in 2D ones, for instance, in QWs. The distance between the magnetopolaron energy levels is $\sim \alpha^{2/3}\hbar\omega_{LO}$ in 3D systems [@9], where $\alpha$ is the dimensionless Fröhlich electron-phonon coupling constant [@10], and $\sim \alpha^{1/2}\hbar\omega_{LO}$ in 2D systems [@11; @12; @13; @14; @15; @16; @17; @18; @19; @20; @21; @22; @23; @24]. Influence of the phonon dispersion on the magnetopolaron energy spectrum. ========================================================================= The theory of magnetopolarons in a QW without taking into account the phonon dispersion (i.e. in the approximation where all the phonons taking part in the polaron formation have the same frequency $\omega_{LO}$) has been proposed in . The electrons interact with the confined and interface phonons. In the continuum approximation [@25] (i.e. in the limit $a\to0$, where $a$ is the lattice constant) the confined phonons have the dispersionless frequency $\omega_{LO}$, while the interface phonons have a dispersion, i.e. a dependence of the frequency on the modulus $q_{\perp}$ of the in-plane phonon wave vector.
The frequency of the interface phonons depends on the parameter $q_{\perp}d$, where $d$ is the QW’s width. One has to take into account the dependence of the phonon frequency on $q_{\perp}a$. It has been shown in that, in the case of wide QWs, the approximation is applicable in which the interaction with the bulk phonons is substituted for the interaction with the confined phonons, while the interaction with the interface phonons may be neglected. Obviously, in such a case the only dispersion is that due to the deviation from the continuum model. In [@14] the polarons are considered taking into account the interaction with the interface phonons. In a classification of the magnetopolarons has been given. As an example, let us consider here the magnetopolaron $A$. It appears as a result of the crossing of the energy levels of the electron-phonon system with the indexes $m, n=0, N=1$ and $m, n=1, N=0$, respectively. Here $m$ is the size-quantization quantum number, $n$ is the Landau quantum number, and $N$ is the number of LO phonons. The energies of the first and second levels are $\varepsilon_{m}^e+\Sigma_0+\hbar\omega_{LO}$ and $ \varepsilon_{m}^e+\Sigma_1$, respectively, where $\varepsilon_{m}^e$ is the energy of the $m$-th size-quantized energy level, measured from the QW’s bottom. For instance, for the QW with infinitely high barriers $$%\label{1} \varepsilon_{m}^e=\frac{\hbar^2{\pi^2}m^2}{2m_{e}d^2},$$ where $m_e$ is the electron effective mass, $m=1,2,3\ldots .$ We introduce the designations $$%\label{2} \Sigma_0=\hbar\omega_{eH}/2,\quad \Sigma_1=3\hbar\omega_{eH}/2,\quad\omega_{eH}=|e|H/(m_{e}c),$$ where $e$ is the electron charge, $H$ is the magnetic field intensity, and $c$ is the light velocity in vacuum. We do not take the phonon dispersion into account for now. Obviously, the energy levels cross when $$%\label{3} \omega_{eH}=\omega_{LO}.$$ When the resonance condition Eq.
(3) is satisfied, the role of the electron-phonon interaction increases sharply, which leads to the splitting of the energy levels of the electron-phonon system and to the magnetopolaron formation. The theory developed in is applicable when the QW is not too wide, so that the distance between the magnetopolaron levels is small in comparison with the distance between the size-quantized energy levels. It has been shown in that the latter condition is satisfied for a QW of the system AlSb/GaAs/AlSb at $d\leq 500A$. Low temperatures are assumed, so that the optical phonons are not excited. We consider a rectangular QW of type I with an energy gap $E_g$. The magnetic field is directed along the axis $z$ perpendicularly to the QW’s plane, and the vector-potential is ${\bf A}=(0,xH,0)$. The electron wave function in the QW is $$%\label{4} \Psi_{n,k_{y},m}^{e}(x,y,z)= \Phi_{n}(x+a_{H}^{2}k_{y})\frac{1}{\sqrt L_y}e^{ik_{y}y}\varphi_m^e(z),$$ where $$\Phi_n(x)=\frac{e^{-x^2/2a_H^2}H_n(x/a_H)}{\sqrt{\pi^{1/2}2^nn!a_H}}, ~~ a_H=\sqrt{\frac{c\hbar}{|e|H}},$$ $H_n(t)$ is the Hermite polynomial, $L_y$ is the normalization length, and $\varphi_m^{e}(z)$ is the real electron wave function corresponding to the $m$-th size-quantized energy level (see, for instance, ). The electron-phonon interaction is written as $$%\label{5} V=\sum_\nu[{\cal C}_\nu({\bf r}_\perp,z)b_\nu+{\cal C}_\nu^\ast({\bf r}_\perp, z)b_\nu^+],$$ where $\nu$ is the set of indexes, consisting of $\bf {q}_\perp$ and other indexes $j$, which characterize the confined and interface phonons; $b_\nu^{+}~(b_\nu)$ is the phonon creation (annihilation) operator, $$%\label{6} {\cal C}_\nu({\bf r}_{\perp},z)= C_{\nu}e^{i{\bf q}_ {\perp}{\bf r}_\perp}\eta_{\nu}(z);$$ the values $C_{\nu}\eta_\nu(z)$ for the electron-phonon interaction are determined in .
As has been shown in , at $d\geq200A$ in GaAs the use of the electron-bulk phonon interaction is a good approximation. In this approximation the index set $\nu$ goes over into $\bf q$, where ${\bf q}=({\bf q}_{\perp},q_z)$ is the 3D phonon wave vector and, according to , $$%\label{7} \eta_\nu(z)=e^{iq_zz}, C_{\nu}=C_{q}=-i\hbar\omega_{LO} \left(\frac{4\pi{\alpha}l^3}{V_0}\right)^{1/2}\frac{1}{ql},$$ where $$l=\sqrt\frac{\hbar}{2m_e{\omega_{LO}}},\quad \alpha=\frac{e^2}{2\hbar\omega_{LO}l}(\frac{1}{\varepsilon_\infty}- \frac{1}{\varepsilon_0}),$$ $V_0$ is the normalization volume, and $\varepsilon_0$ ($\varepsilon_\infty$) is the static (high frequency) dielectric function [@10]. For GaAs $\alpha\simeq0.071,$ $l\simeq40A$. Applying Eqs. (4) and (6), one finds that the interaction matrix elements are equal to $$\begin{aligned} %\label{8} \int d^{3}r\Psi_{n^\prime,k_y^\prime,m}^{e\ast}{\cal C}_ \nu^\ast({\bf r}_\perp ,z)\Psi_{n,k_y,m}^{e}= U_{n,n^\prime}^\ast(\nu)\nonumber\\ \times e^{(ia_H^2q_x(k_y+k_y^ \prime)/2)}\delta_{k_y^\prime,k_y-q_y},\end{aligned}$$ where the following designations are introduced: $$%\label{9} U_{n,n^\prime}^\ast(\nu)=C_\nu^\ast{\cal K}_{n,n^\prime} (a_H{\bf q}_\perp\times{\bf H}/H){\cal M}^\ast(\nu),$$ $$\begin{aligned} %\label{10} K_{n,n^\prime}({\bf s})=\sqrt\frac{min(n!,n^\prime!)}{max(n!,n^\prime!)} i^{|n-n^\prime|}\left(\frac{s}{\sqrt 2}\right)^{|n-n^{'}|}\times\nonumber\\ exp(-\frac{s^2}{4}) exp[i(\phi-\pi/2)(n-n^\prime)]L_{min(n,n^\prime)}^{|n-n^\prime|}(s^2/2)\end{aligned}$$ $$%\label{11} {\cal M}(\nu)=\int_{-\infty}^{\infty}dz[\varphi_m^e(z)]^2\eta_\nu(z),$$ ${\bf s}$ is the 2D vector, $s=\sqrt{s_x^2+s_y^2}$, $\phi=arctg(s_y/s_x)$, and $L_m^n(t)$ is the Laguerre polynomial. Without taking into account the phonon dispersion [@1], the following expression for the energy $E$ of the polaron $A$ has been obtained: $$%\label{12} E-\Sigma_1-\frac{\sum_\nu|U(\nu)|^2}{E-\Sigma_0-\hbar\omega_{LO}}=0.$$ Here $U(\nu)=U_{1,0}(\nu)$.
Applying (10) and (11), one obtains $$%\label{13} |U(\nu)|^2=|C_\nu|^2~(a_H^2q_\perp^2/2)~exp(-a_H^2q_\perp^2/2)~| {\cal M}(\nu)|^2.$$ Solving Eq. (12), one obtains $$\begin{aligned} %\label{14} E_p&=&\frac{1}{2}(\Sigma_1+\Sigma_0+\hbar\omega_{LO})\nonumber\\&\pm& \sqrt{\frac{1}{4}(\Sigma_1-\Sigma_0-\hbar\omega_{LO})^2 +\sum_\nu|U(\nu)|^2},\end{aligned}$$ where the index $p$ designates the magnetopolaron energy levels: $p=a$ corresponds to the upper level (the plus sign in the RHS of Eq. (14)), and $p=b$ corresponds to the lower level (the minus sign in the RHS of Eq. (14)). The solution Eq. (14) is valid in the vicinity of the resonance Eq. (3). Exactly at resonance $$%\label{15} E_p^{res}=\Sigma_1\pm\sqrt{\sum_\nu|U(\nu)|^2},$$ thus the polaron splitting equals $$%\label{16} \Delta E^{res}=E_a^{res}-E_b^{res}=2\sqrt{\sum_\nu|U(\nu)|^2}.$$ The magnetopolaron $A$ wave functions for $p=a, b$ have been obtained in . Taking into account the phonon dispersion and applying the method of , we obtain $$%\label{17} E-\Sigma_1-\sum_{{\bf q}_\perp,~j}\frac{|U({\bf q}_\perp,j)|^2} {E-\Sigma_0-\hbar\omega_j({\bf q}_\perp)}=0$$ instead of Eq. (12). Let us consider a sufficiently wide QW, where the approximation of interaction with the bulk phonons is applicable. We define the phonon dispersion as follows (an anisotropy of the phonon energy spectrum is neglected): $$%\label{18} \omega_{LO}({\bf q})=\omega_{LO}- \Delta\omega_{LO}(q),\qquad\Delta\omega_{LO}(q=0)=0.$$ Then Eq. (17) takes the form $$%\label{19} F(E){\equiv}E-\Sigma_1-\sum_{\bf q}\frac{|U({\bf q}_\perp,q_z)|^2} {E-\Sigma_0-\hbar\omega_{LO}+\hbar\Delta\omega_{LO}(q)}=0,$$ and, according to Eqs.
(7), (11), (13), $$\begin{aligned} %\label{20} |U({\bf q}_\perp,q_z)|^2&=&(\hbar\omega_{LO})^2 {4\pi\alpha l\over V_0q^2}\nonumber\\&\times& (a_H^2q_\perp^2/2)~exp(-a_H^2q_\perp^2/2)~|{\cal M}(q_z)|^2;\end{aligned}$$ $$%\label{21} {\cal M}(q_z)=\int_{-\infty}^{\infty}dz[\varphi_m^e(z)]^2exp(iq_zz).$$ The function $F(E)$ has been calculated for the case of the quadratic dispersion $$%\label{22} \Delta\omega_{LO}(q)=cq^2$$ and under the resonance condition Eq. (3). The integral has been taken as a principal value for those energies $E$ at which the denominator can vanish. The function $F(E)$ is plotted in Fig. 1 for the cases $c=0$ (curve 1), $c/gl^2=0.04$ (curve 2) and $c/gl^2=0.2$ (curve 3), where $g=\alpha^{1/2}\hbar\omega_{LO}.$ The curves 2 and 3 cross the abscissa axis at the points $E_{b}^\prime$, $E_{c}$ and $E_{a}^\prime$, obtained taking into account the LO phonon dispersion. The crossing points $$E_b=\Sigma_1-\sqrt {\sum_\nu|U(\nu)|^2},\quad E_a=\Sigma_1+\sqrt {\sum_\nu|U(\nu)|^2}$$ correspond to the theory without the LO phonon dispersion. The differences $E_{a}-E_{a}^\prime$ and $E_{b}^\prime-E_b$ increase with the growth of the dispersion parameter, as is seen in Fig. 1. The small shifts of the polaron levels correspond to a weak dispersion. The third crossing point (to which the energy $E_c$ corresponds) appears only when the phonon dispersion is taken into account. A discussion of this result follows below. The phonon dispersion may lead to additional contributions to the inverse lifetimes of the polaron states[@byline2]. This can be explained with the help of the schematic Fig. 2, where the energy levels $E_{b}^\prime$, $E_{c}$ and $E_{a}^\prime$ are represented together with the schematic curves depicting the dependence of the value $\Sigma_{0}+\hbar\omega_{LO}(q)$ on the modulus $q$ of the 3D phonon wave vector under the resonance condition Eq. (3). Fig.
2b corresponds to a larger phonon dispersion than Fig. 2a. In Fig. 2b the curve $\Sigma_{0}+\hbar\omega_{LO}(q)$ does not cross the energy levels $E_{a}^\prime$ and $E_{b}^\prime$. That means that the denominators $E_{a}^\prime-\Sigma_{0}-\hbar\omega_{LO}(q)$ and $E_{b}^\prime-\Sigma_{0}-\hbar\omega_{LO}(q)$ in the LHS of Eq. (19) do not vanish, and the real solutions $E_{a}^\prime$ and $E_{b}^\prime$ are exact ones. Applying the method of one can show that the magnetopolaron wave functions, corresponding to the exact solutions $E_{a}^{'}$ and $E_{b}^{'},$ in the vicinity of the resonance Eq. (3) have the form $$\begin{aligned} %\label{23} \Theta_{p,k_{y}}|0\rangle&=&\left[1+\sum_\nu\frac{|U(\nu)|^2} {E_p^\prime-\Sigma_{0}-\hbar\omega_{LO}(\nu)}\right]^{-1/2}\nonumber\\ &\times&\left[\Psi_{1,k_{y},m}^{e} +\sum_\nu\frac{exp[ia_{H}^{2}q_x(k_{y}-q_{y}/2)]} {E_p^\prime-\Sigma_{0}-\hbar\omega_{LO}(\nu)}\right.\nonumber\\ &\times&\left. U^\ast(\nu) \Psi_{0,k_{y}-q_{y},m}^{e}b_\nu^{+}\right]|0\rangle,\end{aligned}$$ where $|0\rangle$ is the phonon vacuum wave function; $p$ equals $a$ or $b$. The functions of Eq. (24) differ from the corresponding functions without the phonon dispersion only by the substitution of $E_{p}^\prime$ for $E_{p}$ and of $\omega_{LO}(\nu)$ for $\omega_{LO}$ [@19]. The wave functions are orthogonal and normalized, i.e. $$%\label{24} \int d^3r<0|{\Theta^{+}_{p^{\prime}k_{y}^\prime}}{\Theta_{pk_{y}}}|0>= \delta_{p,p^\prime}\delta_{k_{y},k_{y}^\prime}.$$ The orthogonality of the wave functions Eq. (23) with indexes $a$ and $b$ can easily be checked if one takes into account the relation $$%\label{25} \sum_\nu\frac{|U(\nu)|^2}{[E_a^\prime-\Sigma_0-\hbar\omega_{LO}(\nu)] [E_b^\prime-\Sigma_0-\hbar\omega_{LO}(\nu)]}=-1,$$ which can be obtained if in the LHS of Eq. (19) one substitutes first $E_{a}^\prime$ and then $E_{b}^\prime$, and subtracts the second expression from the first one.
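The relation Eq. (25) can also be checked numerically in a toy discretization in which the sum over $\nu$ is replaced by a few discrete phonon modes with a weak dispersion; all numerical values below are illustrative and not fitted to GaAs. One finds the two outer real roots of $F(E)=0$ by bisection and verifies that the sum equals $-1$:

```python
# Toy check of the identity Eq. (25): with
#   F(E) = E - Sigma1 - sum_j |U_j|^2 / (E - Sigma0 - w_j) = 0,
# the two outer roots E_a', E_b' satisfy
#   sum_j |U_j|^2 / [(E_a'-Sigma0-w_j)(E_b'-Sigma0-w_j)] = -1.
Sigma0, Sigma1 = 0.5, 1.5                    # resonance: Sigma1 = Sigma0 + w_LO
w = [1.0 - 0.01 * j**2 for j in range(5)]    # phonon modes with weak dispersion
U2 = [0.01] * len(w)                         # couplings |U_j|^2 (illustrative)

def F(E):
    return E - Sigma1 - sum(u / (E - Sigma0 - wj) for u, wj in zip(U2, w))

def bisect(f, a, b):
    fa = f(a)
    for _ in range(200):                     # bisection to machine precision
        m = 0.5 * (a + b)
        if f(m) * fa > 0:
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

poles = [Sigma0 + wj for wj in w]
E_a = bisect(F, max(poles) + 1e-9, 3.0)      # root above all poles
E_b = bisect(F, 0.0, min(poles) - 1e-9)      # root below all poles

S = sum(u / ((E_a - c) * (E_b - c)) for u, c in zip(U2, poles))
print(E_b, E_a, S)                           # S = -1 to numerical accuracy
```

The identity holds exactly for any number of modes, which is what makes it a convenient consistency check on a computed pair $E_{a}^\prime$, $E_{b}^\prime$.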
As far as the energy $E_{c}$ is concerned, it is seen in Fig. 2 that the curve $\Sigma_{0}+\hbar\omega_{LO}(q)$ always crosses the energy level $E_{c}$, because this level lies the closer to the energy $\Sigma_1=\Sigma_{0}+\hbar\omega_{LO}$ the weaker the phonon dispersion is. That means that the denominator $E_{c}-\Sigma_{0}-\hbar\omega_{LO}(q)$ in the LHS of Eq. (19) equals 0 at some absolute value $q_{c}$ of the phonon wave vector. Consequently, the real value $E_{c}$ is not a precise solution of Eq. (19). In Fig. 2b the curve $\Sigma_{0}+\hbar\omega_{LO}(q)$ crosses not only the energy level $E_{c}$, but also the lower polaron level $E_{b}^\prime$. It follows that only the real solution $E_{a}^\prime$ is a precise one, while the solutions corresponding to the energy levels $c$ and $b$ must contain some imaginary parts. That means that the states $c$ and $b$ have finite lifetimes, which we designate as $\gamma_c^{-1}$ and $\gamma_b^{-1}$. Let us try to calculate $\gamma_{c}$ and $\gamma_{b}$. We have to generalize Eq. (19) so that it admits complex solutions. The generalization consists in substituting $E+i\delta$, where $\delta\to+0$, for the desired energy $E$. Then an imaginary term appears in Eq. (19), connected with the contour passing around the pole of the integrand (in fact, the function $F(E+i\delta)$ is the denominator of a one-particle retarded electron Green function). Let us suppose that the inverse lifetime $\gamma_{p}$ of the state $p$ is very small. Then, setting $\tilde{E}_{p}=E^\prime_{p}-i\hbar\gamma_{p}/2$, where $E^\prime_{p}$ is real, and expanding in the small value $\gamma_{p}$, one obtains from Eq. 
(19) in the zero approximation $$%\label{26} E_{p}^\prime-\Sigma_{1}-Re\sum_{\bf q}\frac{|U({\bf q}_\perp,q_z)|^2} {E_{p}^\prime-\Sigma_0-\hbar\omega_{LO}(q)+i\delta}=0,$$ where $\delta\to +0$, and in the next approximation one obtains $$\begin{aligned} %\label{27} \gamma_p=-2Im\frac{1}{\hbar}\sum_{\bf q}\frac{|U({\bf q}_\perp,q_z)|^2} {E_{p}^\prime-\Sigma_0-\hbar\omega_{LO}(q)+i\delta}=\nonumber\\ =\frac{2\pi}{\hbar}\sum_{\bf q}|U({\bf q}_\perp,q_z)|^2 \delta[E_{p}^\prime-\Sigma_0-\hbar\omega_{LO}(q)].\end{aligned}$$ Having the values $\gamma_p$, we can check whether they are indeed small, i.e., whether the method leading to Eqs. (26), (27) is indeed applicable. We will see below that the method is applicable to the energy level $b$ at weak phonon dispersion, but inapplicable to the energy level $c$. Let us substitute Eq. (20) into the RHS of Eq. (27) and use the dispersion of Eq. (22). In the calculation of the function ${\cal M}(q_z)$ we use the wave functions $$%\label{28} \varphi_{m}^{e}(z)=(2/d)^{1/2}sin(m{\pi}z/d),~~~~~0<z<d$$ (and $\varphi_{m}^{e}(z)=0$ outside this interval), corresponding to a QW with infinite barriers. One obtains $$%\label{29} |{\cal M}(q_{z})|^2\equiv f_{m}(Q)=\frac{2(2\pi m)^4(1-cosQ)}{Q^2[Q^2-(2\pi m)^2]^2},$$ where $Q=q_zd$ and $m$ is the number of the size-quantized energy level. Integrating the RHS of Eq. (27) over $q_\perp$ with the help of the $\delta$-function, let us represent $\gamma_p$ as an integral over the variable $Q$: $$%\label{30} \gamma_{p}=\frac{2\alpha l\hbar\omega_{LO}^2}{d(\Sigma_0+\hbar\omega_{LO}-E_{p}^{'})} \int_0^{q_p d}dQe^{-x}xf_m(Q),$$ where $E_{p}^{'}$ is the energy of the $p$ polaron level, calculated according to Eq. (26) with the phonon dispersion taken into account, $$%\label{31} q_{p}=\sqrt\frac{\Sigma_{0}+\hbar\omega_{LO}-E_{p}}{c},$$ $$%\label{32} x=\frac{a_H^2q_p^2}{2}-\frac{Q^2}{\beta_0^2},$$ $$%\label{33} \beta_{0}=\frac{\sqrt 2d}{a_H}.$$ If the condition of Eq. 
(3) is satisfied, $\beta_{0}=d/l$. At any value $m$ $$%\label{34} \int_0^{\infty}dQf_m(Q)=3\pi/2.$$ Therefore the integral in the RHS of Eq. (30) is always smaller than $3\pi/2$. If the dispersion is very weak, $$%\label{35} dq_{p}>>1,~~~~~a_Hq_p>>1,$$ the integral $$%\label{36} \int_0^{q_p d}dQe^{-x}xf_m(Q)<<1.$$ That means that for the energy level $p=b$ the value $\gamma_p\to 0$ when the dispersion parameter $c\rightarrow 0$, because the value $\Sigma_{0}+\hbar\omega_{LO}-E_{p}^\prime$, which stands in the denominator of the RHS of Eq. (30), tends to $\Sigma_{0}+\hbar\omega_{LO}-E_{b}$. In Fig. 3 the position of the energy level $E_{a}^\prime$ is represented, as well as the broadening of the energy level $E_{b}^\prime$, as a function of the dispersion parameter $c$. One can see that at small values of $c$ the broadening is small and the approximate expression Eq. (30) for $\gamma_{b}$ is valid. However, with increasing $c$ the value $\gamma_{b}$ increases so strongly that the solutions of Eqs. (26) and (27) become incorrect. For the sake of comparison let us quote the expression for the magnetopolaron splitting $\Delta E^{res}$, which has been obtained in for wide QWs in the limit $\beta_{0}>>2\pi m$: $$%\label{37} \Delta E^{res}=\alpha^{1/2}\hbar\omega_{LO}\sqrt {6l/d}.$$ Comparing Eq. (30) to Eq. (37), one finds that $\hbar\gamma_{b}<<\Delta E^{res}$ at weak dispersion. As for the energy level $c$, according to Eq. (30) its inverse lifetime $\gamma_c$ increases with decreasing dispersion. Indeed, in the denominator of the RHS of Eq. (30) there is the value $\Sigma_{0}+\hbar\omega_{LO}-E_c$, which tends to 0 with decreasing dispersion parameter, so that $\gamma_c\rightarrow\infty$. That means that the method of successive approximations of Eqs. (26)-(27) is inapplicable to the analysis of the energy level $c$. The question of the existence of this level remains open. 
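The normalization $\int_0^{\infty}dQ\,f_m(Q)=3\pi/2$, used above, can be verified numerically for the form factor of Eq. (29). The following sketch (pure Python; the truncation of the integration interval at $Q=200$ and the Simpson grid are our own choices, justified by the $Q^{-6}$ decay of the integrand) checks it for $m=1$ and $m=2$:

```python
import math

def f_m(Q, m):
    # |M(q_z)|^2 from Eq. (29), with Q = q_z * d
    a = 2.0 * math.pi * m
    if Q == 0.0:
        return 1.0  # removable singularity: f_m(Q) -> 1 as Q -> 0
    return 2.0 * a**4 * (1.0 - math.cos(Q)) / (Q**2 * (Q**2 - a**2)**2)

def simpson(f, lo, hi, n):
    # composite Simpson rule with n (even) subintervals
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# the point Q = 2*pi*m is also a removable singularity; the grid never hits it
I1 = simpson(lambda Q: f_m(Q, 1), 0.0, 200.0, 200000)
I2 = simpson(lambda Q: f_m(Q, 2), 0.0, 200.0, 200000)
```

Both integrals come out equal to $3\pi/2$ to well within the truncation error, independently of the subband number $m$, as stated in Eq. (34).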
Influence of the excitonic effect on the magnetopolaron energy spectrum. ======================================================================== In the previous section we have examined a magnetopolaron formed by an electron. In this section we consider a magnetopolaron-hole pair. The influence of the Coulomb forces on the energy spectrum of an electron-hole pair (EHP) is weak under the conditions $$%\label{38} a_{exc}^{2}>>a_H^2,~~~~~~~~~~~~~~a_{exc}>>d,$$ where $a_{exc}=\hbar^2\varepsilon_{0}/\mu e^2$ is the radius of the Wannier-Mott exciton in the absence of a magnetic field and $\mu=m_{e}m_{h}/(m_{e}+m_{h})$ is the reduced effective mass. Applying the parameters $m_{e}/m_{0}=0.065, m_{h}/m_{0}=0.16, \varepsilon_{0}=12.55, \hbar\omega_{LO}=0.0367$ eV, one obtains for GaAs $$%\label{39} a_{exc} = 146A,~~~~~~~~~~ a_{H}^{res} = 57.2A,$$ where $a_{H}^{res}=\sqrt{c\hbar/(|e|H_{res})}$, $H_{res}$ is the magnetic field corresponding to the magnetopolaron resonance of Eq. (3), and $m_0$ is the bare electron mass; $H_{res} = 20.2T$ for GaAs. One obtains from Eq. (39) that $(a_{H}^{res}/a_{exc})^2\simeq 0.154$, i. e. the first of the conditions of Eq. (38) is satisfied; however, the second condition demands QWs with widths $d<<146A$, i. e. narrower than those considered in the previous section. The first inequality of Eq. (38) is equivalent to $$%\label{40} \hbar\omega_{\mu H}/2 >> \Delta E_{exc},$$ where $\omega_{\mu H}=|e|H/\mu c$ is the cyclotron frequency and $\Delta E_{exc} = \hbar^{2}/\mu a_{exc}^2$ is the exciton coupling energy in the absence of a magnetic field. Under the condition of Eq. (38) the Coulomb interaction of an electron and a hole may be considered as a weak perturbation and one can calculate the first order corrections to the EHP energy according to perturbation theory (see , where the 2D case has been considered). 
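The GaAs estimates of Eq. (39) can be reproduced from the quoted material parameters. The sketch below uses SI constants and assumes $\hbar\omega_{LO}=0.0367$ is in eV; tolerances of a few percent are allowed, since the rounding used in the original numbers is not specified:

```python
import math

hbar = 1.054571e-34    # J s
e    = 1.602177e-19    # C
m0   = 9.109384e-31    # kg
a_bohr = 0.529177e-10  # m

me, mh, eps, E_LO = 0.065, 0.16, 12.55, 0.0367 * 1.602177e-19  # GaAs inputs
mu = me * mh / (me + mh)             # reduced mass in units of m0

# a_exc = hbar^2 eps0 / (mu e^2) in Gaussian units = eps0 * (m0/mu) * a_Bohr
a_exc = eps * a_bohr / mu
# resonance condition of Eq. (3): hbar*omega_eH = hbar*omega_LO
H_res = me * m0 * (E_LO / hbar) / e
# magnetic length at resonance, a_H = sqrt(hbar c / (|e| H)) -> SI sqrt(hbar/(e H))
a_H = math.sqrt(hbar / (e * H_res))
```

With these inputs one finds $a_{exc}\approx 144$ Å, $H_{res}\approx 20.6$ T and $a_H^{res}\approx 57$ Å, consistent with the quoted $146$ Å, $20.2$ T and $57.2$ Å.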
The EHP unperturbed wave functions are chosen as the wave functions [@2] with the index ${\bf {\cal K}}_{\perp}$ and indexes $n$ and $n^\prime$ corresponding to the relative motion of the electrons and holes [@26]. Let us note that the indexes $n$ and $n^\prime$ are connected single-valuedly with the Landau quantum numbers $n_e$ and $n_h$ of electrons and holes, respectively: at $n_{e}>n_{h}$, $n=n_{h}$ and $n^\prime=n_{h}-n_{e}<0$, but if $n_{e}<n_{h}$, then $n=n_{e}$ and $n^\prime=n_{h}-n_{e}>0$. The EHPs with the Coulomb forces taken into account, which may be called excitons, are characterized by the same sets of indexes ${\bf {\cal K}}_{\perp},n,n^\prime$ and ${\bf {\cal K}}_{\perp},n_{e},n_{h}$. It has been shown that the corrections to the EHP energy due to the Coulomb interaction depend on the in-plane quasi-momentum $\hbar{\cal K}_\perp$ in the QW plane, i. e. the Coulomb forces lead to the exciton dispersion. The energy corrections due to the excitonic effect may be represented as two parts. The first depends only on the indexes $n_e$ and $n_h$ and corresponds to ${\bf {\cal K}}_\perp=0$. The second (the rest) depends on ${\bf {\cal K}}_{\perp},n_{e},n_h$ and describes the exciton dispersion. The exciton energy, characterized by the quasi-wave vector ${\bf{\cal K}}_\perp$ and consisting of the electron with the indexes $n_e$ and $m_e$ and the hole with the indexes $n_h$ and $m_h$, where $m_{e}(m_{h})$ is the number of the size-quantized energy level, equals $$\begin{aligned} %\label{41} {\cal E}_{n_{e},{n_h},m_{e},m_{h}}({\cal K}_\perp)=E_{g}+\varepsilon_{m_{e}}^{e}+ \varepsilon_{m_{h}}^{h}\nonumber\\+(n_{e}+1/2)\hbar\omega_{eH}\nonumber\\ +(n_{h}+1/2)\hbar\omega_{hH}+ \Delta {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp),\end{aligned}$$ where $\Delta{\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp)$ is the Coulomb correction to the exciton energy. 
Let us separate the contribution at ${\bf {\cal K}}_\perp=0$: $$%\label{42} \Delta {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp)= \Delta {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_{\perp}=0)+ \Delta_{1} {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp).$$ The exciton states with indexes $n_{e},n_{h},m_{e},m_{h},{\bf {\cal K}}_\perp$ are described by the unperturbed (without the Coulomb forces taken into account) wave functions, which are not represented here. Let us consider a pair consisting of a magnetopolaron and a hole. In the case of the A magnetopolaron[@23] the following terms cross: the exciton with the indexes $n_{e}=1,n_{h}=1,m$, and the exciton with the indexes $n_{e}=0,n_{h}=1,m$ plus the phonon with the frequency $\omega_{LO}$. We have chosen $n_{h}=1$, $m_{h}=m_{e}=m$, because such a combination may be created by light in the case of the infinitely deep QW. If one omits the correction $\Delta_{1} {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp)$, which depends on ${\bf {\cal K}}_\perp$, the resonant condition becomes $$%\label{43} \hbar\omega_{eH}=\hbar\omega_{LO}+\Delta {\cal E}_{0,1}({\bf {\cal K}}_\perp=0) -\Delta {\cal E}_{1,1}({\bf {\cal K}}_\perp=0).$$ Because the excitonic corrections $\Delta {\cal E}_{0,1}({\bf {\cal K}}_\perp=0)$ and $\Delta {\cal E}_{1,1}({\bf {\cal K}}_\perp=0)$ differ in value, the resonant condition of Eq. (43) does not coincide with the resonant condition of Eq. (3), which has been obtained without the Coulomb forces taken into account. Applying , one can obtain the equation for the energy ${\cal E}$ of the magnetopolaron-hole pair $$%\label{44} {\cal E}-{\cal E}_{1,1}({\bf {\cal K}}_\perp)-\sum_{\nu}\frac{|U(\nu)|^2} {{\cal E}-{\cal E}_{0,1}({\bf {\cal K}}_{\perp}-{\bf q}_{\perp}) -\hbar\omega_{LO}}=0.$$ The inequalities of Eq. (38) and the estimates of Eq. (39) impose strict upper bounds on the QW width, which makes the application of the bulk phonon approximation problematic (at least for GaAs). 
Therefore the index $\nu$ in Eq. (44) includes the indexes ${\bf q}_\perp$ and $j$, where $j$ labels the confined and interface phonons. Because the dispersion of any phonons is neglected in this section, the denominator in Eq. (44) does not depend on $j$. Measuring the energy from the level $$E_g+\varepsilon_m^e+\varepsilon_m^h+\frac{3}{2}\hbar\omega_{hH},$$ one obtains $$\begin{aligned} %\label{45} {\cal E}_{1,1}({\bf {\cal K}}_\perp)=\Sigma_1+ \Delta {\cal E}_{1,1}({\bf {\cal K}}_\perp),\nonumber\\ {\cal E}_{0,1}({\bf {\cal K}}_\perp)=\Sigma_0+ \Delta {\cal E}_{0,1}({\bf {\cal K}}_\perp).\end{aligned}$$ Obviously, the energy corrections $\Delta_{1} {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp)$ lead to two essential results. First, the energy of the magnetopolaron-hole pair begins to depend on the value of the vector ${\bf {\cal K}}_\perp$. At normal light incidence $\Delta_{1} {\cal E}_{n_{e},n_{h}}({\bf {\cal K}}_\perp)=0$, but the energy dependence on ${\bf {\cal K}}_\perp$ must appear at oblique light incidence on the QW surface. Second, the term ${\cal E}_{0,1}({\bf {\cal K}}_{\perp}-{\bf q}_{\perp})$ in the denominator of the LHS of Eq. (44) must lead to the same qualitative results as the phonon dispersion, i. e. to additional shifts of the energies of the upper and lower polaron levels and to additional contributions to the inverse lifetimes of the polaron states. To obtain more precise results one has to take into account simultaneously the phonon dispersion and the Coulomb forces. Acknowledgements ================ S.T.P thanks the Zacatecas Autonomous University and the National Council of Science and Technology (CONACyT) of Mexico for financial support and hospitality. This work has been partially supported by the Russian Foundation for Basic Research and by the Program “Solid State Nanostructures Physics”. On leave from P. N. Lebedev Physical Institute, Russian Academy of Sciences, 117924 Moscow, Russia. 
The main contributions - radiative and non-radiative - have been calculated in. I. V. Lerner, Yu. E. Lozovik, Zh. Eksp. Teor. Fiz., [**78**]{}, 1167(1980). I. G. Lang, L. I. Korovin, D. A. Contreras-Solorio, S. T. Pavlov, Phys.Rev. [**B**]{}, submitted for publication. D. M. Larsen and E. J. Johnson, in Proc. of 8th Intern. Conf. on Physics of Semiconductors, Kyoto, 1966 (J. Phys. Soc. Japan, Suppl. [**21**]{}, 443(1966)). E. J. Johnson and D. M. Larsen, Phys. Rev. Lett. [**16**]{}, 655(1966). D. M. Larsen, in Proc. of X Intern. Conf. on the Physics of Semiconductors, Cambridge, Mass., 1970, ed. by S. P. Keller, J. C. Hensel and F. Stern, U. S. AEC, Oak Ridge (1970). A. Petron and B. D. Mc Comb, in Landau Level Spectroscopy, ed. by G. Landwehr and E. I. Rashba, Modern Problems in Condensed Matter Sciences (1988), Vol. 27.2. R. J. Nicholas, D. J. Barnes, D. R. Seadly, C. J. Langerak, J. Singleton, P. J. van der Wel, J. A. A. J. Perenboom, J. J. Harris, and C. T. Foxon, in Spectroscopy of Semiconductor Microstructures, Vol. 206 of NATO Advanced Study Institute, Series B: Physics, ed. by G. Fasel, A. Fasolino, and P. Lugli, Plenum, New York(1980), p. 451. R. J. Nicholas, in Handbook of Semiconductors, ed. by M. Balkanski, North Holland, Amsterdam(1994), Vol. 2. L. I. Korovin, S. T. Pavlov, Zh. Eksp. Teor. Fiz., [**53**]{}, 1708(1967) (Sov. Phys. JETP, [**26**]{}, 979 (1968)); Pis’ma Zh. Eksp. Teor. Fiz., [**6**]{}, 525(1967). H. Fröhlich, Adv. Phys. [**3**]{}, 325(1954). L. I. Korovin, S. T. Pavlov, B. E. Eshpulatov, [**20**]{}, 3594(1978). Das Sarma and O. Madhukar, Phys. Rev. [**B22**]{}, 2823(1980). Das Sarma and O. Madhukar, Phys. Rev. Lett. [**52**]{}, 859(1984). G. O Hai, F. M. Peeters, and J. T. Devreese, Phys. Rev. [**B47**]{}, 10358(1993). A. O. Govorov, Solid State Commun. [**92**]{}, 977(1994). R. J Nicholas, S. Sasaki, N. Niura, F. M. Peeters, J. M. Shi, C. O. Hai, J. T. Devreese, M. I. Lawless, D. E. Ashenlord, and B. Lunn, Phys. Rev. [**B50**]{}, 7596(1994). J. 
M. Shi, F. M. Peeters, and J. T. Devreese, Phys. Rev. [**B50**]{}, 15182(1994). L. I. Korovin, S. T. Pavlov, B. E. Eshpulatov, Fiz. Tverd. Tela, [**35**]{}, 1562(1993)(Sov. Phys. Solid State, [**35**]{}, 788 (1993)). I. G. Lang, V. I. Belitsky, A. Cantarero, L. I. Korovin, S. T. Pavlov, and M. Cardona, Phys. Rev. [**B54**]{}, 17768(1996). L. I. Korovin, I. G. Lang, S. T. Pavlov, Zh. Eksp. Teor. Fiz. [**111**]{}, 2194(1997)(JETP, [**84**]{}, 1197 (1997)). L. I. Korovin, I. G. Lang, S. T. Pavlov, Pis’ma Zh. Eksp. Teor. Fiz. [**65**]{}, 511(1997) (JETP Lett., [**65**]{}, 532 (1997)). I. G. Lang, V. I. Belitsky, A. Cantarero, L. I. Korovin, S. T. Pavlov, and M. Cardona, Phys. Rev. [**B56**]{}, 6880(1997). L. I. Korovin, I. G. Lang, S. T. Pavlov, Zh. Eksp. Teor. Fiz. [**115**]{}, 187(1999)(JETP, [**88**]{}, 105 (1999)). L. I. Korovin, I. G. Lang, S. T. Pavlov, Zh. Eksp. Teor. Fiz. [**116**]{}, 1419(1999). N. Mori and T. Ando, Phys. Rev. [**B40**]{}, 6175(1988). L. D. Landau, E. M. Lifshitz, Quantum Mechanics, 1974, p. 525.
--- abstract: 'The coefficient $c_{\mathrm{A}}$ required for $\mathrm{O}(a)$ improvement of the axial current in lattice QCD with $\nf=3$ flavors of Wilson fermions and the tree-level Symanzik-improved gauge action is determined non-perturbatively. The standard improvement condition using Schrödinger functional boundary conditions is employed at constant physics for a range of couplings relevant for simulations at lattice spacings of $\approx 0.09\,\Fm$ and below. We define the improvement condition projected onto the zero topological charge sector of the theory, in order to avoid the problem of possibly insufficient tunneling between topological sectors in our simulations at the smallest bare coupling. An interpolation formula for $c_{\mathrm{A}}(g_0^2)$ is provided together with our final results.' address: - 'School of Mathematics, Trinity College, Dublin 2, Ireland' - 'CP$^3$-Origins & Danish IAS, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark' - 'IFIC and CSIC, Calle Catedrático José Beltran 2, 46980 Paterna, Valencia, Spain' - 'Institut für Theoretische Physik, Universität Münster, Wilhelm-Klemm-Str. 9, 48149 Münster, Germany' author: - John Bulava - Michele Della Morte - Jochen Heitger - Christian Wittemeier title: 'Non-perturbative improvement of the axial current in $\nf = 3$ lattice QCD with Wilson fermions and tree-level improved gauge action ' --- Lattice QCD, Symanzik effective theory
--- abstract: 'The technique of degree of randomness is used to model the correlations in sequences containing various subsignals and noise. The Kolmogorov stochasticity parameter enables one to quantify the randomness in number sequences and hence appears as an efficient tool to distinguish the signals. Numerical experiments for a broad class of composite signals of regular and random properties enable us to obtain qualitative and quantitative criteria for the behavior of the descriptor depending on input parameters typical of astrophysical signals.' address: 'Center for Cosmology and Astrophysics, Alikhanian National Laboratory and Yerevan State University, Yerevan, Armenia' author: - 'S. Sargsyan' title: Probing the correlations in composite signals --- The method ========== The correlation functions and the power spectra are common and efficient tools for the study of correlations in signals. Astrophysical signals are typically superpositions of various subsignals, regular and random, with features comparable to each other, and of weaker ones, i.e. perturbations or noise. The analysis of the desired signal or signals and their separation from the noise is a common problem in dealing with observations and measurements. The Kolmogorov stochasticity parameter technique enables one to quantify the randomness of sequences in number theory or dynamical systems [@Kolm; @Arnold; @Arnold_UMN; @Arnold_MMS; @Arnold_FA]. The technique of the degree of randomness has been applied to the Cosmic Microwave Background (CMB) temperature sky maps and to the X-ray flux data of clusters of galaxies. The former data were those obtained by the Wilkinson Microwave Anisotropy Probe (WMAP) during 7-year observations [@K; @J], while the X-ray data were obtained by the XMM-Newton satellite, providing a particularly accurate and complete sky survey (see [@XMM; @XMM1]). In the case of the CMB, the Kolmogorov function enabled the separation of signals of different origin, e.g. 
the Galactic and non-Galactic ones, and the detection of point sources in the CMB maps [@G2009] (see Fig.1). Concerning the X-ray clusters, it was shown that their X-ray images do possess correlation in the pixelized flux data peculiar to the gravitational potential of the galaxy clusters [@GD2011]. This technique resembles the methods of dynamical systems applied to nonlinear problems (e.g. [@GP]). A crucial step in these studies is the modeling and analysis of generated systems, which enables one to reveal the behavior of the stochasticity parameter in the case of a given signal and then to consider the application of this technique to real signals [@EPL2011]. Below we represent the results of numerical experiments for a broad class of signals. The Kolmogorov stochasticity parameter is introduced for a sequence $\{X_1,X_2,\dots,X_n\}$ of a real random variable $X$ sorted in increasing order $X_1\le X_2\le\dots\le X_n$. The theoretical distribution function is [@Kolm; @Arnold] $$F(X) = (probability\, of\, the\, event\, x \leq X).$$ The stochasticity parameter is defined as $$\label{KSP} \lambda_n=\sqrt{n}\ \sup_x|F_n(x)-F(x)|\,$$ where the empirical distribution function is $$F_n(X)= \frac{1}{n}\,(number\, of\, the\, elements\, x_i\, which\, are\, less\, than\, X),$$ i.e. $$F_n(X)= \left\{ \begin{array}{rl} 0, & X < x_1 \\ k / n, & x_k \leq X < x_{k+1} \\ 1, & x_n \leq X.\\ \end{array} \right. \label{eq:empiricdistribution}$$ Then the limit $$\lim_{n\to\infty}P\{\lambda_n\le\lambda\}=\Phi(\lambda)\ ,$$ where $$\Phi(\lambda)=\sum_{k=-\infty}^{+\infty}\ (-1)^k\ e^{-2k^2\lambda^2},\, \Phi(0)=0,\, \lambda>0\ ,\label{Phi}$$ exists, the convergence is uniform, and $\Phi$ is independent of the (continuous) distribution $F$. ![\[fig:phi1\]The Kolmogorov’s function $\Phi$ for the portion of the 7-year temperature CMB map obtained by WMAP.](phi.eps){width="20pc"} Random-regular sequences ======================== We consider a broad class of sequences, i.e. 
those composed of random $x_n$ and regular $y_n = \frac{an\pmod b}{b}$ ($a,b$ are prime numbers) sub-sequences within $(0,1)$ $$z_n = \alpha x_n + (1-\alpha) y_n.$$ The parameter $\alpha$ varies within \[0,1\], defining random sequences at $1$ and regular ones at $0$; by fixing different pairs $a,b$ we obtain new regular sequences. For $z_n$ we have $$F(X)= \left\{ \begin{array}{rl} 0, & X \leq 0 \\ \frac{X^2}{2 \alpha (1-\alpha)}, & 0 < X \leq \alpha\\ \frac{2 \alpha X - \alpha^2}{2 \alpha (1-\alpha)}, & \alpha < X \leq 1-\alpha\\ 1-\frac{(1-X)^2}{2 \alpha (1-\alpha)}, & 1-\alpha < X \leq 1\\ 1, & X > 1.\\ \end{array} \right. \label{eq:d}$$ Figure 2 shows the results of the numerical experiments for 100 sequences, each containing 10000 elements. Each sequence is divided into 50 subsequences, i.e. $m$ runs through the values $1, ..., 50$; for each of them the parameter $\Phi(\lambda_n)_m$ is calculated, and then the empirical distribution function $G(\Phi)_m$ of these numbers is obtained. When the original sequences are random, this distribution has to be uniform according to Kolmogorov’s theorem. To test this, $\chi^2$ for the functions $G(\Phi)_m$ and $G_0(\Phi)=\Phi$ has been calculated, i.e. one parameter $\chi^2$ is calculated for each of the $100\times101$ sequences. From the $100$ $\chi^2$ values per value of $\alpha$, we obtained the mean and error values of $\chi^2$, i.e. for each pair $a,b$ we have a plot of the dependence of $\chi^2$ on $\alpha$. ![\[fig:chi\_sq\]The 3D $\chi^2$ for the Kolmogorov’s function for the sequence $z_n$ vs $\alpha$ and the parameter $a$.](3d.eps){width="25pc"} Parameters of the regular sequences =================================== At certain values of the parameter $a$ for different values of $b$ the dependences in Fig.2 are monotonic, while for others they do have maxima. 
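The basic quantities used in these experiments — the stochasticity parameter $\lambda_n$, the Kolmogorov function $\Phi$, and the theoretical distribution function of $z_n$ — can be sketched directly. The following is an illustrative implementation: the piecewise CDF is written for $0<\alpha<1/2$ (so that $\alpha\le 1-\alpha$), and the series for $\Phi$ is truncated, which is adequate for $\lambda\gtrsim 0.3$:

```python
import math

def Phi(lam):
    # Kolmogorov distribution; truncated series, accurate for lam >~ 0.3
    if lam <= 0.0:
        return 0.0
    return 1.0 + 2.0 * sum((-1) ** k * math.exp(-2.0 * k * k * lam * lam)
                           for k in range(1, 101))

def lambda_n(sample, F):
    # lambda_n = sqrt(n) * sup_x |F_n(x) - F(x)| for the empirical CDF F_n
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        d = max(d, (i + 1) / n - F(x), F(x) - i / n)
    return math.sqrt(n) * d

def F_mix(X, alpha):
    # theoretical CDF of z = alpha*x + (1 - alpha)*y  (valid for 0 < alpha < 1/2)
    a = alpha
    if X <= 0.0:
        return 0.0
    if X <= a:
        return X * X / (2.0 * a * (1.0 - a))
    if X <= 1.0 - a:
        return (2.0 * a * X - a * a) / (2.0 * a * (1.0 - a))
    if X <= 1.0:
        return 1.0 - (1.0 - X) ** 2 / (2.0 * a * (1.0 - a))
    return 1.0

# stochasticity parameter of a small sample against the uniform null F(x) = x
lam = lambda_n([0.1, 0.2, 0.3, 0.4, 0.5], lambda x: x)
```

For this strongly non-uniform sample $\lambda_5=\sqrt{5}\cdot 0.5$, and $\Phi$ increases monotonically from 0 towards 1, so large $\lambda_n$ signals a departure from the assumed theoretical distribution.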
To study this effect, we introduce a parameter $\Delta$ which is the difference of two values in those plots: the maximal value of $\chi^2$ and the minimal value in the range $\alpha \in (0, \alpha_{max})$, where $\alpha_{max}$ is the position of the maximal value. Obviously, $\Delta$ is zero when the dependence is monotonic and no extrema exist. Then we calculate $\Delta$ for fixed $b$ and for each prime value $a=2, ..., b$. ![\[fig:deltas\]The dependence of $\Delta$ vs the parameter $a$.](delta_1201.eps){width="20pc"} The remarkable feature of the results is the strict mirror symmetry in Fig.\[fig:deltas\] in the dependence of $\Delta$ vs $a$, although no periodicity is found by Fourier analysis. The mirror symmetric plots can hence be subjects of particular study, e.g. in two respects: first, the distribution of $\Delta$ and, second, the spacing between non-zero $\Delta$ and their distribution. The null values of $\Delta$ are skipped and also - since due to the mirror symmetry each $\Delta$ has its pair - only one of each pair is taken into account. The results are given in Fig.\[fig:y1\], where the non-zero $\Delta$s from Fig.\[fig:deltas\] are given in increasing order. The number of non-zero $\Delta$s appears to be proportional to $b$. ![\[fig:y1\]Sorted amplitudes of $\Delta$s from Fig.\[fig:deltas\].](y1_1987.eps){width="20pc"} Sum of fluctuations: large N limit ================================== The next problem we consider is the properties of a signal which is a sum of random and regular fluctuations, each being a sequence of 10000 elements with the same standard deviation. 
![\[fig:chi\_dist\]$\chi^2$ frequency for random and regular sequences vs the Gaussian distribution.](chi_dist_rnd.eps "fig:"){width="14pc"}![\[fig:chi\_dist\]$\chi^2$ frequency for random and regular sequences vs the Gaussian distribution.](chi_dist_reg.eps "fig:"){width="14pc"} ![\[fig:phi\]Kolmogorov function $\Phi$ for the random and regular sequences in Fig.\[fig:chi\_dist\].](phi_rnd.eps "fig:"){width="14pc"}![\[fig:phi\]Kolmogorov function $\Phi$ for the random and regular sequences in Fig.\[fig:chi\_dist\].](phi_reg.eps "fig:"){width="14pc"} The regular sequences have been chosen as $$a_i=\frac{1}{\sqrt N} \sum_{k=1}^N Compact(x_i^k,-1,1),$$ where $$x_i^k=i/k,$$ and $x^k$ is the compactified arithmetic sequence within the interval $(-1,1)$, with step $1/k$. The random sequences are given by $$b_i=\frac{1}{\sqrt N} \sum_{k=1}^N Random(-1,1).$$ At a large number of sequences, each new sequence $y_n$ is taken as the continuation of the former arithmetic progression. Here $$Compact(x,p,q)=q+x mod(p-q)$$ denotes the subtraction of multiples of $(p-q)$ from $x$ so as to obtain a value within the range $(p,q)$, $p<q$. The results for random and regular sequences, 10000 each, are given in Fig.\[fig:chi\_dist\] for $\chi^2$, with the number of fluctuations varying from $N=1000$ to $100000$. The $\chi^2$ shows that for both random and regular sequences we deal with a Gaussian limiting distribution, in accordance with the Central Limit Theorem, which states that for large enough values of $N$ both sequences $a_i$ and $b_i$ tend to Gaussian sequences with the same $\sigma$ and $\mu$, independent of $N$. So, although differences are seen in the Gaussians, namely the standard deviations are larger in the regular case, the $\chi^2$ are similar. For the Kolmogorov function $\Phi$ the situation is rather different. While the Gaussians appear both for random and regular sequences (as expected), the behavior of $\Phi$ is different and enables one to separate them, as shown in Fig.\[fig:phi\]. 
Namely, it is close to a uniform distribution for random sequences and to $\Phi=1$ for regular ones. Kolmogorov’s function therefore enables one to distinguish the superposition of random and regular sequences, even though both tend to Gaussians. Finally, we have probed the dependence of the results on the length of the sequences: the dependence on the number of fluctuations within $1000-100000$ is rather weak, $\chi^2$ varying around $10^{-8}-10^{-9}$. This confirms the universality of the obtained behavior of $\Phi$ for both random and regular fluctuations. Results ======= The performed analysis revealed the behavior of the Kolmogorov distribution vs the properties of the generated signals. To describe datasets which contain both regular and stochastic components, we considered sequences scaled by a single parameter $\alpha$, indicating the ratio of those components. Quantitative and qualitative criteria have been obtained for the Kolmogorov distribution in numerical experiments for a broad class of random and regular sequences depending on the parameter $\alpha$. a\) The existence of a [*critical value*]{} of $\alpha$ has been shown, at which the monotonic decay of the frequency count of the Kolmogorov distribution is transformed into a function with an extremum. b\) The dependence of scalings and spacings vs that parameter shows mirror properties both in the amplitude and in the distribution of the frequency counts of the function $\Phi$. c\) The behavior of the randomness of a signal composed of $N$ subsignals in the large $N$ limit has been studied, where the Kolmogorov function acts as an informative descriptor. In particular, the descriptor at large $N$ enables one to distinguish the initial set of fluctuations, even when the superposition of random and regular subsignals is itself not informative, since it tends to a Gaussian in accordance with the Central Limit Theorem. 
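The large-$N$ superposition behind point c\) — sums $a_i$ of compactified arithmetic terms and $b_i$ of uniform random terms, both normalized by $1/\sqrt{N}$ — can be sketched as follows. This is our own reading of the $Compact$ operation, with sizes reduced ($N=500$, $2000$ sequences) so the example runs quickly; by the central limit theorem the $b_i$ should have standard deviation close to $1/\sqrt{3}$:

```python
import math
import random

def compact(x, p=-1.0, q=1.0):
    # reduce x into [p, q) modulo (q - p): our reading of Compact(x, p, q)
    return p + (x - p) % (q - p)

N, M = 500, 2000          # reduced sizes for a quick illustrative run
random.seed(12345)

# b_i: normalized sums of N uniform random terms on (-1, 1)
b = [sum(random.uniform(-1.0, 1.0) for _ in range(N)) / math.sqrt(N)
     for _ in range(M)]

# a_i: normalized sums of the compactified arithmetic terms x_i^k = i/k
a = [sum(compact(i / k) for k in range(1, N + 1)) / math.sqrt(N)
     for i in range(M)]

mean_b = sum(b) / M
std_b = math.sqrt(sum((v - mean_b) ** 2 for v in b) / M)
```

A uniform variable on $(-1,1)$ has variance $1/3$, so each normalized sum $b_i$ keeps that variance for any $N$, which is the $N$-independence of $\sigma$ invoked in the text.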
The studied properties are typical, for example, of astrophysical datasets, where the sought signals are superposed with regular and random fluctuations of various origin; hence the behaviors revealed in the numerical experiments will, due to the universality of the technique, enable its informative application to real data. [12]{} Kolmogorov A.N. 1933 [*G.Ist.Ital.Attuari,*]{} [**4**]{} 83 Arnold V.I. 2008 [*Nonlinearity*]{} [**21**]{} T109 Arnold V.I. 2008 [*Uspekhi Mat. Nauk*]{} [**63**]{} 5 Arnold V.I. 2009 [*Trans. Mosc. Math. Soc.*]{} [**70**]{} 31 Arnold V.I. 2009 [*Funct. Anal. Other Math.*]{} [**2**]{} 139 Komatsu E., Dunkley J. [*et al.*]{} 2009 [*ApJS*]{} [**180**]{} 330 Jarosik N., Bennett C.L. [*et al.*]{} 2011 [*ApJS*]{} [**192**]{} 14 Viana P.T.P., da Silva A. [*et al.*]{} 2011 [*arXiv:1109.1828*]{} Suhada R., Song J., [*et al.*]{} 2011 [*arXiv:1111.0141*]{} Gurzadyan V.G., Allahverdyan A.E. [*et al.*]{} 2009 [*Astron. & Astrophys.*]{} [**497**]{} 343 Gurzadyan V.G., Durret F. [*et al.*]{} 2011 [*Europhys.Lett.*]{} [**95**]{} 69001 Gurzadyan V.G., Pfenniger D., (Eds.) 1994 [*Ergodic Concepts in Stellar Dynamics*]{}, Springer-Verlag. Gurzadyan V.G., Ghahramanyan T., Sargsyan S. 2011 [*Europhys.Lett.*]{} [**95**]{} 19001
--- abstract: 'In this paper we apply the formalism of translation invariant (continuous) matrix product states in the thermodynamic limit to $(1+1)$ dimensional critical models. Finite bond dimension bounds the entanglement entropy and introduces an effective finite correlation length, so that the state is perturbed away from criticality. The assumption that the scaling hypothesis holds for this kind of perturbation is known in the literature as finite entanglement scaling. We provide further evidence for the validity of finite entanglement scaling and, based on this, formulate a scaling algorithm to estimate the central charge and critical exponents of the conformally invariant field theories describing the critical models under investigation. The algorithm is applied to three exemplary models: the cMPS version to the non-relativistic Lieb-Liniger model and the relativistic massless boson, and the MPS version to the one-dimensional quantum Ising model at the critical point. Another new aspect of our approach is that we directly use the (c)MPS induced correlation length rather than the bond dimension as scaling parameter. This choice is motivated by several theoretical arguments as well as by the remarkable accuracy of our results.' author: - 'Vid Stojevic$^{1}$, Jutho Haegeman$^{1}$, I. P. McCulloch$^{2}$, Luca Tagliacozzo$^{3}$, Frank Verstraete$^{1,4}$' bibliography: - 'bibliography.bib' title: Conformal Data from Finite Entanglement Scaling --- Introduction ============ Matrix product states, both in their discrete [@Fannes:1992uq; @verstraete2008matrix] and continuum [@PhysRevLett.104.190405; @2012arXiv1211.3935H] variants, provide efficient descriptions of ground states of one-dimensional gapped systems. The reason for this is that the ground state of a local gapped Hamiltonian in one dimension obeys an ’area law’ [@1742-5468-2007-08-P08024; @1367-2630-12-2-025002], a property that is built into the variational class. 
The area law, in any dimension, is the statement that the entanglement entropy of a large enough region scales not like the volume, but rather like the area of the boundary of that region[^1], which for (1+1) dimensional gapped systems means that the entropy of a large enough interval will saturate. At a critical point the gap goes to zero, and the low-energy behaviour of a one-dimensional system is described by a conformal field theory (CFT) in $(1+1)$ dimensions. In this case the entanglement entropy of an interval increases proportionally to the logarithm of its length. [@1994NuPhB.424..443H; @PhysRevLett.90.227902; @1742-5468-2004-06-P06002] This implies that a matrix product state (MPS) or continuous matrix product state (cMPS) will not fully capture the behaviour of critical systems in the thermodynamic limit for any finite bond dimension. A different tensor network ansatz was constructed for critical systems by Vidal, the multi-scale entanglement renormalisation ansatz (MERA). [@2007PhRvL..99v0405V; @2008PhRvL.101k0501V; @2011arXiv1102.5524H] The structure of the MERA mirrors the scale invariance present in critical ground states and supports the power-law decay of correlations. Indeed, a MERA description of a critical ground state allows one to extract the critical exponents, both of local and nonlocal scaling operators and of boundary scaling operators.[@2009PhRvA..79d0301P; @2010PhRvB..82m2411E; @2010PhRvB..82p1107E; @2011arXiv1109.5334E] Nevertheless, it was recently observed that the way in which a MPS approximation truncates the correlations in a critical ground state follows a universal scaling behaviour.[@2008PhRvB..78b4410T] This scaling was coined *finite entanglement scaling*, as it is indeed the entanglement in the state which is bounded by the finite bond dimension $D$ of the (c)MPS approximation. As a typical (c)MPS has a finite correlation length, the (c)MPS approximation introduces a length scale which perturbs the CFT away from criticality.
A scaling relation between this length scale and $D$ was obtained, which can be understood by interpreting $1/D$ as the distance from the critical point (which should be restored for $1/D=0$). An analytic expression for the corresponding critical exponent was first derived in Ref.   and then confirmed by independent calculations in Ref.  , where the crossover between finite entanglement and finite size scaling in MPS with periodic boundary conditions was also studied. Around the same time, one of the authors of this paper presented a direct approach to extracting scaling exponents from MPS data. [@mcculloch_talks] Since then, finite entanglement scaling has been used to find the phase diagram of spin models[@PhysRevB.87.235106; @1751-8121-43-37-372001; @1367-2630-13-2-023015] and to extract the CFT data from the edge theory of a fractional quantum Hall state.[@2013PhRvB..88o5314V] In this paper we provide further insight that helps clarify the validity of finite entanglement scaling (FES), and enables us to develop an algorithm to estimate the central charge and critical exponents of critical theories. In the next section we interpret FES using CFT ideas and formulate a *scaling hypothesis*, which states how entanglement entropy and two-point correlation functions are expected to scale with bond dimension. The scaling hypothesis, if valid, justifies the scaling algorithms for extracting the central charge and critical exponents of a CFT using (c)MPS presented in Section \[sec:D\_scaling\]. These algorithms reduce, in a certain limit to be discussed, to the method discovered by one of the authors. [@mcculloch_talks] Unlike previous papers, we directly use the (c)MPS induced correlation length rather than the bond dimension as scaling parameter, and motivate the importance of this choice.
Section \[section:examples\] demonstrates these algorithms by applying them to three exemplary models: 1) the Lieb-Liniger model, 2) the massless relativistic boson in $(1+1)$ dimensions, and 3) the one-dimensional quantum Ising model at the critical point. We apply our method both to CFT primary operators and also to a class of descendants. Remarkably for a (c)MPS-based approach, our method is, for the massless relativistic boson, capable of estimating the exponents of vertex operators for arbitrary values of the real coefficient $\beta$[^2] which parameterises a continuous infinity of distinct primary operators. The accuracy of the numerical results provides strong evidence for the scaling hypothesis. Another independent piece of evidence for finite entanglement scaling is provided by the observation that low-lying eigenvalues of the transfer matrix all scale in the same manner, which implies that at large distances only a single independent scale is present in (c)MPS approximations of critical ground states. Section \[sec:conclusions\] presents our conclusions. A brief review of (c)MPS is given in Appendix \[app:cMPS\_review\]. The field-field critical exponent calculation for the Lieb-Liniger model is presented in Appendix \[app:LL\_field\_field\], in order to illustrate the application of the algorithm presented in Section \[sec:D\_scaling\] in full detail. Finally, Appendix \[app:kappa\] illustrates the importance of using the (c)MPS correlation length as scaling parameter for the accuracy of the results. Scaling hypothesis {#sec:scalinghypothesis} ================== Several numerical methods for studying classical or quantum lattice systems are restricted to finite system sizes, due to the intrinsic finiteness of computer memory and computation time.
Close to a critical point, the finite system size competes with the finite correlation length and the behaviour of thermodynamical quantities (*e.g.* the order parameter or its susceptibility) can be modelled via scaling functions depending on the dimensionless quantity $L/\xi$. [@privman1990finite] Scaling at quantum critical points has been considered only recently. [@campostrini_finite-size_2014] In a finite-size scaling approach (FSS), one would determine the scaling exponents of the different quantities by plotting the relevant quantities (e.g. the magnetisation) as a function of the dimensionless parameter and tuning the critical exponent such that the curves of these quantities extracted from different system sizes collapse. For correlation functions depending on one or more spatial coordinates $x$, both $x/L$ and $x/\xi$ are dimensionless parameters and the scaling theory is more involved. However, exactly at the critical point ($\xi\to\infty$) of a $(1+1)$ dimensional system, the universal finite size effects can be obtained from the underlying conformal field theory (CFT).[@0305-4470-17-7-003] The crucial feature is that the predictions of a CFT are modified in a controlled way by mapping the theory originally defined on an infinite two dimensional plane to some other 2D geometry with a finite dimension such as, for example, an infinitely long cylinder with finite radius or an infinitely long strip with finite width. 
For example, the footprint of a CFT, the power-law decay of correlation functions between a primary field of weight $(h, \bh)$ and itself on the infinite plane, $$\begin{aligned} & {\left< 0 \right|} \hatO_A (z_1) \hatO_B (z_2 ) {\left| 0 \right>} =: \cG_{\hatO}(z_{12}) \\ \nonumber & = \frac{ 1 }{(z_{12} )^{2h} (\conjz_{12} )^{2 \bh} } \ ,\end{aligned}$$ is modified to $$\begin{aligned} \label{eq:cyl_correlator} & \cG_{(L) \hatO} (w_{12}) = \left( \frac{2 \pi}{L} \right)^{2(h + \bh)} \times\\ \nonumber & \left( 2 \sinh \left(\frac{ \pi (w_{12} ) }{L} \right) \right)^{-2h} \ \left( 2 \sinh \left(\frac{ \pi (\conjw_{12} ) }{L} \right) \right)^{-2\bh} \ \end{aligned}$$ on the cylinder, where $z_{12} := z_1 - z_2$ and similarly for $w_{12}$.[^3] Low energy properties at a conformally invariant critical point are equally well described by considering a classical two dimensional system or an equivalent one dimensional quantum system. For this reason, finite size effects are also observed in genuinely quantum properties such as in the scaling of the entanglement entropy. The entanglement entropy of an interval of length $x$ belonging to a chain of length $L$ with periodic boundary conditions is indeed described by: $$\begin{aligned} \label{eq:cyl_entropy} S_{(L)} = \frac{c}{3} \log \left( \frac{L}{\pi a} \sin \left(\frac{\pi x}{L} \right) \right) + k \ .\end{aligned}$$ In the limit $L \rightarrow \infty$, or for small $x\ll L$, one recovers the well-known thermodynamic limit expression [@1994NuPhB.424..443H; @PhysRevLett.90.227902; @1742-5468-2004-06-P06002]: $$\begin{aligned} \label{eq:entropy_vs_logmu} S = \frac{c}{3} \log (x) + k \ .\end{aligned}$$ The crucial observation is that the finite size effects in both expressions (\[eq:cyl\_correlator\], \[eq:cyl\_entropy\]) enter via a function that depends on distance in units of $L$, i.e. via $x/L$ in the case of entropy and $w_{12} /L $ in the case of the two-point correlator. 
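As a quick numerical sanity check (not part of the original analysis; the values of $c$, $a$ and $k$ below are arbitrary), one can verify that the finite-size entropy expression reduces to the thermodynamic-limit form when $x \ll L$:

```python
import numpy as np

c, a, k = 1.0, 1.0, 0.7   # central charge, short-distance cutoff, non-universal constant

def S_finite(x, L):
    """Entropy of an interval of length x in a periodic system of size L (Eq. cyl_entropy)."""
    return c / 3 * np.log(L / (np.pi * a) * np.sin(np.pi * x / L)) + k

def S_thermo(x):
    """Thermodynamic-limit form (Eq. entropy_vs_logmu), with the cutoff a made explicit."""
    return c / 3 * np.log(x / a) + k

# For x << L the sine linearises, sin(pi x / L) ~ pi x / L, and the two forms agree.
x = 10.0
for L in (1e3, 1e5, 1e7):
    print(L, S_finite(x, L) - S_thermo(x))   # difference -> 0 as L grows
```

The discrepancy decays as $(x/L)^2$, which is the sense in which the finite-size correction enters only through $x/L$.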
Similar expressions exist when not the spatial size but the temporal size of the system is finite (*i.e.* finite temperature). A (c)MPS based FSS approach for one-dimensional critical theories would make use of the fact that finite size introduces a gap in the CFT so that its ground state can be well captured by the variational manifold, provided that the bond dimension grows sufficiently rapidly with the system size.[@2012PhRvB..86g5117P] A natural approach for calculating the central charge $c$ or the critical exponents $\Delta := h + \overline{h}$ using (c)MPS is as follows. First pick a range of circles on which the spatial direction of the CFT is “compactified”, and for each calculate the (c)MPS ground state at large enough bond dimension to adequately capture the exact ground state. Next pick a scale $s<1$, and for each circle calculate the entropy of an interval of length $x = s L$ numerically using (c)MPS. It is obvious that for *any* choice of $s<1$ one can obtain an estimate for $c$ from the scaling of $S_{(L)}$ vs. $\log(L)$. Similarly, critical exponents can be estimated from the scaling of $\log( \cG_{(L)\hatO})$ vs. $\log(L)$. Since both $S_{(L)}$ and $\cG_{(L)\hatO}$ are calculated from (c)MPS data, for numerical reasons some values of $s$ may be preferred, and we can scan over $s$ in order to obtain the best numerical fit. Now let us imagine that a length scale $\mu$ is introduced via some other mechanism, either with or without a geometric origin.
It is obvious that the scaling approach described above can be applied ’as is’ regardless of the manner in which this scale is introduced, as long as the effect on entanglement entropy and two-point correlator expressions is through a *scaling function* of $(x/\mu)$ such that: $$\begin{aligned} \label{eq:entropy_scaling_general} S_{(\mu)}(x, D) \propto \frac{c}{3} \log \left( \frac{\mu }{\pi a} f \left( \frac{x}{\mu } \right) \right)\end{aligned}$$ and $$\begin{aligned} \label{eq:2pcorr_scaling_general} \cG_{(\mu) \hatO} (z_{12}) \propto \left( \frac{1}{\mu} \right)^{2(h + \bh)} g \left( \frac{x}{\mu } \right) \ .\end{aligned}$$ The precise form of the functions $f$ and $g$ is immaterial; the central charge can be calculated from the scaling of $S_{(\mu)}$ versus $\log(\mu)$, and the critical exponents from $\log( \cG_{(\mu)\hatO})$ versus $\log(\mu)$. Equations (\[eq:entropy\_scaling\_general\]) and (\[eq:2pcorr\_scaling\_general\]) constitute our *scaling hypothesis*. Recently (c)MPS methods have been developed that enable the study of physical systems directly in the thermodynamic limit [@PhysRevLett.98.070201; @PhysRevLett.107.070601; @2012arXiv1207.0691M; @2012arXiv1211.3935H; @2013PhRvB..88g5133H; @2008arXiv0804.2509M], using a translation invariant Ansatz; for MPS, $$\begin{aligned} & {\left| \Psi [ A ] \right>}= \\ \nonumber & \sum_{i_1=1}^d \sum_{i_2=1}^d \cdots \sum_{i_N=1}^d v_L^\dagger A^{i_1}_1 A^{i_2}_2 \cdots A^{i_N}_N v_R {\left| i_1, i_2, \cdots, i_N \right>} \end{aligned}$$ ($N\rightarrow \infty$) with $A$ position independent, and for cMPS $$\begin{aligned} \label{eq:ucMPS2} & {\left| \Psi [ Q(x), R_{\alpha}(x) ] \right>} = \\ \nonumber & v_L^\dagger \mathcal{P} \mathrm{exp} \left[ \int_{- \frac{L}{2}}^{\frac{L}{2}} dx \ \left( Q(x) \otimes \eye + \sum_\alpha R_{\alpha} \otimes \hatpsidag_\alpha (x) \right) \right] v_R {\left| \Omega \right>} \ ,\end{aligned}$$ ($L\rightarrow \infty$) with $R$ and $Q$ position independent. 
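For a translation invariant MPS as above, the induced length scale can be read off from the spectrum of the transfer matrix. The following minimal sketch (a random tensor with invented dimensions, not data from the paper) extracts this scale:

```python
import numpy as np

rng = np.random.default_rng(7)
d, D = 2, 8                              # physical and bond dimension (illustrative)
A = rng.standard_normal((d, D, D))       # random MPS tensor: one D x D matrix per physical index

# Transfer matrix T = sum_i A^i (x) conj(A^i), acting on the D^2-dimensional ancilla space.
T = sum(np.kron(A[i], A[i].conj()) for i in range(d))

# Normalise the spectrum so the leading eigenvalue maps to zero on a log scale;
# the subleading one then defines a finite correlation length, mu_2 = -1/lambda_2.
ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]   # eigenvalue magnitudes, descending
lam = np.log(ev / ev[0])                           # generator-like spectrum, lam[0] = 0
mu2 = -1.0 / lam[1]
print(mu2)   # finite correlation length induced by the finite bond dimension
```

A generic finite-$D$ tensor always yields a finite `mu2`, which is the mechanism by which the (c)MPS approximation gaps the critical theory.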
The long distance behaviour of correlation functions with respect to a (c)MPS is governed by the second largest eigenvalue $\lambda_2$ of the transfer matrix $T$ \[defined in Eq.  for cMPS and in Eq.  for MPS\]; the largest eigenvalue is required to be zero in order to ensure correct normalisation. The finite bond dimension $D$ thus introduces a finite correlation length: $$\begin{aligned} \label{eq:corr_length_def} \mu_2 (D) = - \frac{1}{\lambda_2(D)} \ ,\end{aligned}$$ which perturbs the state away from the critical point. It was demonstrated in Ref.  that the effective correlation length asymptotically scales as $\mu_2(D)\sim D^{\kappa}$, where $\kappa$ is a constant that depends only on the universality class. As the bond dimension bounds the maximal entanglement in the state, this kind of scaling is also referred to as *finite entanglement scaling* (FES). Once the exponent $\kappa$ has been determined, Ref.   outlines an approach, different from the one presented in this paper, for extracting critical exponents by performing a scaling analysis directly with respect to $D$. While the precise manner in which the perturbation due to finite bond dimension affects the CFT is not fully understood, these results constitute evidence that the scaling hypothesis (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]) holds for FES. Assuming the validity of this scaling relation, the exponent $\kappa$ was later determined as a function of the central charge $c$ of the CFT as [@2009PhRvL.102y5701P]: $$\begin{aligned} \label{eq:kappa_def} \kappa = \frac{6}{c \left( \sqrt{\frac{12}{c}} + 1 \right)} \ .\end{aligned}$$ In this paper we provide further evidence in favour of the FES hypothesis by observing the higher eigenvalues of the transfer matrix, which also induce a length scale $\mu_I(D) = -1/\Re(\lambda_I(D))$ for $I>2$. Our numerics reveal that ratios of the real parts of the low-lying eigenvalues of the transfer matrix $T$ are roughly constant.
This is demonstrated in Figure \[fig:Ising\_combined\] for the quantum Ising model at the critical point. The fact that all the eigenvalues of the transfer matrix obey the same scaling is a further hint that equations like the ones in (\[eq:cyl\_correlator\], \[eq:cyl\_entropy\]), which ultimately are consequences of the presence of a single scale, could also describe FES. The (one-parameter) scaling hypothesis (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]) would be violated if different eigenvalues of the transfer matrix scaled with different powers of $D$, thus producing several independent relevant infrared length scales. In order to attempt to understand this observation, let us imagine that the finite bond dimension induced scale has some geometric origin or interpretation. An initially tempting guess, which is ultimately too simplistic, might be to postulate that the (c)MPS transfer matrix represents the contraction of a section of a 2D tensor network encoding the partition function of a related classical model on an infinite strip (since the (c)MPS describes an infinite chain with finite width). This would mean that the (c)MPS transfer matrix is equivalent to the transfer matrix of the classical model along the infinite direction on the strip. For this geometry the ratios of the eigenvalues of the transfer matrix are known and independent of the scale, i.e. the width of the strip.[@cardy_operator_1986; @cardy_effect_1986] It is however not clear that the origin of the finite entanglement scale really is geometric, and our numerical results for the ratios of the eigenvalues of the transfer matrix do not reproduce the ones expected from the corresponding CFTs on the strip.
Nevertheless, the fact that the ratios converge to a well-defined scale-independent value is another piece of evidence that there should be a CFT interpretation of FES.[^4] ![image](Ising_ln_mu_vs_ln_D_ratios_combined.pdf){width="16cm"} This paper presents a scaling algorithm based on the FES hypothesis (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]). Unlike previous papers that use $D$ or $D^\kappa$ as scaling parameter, our approach directly uses the (c)MPS induced correlation length $\mu_2(D)$ as scaling parameter. There are several benefits to this approach. First, as $\mu_2(D)$ has the dimension of a length scale, it is the most natural parameter to be used in the scaling relations (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]). Secondly, even when the parameters of the Hamiltonian are slightly different from their critical values (*e.g.* because the precise location of the critical point is not exactly known), we can still argue that $\mu_2(D)$ is the only relevant length scale in the system. While the $D$-limited length scale $D^\kappa$ would compete with the physical correlation length $\xi$, resulting in a two-scale problem, we anticipate that the observed correlation length $\mu_2(D)$ automatically interpolates between these two length scales in such a way that the scaling relations (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]) continue to hold. Another significant problem with scaling using the bond dimension is that converging to an optimum ground state is computationally very expensive. Often one is much better off increasing $D$, even by a small amount, and doing a few iterations of TDVP (or iDMRG) rather than doing many iterations to reach the true optimum for smaller $D$. This provides another significant advantage to using the correlation length in practical calculations.
Finally, as is shown in Appendix \[app:kappa\], the scaling approach based on $\mu_2(D)$ as scaling parameter produces more accurate results for the critical exponents and central charge. Recipe for Finite Entanglement Scaling {#sec:D_scaling} ====================================== In this section we describe a finite entanglement scaling (FES) method for estimating critical exponents and the central charge of a conformally invariant theory. Critical Exponents ------------------ Two-point correlation functions in critical theories obey power-law decay at large distances, in contrast to the exponential falloff that occurs for gapped models. That is, in a CFT, the two-point correlation function of a primary operator $\hatO$ with itself behaves as $$\begin{aligned} \label{eq:power_law} \cG_{\hatO}(x) = {\left< 0 \right|} \hatO^{\dagger} (0) \hatO (x) {\left| 0 \right>} \propto x^{-2 \Delta_{\hatO}} \ \ \ , \ \ \ x \gg 0 \ ,\end{aligned}$$ where $\Delta_{\hatO}$ is the critical exponent corresponding to $\hatO$. We will not be considering correlation functions between different operators in this paper. The (c)MPS approximation of the CFT ground state at any finite bond dimension $D$ generates a gap, and the approximation of the two-point correlation function, $$\begin{aligned} \label{eq:power_law_cmps} G_{\hatO}(x) := (l | O^{\dagger} [ R, \conjR, Q, \conjQ ] e^{T x} O [ R, \conjR, Q, \conjQ] | r ) \ ,\end{aligned}$$ reproduces the power-law decay up to some distance generally shorter than, or at best of the order of, the correlation length (as defined in Eq.
), and decays exponentially beyond that (see Figure \[fig:LL\_psi\_disconn\_correlator\] in Appendix \[app:LL\_field\_field\]).[^5] The observation central to our algorithm for approximating critical exponents is a consequence of the scaling hypothesis (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]): At all scales $s$ large enough to eliminate short distance artefacts, $\log(G_{\hatO}(s \mu_2(D)))$ scales linearly with respect to $\log( \mu_2(D))$ with the constant of proportionality given by $-2 \Delta_\hatO$. Using this property, critical exponents can be estimated as follows: Using (c)MPS approximations for the critical ground state for a range of bond dimensions, estimates for $\Delta_{\hatO}$ at different scales $s$ are given by the slopes obtained from the linear interpolation of $\log(G_{\hatO}(s \mu_2(D)) )$ vs. $\log (s \mu_2(D))$. We scan over $s$ such that $s_0 < s < \infty$, where $s_0$ is large enough to wash out any short distance/cutoff effects. The final result for the exponent is obtained from the interpolation of $\log(G_{\hatO}(s \mu_2(D)) )$ vs. $\log (s \mu_2(D))$ at the scale $s$ at which the confidence interval for the slope is minimal. The error estimate for the exponent is given by the confidence interval. The confidence intervals for the slopes depend on the choice of the confidence level; in this paper we will calculate error estimates for the slopes using both $95 \%$ and $99.73\%$ confidence levels. It is not obvious that the scan over $s$ improves the accuracy of the estimates, over simply choosing some particular value, e.g. $s=1$ or considering the limit $s\rightarrow \infty$, but it turns out that this is numerically a worthwhile step.
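The fitting step can be illustrated on synthetic data (a toy correlator with an exponential cutoff at $\mu_2(D) \propto D^\kappa$; all numbers are invented for illustration). At fixed $s$ the cutoff contributes only a $D$-independent factor, so the log-log slope recovers $-2\Delta$:

```python
import numpy as np

Delta, kappa_eff, s = 0.25, 1.34, 0.5           # invented exponent, kappa, and scale
Ds = np.array([32, 40, 48, 56, 64], dtype=float)
mu2 = Ds ** kappa_eff                            # effective correlation lengths mu_2(D)

def G(x, mu):
    """Toy critical correlator: power law with an exponential cutoff at scale mu."""
    return x ** (-2 * Delta) * np.exp(-x / mu)

# Evaluate each correlator at the same scale s in units of its own mu_2(D),
# then fit log(G) against log(s mu_2) across bond dimensions.
x = s * mu2
slope, intercept = np.polyfit(np.log(x), np.log(G(x, mu2)), 1)
print(slope)   # ~ -0.5, i.e. -2*Delta: the cutoff only shifts the intercept by -s
```

In a real calculation $G$ would come from (c)MPS data and the fit would carry a confidence interval (e.g. via `scipy.stats.linregress`), which the algorithm minimises over $s$.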
Using the eigenvalue decomposition of $T$ and writing the distance $x$ in units of $\mu_2$ $$\begin{aligned} \label{eq:s_def2} x = s \mu_2(D) \ , \end{aligned}$$ (\[eq:power\_law\_cmps\]) can be re-expressed as: $$\begin{aligned} \label{eq:correlator_expanded} G_{\hatO}(s \mu_2(D)) = | ( l | O |r ) |^2 + \sum_{I=2}^{D^2} ( l | O^\dagger |r_I ) e^{-s \frac{\lambda_I}{\lambda_2} } (l_I | O |r) \ .\end{aligned}$$ Here $(l_I | $ and $|r_I)$ are the left and right eigenvectors corresponding to the eigenvalue $\lambda_I$; $( l_1 | \equiv (l |$ is the zero-eigenvalue eigenvector. We have suppressed the $D$ dependence of eigenvectors and eigenvalues on the right hand side. At $s=\infty$ only the dominant contribution to $G_{\hatO}(s \mu_2(D))$ survives. Let us suppose that the first non-zero contribution is for $I=a$; then in the limit of large $s$, scaling $ \log( \exp(-s \frac{\lambda_a}{\lambda_2}) ( l | O^\dagger |r_a ) (l_a | O |r) ) $ vs. $\log( \mu_2(D) )$ provides an estimate for the critical exponent. If the first non-zero contribution is for $a=2$, the prefactor is constant. If on the other hand it occurs at some $a >2$, it is still roughly constant, since the ratios of the eigenvalues converge (see Figure \[fig:Ising\_combined\]). However, since the low lying eigenvalues all scale in the same way, the FES approach described above is also valid with any low lying $\mu_a$ replacing $\mu_2$. It follows that dropping the prefactor in front of the dominant contribution is justified, i.e. that simply scaling $\log ( ( l | O^\dagger |r_a ) (l_a | O |r) )$ vs. $\log( \mu_2(D) )$ should provide an estimate for the exponent.
This indeed turns out to be the case, as was observed by one of the authors of this paper.[@mcculloch_talks] However, for nearly all the calculations performed in this paper, estimates obtained at $s$ of the order of one, which contain all contributions from an arbitrarily large number of eigenvectors of the transfer matrix, are superior to the fits at $s= \infty$. The remaining problem at this stage is how to determine $s_0$, or at least an upper bound for it. To address this problem, let us first consider estimates for the exponents obtained directly from the (c)MPS approximation to the correlator at one particular bond dimension: pick a distance $x_I < \mu_2$ at which the algebraic decay is well captured by the (c)MPS approximation, but which is still large enough to wash out any short distance/cutoff effects. Estimate $\Delta_{\hatO}$ from the slope of $\log(G_{\hatO}(x))$ vs. $\log(x)$ at $x_I$. The relation between the Direct and FES approaches is demonstrated in Figure \[fig:LL\_field\_plot3d\] for the ($\hatpsi-\hatpsidag$) correlator in the Lieb-Liniger model at $g_{\mathrm{eff}} = 1.348\ldots$. It is clear that an algorithm based on the Direct Approach alone is beset by serious obstacles, the most serious being that no general method exists to determine the window for $x_I$ inside which estimates are accurate. In addition, estimating the error in the estimates is not as straightforward as in the FES scalings. One could attempt to overcome these problems by working with a set of bond dimensions and choosing the critical exponent estimate corresponding to the scale at which the spread in estimates is minimal. Unfortunately it turns out that the minimal spread often occurs in regions where short distance effects are important, thus in general missing the true value of the exponent.
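The window problem of the Direct Approach is already visible in a toy model (illustrative numbers only): the local log-log slope of a cut-off power law is close to $-2\Delta$ only well below the correlation length, and there is no a priori criterion for where the window starts:

```python
import numpy as np

Delta, mu = 0.25, 100.0    # invented exponent and (c)MPS-induced correlation length

def G(x):
    """Toy correlator: algebraic decay x^(-2 Delta) cut off exponentially at mu."""
    return x ** (-2 * Delta) * np.exp(-x / mu)

def local_slope(x, h=1e-4):
    """Local slope of log(G) vs log(x): the Direct Approach estimate of -2 Delta at x."""
    return (np.log(G(x * (1 + h))) - np.log(G(x))) / np.log(1 + h)

# Analytically the slope is -2*Delta - x/mu, so it is reliable only for x << mu.
print(local_slope(5.0))      # close to -2*Delta = -0.5
print(local_slope(300.0))    # far from -0.5: the exponential falloff dominates
```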
![ []{data-label="fig:LL_field_plot3d"}](LL_field_plot3d.pdf){width="8cm"} We do proceed by working with a range of bond dimensions $\cD$, and apply the Direct Approach at each of these, but use this simply in order to get an upper bound on $s_0$ for the FES Approach. Above $s=1$ the critical exponent estimates in the Direct Approach will be completely off, as the algebraic falloff is no longer captured by the (c)MPS approximation to the correlator. The approximations will become more accurate at some distance below the correlation length, and will again become unreliable at short distances. Since the FES Approach remains accurate for $s>1$, an upper bound for $s_0$ in the FES approach is given by the maximum scale below $s=1$ at which the FES and Direct Approach method results intersect. A region in which the two approaches agree is expected to exist in general, since in some region below the correlation length $G_{\hatO}(x)$ will have converged to good accuracy for all bond dimensions used in the FES Approach (see e.g. the plot in Figure \[fig:LL\_psi\_disconn\_correlator\]). There are exceptions to this, that is, cases when no clear intersection exists. This can occur, for example, when the Direct Approach estimate only approaches the true value for all bond dimensions in $\cD$, but never reaches it, and then deviates wildly at very small $s$. In such cases we simply have to restrict our FES scan from $s=1$ to $s=\infty$, i.e. we work with $s_0 = 1$, which generally still brings about a large increase in accuracy over the estimate at $s=\infty$. For the field theory examples studied in this paper we always see a clear intersection, but this is not the case for all the operators in the Ising model example (see Table \[tab:ising\_results\]). We now have all the ingredients for a robust algorithm to calculate critical exponents: 1.
[For a set of bond dimensions $\cD$, apply the FES Approach scanning all scales from zero to infinity[^6], and store the critical exponent estimates at all scales. Having chosen an appropriate confidence level, the error bars are determined by the confidence interval for the slope. ]{} 2. [For all the bond dimensions in $\cD$ apply the Direct Approach, scanning over all distances from zero to infinity. At each bond dimension store the estimates for all $s$.]{} 3. [ Take $s_0$ to be the maximum scale at which the estimates from 1) and 2) agree. The final estimate of the critical exponent is given by the FES estimate with the smallest confidence interval for the slope in the range $s_0 \leq s < \infty$. ]{} Central Charge -------------- For a (c)MPS the density matrix corresponding to an interval of length $x$ is given by the $D^2 \times D^2$ matrix: $$\begin{aligned} \rho = (l^T)^{\frac{1}{2}} \otimes (r^T)^{\frac{1}{2}} (\widetilde{\exp (T x)})^{\frac{1}{2}} \ ,\end{aligned}$$ where $$\begin{aligned} \widetilde{\exp (T x)}_{ijkl} := \exp (T x)_{ikjl} \ .\end{aligned}$$ Here $l$ and $r$ are the left and right zero-eigenvalue eigenvectors of the transfer matrix $T$ reshaped into $D\times D$ matrices (see Appendix \[app:cMPS\_review\] for more details). The corresponding entanglement entropy is given by $$\begin{aligned} S = - \mathrm{tr} ( \rho \log (\rho )) = - \sum_i \lambda_i^2 \log ( \lambda_i^2) \ ,\end{aligned}$$ where $\lambda_i$ are the Schmidt coefficients corresponding to $\rho$. Following the discussion in the context of (\[eq:entropy\_scaling\_general\]), after choosing a scale $s$, the central charge can be estimated from the scaling of $S(D)$ of an interval $x(D) = s \mu_2 (D)$ vs. $\log( \mu_2 (D) )$. The error estimates are again given by the confidence interval for the slope, and depend on the choice of the confidence level.
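A minimal sketch of both steps, the entropy from a Schmidt spectrum and the subsequent fit of $c$, on synthetic data (all numbers invented; in practice the Schmidt coefficients and $\mu_2(D)$ come from the (c)MPS):

```python
import numpy as np

def entropy(schmidt):
    """Von Neumann entropy -sum_i lambda_i^2 log(lambda_i^2) of a normalised Schmidt spectrum."""
    p = np.asarray(schmidt) ** 2
    assert abs(p.sum() - 1.0) < 1e-12, "Schmidt coefficients must be normalised"
    return float(-np.sum(p * np.log(p)))

# A flat spectrum of D values gives the maximal entropy log(D):
print(entropy(np.full(4, 0.5)))              # log(4) ~ 1.386

# Central-charge fit: synthetic interval entropies at fixed s, S = (c/3) log(mu_2) + const.
c_true, const = 1.0, 0.4
mu2 = np.array([30.0, 60.0, 120.0, 240.0])   # invented mu_2(D) values
S = c_true / 3 * np.log(mu2) + const

slope, _ = np.polyfit(np.log(mu2), S, 1)
print(3 * slope)                              # estimate of c; 1.0 here by construction
```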
Since $S$ is obtained numerically from the (c)MPS data, a different estimate for $c$ is in general obtained at each scale $s$. For the examples studied in this paper we observe, by comparing to exact results, that the linearity of the scalings based on the interval entanglement entropy improves down to some scale $s_{\mathrm{opt}} < 1$, below which it becomes inaccurate due to short-distance/cutoff effects. When determining critical exponents we encountered a similar problem of having to determine an optimum scale, and made use of estimates obtained directly from the (c)MPS approximation of the two-point correlation function at some fixed bond dimension in order to give an upper bound for the optimum scale and to ensure that we do not pick a scale that is too small. An analogous approach is also possible for the calculation of the central charge. Unfortunately the computational cost of calculating the entanglement entropy of a finite interval is $\cO (D^6)$, so scanning over $s$ becomes a lot more expensive than for the critical exponent calculations, where the computational cost is only $\cO (D^3)$. We have not found it feasible to implement such an algorithm for the models considered in this paper. In addition, unlike for critical exponent estimates where the increase in accuracy over $s = \infty$ is already significant for $s$ close to $1$, the analogous gain in accuracy for estimates of central charge turns out to be very poor (in particular this means that scaling using intervals at the value $s=1$, where we need not worry about short distance effects, gives virtually no improvement in accuracy). For these reasons, instead of working with the entanglement entropy of an interval, as given by equation (\[eq:entropy\_vs\_logmu\]), we will consider a bi-partition of a finite system and the entanglement entropy of the half-system $\cA$.
In the limit in which the length $x_{\cA}$ of $\cA$ grows to infinity, the entropy of the half-system grows as: $$\begin{aligned} \label{eq:entropy_vs_logmu_open} S = \frac{c}{6} \log (x_{\cA}) + k \ .\end{aligned}$$ The simplest approach to calculating the central charge is indeed by using the half-infinite line entanglement entropy rather than the entropy of an interval, since the density matrix of a half-infinite line (chain) in the (c)MPS approximation is only $D \times D$ dimensional: $$\begin{aligned} \rho = (l^T)^{\frac{1}{2}} r^{\frac{1}{2}} \ .\end{aligned}$$ One can easily check that the contributions to the interval entanglement entropy due to non-zero eigenvalue eigenvectors of $T$ vanish as the interval is taken to infinity, and that the interval and half-infinite line estimates for $c$ become equal in the limit $s \rightarrow \infty$. We have also examined the possibility of exploiting the conjectured relation between $D$ and $\mu$ [@PhysRevLett.107.070601; @2008PhRvB..78b4410T; @2008PhRvA..78c2329C; @PhysRevLett.56.742], namely that: $$\begin{aligned} \label{eq:Dscaling} \mu_2 \propto D^{\kappa} \ ,\end{aligned}$$ with $\kappa$ analytically determined as a function of $c$ in Eq. . Using this relation the central charge can be estimated from the slope of $\log(\mu_2 (D))$ vs. $\log(D)$. Another estimate for $c$ can be obtained by combining the half-infinite entropy with (\[eq:Dscaling\], \[eq:kappa\_def\]), giving: $$\begin{aligned} \label{eq:entropy_vs_D} S = \frac{1}{\sqrt{\frac{12}{c}} + 1} \log (D) + k\ , \end{aligned}$$ and $c$ can be estimated also from the scaling of $S(D)$ vs. $\log(D)$. Alternatively, we can keep $\kappa$ as a free parameter and work simply with: $$\begin{aligned} \label{eq:entropy_vs_D2} S = \frac{\kappa c}{6} \log (D) + k \ .\end{aligned}$$ That is, we still obtain $c$ from the scaling of $S(D)$ vs. $\log(D)$, but use the value for $\kappa$ obtained from the scaling of $\log(\mu_2 (D))$ vs.
$\log(D)$ instead of using (\[eq:kappa\_def\]). The interval entanglement entropy grows twice as quickly with $\log (D)$ compared to expressions (\[eq:entropy\_vs\_D\], \[eq:entropy\_vs\_D2\]). In this paper we obtain estimates for $c$ for the three aforementioned models using the half-infinite line entropies. For the two field theories we also obtain estimates based on interval entropies at $s=0.1$, in order to demonstrate that an increase in accuracy is obtained by going to finite $s$, albeit a modest one. We observe significant deviations from the predicted value for $\kappa$ (\[eq:kappa\_def\]) for all three models, and estimates for $c$ that depend on this relation turn out to be inaccurate. Central charge estimates obtained from scalings with respect to $\mu_2(D)$ are presented in the next section, and those obtained from scalings with respect to $D$ directly are presented in Appendix \[app:kappa\]. Exemplary Models {#section:examples} ================ In this section we consider three exemplary critical models in order to demonstrate the FES approach to calculating the central charge and critical exponents described in the previous section. A cMPS version of the algorithm is applied to the Lieb-Liniger model [@PhysRev.130.1605; @PhysRev.130.1616], which describes an interacting non-relativistic one dimensional Bose gas, and also to the relativistic massless boson in $(1+1)$ dimensions. The MPS version is applied to the one-dimensional quantum Ising model at the critical point. The scaling calculations for all three models are performed using all bond dimensions in the range $ 32 \leq D \leq 64$. The (c)MPS approximations of the ground state are obtained using the time dependent variational principle [@2013PhRvB..88g5133H] combined with a conjugate gradient method [@2013PhRvD..88h5030M]; for the MPS case an equally efficient option is to use the infinite-size variant of the standard DMRG algorithm (iDMRG).
[@2008arXiv0804.2509M] Lieb-Liniger Model {#subsection:LL} ------------------ The Lieb-Liniger model describes bosons on a line interacting via a contact potential. The Hamiltonian is given by: $$\begin{aligned} \hatH = \int_{-\infty}^{\infty} dx \left[ \frac{d}{dx} \hatpsi^\dagger \frac{d}{dx} \hatpsi + v \ \hatpsidag \hatpsi + g \ \hatpsidag \hatpsidag \hatpsi \hatpsi \right] \ , \label{eq:LL_hamiltonian}\end{aligned}$$ and the theory is critical for the whole range of parameters $g >0$, $v < 0$. Although the Hamiltonian depends on two parameters, the effective parameter space is one-dimensional: the only relevant parameter is the effective interaction strength $g_{\mathrm{eff}} := g / \rho^2$, where $\rho$ is the particle density, and $g_{\mathrm{eff}}$ can be adjusted by either changing the chemical potential $v$ or the interaction strength $g$. The central charge of the Lieb-Liniger model is known to be $c=1$. In this section we consider the ground state of the Hamiltonian (\[eq:LL\_hamiltonian\]) with $v=1, g=1$, which corresponds to $g_{\mathrm{eff}} = 1.348...$. We observe that the low lying eigenvalues of the transfer matrix of the Lieb-Liniger model all scale in the same manner (see discussion at the beginning of Section \[sec:D\_scaling\]). The situation is very similar to that depicted for the quantum Ising model in Figure \[fig:Ising\_combined\], except that the ratios converge to different values. Estimates for $\kappa$, as obtained from the scalings of $\log (\mu_I)$ vs. $\log(D)$ (see Eq. (\[eq:Dscaling\])), underestimate the predicted value (\[eq:kappa\_def\]) for all $I$. The value obtained from the scaling of $\log (\mu_2)$ vs. $\log(D)$ is given in Table \[tab:kappa\_results\] in Appendix \[app:kappa\]. We also obtain estimates for $c$ using scalings of $S$ vs. $\log(\mu_2)$, using both the entanglement entropy of the half-infinite line and that of finite intervals of length $0.1 \mu_2(D)$ (i.e. at $s=0.1$).
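Both types of estimate reduce to straight-line fits. A minimal sketch (with synthetic, exactly linear data standing in for actual (c)MPS output, and `np.polyfit` standing in for whichever regression with confidence intervals one prefers): $c$ is six times the slope of $S$ vs. $\log\mu_2$ (Eq. (\[eq:entropy\_vs\_logmu\_open\])), and $\kappa$ is the slope of $\log\mu_2$ vs. $\log D$ (Eq. (\[eq:Dscaling\])). The synthetic data below is generated with $c = 1$ and $\kappa = 1.344$, the value implied by Eq. (\[eq:entropy\_vs\_D\]) for $c = 1$.

```python
import numpy as np

def estimate_c_and_kappa(D, mu2, S_half):
    """c from the slope c/6 of S vs. log(mu2); kappa from the slope of
    log(mu2) vs. log(D)."""
    slope_S = np.polyfit(np.log(mu2), S_half, 1)[0]   # = c/6
    kappa = np.polyfit(np.log(D), np.log(mu2), 1)[0]  # = kappa
    return 6.0 * slope_S, kappa

# Synthetic, exactly linear data with c = 1 and kappa = 1.344
# (illustration only; real input would be (c)MPS data at each D).
D = np.arange(32, 65, dtype=float)
mu2 = 0.5 * D**1.344
S_half = (1.0 / 6.0) * np.log(mu2) + 0.7

c_est, kappa_est = estimate_c_and_kappa(D, mu2, S_half)
```

On exact data the fits recover the inputs; on real data the slopes carry the confidence intervals quoted in the tables below.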
We have not implemented a robust method for obtaining a lower bound for $s$, due to the high resources necessary for such a computation and the very modest gain in accuracy (see discussion in Section \[sec:D\_scaling\]). That is, we do not give any demonstration that the value $s=0.1$ is large enough so that cutoff effects are not present, independently of the fact that the known exact value $c=1$ is reproduced. The results for $s=0.1$ demonstrate at least that the accuracy *can* be improved over the scaling at $s=\infty$. There is an improvement already when picking the “safe” value $s=1$, but it turns out to be so small as to be negligible, at least for the range of bond dimensions we are using. Central charge estimates obtained using half-infinite line entropies are summarised in Table \[tab:c\_half\_line\_results\], and those obtained from entropies of intervals of length $0.1 \mu_2(D)$ in Table \[tab:c\_LL\_results\_interval\]. Critical exponent estimates have been obtained for a number of Lieb-Liniger operators and are listed in Table \[tab:LL\_results\]; various details pertaining to the particular operators are presented in the remainder of this subsection. As a guiding example for the method, the field-field exponent calculation is spelled out in full detail in Appendix \[app:LL\_field\_field\].

---------------- ------------------------------ ------------------------------ -------------------- --------------------------- ---------------------------
Model            Slope                          Slope                          Predicted            $c$ Estimate                $c$ Estimate
                 $99.73\%$ conf.                $95\%$ conf.                   Slope                $99.73\%$ conf.             $95\%$ conf.

Lieb-Liniger     $0.164^{+0.005}_{-0.005}$      $0.164^{+0.003}_{-0.003}$      $c/6 = 0.1666...$    $0.983^{+0.029}_{-0.030}$   $0.983^{+0.019}_{-0.019}$
Relativ. Boson   $0.171^{+0.004}_{-0.004}$      $0.1710^{+0.0022}_{-0.0022}$   $c/6 = 0.1666...$    $1.026^{+0.021}_{-0.022}$   $1.026^{+0.013}_{-0.013}$
Quantum Ising    $0.0826^{+0.0012}_{-0.0011}$   $0.0826^{+0.0007}_{-0.0007}$   $c/6 = 0.08333...$   $0.496^{+0.007}_{-0.007}$   $0.496^{+0.004}_{-0.004}$
---------------- ------------------------------ ------------------------------ -------------------- --------------------------- ---------------------------

---------------- ------------------------------ ------------------------------ ------------------ --------------------------- ---------------------------
Model            Slope                          Slope                          Predicted          $c$ Estimate                $c$ Estimate
                 $99.73\%$ conf.                $95\%$ conf.                   Slope              $99.73\%$ conf.             $95\%$ conf.

Lieb-Liniger     $0.331^{+0.004}_{-0.004}$      $0.3313^{+0.0026}_{-0.0027}$   $c/3 = 0.333...$   $0.994^{+0.013}_{-0.013}$   $0.994^{+0.008}_{-0.008}$
Relativ. Boson   $0.3365^{+0.0033}_{-0.0033}$   $0.3365^{+0.0020}_{-0.0021}$   $c/3 = 0.333...$   $1.010^{+0.010}_{-0.010}$   $1.010^{+0.006}_{-0.006}$
---------------- ------------------------------ ------------------------------ ------------------ --------------------------- ---------------------------

------------------------------ ---------------- -------------------------------- ------------------------------ ----------------
Operator                       Optimal          $2\Delta_{\hatO}$ at $99.73\%$   $2\Delta_{\hatO}$ at $95\%$    Exact result
                               scale            confidence                       confidence

$\hatpsi$                      $0.66 \mu_2$     $0.1667^{+0.0005}_{-0.0005}$     $0.1665^{+0.0003}_{-0.0003}$   $0.1668575...$
$\frac{d}{dx} \hatpsi$         $0.86 \mu_2$     $2.165^{+0.006}_{-0.005}$        $2.165^{+0.004}_{-0.003}$      $2.1668575...$
$\frac{d^2}{dx^2} \hatpsi$     $1.49 \mu_2$     $4.167^{+0.010}_{-0.010}$        $4.167^{+0.006}_{-0.007}$      $4.1668575...$
$\hatpsidag \hatpsi$           $0.965 \mu_2$    $2.001^{+0.009}_{-0.008}$        $2.001^{+0.005}_{-0.005}$      $2$
$\hatcH$                       $1.58 \mu_2$     $4.013^{+0.018}_{-0.019}$        $4.013^{+0.011}_{-0.013}$      $4$
------------------------------ ---------------- -------------------------------- ------------------------------ ----------------

**Field-field exponent ($\hatpsi-\hatpsidag$)** The field-field exponent can be calculated using the Bethe Ansatz to arbitrary precision.[@Korepin:1997kx] The general result reads: $$\begin{aligned} \label{eq:field-field} \langle \hatpsi(x,t) \hatpsidag (0,0) \rangle \approx A | x + iv t |^{-\frac{1}{2 \calZ^2}} \ ,\end{aligned}$$ where $\calZ$ is given by $$\begin{aligned} \calZ(k) \equiv 2 \pi \rho(k) \label{eq:bethe_Z}\end{aligned}$$ evaluated at the Fermi boundary of the quasi-momenta; $\rho$ is the density of quasi-momenta. For $g=1, v=1 \leftrightarrow g_{\mathrm{eff}} = 1.3478... $, the critical exponent is given by: $$\begin{aligned} \frac{1}{2 \calZ^2} = 2 \Delta_{\hatpsi} = 0.1668575... \ .\end{aligned}$$ We consider the correlator (\[eq:field-field\]) at equal times, and restrict to $x>0$, so that the cMPS approximation is given by: $$\begin{aligned} \langle \hatpsi(x,0) \hatpsidag (0,0) \rangle \approx ( l | ( 1 \otimes \conjR ) e^{Tx} ( R \otimes 1) |r) \ .\end{aligned}$$ The $U(1)$ symmetry of the exact Lieb-Liniger ground state is broken by the cMPS approximation; the expectation value of the field, $$\begin{aligned} \langle \hatpsi \rangle \approx (l | R \otimes 1 | r) \neq 0 \ ,\end{aligned}$$ scales to zero as $D$ is increased, and the state approaches the true Lieb-Liniger vacuum; convergence is, however, very slow. In fact, the scaling of $\log( | (l | R \otimes 1 | r) |^2 )$ vs.
$\log(\mu_2(D))$ yields a (sub-optimal) approximation for the critical exponent of $\hatpsi$, and corresponds to the dominant contribution to the scaling as $s\rightarrow \infty$. In Figure \[fig:LL\_psi\_disconn\_correlator\] (Appendix \[app:LL\_field\_field\]) one can see that, with the disconnected part included, power-law behaviour is immediately evident for distances smaller than the correlation length, even for low bond dimension. This is not the case if the disconnected part is omitted. **Descendants of $\hatpsi / \hatpsidag$** We examine the class of descendants of $\hatpsi$ at level $l$ obtained by taking the $l$-th derivative of $\hatpsi$ (\[eq:psi\_sq\_cMPS\], \[eq:psi\_cubed\_cMPS\]). While no exact Bethe Ansatz results are available for comparison, it follows from standard CFT arguments [@citeulike:1280772] that the exact exponent is simply $\Delta_{\frac{d^l}{ dx^l} \hatpsi} = \Delta_{\hatpsi} + l$, which is confirmed for the first two levels to good accuracy (see Table \[tab:LL\_results\]). **Density-density exponent ($\hatpsidag \hatpsi - \hatpsidag \hatpsi$)** The Bethe Ansatz result for the density-density correlator is: $$\begin{aligned} \label{eq:density-density} & \langle \hatpsidag \hatpsi (x,t) \hatpsidag \hatpsi (0,0) \rangle = \langle \hatpsidag \hatpsi \rangle^2 \\ \nonumber & + \frac{A}{ (x + iv t )^{2}} + \frac{A}{ (x - iv t )^{2}} + A_3 \frac{ \cos (2 k_F x) }{| x + iv t| ^{2 \calZ^2}} \ ,\end{aligned}$$ where $A$ and $A_3$ are constants. Since $\calZ$ (\[eq:bethe\_Z\]) is bounded from below by $1$ [@Korepin:1997kx], the first two terms dominate at large distances, so $\Delta_{\hatpsidag \hatpsi } = 1$. This is reproduced by our scaling calculations (see Table \[tab:LL\_results\]). Unlike for the field-field correlator, here the disconnected part is non-zero in the exact ground state, so it needs to be subtracted out in the scaling calculation.
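The role of the disconnected part can be made concrete numerically. The sketch below uses random $Q$ and $R$ (so it is not a ground state and shows no power law); it only illustrates how the expression $(l|(1\otimes \conjR)\, e^{Tx}\, (R\otimes 1)|r)$ is evaluated: at $x=0$ it reduces to $(l|R\otimes \conjR|r)$, while for $x$ much larger than the correlation length it saturates at the disconnected value $(l|1\otimes \conjR|r)\,(l|R\otimes 1|r) = \langle\hatpsidag\rangle\langle\hatpsi\rangle$.

```python
import numpy as np
from scipy.linalg import eig, expm

rng = np.random.default_rng(0)
D = 4
I = np.eye(D)
Q = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
R = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))

def transfer(Q, R):
    # T = Q x 1 + 1 x conj(Q) + R x conj(R)
    return np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

# Normalise: shift Q so that the leading eigenvalue of T becomes zero.
lam = max(np.linalg.eigvals(transfer(Q, R)), key=lambda z: z.real)
Q = Q - (lam.real / 2) * I
T = transfer(Q, R)

# Leading left/right eigenvectors (l| and |r), normalised to (l|r) = 1.
w, vl, vr = eig(T, left=True)
i = int(np.argmax(w.real))
l, r = vl[:, i].conj(), vr[:, i]
r = r / (l @ r)

def corr(x):
    """(l| (1 x conj(R)) e^{Tx} (R x 1) |r)"""
    return l @ np.kron(I, R.conj()) @ expm(T * x) @ np.kron(R, I) @ r

disconnected = (l @ np.kron(I, R.conj()) @ r) * (l @ np.kron(R, I) @ r)
gap = -np.sort(w.real)[-2]   # decay rate of the slowest subleading mode
x_large = 40.0 / gap         # ~40 e-foldings beyond the correlation length
```

At large $x$ the transfer-matrix exponential projects onto $|r)(l|$, which is exactly why the disconnected piece dominates the $s\to\infty$ scaling.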
**$\hatcH-\hatcH$ exponent** The Hamiltonian density $\hat{\cH}$ is obtained from the time-time component of the energy-momentum tensor, which is a descendant of the unit operator. For reasons equivalent to those given for the Hamiltonian density of the relativistic massless boson in the next section, $\Delta_{\hatcH} = 2$, which our scaling calculation confirms (Table \[tab:LL\_results\]). Massless Relativistic Boson {#subsection:KG} --------------------------- Let us start from the massive relativistic boson (Klein-Gordon) Hamiltonian in (1+1) dimensions: $$\begin{aligned} \label{eq:KG} \hatH_{\mathrm{KG}} = \frac{1}{2} \int_{-\infty}^{\infty} dx \left[ \hatpi^2 + \left( \frac{d}{dx} \hatphi \right)^2 + m^2 \hatphi^2 \right] \ .\end{aligned}$$ For $m=0$ we obtain a conformally invariant theory with central charge $c=1$. The field operators $\hatphi$ and $\hatpi$ can be written in terms of the cMPS Fock space operators $\hatpsi$ and $\hatpsidag$ as: $$\begin{aligned} \hatphi = \frac{1}{\sqrt{2 \nu } } ( \hatpsi + \hatpsidag ) \ \ \ , \ \ \ \hatpi = -\frac{i}{2} \sqrt{2 \nu} ( \hatpsi - \hatpsidag ) \ ,\end{aligned}$$ where an arbitrary scale $\nu$ is introduced. The Hamiltonian (\[eq:KG\]) diverges in the cMPS setting and needs to be regularised. Surprisingly, one way to do this is by requiring the second derivative of $\hatpsi$ to be continuous. It is, however, difficult to impose such a constraint, and in any event this approach is too restrictive for our purposes since we actually want to work with operators that contain second order derivative terms. A better solution is to consider the counterterm: $$\begin{aligned} \label{eq:counterterm} \frac{1}{\nu^2 } \left( \frac{d \hatpi}{dx} \right)^2 \ ,\end{aligned}$$ which removes all divergences and serves as a momentum cutoff.
The resulting Hamiltonian has the form: $$\begin{aligned} \label{eq:reg_KG} \hatH = \int_{-\infty}^{\infty} dx \left[ \frac{d}{dx} \hatpsidag \frac{d}{dx} \hatpsi + v \hatpsidag \hatpsi + u ( \hatpsi \hatpsi + \hatpsidag\hatpsidag) \right] \end{aligned}$$ with: $$\begin{aligned} v = \frac{m^2 + \nu^2}{2 } \ \ \ , \ \ \ u = \frac{m^2 - \nu^2}{4 } \ . \label{eq:uu_parameters}\end{aligned}$$ Results presented in this section are obtained using the values $u=-5$, $v=10$. Estimates for $\kappa$, together with the related estimates for $c$, are given in Table \[tab:kappa\_results\]. Estimates for the central charge obtained using the half-infinite line entropies, and from entropies of subsystems of length $0.1 \mu_2(D)$, are summarised in Tables \[tab:c\_half\_line\_results\] and \[tab:c\_LL\_results\_interval\] respectively. The comments made in the context of the Lieb-Liniger model regarding the accuracy of the value for $\kappa$ as predicted by (\[eq:kappa\_def\]), and the scaling of the transfer matrix eigenvalues, apply here as well. Critical exponent estimates are listed in Tables \[tab:uu\_results\] and \[tab:vertex\_results\]; the latter lists estimates for vertex operator $: \exp(i \beta \hatphi): $ exponents, for a range of values for the free parameter $\beta$. 
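The parameter choice just quoted can be checked against Eq. (\[eq:uu\_parameters\]): inverting the two relations gives $m^2 = v + 2u$ and $\nu^2 = v - 2u$, so $u=-5$, $v=10$ sits exactly at the massless (critical) point $m=0$, with cutoff $\nu^2 = 20$. As a one-line sanity check:

```python
# Invert v = (m^2 + nu^2)/2 and u = (m^2 - nu^2)/4 from Eq. (eq:uu_parameters).
def mass_and_cutoff_sq(u, v):
    m_sq = v + 2 * u    # m^2  = v + 2u
    nu_sq = v - 2 * u   # nu^2 = v - 2u
    return m_sq, nu_sq

m_sq, nu_sq = mass_and_cutoff_sq(u=-5.0, v=10.0)
# m_sq = 0.0 (massless, hence critical), nu_sq = 20.0
```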
-------------------------- ---------------- ---------------------------------- ---------------------------------- --------------
Operator                   Optimal          $2\Delta_{\hatO}$ at $99.73\%$     $2\Delta_{\hatO}$ at $95\%$        Exact result
                           scale            confidence                         confidence

$\partial_z \hatphi$       $0.25 \mu_2$     $2.00013^{+0.00028}_{-0.00027}$    $2.00013^{+0.00017}_{-0.00016}$    $2$
$\partial_z^2 \hatphi$     $1.63 \mu_2$     $3.992^{+0.008}_{-0.009}$          $3.992^{+0.006}_{-0.006}$          $4$
$\partial_z^3 \hatphi$     $3.96 \mu_2$     $6.007^{+0.005}_{-0.006}$          $6.001^{+0.003}_{-0.004}$          $6$
$\hatcH$                   $0.78 \mu_2$     $3.97^{+0.06}_{-0.07}$             $3.97^{+0.03}_{-0.05}$             $4$
-------------------------- ---------------- ---------------------------------- ---------------------------------- --------------

**$\partial_z \hatphi$ exponent** $\partial_z \hatphi$ is a $(1,0)$ primary field, so $\Delta = 1$. Our scaling calculation reproduces this to remarkable accuracy (see Table \[tab:uu\_results\]). The relevant expression in terms of cMPS creation and annihilation operators is obtained as follows. Performing a Wick rotation back to Minkowski space, we have $\partial_z = \frac{1}{2} ( \partial_x - \partial_t )$, so: $$\begin{aligned} \partial_z \hatphi = & \frac{1}{2} \left( \frac{d}{dx} \hatphi - \hatpi \right) \\ \nonumber = & \frac{1}{2} \left[ \frac{1}{\sqrt{2 \nu} } \frac{d}{dx}( \hatpsi + \hatpsidag) + \frac{i \sqrt{2 \nu}}{2} ( \hatpsi - \hatpsidag) \right] \ .
\end{aligned}$$ **Descendants of $\partial_z \hatphi$** In order to obtain the expression without time derivatives, which is necessary in order to write down the correlator in terms of cMPS data, we first start by expanding, $$\begin{aligned} \label{eq:dtdtphi} \partial_z \partial_z \hatphi = \frac{1}{4} \left( \frac{d^2}{dx^2} \hatphi - 2 \frac{d}{dx} \hatpi + \frac{d}{dt}\hatpi \right) \ ,\end{aligned}$$ and next use: $$\begin{aligned} \frac{d}{dt} \hatpi = \frac{\delta \hatH}{\delta \hatphi} = - \frac{d^2}{dx^2} \ \hatphi \ .\end{aligned}$$ The final result is simply: $$\begin{aligned} \partial_z \partial_z \hatphi = - \frac{1}{2} \frac{d}{dx} \hatpi \ .\end{aligned}$$ The time derivative of the canonical momentum in (\[eq:dtdtphi\]) precisely cancels the double spatial derivative of $\hatphi$. It should be noted that a $\delta$-function divergence occurs in cMPS expectation values when two operators containing second and higher order spatial derivatives coincide. This is not a problem in the present context since we are not interested in taking the limit in which two operators are at exactly the same position. Second order (and higher) spatial derivatives of $\hatpsi / \hatpsidag$ (\[eq:psi\_sq\_cMPS\], \[eq:psi\_cubed\_cMPS\]) are present in cMPS expressions when evaluating $(\partial_z)^n \hatphi$ for $n>2$. The above approach for eliminating time derivatives can be applied straightforwardly for an arbitrary number of $\partial_z$ derivatives. Each application of $\partial_z$ increases the value of the critical exponent by one. The numerical results for descendants up to the third level are displayed in Table \[tab:uu\_results\]. 
**Energy-momentum tensor and Hamiltonian density exponent** The operator product expansion for the energy-momentum tensor $\mathbf{\hatT}$ [@citeulike:1280772], $$\begin{aligned} \mathbf{\hatT}_{z z} = : \partial_z \hatphi \partial_z \hatphi : \ ,\end{aligned}$$ with itself is given by: $$\begin{aligned} \label{eq:TT_ope} \bold{\hatT}_{z z} (z) \bold{\hatT}_{z z} (0) = & \frac{c (\alpha ')^2}{2 z^4} - \frac{2 \alpha'}{z^2} \bold{\hatT}_{z z} (0) \\ \nonumber & - \frac{2 \alpha'}{z} : \partial_z^2 \hatphi \partial_z \hatphi (0) : \ .\end{aligned}$$ In our conventions $\alpha' = \frac{1}{2 \pi}$. The Hamiltonian density is simply the combination: $$\begin{aligned} \label{eq:H_in_terms_of_T} \bold{\hatT}_{z z} + \bold{\hatT}_{\overline{z} \overline{z}} = \hatcH \ ,\end{aligned}$$ The appropriate OPE follows straightforwardly from (\[eq:TT\_ope\]), since the OPE of mixed $zz$ and $\overline{z} \overline{z}$ terms vanishes. Furthermore, the second and third terms on the RHS in the OPE (\[eq:TT\_ope\]) drop out in the vacuum expectation value when considering only the connected component of the $\hatcH$ - $\hatcH$ correlator. In conclusion, only the first term in (\[eq:TT\_ope\]) survives in the vacuum expectation value, so $\Delta_{\hatcH} = 2$, which is reproduced by our numerics (see Table \[tab:uu\_results\]). **Vertex Operators** The free relativistic massless boson CFT has an infinite number of primary operators of the form $: \exp(i \beta \hatphi): $ (where $ : :$ denotes normal ordering), parameterised by a real coefficient $\beta$. The scaling exponent for each such operator is: $$\begin{aligned} 2 \Delta = \frac{\alpha ' \beta^2}{2} = \frac{\beta^2}{2 \pi} \ ,\end{aligned}$$ where the last equality assumes our conventions. 
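The “Exact result” column of Table \[tab:vertex\_results\] below is just this formula evaluated at the listed values of $\beta$; e.g. $2\Delta = 0.01/(2\pi) \approx 1.592\times 10^{-3}$ for $\beta = 0.1$. As a quick check:

```python
import math

def vertex_exponent(beta):
    """Exact exponent 2*Delta = beta^2 / (2*pi) of the vertex operator
    :exp(i beta phi):, in the conventions of the text (alpha' = 1/(2*pi))."""
    return beta**2 / (2 * math.pi)
```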
The cMPS approximation is given by: $$\begin{aligned} \langle 0 | : \exp(i \beta \hatphi): & \cdots | 0 \rangle \approx \\ \nonumber & (l | \exp \left( \frac{i \beta}{\sqrt{\nu}} (R \otimes 1 + 1 \otimes \overline{R}) \right) \cdots | r) \ ,\end{aligned}$$ where $\cdots$ denotes additional insertions. Critical exponent estimates for a range of values for $\beta$ are displayed in Table \[tab:vertex\_results\].

--------- -------------- ------------------------------------------------------------ ------------------------------------------------------------ ------------------------------------------
$\beta$   Optimal        $2\Delta_{\hatO}$ at $99.73\%$                                $2\Delta_{\hatO}$ at $95\%$                                   Exact result
          scale          confidence                                                    confidence

$0.1$     $0.81 \mu_2$   $(1.589^{+0.012}_{-0.012}) {\ensuremath{\times 10^{-3}}}$     $(1.589^{+0.008}_{-0.008}) {\ensuremath{\times 10^{-3}}}$     $1.592... {\ensuremath{\times 10^{-3}}}$
$0.2$     $0.83 \mu_2$   $(6.36^{+0.05}_{-0.05}) {\ensuremath{\times 10^{-3}}}$        $(6.36^{+0.03}_{-0.03}) {\ensuremath{\times 10^{-3}}}$        $6.366... {\ensuremath{\times 10^{-3}}}$
$0.4$     $0.90 \mu_2$   $(2.547^{+0.023}_{-0.023}) {\ensuremath{\times 10^{-2}}}$     $(2.547^{+0.014}_{-0.015}) {\ensuremath{\times 10^{-2}}}$     $2.546... {\ensuremath{\times 10^{-2}}}$
$0.6$     $0.98 \mu_2$   $(5.74^{+0.06}_{-0.06}) {\ensuremath{\times 10^{-2}}}$        $(5.74^{+0.04}_{-0.04}) {\ensuremath{\times 10^{-2}}}$        $5.792... {\ensuremath{\times 10^{-2}}}$
$1$       $1 \mu_2$      $0.1595^{+0.0016}_{-0.0017}$                                  $0.1595^{+0.0010}_{-0.0011}$                                  $0.1591...$
$2$       $1 \mu_2$      $0.637^{+0.007}_{-0.007}$                                     $0.637^{+0.005}_{-0.004}$                                    $0.6366...$
$3$       $1 \mu_2$      $1.433^{+0.022}_{-0.022}$                                     $1.433^{+0.014}_{-0.014}$                                    $1.432...$
--------- -------------- ------------------------------------------------------------ ------------------------------------------------------------ ------------------------------------------

Due to the presence of a finite cutoff $\nu$, the FES scaling algorithm eventually fails to reproduce the exponents as the value of $\beta$ is increased; indeed beyond $\beta \approx 3$ the estimates degenerate quickly. Quantum Ising Model ------------------- The Hamiltonian of the quantum Ising model in a transverse magnetic field on an infinite $1d$ chain is given by: $$\begin{aligned} \hatH = \sum_{i \in \mathbb{Z}} -J \hatsigma^x_i \hatsigma^x_{i+1} + h \hatsigma^z_i \ , \label{eq:ising_hamiltonian}\end{aligned}$$ where $\{ \hatsigma^x, \hatsigma^y, \hatsigma^z \}$ are the Pauli matrices, $J$ determines the coupling strength between nearest neighbour spins, and $h$ determines the strength of the magnetic field. The model is critical for $h/J = \pm 1$. The numerics in this section are performed using $J=-1$ and $h=1$, and a spline interpolation is used in order to obtain values for two-point correlation functions at arbitrary distances (see footnote on page ). The quantum Ising model can be mapped to a free fermion model and solved exactly; the CFT describing the theory at the critical points $h/J = \pm 1$ has central charge $c=1/2$. The low lying eigenvalues of the transfer matrix can be seen to all scale in the same way, their ratios converging to definite values as the bond dimension is increased. This is depicted in the plots in Figure \[fig:Ising\_combined\]. For a theoretical interpretation of this convergence see the discussion at the beginning of Section \[sec:D\_scaling\]. Estimates for the central charge are presented in Tables \[tab:c\_half\_line\_results\] and \[tab:c\_LL\_results\_interval\].
The estimate for $\kappa$ is given in Table \[tab:kappa\_results\] in Appendix \[app:kappa\], and the relevant comments made in the context of the Lieb-Liniger model apply here as well. Since the underlying CFT describing the critical quantum Ising model is minimal, it has a finite number of primary fields. [@citeulike:1280772] There are five in total: two correspond to local and three to non-local operators. The two local primaries are traditionally denoted as $\hatsigma$ and $\hatepsilon$, and using our conventions (\[eq:ising\_hamiltonian\]) they are given by: $$\begin{aligned} \hatsigma(i) = \hatsigma^x_i \ \ \ , \ \ \ \hatepsilon(i) = \hatsigma^x_i \hatsigma^x_{i+1} - \hatsigma^z_i \ .\end{aligned}$$ The three non-local primaries are denoted as $\hatmu$, $\hatpsi$, and $\hatpsibar$. $\hatmu$ is given by a half-infinite string consisting of $\hatsigma^z$-s up to (and including) position $i$, while $\hatpsi$ and $\hatpsibar$ have instead $\hatsigma^+ := \frac{1}{2} (\hatsigma^x + i \hatsigma^y)$ and $\hatsigma^- := \frac{1}{2} (\hatsigma^x - i \hatsigma^y)$ at position $i$. These strings modify the MPS transfer matrix but otherwise do not change our method for extracting the corresponding critical exponents. We also consider a class of descendant fields obtained by taking discrete derivatives of the local primaries; for example the first-level descendant of $\hatsigma$ is $d \hatsigma(i) := \hatsigma(i+1) - \hatsigma(i)$. The estimates for the critical exponents are displayed in Table \[tab:ising\_results\], which also contains the exact values. We note that for many operators there is no clear intersection between the Direct and FES Approaches (see Section \[sec:D\_scaling\]), so when this is the case we need to work with $s_0 =1$ in our algorithm, i.e. we perform the scan over scale from $s=1$ to $s=\infty$.
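How a local operator (or one site of a string) enters the MPS computation can be sketched at the level of the transfer matrix: an insertion of an on-site operator $\hatO$ replaces $E = \sum_i A^i \otimes \overline{A}^i$ by $E_{\hatO} = \sum_{ij} O_{ij}\, A^i \otimes \overline{A}^j$, and a $\hatsigma^z$-string such as the one defining $\hatmu$ simply uses $E_{\hatsigma^z}$ on every site of its support. A toy illustration with a random (generic, unnormalised) MPS tensor, using the standard environment formula $\langle\hatO\rangle = (l|E_{\hatO}|r)/\big(\lambda\,(l|r)\big)$:

```python
import numpy as np

rng = np.random.default_rng(3)
d, D = 2, 3                       # physical and bond dimensions
A = rng.normal(size=(d, D, D)) + 1j * rng.normal(size=(d, D, D))

def E_op(O):
    """Transfer matrix with an on-site operator O inserted."""
    return sum(O[i, j] * np.kron(A[i], A[j].conj())
               for i in range(d) for j in range(d))

E = E_op(np.eye(d))               # plain transfer matrix (O = identity)
sz = np.diag([1.0, -1.0])         # sigma^z

# Leading left/right eigenvectors play the role of the environments.
wr, vr = np.linalg.eig(E)
wl, vl = np.linalg.eig(E.T)
r = vr[:, np.argmax(np.abs(wr))]
l = vl[:, np.argmax(np.abs(wl))]

# <sigma^z> on a site deep inside the infinite chain:
sz_expectation = (l @ E_op(sz) @ r) / (l @ E @ r)
```

Since $\hatsigma^z$ is Hermitian and bounded, the resulting expectation value is real and lies in $[-1,1]$, which serves as a consistency check of the construction.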
------------------------ ---------------- ------------------------------ ------------------------------ --------------
Operator                 Optimal          $2\Delta_{\hatO}$ at $99.73\%$  $2\Delta_{\hatO}$ at $95\%$    Exact result
                         scale            confidence                      confidence

$\hatsigma$              $1 \mu_2$        $0.2492^{+0.0008}_{-0.0010}$    $0.2492^{+0.0005}_{-0.0006}$   $0.25$
$d \hatsigma$            $1.25 \mu_2$     $2.250^{+0.003}_{-0.004}$       $2.2497^{+0.0021}_{-0.0020}$   $2.25$
$d^2 \hatsigma$          $2.15 \mu_2$     $4.248^{+0.006}_{-0.006}$       $4.248^{+0.004}_{-0.004}$      $4.25$
$d^3 \hatsigma$          $3.2 \mu_2$      $6.249^{+0.008}_{-0.008}$       $6.249^{+0.005}_{-0.005}$      $6.25$
$\hatepsilon$            $4 \mu_2$        $1.996^{+0.005}_{-0.005}$       $1.996^{+0.003}_{-0.003}$      $2$
$d \hatepsilon$          $1.85 \mu_2$     $3.997^{+0.010}_{-0.010}$       $3.997^{+0.007}_{-0.007}$      $4$
$\hatmu$                 $\infty \mu_2$   $0.2508^{+0.0018}_{-0.0017}$    $0.2508^{+0.0011}_{-0.0010}$   $0.25$
$\hatpsi / \hatpsibar$   $1.95 \mu_2$     $0.9991^{+0.0013}_{-0.0013}$    $0.9991^{+0.0008}_{-0.0008}$   $1$
------------------------ ---------------- ------------------------------ ------------------------------ --------------

Conclusions {#sec:conclusions} =========== In this paper we have developed finite entanglement scaling (FES) methods, based on translation invariant (continuous) matrix product states in the thermodynamic limit, for calculating conformal field theory (CFT) data for critical theories, namely the central charge and critical exponents of both local and nonlocal scaling operators. The fact that for the three exemplary models our algorithm is capable of reproducing the exact results to excellent accuracy using only a modest range of bond dimensions provides strong support for the validity of the FES hypothesis (\[eq:entropy\_scaling\_general\], \[eq:2pcorr\_scaling\_general\]) presented in Section \[sec:scalinghypothesis\]. One of the new ingredients in our approach is to directly use the (c)MPS induced correlation length as scaling parameter, rather than the bond dimension or any function thereof.
This is essential to obtain the accuracy of the data reported in this paper. The calculation of operator product coefficients between primary fields has not been addressed in this paper. This involves a three-point function scaling calculation and will be addressed in a future publication. Together with the central charge and critical exponents of the primaries, the operator product coefficients constitute the data necessary to fully specify a general (i.e. non-minimal) CFT.[@citeulike:1280772] Crucial to the precision is the ability to optimise over the scale parameter $s$ at which critical exponents are calculated. This optimisation hinges on the fact that it is not only the first eigenvalue of the transfer matrix that scales with $D$ as $D^{\kappa}$, but all the other low-lying eigenvalues also follow the same scaling. This result has not been presented before and provides a further hint that there should exist a CFT interpretation for finite entanglement scaling, which, once fully understood, would provide access to the sub-leading corrections and possibly to a geometric interpretation of FES. The FES calculations have been performed for three exemplary models: two field theories, the (non-relativistic) Lieb-Liniger model and the massless relativistic boson, and the critical quantum Ising model in the lattice setting. The numerical accuracy of the results is comparable to that of MERA calculations.[@2008PhRvL.101k0501V; @2010PhRvB..82m2411E; @2011arXiv1109.5334E] The central advantage over MERA is the computational cost, which is much lower for comparable accuracy.
In addition, the continuous version of MPS can equally be applied to free and interacting field theories, while there is as yet no interacting version of the continuous MERA.[@2011arXiv1102.5524H] The central disadvantages include the fact that at present a geometric or a renormalisation group interpretation of the CFT perturbation caused by the finite bond dimension is lacking, and the related problem that we do not understand how the structure of the CFT is encoded in the (c)MPS data. What we mean by the latter is some mapping between the primary and descendant structure of the CFT and the eigen-decomposition of the (c)MPS transfer matrix; a practical benefit of such a mapping would be that we could simply work at the level of (c)MPS, without needing any additional information about the primary/descendant structure in terms of operators acting at the physical level. There has been some progress in the MPS context along these lines for the entanglement spectrum[@2013arXiv1303.0741L], albeit not in the thermodynamic limit. We are hoping to report on some new findings in this direction soon. In addition, it would also be interesting to check whether the finite entanglement scaling framework can be used for determining critical exponents of boundary CFTs corresponding to edges in the system, analogous to the MERA results presented in Ref. . Finally, let us turn to the issue of determining the critical point. The models studied in this paper either have an extended critical region, or have a critical point whose location is known exactly[^7]: the Lieb-Liniger model is critical for all choices of parameters in (\[eq:LL\_hamiltonian\]), the relativistic boson model (\[eq:reg\_KG\], \[eq:uu\_parameters\]) is critical for $m =0$, while the transverse quantum Ising model (\[eq:ising\_hamiltonian\]) is critical for $h/J = \pm 1$. When the values for the parameters at criticality are not known, one can try to obtain them from the (c)MPS simulation.
Let us illustrate this in the context of the quantum Ising model, by imagining that, having chosen e.g. $J=1$, we do not know that the critical point is at $h=1$. In order to obtain an estimate it is necessary to first scan over $h$ for a range of bond dimensions and search for the point at which the order parameter $\langle \hatsigma^x \rangle$ transitions from a finite value to zero. For finite bond dimension this happens at some point $h(D)>1$, and the exact critical point $h=1$ can be obtained by scaling to $D \rightarrow \infty$ [@2008PhRvB..78b4410T] (see the plot in Figure \[fig:Ising\_phase\_transition\]). This raises two questions. Firstly, one can wonder how sensitive the results are to the accuracy with which the exact critical point $h(D\to\infty)$ is obtained. Secondly, one can question whether it may be more natural to perform the scaling calculations using (c)MPS solutions obtained at the transition point $h(D)$ at each bond dimension $D$, rather than using the exact point $h(D\to\infty)$. We can answer the second question negatively. Both for the quantum Ising model and in a preliminary cMPS analysis of the $\phi^4$ model [@phi4_in_preparation], we have established that the FES scaling approach does not work (or needs to be altered) when using the (c)MPS transition points. To directly extract the scaling exponents of the primary operators, the Hamiltonian parameters have to be kept fixed. Regarding the first question, we anticipate that, by using the (c)MPS induced correlation length, the scaling hypotheses of Eqs.  and  continue to hold as long as the parameters of the Hamiltonian are sufficiently close to the critical point so that we are in the scaling regime. Even when the bond dimension grows sufficiently large so as to accurately reproduce the slightly off-critical ground state, this will only cause a saturation in $\mu_2(D)$ so that no new data points are obtained by further increasing $D$.
At this point, the scaling relation $\mu_2(D)\sim D^{\kappa}$ will break down, which is why the use of $\mu_2(D)$ as scaling parameter is to be preferred. ![ []{data-label="fig:Ising_phase_transition"}](Ising_phase_transition.pdf){width="8cm"} We would like to thank Marek Rams, Volkher Scholz, Henri Verschelde, and Valentin Zauner for helpful discussions. Review of Continuous Matrix Product States {#app:cMPS_review} ========================================== The variational set of matrix product states (MPS) is given by: $$\begin{aligned} & {\left| \Psi [ A ] \right>}= \\ \nonumber & \sum_{i_1=1}^d \sum_{i_2=1}^d \cdots \sum_{i_N=1}^d v_L^\dagger A^{i_1}_1 A^{i_2}_2 \cdots A^{i_N}_N v_R {\left| i_1, i_2, \cdots, i_N \right>} \ ,\end{aligned}$$ where $d$ is the number of physical (spin) degrees of freedom, and for every value of the index $i_a$, $A^{i_a}$ is a $D\times D$ matrix. In order to take the continuum limit we first promote the finite-dimensional Hilbert space at each lattice site to a full Fock space: $$\begin{aligned} & \hata_i {\left| \Omega \right>} = 0 \ \ \ , \ \ \ [ \hata_i, \hata_j ] = 0 = [ \hata^\dagger_i, \hata^\dagger_j ] \\ \nonumber & [ \hata_i, \hata_j^\dagger ] = \delta_{ij} \ .\end{aligned}$$ The continuum limit $\epsilon \rightarrow 0$ is taken as [@2012arXiv1211.3935H]: $$\begin{aligned} & {\left| \Psi [ A ] \right>}_\epsilon= \\ \nonumber & \sum_{i_1 \cdots i_N}^d \left( v_L^\dagger A^{i_1}_{\left(-\frac{N}{2} \right)} \cdots A^{i_N}_{\left( \frac{N}{2} \right)} v_R \right) (\psihdag_1)^{i_1} \cdots (\psihdag_N)^{i_N} {\left| \Omega \right>} \ ,\end{aligned}$$ where $$\begin{aligned} \psihdag_i = \frac{\hata_i^\dagger}{\sqrt{\epsilon}} \ \ \ \psih_i = \frac{\hata_i}{\sqrt{\epsilon}} \ \ \ , \ \ \ N = \frac{L}{\epsilon} \ .\end{aligned}$$ This limit can be taken consistently only if the infinite set of matrices $A^i$ depends on two matrices $R$ and $Q$ as: $$\begin{aligned} A^0 = \eye + Q \ \ \ , \ \ \ A^1 = \epsilon R \ \ \ , \ \ \ A^n = 
\epsilon^n \frac{R^n}{n!} \ .\end{aligned}$$ Promoting the above analysis to multiple particle species, the variational set of continuous matrix product states (cMPS) on a finite interval $[ -L/2, L/2 ]$ can be written as: $$\begin{aligned} \label{eq:cMPS} & {\left| \Psi [ Q(x), R_{\alpha}(x) ] \right>} = \\ \nonumber & v_L^\dagger \mathcal{P} \mathrm{exp} \left[ \int_{- \frac{L}{2}}^{\frac{L}{2}} dx \ \left(Q(x) \otimes \eye + \sum_\alpha R_{\alpha} \otimes \hatpsidag_\alpha (x) \right) \right] v_R {\left| \Omega \right>} \ .\end{aligned}$$ The $\alpha$ index runs over particle species, $\mathcal{P} \mathrm{exp}$ denotes the path ordered exponential, and $v_L$, $v_R$ determine the boundary conditions. If the particles are bosons $[\psi_\alpha(x) , \psidag_\beta (y) ] = \delta_{\alpha \beta} \, \delta (x -y) $, while for fermions $\{ \psi_\alpha(x) , \psidag_\beta (y) \} = \delta_{\alpha \beta} \, \delta (x -y) $. In this paper we are interested in translation invariant cMPS describing a single bosonic particle species in the thermodynamic limit, that is, the variational set: $$\begin{aligned} \label{eq:ucMPS} & {\left| \Psi [ Q, R ] \right>} \\ \nonumber & = v_L^\dagger \mathcal{P} \mathrm{exp} \left[ \int_{- \infty}^{\infty} dx \ \left( Q \otimes \eye + R \otimes \hatpsidag \right) \right] v_R {\left| \Omega \right>} \ ,\end{aligned}$$ with the matrices $R$ and $Q$ position independent. The transfer matrix is given by $$\begin{aligned} \label{eq:transfer_matrix} T = Q \otimes \eye + \eye \otimes \overline{Q} + R \otimes \overline{R} \ .\end{aligned}$$ Finite normalisation requires the largest eigenvalue of the transfer matrix to be zero, which can always be achieved by transforming $Q \rightarrow Q - (\lambda / 2) \eye$, where $\lambda$ is the initial largest non-zero eigenvalue of $T$. 
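The normalisation step can be illustrated numerically. The sketch below (Python/NumPy, with randomly chosen toy matrices $Q$ and $R$ rather than a variationally optimised cMPS) builds the transfer matrix and shifts $Q$ so that the leading real part of the spectrum of $T$ vanishes:

```python
import numpy as np

D = 4
rng = np.random.default_rng(0)
# Toy cMPS matrices (illustrative only; Q is shifted to make its spectrum stable).
Q = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D)) - 2 * D * np.eye(D)
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))

I = np.eye(D)

def transfer(Q, R):
    # T = Q (x) 1 + 1 (x) conj(Q) + R (x) conj(R)
    return np.kron(Q, I) + np.kron(I, Q.conj()) + np.kron(R, R.conj())

T = transfer(Q, R)
lam = np.linalg.eigvals(T).real.max()      # largest real part of the spectrum
Q_normalised = Q - 0.5 * lam * I           # Q -> Q - (lambda/2) * 1
residual = np.linalg.eigvals(transfer(Q_normalised, R)).real.max()
print(residual)                            # ≈ 0
```

Since $Q \otimes \eye + \eye \otimes \overline{Q}$ shifts every eigenvalue of $T$ by $-\operatorname{Re}\lambda$ under $Q \rightarrow Q - (\lambda/2)\eye$, the shifted transfer matrix has leading eigenvalue with vanishing real part, as required for a finite norm.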
In this paper we find it convenient to define the transfer matrix for MPS in the thermodynamic limit to be: $$\begin{aligned} T_{\mathrm{MPS}} = \log(E) \ , \label{eq:MPS_transfer_matrix}\end{aligned}$$ where $$\begin{aligned} E = \sum_i A^i \otimes \overline{A}^i \ .\end{aligned}$$ Usually $E$ itself is referred to as the transfer matrix in MPS literature, but as this is inconsistent with the cMPS conventions, we choose to define $T$ as in (\[eq:MPS\_transfer\_matrix\]) instead. Expectation values involving an insertion of a single operator involve only the left and right zero-eigenvalue eigenvectors $(l |$, and $|r)$. We normalise these so that the state has norm one: $$\begin{aligned} {\left< \Psi \vphantom{ \Psi} \right| \left. \Psi \vphantom{\Psi} \right>} = (l | r) = 1 \ .\end{aligned}$$ Expectation values of insertions of $\hatpsi$, $\hatpsidag$ have straightforward cMPS expressions, for example: $$\begin{aligned} & {\left< \Psi \right|} \hatpsi {\left| \Psi \right>} = ( l | R \otimes \eye |r ) = \mathrm{tr} ( l^{T} R r) \\ \nonumber & {\left< \Psi \right|} \hatpsidag {\left| \Psi \right>} = ( l | \eye \otimes \Rbar |r ) = \mathrm{tr} ( l^{T} r \Rdag ) \\ \nonumber & {\left< \Psi \right|} \hatpsidag \hatpsi {\left| \Psi \right>} = ( l | R \otimes \Rbar |r ) = \mathrm{tr} ( l^{T} R r \Rdag ) \\ \nonumber & {\left< \Psi \right|} \frac{d \hatpsi}{d x} {\left| \Psi \right>} = ( l | [Q, R ] \otimes \eye |r ) = \mathrm{tr} ( l^{T} [Q, R] r) \ .\end{aligned}$$ $l$ and $r$ in the rightmost expressions denote $D \times D$ matrices corresponding to the $D^2$ component co-vector $(l|$ and vector $|r)$. Working with the trace expressions rather than the tensor product ones is clearly computationally more efficient, as it involves manipulating $D \times D$ rather than $D^2 \times D^2$ matrices (computational cost $\cO (D^3)$ vs. $\cO (D^6)$ ). 
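The equivalence of the trace and tensor-product forms can be checked directly. In the sketch below $l$ and $r$ are stand-in positive matrices (in an actual calculation they would be the fixed points of $T$), normalised so that $(l|r)=\mathrm{tr}(l^T r)=1$:

```python
import numpy as np

D = 4
rng = np.random.default_rng(1)
R = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
# Toy positive-definite l and r (assumed stand-ins for the transfer-matrix
# fixed points), normalised so that (l|r) = tr(l^T r) = 1.
l = rng.standard_normal((D, D)); l = l @ l.T
r = rng.standard_normal((D, D)); r = r @ r.T
r /= np.trace(l.T @ r)

lv, rv = l.reshape(-1), r.reshape(-1)         # (l| and |r) as D^2-component vectors
dense = lv @ np.kron(R, R.conj()) @ rv        # (l| R (x) conj(R) |r): D^2 x D^2 objects
trace = np.trace(l.T @ R @ r @ R.conj().T)    # tr(l^T R r R^dag): only D x D objects
print(np.allclose(dense, trace))              # True
```

Both lines compute $\langle \Psi | \hatpsidag \hatpsi | \Psi \rangle$; the trace form never materialises a $D^2 \times D^2$ matrix, which is the origin of the $\cO(D^3)$ vs. $\cO(D^6)$ cost difference noted above.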
It is straightforward but tedious to calculate expressions involving higher derivatives of $\hatpsi$ [@2012arXiv1211.3935H], which we frequently require in this paper. In particular, the cMPS expressions are more complicated than the expression for $\frac{d \hatpsi}{d x} $ above suggests, and do not consist simply of a Kronecker product of a nested commutator with the identity operator in $D$ dimensions. For example: $$\begin{aligned} \label{eq:psi_sq_cMPS} & {\left< \Psi \right|} \frac{d^2 \hatpsi}{d x^2} {\left| \Psi \right>} = \\ \nonumber & \ \ \ \ \ ( l | \left( [Q, [Q, R ] ] \otimes \eye + [R, [Q, R] ] \otimes \overline {R}\right) |r ) \ ,\end{aligned}$$ and $$\begin{aligned} \label{eq:psi_cubed_cMPS} {\left< \Psi \right|} \frac{d^3 \hatpsi}{d x^3} {\left| \Psi \right>} = & ( l | \left( \vphan [Q, [Q, [Q, R ] ] ] \otimes \eye \right. \\ \nonumber & \ \ \ \ \ + 2 [R, [Q, [Q, R] ] ] \otimes \overline {R} \\ \nonumber & \ \ \ \ \ \left. + [R, [Q, R] ] \otimes [ \overline{Q}, \overline{R} ] \vphan \right) |r ) \ .\end{aligned}$$ Expectation values of operators at different spatial points separated by some finite distance $(x-y)$ involve the full transfer matrix. For example: $$\begin{aligned} & {\left< \Psi \right|} \hatpsidag(x) \hatpsi (y) {\left| \Psi \right>} = \\ \nonumber & \ \ \ \ \ ( l | ( \eye \otimes \Rbar ) \exp \left[ T (y-x) \right] (R \otimes \eye ) |r ) \ \ \ \ \ \ y > x \\ \nonumber & \ \ \ \ \ ( l | ( R \otimes \eye ) \exp \left[ T (x-y) \right] (\eye \otimes \Rbar ) |r ) \ \ \ \ \ \ x > y \ ,\end{aligned}$$ so unless $(x-y)$ is much larger than the correlation length, all the eigenvalues of the transfer matrix contribute. The above expressions can still be computed in $\cO (D^3)$, by exploiting the tensor product structure of the expressions to calculate the initial density matrix (e.g. 
$( l | ( \eye \otimes \Rbar ) $ for $y>x$ in the above example, which can be obtained at cost $\cO (D^3)$ ), and then using an ordinary differential equation solver to calculate the action of $\exp(T (y-x ) )$ on this co-vector. Details of the Lieb-Liniger Field-Field Exponent Calculation {#app:LL_field_field} ============================================================ In this section we describe the details of the finite entanglement scaling (FES) approach for calculating critical exponents, using the example of the field-field ($\hatpsi-\hatpsidag$) correlator in the Lieb-Liniger model. The algorithm, as described in Section \[sec:D\_scaling\], is to apply the FES Approach, aided by the Direct Approach; the role of the latter is simply to provide an estimate for the lower bound when scanning over scales in the FES Approach. ![[]{data-label="fig:LL_psi_disconn_correlator"}](LL_field_disconn_corr.pdf){width="8cm"} By comparing with the known exact value for the exponent, the left plot in Figure \[fig:LL\_field\_scaling\_direct\_combined\] demonstrates that FES gives good estimates for scales from infinity down to around $s=0.5$. Below this the linearity of the interpolation improves further, but the estimates are off due to short distance effects. The best estimate is roughly around $s=0.6$. ![ through $\log(s) = \log(0.66)$.[]{data-label="fig:LL_field_logcorr_vs_logpos_with_fit"}](LL_field_logcorr_vs_logpos_with_fit.pdf){width="8cm"} The problem is that we do not a priori know the value for $s$ below which short distance effects destroy the precision of the FES scaling. The simplest solution is to simply pick the safe value $s=1$, which in itself is not a bad option as it significantly improves the accuracy over that obtained at $s = \infty$. In order to do better than this, we combine the FES and Direct Approach. 
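The evolution step can be sketched as follows (a toy real-valued example with random matrices, not an optimised cMPS): the action of $\exp(T s)$ on the initial co-vector is obtained by integrating the linear initial-value problem $dv/ds = v\,T$, avoiding the $\cO(D^6)$ cost of forming the dense matrix exponential.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

D = 3
rng = np.random.default_rng(2)
R = rng.standard_normal((D, D))
Q = rng.standard_normal((D, D)) - 2 * D * np.eye(D)   # toy stable Q (assumption)
I = np.eye(D)
T = np.kron(Q, I) + np.kron(I, Q) + np.kron(R, R)     # real toy transfer matrix

l = rng.standard_normal(D * D)                        # toy left co-vector (l|
v0 = l @ np.kron(I, R)                                # (l|(1 (x) R); R real here
s = 0.3                                               # separation y - x

# Evolve (l|(1 (x) R) exp(T s) as dv/ds = v T; each step costs O(D^3) in practice
# when the matrix-vector product is evaluated via the tensor-product structure.
sol = solve_ivp(lambda t, v: v @ T, (0.0, s), v0, rtol=1e-10, atol=1e-12)
v_ode = sol.y[:, -1]

v_exact = v0 @ expm(T * s)                            # dense O(D^6) reference
print(np.allclose(v_ode, v_exact, rtol=1e-4, atol=1e-6))  # True
```

The resulting co-vector is then closed with $(R \otimes \eye)|r)$ to give the correlator value at separation $s$.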
![image](LL_field_D_scaling_direct_combined){width="16cm"} As discussed in Section \[sec:D\_scaling\], the Direct Approach on its own is not useful for obtaining good estimates for the exponents. As can be seen in Figure \[fig:LL\_psi\_disconn\_correlator\], power law decay for the cMPS approximation to the field-field two-point correlation function at some fixed bond dimension $D$ is captured well beyond some short distance at which non-universal effects are present, up to approximately the correlation length, beyond which exponential decay takes over. The left plot in Figure \[fig:LL\_field\_direct\_exponent\_combined\] explicitly demonstrates that estimates for $-2\Delta$, computed from the derivative of $\log(G(x))$ vs. $\log (x)$, are completely off at distances shorter than some cutoff, and also beyond the correlation length. The problem with the Direct Approach therefore lies both in the difficulty of determining the window in which estimates are reliable, and in the lack of any method to determine the error in the estimates. One could attempt to work around these obstacles by obtaining estimates using a range of bond dimensions, and scanning for the scale at which their spread is minimal. For the case at hand, using all bond dimensions $D$ between $32$ and $64$, we obtain the result shown in the right plot of Figure \[fig:LL\_field\_direct\_exponent\_combined\]. In this case the true value is actually captured by this method, but this turns out to be a lucky accident. The approach fails for most operators we considered in the paper. The overlay of the Direct Approach, using all bond dimensions $D$ between $32$ and $64$, and FES is displayed in the right plot of Figure \[fig:LL\_field\_scaling\_direct\_combined\] and demonstrates how our best estimate, $2\Delta = 0.1667^{+0.0005}_{-0.0005} $, at $99.73\%$ confidence level, is obtained (see Table \[tab:LL\_results\]). The region of overlap between the two below $s=1$ gives an upper bound for $s_0$, i.e. 
the scale we are able to scan down to without encountering short distance/cutoff effects. The best estimate is then determined to be at $s=0.66$; the interpolation at $s=0.66$ is depicted in Figure \[fig:LL\_field\_logcorr\_vs\_logpos\_with\_fit\]. It is instructive to think of the plots in Figures \[fig:LL\_psi\_disconn\_correlator\] and \[fig:LL\_field\_logcorr\_vs\_logpos\_with\_fit\] in terms of appropriate intersections of the two-dimensional surface displayed in Figure \[fig:LL\_field\_plot3d\]. ![image](LL_field_direct_exponent_combined.pdf){width="16cm"} Central charge estimates from $D$-scaling {#app:kappa} ==========================================

| Model | Slope ($99.73\%$ conf.) | Slope ($95\%$ conf.) | Predicted Slope | $c$ Estimate ($99.73\%$ conf.) | $c$ Estimate ($95\%$ conf.) |
|----------------|------------------------|---------------------------|-----------------------|---------------------------|---------------------------|
| Lieb-Liniger | $1.30^{+0.03}_{-0.03}$ | $1.295^{+0.020}_{-0.019}$ | $\kappa = 1.3441... $ | $1.06^{+0.04}_{-0.04}$ | $1.061^{+0.027}_{-0.025}$ |
| Relativ. Boson | $1.26^{+0.04}_{-0.04}$ | $1.256^{+0.023}_{-0.023}$ | $\kappa = 1.3441... $ | $1.12^{+0.06}_{-0.05}$ | $1.12^{+0.03}_{-0.03}$ |
| Quantum Ising | $1.91^{+0.05}_{-0.05}$ | $1.91^{+0.03}_{-0.03}$ | $\kappa = 2.0343... $ | $0.558^{+0.027}_{-0.026}$ | $0.558^{+0.016}_{-0.017}$ |

This Appendix presents estimates for the central charge $c$ for the Lieb-Liniger, massless relativistic boson, and critical quantum Ising model, obtained by scaling directly with respect to the bond dimension $D$. The exact central charge for the two field theories is $c=1$, and for the quantum Ising model $c=1/2$. The exponent $\kappa$ is determined from scaling $\log (\mu_2(D))$ vs. $\log(D)$, $\mu_2(D)\sim D^{\kappa}$ (see the discussion around Eq. 
(\[eq:kappa\_def\]) in Section \[sec:scalinghypothesis\]). A set of estimates for $c$ is then obtained using the analytic relation $\kappa = 6/\left( c \left( \sqrt{\frac{12}{c}} + 1 \right) \right)$ (Table \[tab:kappa\_results\]). Further estimates are obtained from scaling the entropy $S$ with $\log (D)$ (Tables \[tab:c\_half\_line\_results\_kappa\] and \[tab:c\_LL\_results\_interval\_kappa\]), both after making use of the $\kappa(c)$ relation, and also while keeping $\kappa$ as a free parameter (i.e. using the values obtained in Table \[tab:kappa\_results\]). The $\kappa(c)$ relation is expected to be only approximately true, and the inaccuracy of the results based on this relation demonstrates that it does not hold very accurately in the region of bond dimensions $32 \leq D \leq 64$ used for the scalings. The accuracy of the results obtained with $\kappa$ as a free parameter is much better, but the error bars are larger than those obtained when scaling w.r.t. $\mu_2(D)$, as in Section \[section:examples\] of this paper. Results in Tables \[tab:c\_half\_line\_results\_kappa\] and \[tab:c\_LL\_results\_interval\_kappa\] should be compared with results obtained by scaling $S$ directly with $\mu_2(D)$, as presented in Tables \[tab:c\_half\_line\_results\] and \[tab:c\_LL\_results\_interval\].

| Model | Slope ($99.73\%$ conf.) | Predicted Slope | $c$ Estimate using $\kappa(c)$ | $c$ Estimate with $\kappa$ a Free Parameter |
|----------------|---------------------------|------------------------------------------------------------------|------------------------|---------------------------|
| Lieb-Liniger | $0.212^{+0.008}_{-0.008}$ | $\frac{1}{\left( \sqrt{\frac{12}{c}} +1 \right)} = 0.22401...$ | $0.87^{+0.09}_{-0.08}$ | $0.98^{+0.04}_{-0.04}$ |
| Relativ. Boson | $0.215^{+0.005}_{-0.005}$ | $\frac{1}{\left( \sqrt{\frac{12}{c}} +1 \right)} = 0.22401...$ | $0.90^{+0.05}_{-0.05}$ | $1.024^{+0.024}_{-0.024}$ |
| Quantum Ising | $0.158^{+0.006}_{-0.006}$ | $\frac{1}{\left( \sqrt{\frac{12}{c}} +1 \right)} = 0.169521...$ | $0.42^{+0.04}_{-0.04}$ | $0.496^{+0.019}_{-0.018}$ |

| Model | Slope ($99.73\%$ conf.) | Predicted Slope | $c$ Estimate using $\kappa(c)$ | $c$ Estimate with $\kappa$ a Free Parameter |
|----------------|---------------------------|-----------------------------------------------------------------|------------------------|----------------------------|
| Lieb-Liniger | $0.423^{+0.012}_{-0.011}$ | $\frac{2}{\left( \sqrt{\frac{12}{c}} +1 \right)} = 0.448018...$ | $0.90^{+0.06}_{-0.06}$ | $0.976^{+0.025}_{-0.028}$ |
| Relativ. Boson | $0.423^{+0.010}_{-0.010}$ | $\frac{2}{\left( \sqrt{\frac{12}{c}} +1 \right)} = 0.448018...$ | $0.86^{+0.06}_{-0.05}$ | $1.007^{+0.024}_{-0.025}$ |

[^1]: While it is expected that the area law holds for gapped systems in more than one spatial dimension, there exists no proof to this effect. [^2]: Up to restrictions imposed by the choice of UV regulator. [^3]: Here $z : = x_0 + i x_1$, where $x_0$ and $x_1$ are the Euclidean space and time coordinates, and the infinite plane coordinates $z$ are related to coordinates on the cylinder as $z = e^{\frac{2 \pi w}{L}}$. The time direction on the cylinder corresponds to the radial direction on the infinite plane, with the origin mapping to the infinite past, while the angular direction on the infinite plane corresponds to moving along the finite direction of the cylinder. 
The current discussion contains a lot of standard vocabulary used when working with (1+1) CFTs. A reader unfamiliar with the subject can consult, for example, the standard reference [@citeulike:1280772]. [^4]: A more sophisticated CFT interpretation of FES would be in terms of a strip with a line of impurities bisecting it along the infinite dimension, where the impurity line is related to the gluing of the (c)MPS with its complex conjugate. At present we do not have a good enough understanding of the effect of such an impurity line to say whether or not this proposal is correct. [^5]: In this section we will use the conventions and language appropriate for continuous systems. Two-point correlators for lattice systems can clearly only be calculated with the distance between operator insertions a multiple of the lattice spacing. For the purposes of the scaling calculations we interpolate in order to obtain correlator values at arbitrary points. The dependence of the critical exponent estimates on the type of interpolation used (we have compared linear and spline interpolations) is negligible at the large distances at which the FES estimates are obtained. The reason for this is that a scaling calculation, together with the interpolation subroutine, is concerned with the logarithm of the correlator as a function of the logarithm of distance, so at scales much larger than the lattice spacing neighbouring points are very near to each other. Statements made in this section are therefore equally valid for lattice systems once the interpolation step is performed\[ftn:interpolation\] [^6]: In practice this means scanning from sufficiently close to zero that short distance effects are obvious, to far enough beyond the correlation length that only the dominant eigenvector contribution remains. 
[^7]: The $\phi^4$ model with an imaginary mass parameter is an interesting theory for which this is not the case (see [@2013PhRvD..88h5030M] for a MPS based study of critical regions in this model).
--- abstract: 'We discuss the dynamics of linear, scalar perturbations in an almost Friedmann-Robertson-Walker braneworld cosmology of Randall-Sundrum type II using the 1+3 covariant approach. We derive a complete set of frame-independent equations for the total matter variables, and a partial set of equations for the non-local variables which arise from the projection of the Weyl tensor in the bulk. The latter equations are incomplete since there is no propagation equation for the non-local anisotropic stress. We supplement the equations for the total matter variables with equations for the independent constituents in a cold dark matter cosmology, and provide solutions in the high and low-energy radiation-dominated phase under the assumption that the non-local anisotropic stress vanishes. These solutions reveal the existence of new modes arising from the two additional non-local degrees of freedom. Our solutions should prove useful in setting up initial conditions for numerical codes aimed at exploring the effect of braneworld corrections on the cosmic microwave background (CMB) power spectrum. As a first step in this direction, we derive the covariant form of the line of sight solution for the CMB temperature anisotropies in braneworld cosmologies, and discuss possible mechanisms by which braneworld effects may remain in the low-energy universe.' author: - Bernard Leong - Peter Dunsby - Anthony Challinor - Anthony Lasenby title: 1+3 covariant dynamics of scalar perturbations in braneworlds --- Introduction ============ It is understood that Einstein’s theory of general relativity is an effective theory in the low-energy limit of a more general theory. Recent developments in theoretical physics, particularly in string theory or M-theory, have led to the idea that gravity is a higher-dimensional theory which would become effectively four-dimensional at lower energies. 
Braneworlds, which were inspired by string and M-theory, provide simple, yet plausible, models of how the extra dimensions might affect the four-dimensional world we inhabit. There is the exciting possibility that these extra dimensions might reveal themselves through specific cosmological signatures that survive the transition to the low-energy universe. It has been suggested that in the context of braneworld models the fields that govern the basic interactions in the standard model of particle physics are confined to a 3-brane, while the gravitational field can propagate in $3+d$ dimensions (the [*bulk*]{}). It is not necessarily true that the extra dimensions are required to be small or even compact. It was shown recently by Randall and Sundrum [@randall], for the case $d=1$, that gravity could be localized to a single 3-brane even when the fifth dimension was infinite. As a result, the Newtonian potential is recovered on large scales, but with a leading-order correction on small scales: $$V(r) = -\frac{GM}{r} \left( 1 + \frac{2l^2}{3r^2} \right)\;,$$ where the 5-dimensional cosmological constant $\tilde{\Lambda} \propto -l^{-2}$. Thus, general relativity is recovered in 4 dimensions in the static weak-field limit, with a first-order correction which is believed to be constrained by sub-millimeter experiments at the TeV level [@randall; @maartens3]. The cosmic microwave background (CMB) currently occupies a central role in modern cosmology. It is the cleanest cosmological observable, providing us with a unique record of conditions along our past light cone back to the epoch of decoupling when the mean free path to Thomson scattering rose suddenly due to hydrogen recombination. Present (e.g. BOOMERANG [@boomerang1] and MAXIMA [@maxima1]) and future (e.g. 
MAP and PLANCK) data on the CMB anisotropies and large-scale structure provide extensive information on the spectrum and evolution of cosmological perturbations potentially allowing us to infer the spectrum of initial perturbations in the early universe and to determine the standard cosmological parameters to high accuracy. An obvious question to ask is whether there are any signatures of extra dimensions which could be imprinted on the cosmic microwave sky. The aim of this paper is to set up the evolution and constraint equations for perturbations in a cold dark matter (CDM) brane cosmology, presenting them in such a way that they can be readily compared with the standard four-dimensional results, and to provide approximate solutions in the high and low-energy universe under certain restrictions on how the bulk reacts on the brane. Our equations are clearly incomplete since they lack a propagation equation for the non-local anisotropic stress that arises from projecting the bulk Weyl tensor onto the brane, and our solutions are only valid under the neglect of this stress. However, our presentation is such that we can easily include effective four-dimensional propagation equations for the non-local stress should such equations arise from a study of the full bulk perturbations. The lack of a four-dimensional propagation equation for the non-local stress means that it is currently not possible to obtain general results for the anisotropy of the CMB in braneworld models. Such a calculation would require solving the full five-dimensional perturbation equations which is non-trivial since the equations can only be reduced to two-dimensional partial differential equations on Fourier transforming the 3-space dependence. 
Only qualitative results are currently known, obtained with either the standard metric-based (gauge-invariant) approach [@anchordoqui; @bridgman; @copeland; @deruelle; @gorbunov; @kodama; @kodama2; @koyama; @langlois; @langlois2; @langlois3; @langlois4; @mukohyama; @vandebruck; @vandebruck2; @vandebruck3], or with 1+3 covariant methods [@gordon; @maartens; @maartens2]. In order to make this paper self-contained, we begin by giving a brief overview of the 1+3 covariant approach to cosmology and define the key variables we use to characterize the perturbations in Sec. \[sec:cov\]. After a short review of how this approach can be used to describe the general dynamics of Randall-Sundrum braneworlds, in Sec. \[sec:equations\] we present a complete set of frame-independent, linear equations describing the evolution of the total matter variables and the non-local energy and momentum densities for scalar perturbations in an almost-FRW universe (with arbitrary spatial curvature). Many of these equations, which employ only covariantly defined, gauge-invariant variables, have simple Newtonian analogues [@ellis89], and their physical meanings are considerably more transparent than those that underlie more standard metric-based approaches. In Sec. \[sec:radiation\] we derive analytic solutions for the scalar modes in the high and low-energy radiation-dominated universe neglecting the non-local anisotropic stress. In principle, these solutions could be used in a phenomenological manner to generate more general solutions which include non-local anisotropic stress, using Green’s method and an ansatz for the non-local stress. Such a study will be presented in a subsequent paper [@leong2], along with a numerical calculation of the CMB power spectrum employing this phenomenology. As a first step, we derive the covariant line of sight solution for the temperature anisotropies from scalar modes in braneworld models in Sec. 
\[sec:aniso\] and present some comments on how higher dimensional effects may remain in the low-energy universe and thus imprint on the microwave sky. An appendix presents the scalar perturbation equations in the matter energy frame during radiation domination. 1+3 covariant decomposition {#sec:cov} =========================== Throughout this paper we adopt the metric signature $(-+++)$. Our conventions for the Riemann and Ricci tensor are fixed by $[\nabla_a, \nabla_b] U_c = R_{abc}{}^{d} U_d$ and $R_{ab} = R_{acb}{}^{c}$. Lowercase Latin indices $a\ldots b$ are used to denote the standard 4-dimensional (1+3) spacetime whereas uppercase $A\ldots B$ and the tilde of any physical quantity are used to denote 5-dimensional (1+4) spacetime (of the braneworld). Round (square) brackets denote symmetrization (antisymmetrization) on the enclosed indices. We use units with $\hbar=c=1$, so the 4-dimensional gravitational constant is related to the 4-dimensional Planck mass via $G=M_P{}^{-2}$. We start by choosing a 4-velocity $u^a$. This must be physically defined in such a way that if the universe is exactly FRW, the velocity reduces to that of the fundamental observers to ensure gauge-invariance of the approach. From the 4-velocity $u^a$, we construct a projection tensor $h_{ab}$ into the space perpendicular to $u^a$ (the instantaneous rest space of observers whose 4-velocity is $u^a$): $$\label{e:projection1} h_{ab} \equiv g_{ab} + u_a u_b\;,$$ where $g_{ab}$ is the metric of the spacetime. The operation of projecting a tensor fully with $h_{ab}$, symmetrizing, and removing the trace on every index (to return the projected-symmetric-trace-free or PSTF part) is denoted by angle brackets, i.e. $T_{\langle ab\ldots c\rangle}$. 
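The algebra of the projection tensor (\[e:projection1\]) can be verified numerically; the following sketch works in flat Minkowski space with a boosted observer (the velocity $v$ is an arbitrary illustrative choice):

```python
import numpy as np

# Minkowski metric, signature (-+++), as in the text.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# Normalised timelike 4-velocity u^a (boost along x; u_a u^a = -1).
v = 0.3
gamma = 1.0 / np.sqrt(1.0 - v * v)
u = np.array([gamma, gamma * v, 0.0, 0.0])

u_low = g @ u                                   # u_a = g_ab u^b
h = g + np.outer(u_low, u_low)                  # h_ab = g_ab + u_a u_b

h_mixed = np.linalg.inv(g) @ h                  # h^a_b
print(np.allclose(h_mixed @ h_mixed, h_mixed))  # projector: h^a_c h^c_b = h^a_b
print(np.allclose(h @ u, 0.0))                  # orthogonality: h_ab u^b = 0
print(np.isclose(np.trace(h_mixed), 3.0))       # rank 3: h projects onto the rest space
```

The three checks confirm that $h_{ab}$ is idempotent, annihilates $u^a$, and has rank 3, i.e. it projects onto the observer's instantaneous rest space.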
The symmetric tensor $h_{ab}$ is used to define a projected (spatial) covariant derivative $D^a$ which when acting on a tensor $T^{b\ldots c}{}_{d\ldots e}$ returns a tensor that is orthogonal to $u^a$ on every index, $$\label{e:projection2} D^a T^{b\ldots c}{}_{d\ldots e} \equiv h^{a}{}_{p} h^{b}{}_{r} \ldots h^{c}{}_{s} h^{t}{}_{d} \ldots h^{u}{}_{e} {\nabla}^p T^{r\ldots s}{}_{t\ldots u}\;,$$ where ${\nabla}^a$ denotes the usual covariant derivative. The covariant derivative of the 4-velocity can be decomposed as follows: $$\label{e:covariant1} {\nabla}_a u_b = \omega_{ab} + \sigma_{ab} + \frac{1}{3} \Theta h_{ab} - u_a A_b\;,$$ where $\omega_{ab}$ is the vorticity which satisfies $u^a \omega_{ab}=0$, $\sigma_{ab}=\sigma_{\langle ab \rangle}$ is the shear which is PSTF, $\Theta \equiv {\nabla}^{a} u_a = 3H$ measures the volume expansion rate (where $H$ is the local Hubble parameter), and $A_a \equiv u^b {\nabla}_b u_a$ is the acceleration. Gauge-invariant quantities can be constructed from scalar variables by taking their projected gradients. Such quantities vanish in the FRW limit by construction. The comoving fractional projected gradient of the density field $\rho^{(i)}$ of a species $i$ (for example, photons) is one important example of this construction: $$\label{e:sg1} \Delta_a^{(i)} \equiv \frac{a}{\rho^{(i)}} D_a \rho^{(i)} \;,$$ where $a$ is a locally defined scale factor satisfying $$\label{e:hubble} \dot{a} \equiv u^b {\nabla}_b a = Ha\;,$$ which is included to remove the effect of the expansion on the projected gradients. Another important vector variable is the comoving projected gradient of the expansion, $$\label{e:sg2} {\cal Z}_a \equiv a D_a \Theta\;,$$ which provides a measure of the inhomogeneity of the expansion. 
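The kinematic decomposition (\[e:covariant1\]) has a simple Newtonian analogue in which a $3\times 3$ velocity gradient splits into expansion, shear and vorticity (the acceleration term plays no role in this flat, rest-frame sketch; the gradient $B_{ij}$ is an arbitrary toy example):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3))           # toy velocity gradient d_i u_j

theta = np.trace(B)                       # expansion: Theta = div u
omega = 0.5 * (B - B.T)                   # vorticity: antisymmetric part
sigma = 0.5 * (B + B.T) - (theta / 3) * np.eye(3)   # shear: symmetric trace-free part

# The three irreducible pieces reassemble the full gradient exactly.
print(np.allclose(B, omega + sigma + (theta / 3) * np.eye(3)))  # True
print(np.isclose(np.trace(sigma), 0.0))                         # shear is trace-free
```

The relativistic version simply replaces $\delta_{ij}$ by $h_{ab}$ and adds the $-u_a A_b$ term to account for non-geodesic observers.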
The matter stress-energy tensor $T_{ab}$ can be decomposed irreducibly with respect to $u^a$ as follows: $$\label{e:emequation1} T_{ab} \equiv \rho u_a u_b + 2 u_{(a}q_{b)} + P h_{ab} + \pi_{ab}\;,$$ where $\rho \equiv T_{ab} u^a u^b$ is the energy density measured by an observer moving with 4-velocity $u^a$, $q_a \equiv - h^{b}{}_{a} T_{bc} u^c$ is the energy flux or momentum density (orthogonal to $u^a$), $P \equiv h_{ab} T^{ab}/3$ is the isotropic pressure, and the PSTF tensor $\pi_{ab} \equiv T_{\langle a b\rangle}$ is the anisotropic stress. The remaining gauge-invariant variables are formed from the Weyl tensor $C_{abcd}$ which vanishes in an exact FRW universe because these models are conformally flat. The ten degrees of freedom in the 4-dimensional Weyl tensor can be encoded in two PSTF tensors: the electric and magnetic parts defined respectively as $$\begin{aligned} E_{ab} &= C_{abcd} u^b u^d\;, \label{e:eweyl} \\ H_{ab} &= \frac{1}{2} C_{acst} u^{c} \eta^{st}{}_{bd} u^d\;, \label{e:bweyl}\end{aligned}$$ where $\eta_{abcd}$ is the 4-dimensional covariant permutation tensor. Field equations of the braneworld ================================= In a recent paper, Maartens [@maartens] introduced a formalism for describing the non-linear, intrinsic dynamics of the brane in Randall-Sundrum type II braneworld models in the form of bulk corrections to the 1+3 covariant propagation and constraint equations of general relativity. This approach is well suited to identifying the geometric and physical properties which determine homogeneity and anisotropy on the brane, and serves as a basis for developing a gauge-invariant description of cosmological perturbations in these models. An important distinction between braneworlds and general relativity is that the set of 1+3 dynamical equations does not close on the brane. 
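Returning to the irreducible decomposition (\[e:emequation1\]): it is invertible, in the sense that given any symmetric $T_{ab}$ and observer $u^a$ the pieces $\rho$, $q_a$, $P$, $\pi_{ab}$ can be projected out and reassemble $T_{ab}$ exactly. A rest-frame sketch with a random symmetric toy tensor:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])       # metric g_ab, signature (-+++)
g_inv = np.linalg.inv(g)
u = np.array([1.0, 0.0, 0.0, 0.0])       # rest-frame 4-velocity u^a
u_low = g @ u                             # u_a

rng = np.random.default_rng(4)
T = rng.standard_normal((4, 4)); T = 0.5 * (T + T.T)   # toy symmetric T_ab

h = g + np.outer(u_low, u_low)            # h_ab
h_mix = g_inv @ h                         # h^a_b

rho = u @ T @ u                           # rho  = T_ab u^a u^b
q = -h_mix.T @ (T @ u)                    # q_a  = -h^b_a T_bc u^c
P = np.trace(g_inv @ T @ g_inv @ h) / 3   # P    = h^ab T_ab / 3
pi = h_mix.T @ T @ h_mix - P * h          # pi_ab = h^c_a h^d_b T_cd - P h_ab

T_rec = (rho * np.outer(u_low, u_low) + np.outer(u_low, q) + np.outer(q, u_low)
         + P * h + pi)
print(np.allclose(T, T_rec))  # True
```

As expected, $q_a$ and $\pi_{ab}$ come out orthogonal to $u^a$, and $\pi_{ab}$ is trace-free with respect to $h^{ab}$.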
This is because there is no propagation equation for the non-local effective anisotropic stress that arises from projecting the bulk Weyl tensor onto the brane. The physical implication is that the initial value problem cannot be solved by brane-bound observers. The non-local Weyl variables enter crucially into the dynamics (for example, the Raychaudhuri equation) of the intrinsic geometry of the brane. Consequently, the existence of these non-local effects leads to the violation of several important results in theoretical cosmology, such as the connection between isotropy of the CMB and the Robertson-Walker geometry. The field equations induced on the brane are derived by Shiromizu et al [@shiromizu] using the Gauss-Codazzi equations, together with the Israel junction conditions and $Z_2$ symmetry. The standard Einstein equation is modified with new terms carrying the bulk effects on the brane: $$\label{e:einstein1} G_{ab} = - \Lambda g_{ab} + \kappa^2 T_{ab} + \tilde{\kappa}^4 {\cal S}_{ab} - {\cal E}_{ab}\;,$$ where $\kappa^2=8 \pi/M_p{}^2$. The energy scales are related to each other via $$\begin{aligned} \label{e:constant1} \lambda &= 6 \frac{\kappa^2}{\tilde{\kappa}^4}\;, \\ \label{e:constant} \Lambda & = \frac{1}{2} \tilde{\kappa}^2 \left(\tilde{\Lambda} + \frac{1}{6} \tilde{\kappa}^2 \lambda^2 \right)\;,\end{aligned}$$ where $\tilde{\Lambda}$ is the cosmological constant in the bulk and $\lambda$ is the tension of the brane. The bulk corrections to the Einstein equations on the brane are made up of two parts: (i) the matter fields which contribute local quadratic energy-momentum corrections via the symmetric tensor ${\cal S}_{ab}$; and (ii) the non-local effects from the free gravitational field in the bulk transmitted by the (symmetric) projection ${\cal E}_{ab}$ of the bulk Weyl tensor. 
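The relations (\[e:constant1\])–(\[e:constant\]) encode the Randall-Sundrum fine-tuning: choosing the bulk cosmological constant as $\tilde{\Lambda} = -\tilde{\kappa}^2 \lambda^2/6$ makes the effective 4-dimensional constant $\Lambda$ vanish. A minimal sketch with illustrative (non-physical) couplings:

```python
# Illustrative (non-physical) coupling values; names follow the text.
kappa2  = 1.0                          # 4D coupling  kappa^2
tkappa2 = 0.5                          # 5D coupling  tilde-kappa^2

lam = 6 * kappa2 / tkappa2**2          # brane tension: lambda = 6 kappa^2 / tilde-kappa^4

# Randall-Sundrum fine-tuning: pick the bulk constant so that Lambda = 0.
tLambda = -tkappa2 * lam**2 / 6        # tilde-Lambda = -tilde-kappa^2 lambda^2 / 6
Lambda = 0.5 * tkappa2 * (tLambda + tkappa2 * lam**2 / 6)
print(Lambda)  # 0.0
```

Any departure of $\tilde{\Lambda}$ from this fine-tuned value contributes a residual effective cosmological constant on the brane.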
The matter corrections are given by $$\label{e:emtensor2} {\cal S}_{ab} = \frac{1}{12} T_{c}{}^{c} T_{ab} - \frac{1}{4} T_{ac} T^{c}{}_b + \frac{1}{24} g_{ab} [3 T_{cd} T^{cd} - (T_{c}{}^{c})^2]\;.$$ We note that the local part of the bulk gravitational field is the five-dimensional Einstein tensor $\tilde{G}_{AB}$, which is determined by the bulk field equations. Consequently, ${\cal E}_{ab}$ transmits non-local gravitational degrees of freedom from the bulk to the brane that include tidal (or Coulomb), gravito-magnetic, and transverse traceless (gravitational wave) effects. The bulk corrections can all be consolidated into an effective total energy density, pressure, anisotropic stress and energy flux. The modified Einstein equations take the standard Einstein form with a re-defined energy-momentum tensor: $$\label{e:einstein2} G_{ab} = - \Lambda g_{ab} + \kappa^2 T^{\text{tot}}_{ab}\;,$$ where $$\label{e:emtensor3} T^{\text{tot}}_{ab} = T_{ab} + \frac{\tilde{\kappa}^4}{\kappa^2} {\cal S}_{ab} - \frac{1}{\kappa^2} {\cal E}_{ab}\;.$$ Decomposing ${\cal E}_{ab}$ irreducibly with respect to $u^a$ by analogy with Eq. (\[e:emequation1\]) [@gordon; @maartens; @maartens2], $${\cal E}_{ab} = - \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \left( {\cal U} u_a u_b + 2 u_{(a}{\cal Q}_{b)} + \frac{\cal U}{3} h_{ab} + {\cal P}_{ab} \right),$$ (the prefactor is included to make e.g. 
${\cal U}$ have dimensions of energy density), it follows that the total density, pressure, energy flux and anisotropic pressure are given as follows: $$\begin{aligned} \rho^{\text{tot}} &= \rho + \frac{\tilde{\kappa}^4}{\kappa^6} \left[\frac{\kappa^4}{24} (2 \rho^2 - 3 \pi^{ab} \pi_{ab}) + {\cal U} \right]\;, \\ \label{e:pressure1} P^{\text{tot}} &= P + \frac{\tilde{\kappa}^4}{\kappa^6} \left[\frac{\kappa^4}{24} \left(2 \rho^2 + 4 P \rho + \pi^{ab} \pi_{ab} - 4 q_{a} q^{a} \right) + \frac{1}{3} {\cal U} \right]\;, \\ \label{e:flux} q^{\text{tot}}_{a} &= q_a + \frac{\tilde{\kappa}^4}{\kappa^6} \left[\frac{\kappa^4}{24} (4 \rho q_a - 6\pi_{ab} q^{b}) + {\cal Q}_a \right]\;, \\ \label{e:pressure2} \pi^{\text{tot}}_{ab} &= \pi_{ab} + \frac{\tilde{\kappa}^4}{\kappa^6} \left[\frac{\kappa^4}{12} \left\{ -(\rho + 3P) \pi_{ab} -3 \pi_{c \langle a}\pi_{b\rangle}{}^{c} + 3q_{\langle a}q_{b\rangle} \right\} + {\cal P}_{ab} \right]\;.\end{aligned}$$ For the braneworld case, it is useful to introduce an additional dimensionless gradient which describes inhomogeneity in the non-local energy density ${\cal U}$: $$\label{e:nonlocal0} \Upsilon_a \equiv \frac{a}{\rho} D_a {\cal U}\;.$$ The Gauss-Codazzi scalar equation for the 3-curvature defined by ${\cal R}$ is given by $$\label{e:curvature1} {\cal R} = 2 \kappa^2 \rho + \frac{1}{6} \tilde{\kappa}^4 \rho^2 + 2 \left(\frac{\tilde{\kappa}}{\kappa} \right)^4 {\cal U} - \frac{2}{3} \Theta^2 + 2 \Lambda\;,$$ where $$\label{e:curvature2} {\cal R} \equiv ~^{(3)}R = h^{ab}~^{(3)}R_{ab}$$ with ${}^{(3)} R_{ab}$ the intrinsic curvature of the surfaces orthogonal to $u^a$[^1]. In FRW models the Gauss-Codazzi constraint reduces to the modified Friedmann equation $$\label{e:friedmann1} H^2 + \frac{K}{a^2} = \frac{1}{3}\kappa^2 \rho + \frac{1}{3} \Lambda + \frac{1}{36} \tilde{\kappa}^4 \rho^2 + \frac{1}{3}\left(\frac{\tilde{\kappa}} {\kappa}\right)^4 {\cal U},$$ where the 3-curvature scalar is ${\cal R}=6K/a^2$. 
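As a quick algebraic cross-check (an editorial sketch using `sympy`, not part of the original derivation), the quadratic density correction in the modified Friedmann equation (\[e:friedmann1\]) can be repackaged using Eq. (\[e:constant1\]), $\tilde{\kappa}^4 = 6\kappa^2/\lambda$, into the familiar factor $(1+\rho/2\lambda)$ multiplying the linear density term:

```python
import sympy as sp

kappa2, lam, rho, Lam, K, a, U = sp.symbols(
    'kappa^2 lambda rho Lambda K a U', positive=True)
tkappa4 = 6 * kappa2 / lam     # tilde-kappa^4, from lambda = 6 kappa^2 / tilde-kappa^4

# Modified Friedmann equation as written above (H^2 + K/a^2 moved to one side)
H2 = (sp.Rational(1, 3) * kappa2 * rho + sp.Rational(1, 3) * Lam
      + sp.Rational(1, 36) * tkappa4 * rho**2
      + sp.Rational(1, 3) * (tkappa4 / kappa2**2) * U - K / a**2)

# Same expression with the rho^2 term absorbed into a (1 + rho/2lambda) factor
H2_alt = (sp.Rational(1, 3) * kappa2 * rho * (1 + rho / (2 * lam))
          + sp.Rational(1, 3) * Lam
          + sp.Rational(1, 3) * (tkappa4 / kappa2**2) * U - K / a**2)

assert sp.simplify(H2 - H2_alt) == 0
```

In particular, in the high-energy regime $\rho \gg \lambda$ the expansion rate scales as $H \propto \rho$ rather than the general-relativistic $\rho^{1/2}$.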
In non-flat models ($K \neq 0$) ${\cal R}$ is not gauge-invariant since it does not vanish in the FRW limit. However, the comoving projected gradient $$\label{e:curvature3} \eta_b \equiv \frac{a}{2} D_b {\cal R}\;$$ is a gauge-invariant measure of inhomogeneity in the intrinsic three curvature of the hypersurfaces orthogonal to $u^a$. Linearised scalar perturbation equations for the total matter variables {#sec:equations} ======================================================================= Local and non-local conservation equations ------------------------------------------ Based on the form of the bulk energy-momentum tensor and $Z_2$ symmetry, the brane energy-momentum tensor is still covariantly conserved: $$\label{e:em1} {\nabla}^b T_{ab} = 0\;.$$ The contracted Bianchi identities on the brane ensure conservation of the total energy-momentum tensor, which combined with conservation of the matter tensor gives $$\label{e:bianchi1} {\nabla}^{a} {\cal E}_{ab} = \tilde{\kappa}^4 {\nabla}^{a} {\cal S}_{ab}\;.$$ The longitudinal part of ${\cal E}_{ab}$ is sourced by quadratic energy-momentum terms including spatial gradients and time derivatives. As a result any evolution and inhomogeneity in the matter fields would generate non-local Coulomb-like gravitational effects in the bulk which back react on the brane. The conservation equation (\[e:em1\]) implies evolution equations for the energy and momentum densities, and these are unchanged from their general relativistic form. To linear order in an almost-FRW brane cosmology we have $$\label{e:em2} \dot{\rho} + \Theta (\rho + P) + D^a q_a = 0 \;,$$ and $$\label{e:em3} \dot{q}_a + \frac{4}{3} \Theta q_{a} + (\rho +P )A_a + D_a P + D^b \pi_{ab} = 0\;.$$ The linearised propagation equations for ${\cal U}$ and ${\cal Q}$ follow from Eq. (\[e:bianchi1\]) (see Ref. 
[@maartens]): $$\label{e:nonlocal1} \dot{{\cal U}} + \frac{4}{3} \Theta {\cal U} + D^a {\cal Q}_a =0 \;,$$ and $$\label{e:nonlocal2} \dot{{\cal Q}_{a}} +\frac{4}{3} \Theta {\cal Q}_{a} + \frac{1}{3} D_a {\cal U} + D^{b} {\cal P}_{ab} + \frac{4}{3} {\cal U}A_a = \frac{\kappa^4}{12} (\rho + P) \left(-2 D_a \rho + 3 D^b \pi_{ab} + 2 \Theta q_a \right)\;.$$ Taking the projected derivative of Eq. (\[e:em2\]) we obtain the propagation equation for $\Delta_a$ at linear order: $$\rho \dot{\Delta}_a + (\rho +P)({\cal Z}_a + a \Theta A_a) + a D_a D^b q_b + a \Theta D_a P - \Theta P \Delta_a = 0.$$ From equation $\eqref{e:nonlocal1}$, we obtain the evolution equation of the spatial gradient of the non-local energy density: $$\dot{\Upsilon}_a = \left(\frac{P}{\rho} - \frac{1}{3} \right) \Theta \Upsilon_a - \frac{4}{3} \frac{{\cal U}}{\rho}({\cal Z}_a + a\Theta A_a) - \frac{a}{\rho} D_a D^b {\cal Q}_b.$$ From the propagation equations for ${\cal U}$ and ${\cal Q}$ it can be seen that the energy of the projected Weyl fluid is conserved while the momentum is not conserved; rather it is driven by the matter source terms on the right of Eq. (\[e:bianchi1\]). Note that no propagation equation for ${\cal P}_{ab}$ is implied so the set of equations will not close. Propagation and constraint equations ------------------------------------ In this section we give the linearised gravito-magnetic and gravito-electric propagation and constraint scalar equations on the brane, which follow from the Bianchi identities, and the equations for the kinematic variables $\sigma_{ab}$, and $\Theta$ and its gradient ${\cal Z}_a$ which follow from the Ricci identity. For scalar perturbations, the magnetic part of the Weyl tensor $H_{ab}$ and the vorticity tensor $\omega_{ab}$ vanish identically. The electric part of the Weyl tensor $E_{ab}$ and the shear $\sigma_{ab}$ need not vanish. The non-vanishing variables satisfy the following propagation and constraint equations on the brane: 1. 
Gravito-electric propagation: $$\label{e:propagation1} \begin{split} &\dot{E}_{ab} + \Theta E_{ab} + \frac{1}{2} \kappa^2 (\rho + P) \sigma_{ab} + \frac{1}{2} \kappa^2 D_{\langle a}q_{b \rangle} + \frac{1}{6} \kappa^2 \Theta \pi_{ab} + \frac{1}{2} \kappa^2 \dot{\pi}_{ab}\\ &= \frac{1}{72} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [\kappa^4 \{-6\rho (\rho + P) \sigma_{ab} + 3 (\dot{\rho} + 3\dot{P}) \pi_{ab} + 3 (\rho + 3 P) \dot{\pi}_{ab} \\ &- 6\rho D_{\langle a} q_{b\rangle} + \Theta [\rho + 3 P] \pi_{ab}\} - 48 {\cal U} \sigma_{ab} - 36 \dot{{\cal P}}_{ab} - 36 D_{\langle a}{\cal Q}_{b\rangle} - 12 \Theta {\cal P}_{ab}]\; ; \end{split}$$ 2. Shear propagation: $$\label{e:propagation2} \dot{\sigma}_{ab} + \frac{2}{3} \Theta \sigma_{ab} + E_{ab} - \frac{1}{2} \kappa^2 \pi_{ab} - D_{\langle a} A_{b\rangle} = \frac{1}{24} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \{ \kappa^4 [ - (\rho + 3P) \pi_{ab} ] + 12 {\cal P}_{ab}\}\; ;$$ 3. Shear constraint: $$\label{e:constraint1} D^{b} \sigma_{ab} - \frac{2}{3} D_{a} \Theta + \kappa^2 q_a = -\frac{1}{6} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 (\kappa^4 \rho q_a + 6 {\cal Q}_{a})\; ;$$ 4. Gravito-electric divergence: $$\label{e:constraint2} \begin{split} D^{b} E_{ab} + \frac{1}{2} \kappa^2 D^{b} \pi_{ab} - \frac{1}{3} \kappa^2 D_{a} \rho + \frac{1}{3} \kappa^2 \Theta q_a &= \frac{1}{48}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \bigg[\kappa^4 \left(- \frac{8}{3} \rho \Theta q_a + 2 (\rho + 3P)D^{b}\pi_{ab} + \frac{8}{3} \rho D_a \rho \right) \\ & \quad + 16 D_a {\cal U} - 16 \Theta {\cal Q}_a - 24 D^b {\cal P}_{ab} \bigg]\; ; \end{split}$$ 5. Modified Raychaudhuri equation: $$\label{e:raychaudhuri} \dot{\Theta}= -\frac{1}{3}\Theta^2 - \frac{1}{2}\kappa^2(\rho+3P) + \Lambda - \frac{1}{12}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [\kappa^4 \rho (2\rho+3P) + 12 \mathcal{U}] + D^a A_a\; ;$$ 6. Propagation equation for the comoving expansion gradient ${\cal Z}_a$ which follows from Eq. 
$\eqref{e:raychaudhuri}$: $$\label{e:propagation3} \dot{{\cal Z}}_a + \frac{2}{3} \Theta \mathcal{Z}_{a} - a \dot{\Theta} A_a + \frac{\kappa^2}{2} aD_a(\rho+3P) - aD_a D^b A_b = -\frac{1}{12} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \{ \kappa^4 aD_a[\rho(2\rho+3P)] + 12 a D_a {\cal U} \} \;.$$ The spatial gradient of the 3-curvature scalar is an auxiliary variable. It can be related to the other gauge-invariant variables using Eqs. $\eqref{e:curvature1}$ and $\eqref{e:curvature3}$: $$\label{e:curconstraint} \eta_a = \kappa^2 \rho \Delta_a + \frac{1}{6}\tilde{\kappa}^4 \rho^2 \Delta_a + \left(\frac{\tilde{\kappa}}{\kappa} \right)^4 \rho \Upsilon_a - \frac{2}{3} \Theta {\cal Z}_a\;.$$ Taking the time derivative of Eq. $\eqref{e:curconstraint}$, commuting the spatial and temporal derivatives, and then making use of Eqs. $\eqref{e:raychaudhuri}$ and $\eqref{e:propagation3}$, we obtain the evolution of the spatial gradient of the 3-curvature scalar: $$\label{e:evolution} \dot{\eta}_a + \frac{2}{3} \Theta \eta_a + \frac{1}{3} {\cal R} ({\cal Z}_a + a \Theta A_a) + \frac{2}{3}\Theta a D_a D^b A_b = -\left(\kappa^2 + \frac{1}{6} \tilde{\kappa}^4 \rho\right) a D_a D^b q_b - \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 a D_a D^b {\cal Q}_b.$$ In general relativity, propagating $\eta_a$ is a useful device to avoid numerical instability problems when solving for isocurvature modes in a zero acceleration frame (such as the rest-frame of the CDM) [@lewis]. Cosmological scalar perturbations in the braneworld {#sec:scalar} =================================================== The tensor-valued, partial differential equations presented in the earlier sections can be reduced to scalar-valued, ordinary differential equations by expanding in an appropriate complete set of eigentensors. For scalar perturbations all gauge-invariant tensors can be constructed from derivatives of scalar functions. 
It is thus natural to expand in STF tensors derived from the scalar eigenfunctions $Q^{(k)}$ of the projected Laplacian: $$\label{e:helmholtz} D^2 Q^{(k)} = - \frac{k^2}{a^2} Q^{(k)}\;,$$ satisfying $\dot{Q}^{(k)} = O(1)$[^2]. We adopt the following harmonic expansions of the gauge-invariant variables: $$\label{e:harmonics1} \begin{split} \Delta^{(i)}_a = \sum_k k \Delta_k^{(i)} Q^{(k)}_a\;, \quad&\quad {\cal Z}_a = \sum_k \frac{k^2}{a} {\cal Z}_k Q^{(k)}_a\;, \\ q^{(i)}_a = \rho^{(i)} \sum_k q_k^{(i)} Q_a^{(k)}\;, \quad&\quad \pi^{(i)}_{ab} = \rho^{(i)} \sum_k \pi_k^{(i)} Q_{ab}^{(k)}\;, \\ E_{ab} = \sum_k \frac{k^2}{a^2} \Phi_k Q_{ab}^{(k)}\;, \quad&\quad \sigma_{ab} = \sum_k \frac{k}{a} \sigma_k Q_{ab}^{(k)}\;, \\ v^{(i)}_{a} = \sum_k v^{(i)}_k Q_a^{(k)}\;, \quad&\quad A_a = \sum_k \frac{k}{a} A_k Q_a^{(k)}\;. \end{split}$$ Here $v^{(i)}_a$ is the 3-velocity of species $i$ relative to $u^a$; for the CDM model considered here we shall make use of $v^{(i)}_a$ for baryons $b$ and CDM $c$. For photons $\gamma$ and neutrinos $\nu$ we continue to work with the momentum densities which are related to the peculiar velocity of the energy frame for that species by e.g. $q^{(\gamma)}_a = (4/3)\rho^{(\gamma)} v^{(\gamma)}_a$ in linear theory. The scalar expansion coefficients, such as $\Delta_k^{(i)}$ are first-order gauge-invariant variables satisfying e.g. $D^a \Delta_k^{(i)} = O(2)$. 
Projected vectors $Q_a^{(k)}$ and STF tensors $Q_{ab}^{(k)}$ are defined by [@dunsby]: $$\label{e:scalarid1} \begin{split} Q_a^{(k)} &= - \frac{a}{k} D_a Q^{(k)}\;, \\ Q_{ab}^{(k)} &= \frac{a^2}{k^2} D_{\langle a} D_{b\rangle }Q^{(k)}\;, \end{split}$$ so that $$D^b Q_{ab}^{(k)} = \frac{2}{3} \left(\frac{k}{a} \right) \left(1 - \frac{3K}{k^2} \right) Q_a^{(k)}\;.$$ We expand the non-local perturbation variables in scalar harmonics in the following manner: $$\label{e:harmonics2} \begin{split} \Upsilon_a &= \sum_k k \Upsilon_k Q_a^{(k)}\;, \\ {\cal Q}_a &= \sum_k \rho {\cal Q}_k Q_a^{(k)}\;, \\ {\cal P}_{ab} &= \sum_k \rho {\cal P}_k Q_{ab}^{(k)}\;. \end{split}$$ In addition, we can expand the projected gradient of the 3-curvature term: $$\eta_a = \sum_k 2 \left(\frac{k^3}{a^2} \right) \left(1 - \frac{3K}{k^2} \right) \eta_k Q_{a}^{(k)} \;.$$ The form of this expansion is chosen so that if we adopt the energy frame (where $q_a=0$) the variable $\eta_k$ coincides with the curvature perturbation usually employed in gauge-invariant calculations. Scalar equations on the brane ----------------------------- It is now straightforward to expand the 1+3 covariant propagation and constraint equations in scalar harmonics. We shall consider the CDM model where the particle species are baryons (including electrons), which we model as an ideal fluid with pressure $p^{(b)}$ and peculiar velocity $v^{(b)}_a$, cold dark matter, which has vanishing pressure and peculiar velocity $v^{(c)}_a$, and photons and (massless) neutrinos which require a kinetic theory description. We neglect photon polarization, although this can easily be included in the 1+3 covariant framework [@challinor3]. Also, we assume that the entropy perturbations are negligible for the baryons, so that $D_a P^{(b)} = c_s^2 D_a \rho^{(b)}$ where $c_s^2$ is the adiabatic sound speed. A complete set of 1+3 perturbation equations for the general relativistic model can be found in [@challinor2]. 
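The divergence identity for $Q^{(k)}_{ab}$ quoted above can be verified directly in the simplest setting (an editorial `sympy` sketch: flat 3-space with $K = 0$ and scale factor $a = 1$, so the projected derivatives reduce to ordinary partial derivatives acting on a plane-wave harmonic):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
k1, k2, k3 = sp.symbols('k1 k2 k3', positive=True)
X = (x, y, z)
k = sp.sqrt(k1**2 + k2**2 + k3**2)

Q = sp.cos(k1*x + k2*y + k3*z)              # scalar harmonic: D^2 Q = -k^2 Q
lap = sum(sp.diff(Q, xi, 2) for xi in X)
assert sp.simplify(lap + k**2 * Q) == 0

Qa = [-sp.diff(Q, xi) / k for xi in X]      # Q_a with a = 1
Qab = [[sp.diff(Q, xi, xj) / k**2 - (lap / (3 * k**2) if i == j else 0)
        for j, xj in enumerate(X)] for i, xi in enumerate(X)]

# divergence identity with K = 0: D^b Q_ab = (2/3) k Q_a
for i in range(3):
    div = sum(sp.diff(Qab[i][j], X[j]) for j in range(3))
    assert sp.simplify(div - sp.Rational(2, 3) * k * Qa[i]) == 0
```

The $-3K/k^2$ correction in the general identity arises from curvature terms in the commutator of projected derivatives, which vanish in this flat test case.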
We extend these equations to braneworld models here. In the following, perturbations in the total matter variables are related to those in the individual components by $$\rho \Delta_k = \sum_i \rho^{(i)} \Delta^{(i)}_k, \quad \rho q_k = \sum_i \rho^{(i)} q^{(i)}_k, \quad \rho \pi_k = \sum_i \rho^{(i)} \pi^{(i)}_k,$$ where $q^{(b)}_k=(1+P^{(b)}/\rho^{(b)})v^{(b)}_k$, $q^{(c)}_k = v^{(c)}_k$, and $\pi^{(i)}_k$ vanishes for baryons and CDM. Similarly, the total density and pressure are obtained by summing over components, e.g. $P = \sum_i P^{(i)}$. It is also convenient to write $P=(\gamma-1)\rho$, but $\gamma$ should not be assumed constant (in space or time). We begin with the equation for the gravito-electric field: $$\label{e:propagation1a} \begin{split} &\left(\frac{k}{a}\right)^2 \left(\dot{\Phi}_k + \frac{1}{3} \Theta \Phi_k \right) + \frac{1}{2} \frac{k}{a} \kappa^2 \rho ({\gamma}\sigma_k - q_k) + \frac{1}{6} \kappa^2 \rho \Theta (1-3\gamma) \pi_k + \frac{1}{2} \kappa^2 \rho \dot{\pi}_k \\ &= \frac{1}{72} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \bigg\{ -\kappa^4 \bigg[6 \left(\frac{k}{a} \right) \rho^2 ({\gamma}\sigma_k -q_k)- 3(\dot{\rho} + 3\dot{P}) \rho \pi_k - 3 (3 {\gamma}-2) \rho (\rho \dot{\pi}_k + \dot{\rho} \pi_k) - (3{\gamma}-2) \rho^2 \Theta \pi_k \bigg] \\ & - 12 \left(\frac{k}{a} \right)(4 {\cal U} \sigma_k - 3 \rho \mathcal{Q}_k) - 36 (\dot{\rho} {\cal P}_k + \rho \dot{{\cal P}}_k) - 12 \rho \Theta {\cal P}_k \bigg\}\;. \end{split}$$ We have written this equation in such a form that every term is manifestly frame-independent. 
The shear propagation equation is $$\label{e:propagation2a} \frac{k}{a} \left(\dot{\sigma}_k + \frac{1}{3} \Theta \sigma_k \right) + \left(\frac{k}{a}\right)^2 (\Phi_k + A_k) - \frac{\kappa^2}{2} \rho \pi_k = \frac{1}{24}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [-(3 {\gamma}- 2) \kappa^4 \rho^2 \pi_k + 12 \rho {\cal P}_k]\;.$$ The shear constraint is given by $$\label{e:constraint1a} \kappa^2 \rho q_k - \frac{2}{3} \left(\frac{k}{a} \right)^2 \left[{\cal Z}_k - \left(1 - \frac{3K}{k^2} \right) \sigma_k \right] = - \frac{1}{6} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 (\kappa^4 \rho^2 q_k + 6\rho {\cal Q}_k).$$ The gravito-electric divergence is $$\label{e:constraint2a} \begin{split} & 2 \left(\frac{k}{a} \right)^3 \left(1 - \frac{3K}{k^2} \right) \Phi_k - \kappa^2 \rho \left(\frac{k}{a} \right) \left[\Delta_k - \left(1 - \frac{3K}{k^2} \right) \pi_k \right] + \kappa^2 \Theta \rho q_k \\ &= \frac{1}{16} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \bigg\{\kappa^4 \left[-\frac{8}{3} \rho^2 \Theta q_k + \frac{4}{3} (3{\gamma}-2) \rho^2 \left(1 - \frac{3K}{k^2}\right) \frac{k}{a} \pi_k + \frac{8}{3} \frac{k}{a} \rho^2 \Delta_k \right] \\ & + 16 \frac{k}{a} \rho \Upsilon_k - 16 \Theta \rho {\cal Q}_k - 16 \rho \left(\frac{k}{a} \right) \left(1 - \frac{3K}{k^2} \right) {\cal P}_k \bigg\} \;. 
\end{split}$$ The propagation equation for the comoving expansion gradient ${\cal Z}_a$ is given by $$\label{e:propagation3b} \begin{split} &\dot{{\cal Z}}_k + \frac{1}{3} \Theta \mathcal{Z}_k - \frac{a}{k} \dot{\Theta} A_k + \frac{k}{a} A_k + \frac{\kappa^2}{2} \frac{a}{k} \left[2 (\rho^{({\gamma})} \Delta^{({\gamma})}_k + \rho^{(\nu)} \Delta^{(\nu)}_k ) + (1 + 3 c_s^2) \rho^{(b)} \Delta^{(b)}_k + \rho^{(c)} \Delta^{(c)}_k \right] \\ &= -\frac{1}{12} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \frac{a}{k} \left\{\kappa^4[(2\rho + 3P)\rho\Delta_k + \rho(3\rho^{({\gamma})}\Delta^{({\gamma})}_k +3\rho^{(\nu)}\Delta^{(\nu)}_k + (2+3c_s^2)\rho^{(b)}\Delta_k^{(b)} + 2 \rho^{(c)} \Delta^{(c)}_k)] + 12 \rho \Upsilon_k \right\}\;. \end{split}$$ The non-local evolution equations for $\Upsilon_k$ and ${\cal Q}_k$ are $$\label{e:nonlocala} \dot{\Upsilon}_k = \frac{1}{3}(3 {\gamma}-4 ) \Theta \Upsilon_k - \frac{4}{3} \Theta \frac{{\cal U}}{\rho} A_k - \frac{4}{3} \frac{{\cal U}}{\rho} \frac{k}{a} {\cal Z}_k + \frac{k}{a} {\cal Q}_k ,$$ and $$\label{e:nonlocal2a} \dot{{\cal Q}}_k - \frac{1}{3}(3 {\gamma}- 4) \Theta {\cal Q}_k + \frac{1}{3} \frac{k}{a} \left[\Upsilon_k + 2 \left(1 - \frac{3 K}{k^2} \right) {\cal P}_k \right] + \frac{4}{3}\frac{k}{a} \frac{\mathcal{U}}{\rho} A_k = \frac{\kappa^4}{6} {\gamma}\rho \left\{ \Theta q_k + \frac{k}{a} \left[ \left(1 - \frac{3 K}{k^2} \right) \pi_k - \Delta_k \right] \right\}\;.$$ The spatial gradient of the 3-curvature scalar is $$\left(\frac{k}{a} \right)^2 \left(1 - \frac{3 K}{k^2} \right) \eta_k = \frac{\kappa^2 \rho}{2} \Delta_k + \frac{\tilde{\kappa}^4 \rho^2}{12} \Delta_k + \frac{1}{2} \left(\frac{\tilde{\kappa}}{\kappa} \right)^4 \rho \Upsilon_k - \frac{1}{3}\frac{k}{a} \Theta {\cal Z}_k,$$ and it evolves according to $$\frac{k}{a} \left(1 - \frac{3 K}{k^2} \right) \left(\dot{\eta}_k - \frac{1}{3} \Theta A_k \right) + \frac{K}{a^2} {\cal Z}_k - \frac{1}{2} \kappa^2 \rho q_k = \frac{1}{12} 
\left(\frac{\tilde{\kappa}}{\kappa}\right)^4(\kappa^4 \rho^2 q_k + 6 \rho {\cal Q}_k).$$ The evolution equations for the scalar harmonic components of the comoving, fractional density gradients for photons, neutrinos, baryons and cold dark matter (CDM) are $$\begin{aligned} \label{e:photons0} \dot{\Delta}_{k}^{({\gamma})} &= -\frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{({\gamma})} \right) - \frac{4}{3} \Theta A_k \quad \text{(photons)}\;, \\ \label{e:neutrinos0} \dot{\Delta}_{k}^{(\nu)} &= -\frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{(\nu)} \right) - \frac{4}{3} \Theta A_k \quad \text{(neutrinos)}\;, \\ \dot{\Delta}_{k}^{(b)} &= \left(1+ \frac{P^{(b)}}{\rho^{(b)}} \right) \label{e:baryons0} \left[-\frac{k}{a}({\cal Z}_k - v_k^{(b)}) - \Theta A_k \right] + \left(\frac{P^{(b)}}{\rho^{(b)}} - c_s^2 \right) \Theta \Delta_{k}^{(b)} \quad \text{(baryons)}\;, \\ \label{e:CDM0} \dot{\Delta}_{k}^{(c)} &= -\frac{k}{a} ({\cal Z}_k - v^{(c)}_{k}) - \Theta A_k \quad \text{(CDM)}\;.\end{aligned}$$ The evolution equations for the momentum densities and peculiar velocities are $$\begin{aligned} \dot{q}^{({\gamma})}_k & = -\frac{1}{3}\frac{k}{a}\left[\Delta^{({\gamma})}_k+4A_k +2\left(1-\frac{3K}{k^2}\right)\pi^{({\gamma})}_k\right] + n_e \sigma_T \left(\frac{4}{3} v^{(b)}_k - q^{({\gamma})}_k \right) \; ,\\ \dot{q}^{(\nu)}_k & = -\frac{1}{3}\frac{k}{a}\left[\Delta^{(\nu)}_k+4A_k +2\left(1-\frac{3K}{k^2}\right)\pi^{(\nu)}_k\right] \; ,\\ (\rho^{(b)}+P^{(b)}) \dot{v}^{(b)}_k &= - (\rho^{(b)}+P^{(b)})\left[\frac{1}{3}(1-3c_s^2)\Theta v^{(b)}_k + \frac{k}{a} A_k \right] - c_s^2 \frac{k}{a} \rho^{(b)} \Delta^{(b)}_k - n_e \sigma_T \rho^{({\gamma})} \left( \frac{4}{3}v^{(b)}_k - q^{({\gamma})}_k \right) \; , \\ \dot{v}^{(c)}_k & = - \frac{1}{3}\Theta v^{(c)}_k - \frac{k}{a}A_k \;, \label{eq:cdmvel}\end{aligned}$$ where the Thomson scattering terms involving the electron density $n_e$ and Thomson cross section $\sigma_T$ arise from the interaction between 
photons and the tightly-coupled baryon/electron fluid. The remaining equations are the propagation equations for the anisotropic stresses of photons and neutrinos, and the higher moments of their distribution functions. These equations can be found in [@challinor2], and with polarization included in [@challinor3], since they are unchanged from general relativity. However, we shall not require these additional equations at the level of approximation we make in our subsequent calculations. Perturbation dynamics in the CDM frame ====================================== In this section we specialize our equations to FRW backgrounds that are spatially flat[^3] and we ignore the effects of the cosmological constant in the early radiation-dominated universe. To solve the equations it is essential to make a choice of frame $u^a$. In Ref. [@challinor2] two of the present authors adopted a frame comoving with the CDM. Since the CDM is pressure free, this $u^a$ is geodesic ($A_a=0$) which simplifies the equations considerably. We shall adopt this frame choice here also, though we note it may be preferable to use a frame more closely tied to the dominant matter component over the epoch of interest. This can be easily accomplished by adopting the energy frame ($q_a=0$). For completeness, we give equations in the energy frame in the appendix. We neglect baryon pressure ($c_s^2 \rightarrow 0$ and $P^{(b)} \rightarrow 0$) and work to lowest order in the tight-coupling approximation ($n_e \sigma_T \rightarrow \infty$; see e.g. Ref. [@ma]). At this order the energy frame of the photons coincides with the rest frame of the baryons, so that $v^{(b)}_a = 3 q^{({\gamma})}_a /(4 \rho^{({\gamma})})$, and all moments of the photon distribution are vanishingly small beyond the dipole. 
With these approximations and frame choice we obtain the following equations for the density perturbations of each component: $$\begin{aligned} \label{e:photons1} \dot{\Delta}_{k}^{({\gamma})} &= -\frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{({\gamma})} \right) \quad \text{(photons)}\;, \\ \label{e:neutrinos1} \dot{\Delta}_{k}^{(\nu)} &= -\frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{(\nu)} \right) \quad \text{(neutrinos)}\;, \\ \label{e:baryons1} \dot{\Delta}_{k}^{(b)} &= -\frac{k}{a} ({\cal Z}_k - v_k^{(b)}) \quad \text{(baryons)}\;, \\ \label{e:CDM1} \dot{\Delta}_{k}^{(c)} &= -\frac{k}{a} {\cal Z}_k \quad \text{(CDM)}\;.\end{aligned}$$ The equations for the peculiar velocities and momentum densities are $$\begin{aligned} (4\rho^{({\gamma})} + 3 \rho^{(b)}) \dot{q}^{({\gamma})}_k & = -\frac{4}{3}\frac{k}{a} \rho^{({\gamma})}\Delta^{({\gamma})}_k - \rho^{(b)}\Theta q^{({\gamma})}_k \; , \label{e:tightcouple1}\\ \dot{q}^{(\nu)}_k & = -\frac{1}{3}\frac{k}{a}\left(\Delta^{(\nu)}_k +2\pi^{(\nu)}_k\right) \; ,\end{aligned}$$ along with $v^{(c)}_k = 0$ and $v^{(b)}_k = 3 q^{({\gamma})}_k /4$. The latter equation, together with Eqs. (\[e:photons1\]) and (\[e:baryons1\]), implies that $\dot{\Delta}^{(b)}_k = 3 \dot{\Delta}^{({\gamma})}_k /4$ so that any entropy perturbation between the photons and baryons is conserved while tight coupling holds. The effects of baryon inertia appear in Eq. (\[e:tightcouple1\]) because of the tight coupling between the baryons and photons. 
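The statement that the photon-baryon entropy perturbation is frozen during tight coupling is a one-line check (an editorial `sympy` sketch of Eqs. (\[e:photons1\]) and (\[e:baryons1\]) with the tight-coupling relation $v^{(b)}_k = 3q^{({\gamma})}_k/4$):

```python
import sympy as sp

k_over_a, Z, q_gamma = sp.symbols('k_over_a Z q_gamma')

v_b = sp.Rational(3, 4) * q_gamma                              # tight coupling
dDelta_gamma = -k_over_a * (sp.Rational(4, 3) * Z - q_gamma)   # Eq. (photons1)
dDelta_b = -k_over_a * (Z - v_b)                               # Eq. (baryons1)

# d/dt [Delta_b - (3/4) Delta_gamma] = 0: the entropy perturbation is conserved
assert sp.simplify(dDelta_b - sp.Rational(3, 4) * dDelta_gamma) == 0
```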
The constraint equations are found to be: $$\label{e:constraint1c} \kappa^2 \rho q_k - \frac{2}{3} \left(\frac{k}{a} \right)^2 ({\cal Z}_k - \sigma_k ) - \frac{1}{6} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 (\kappa^4 \rho^2 q_k + 6 \rho {\cal Q}_k) = 0\;,$$ and $$\label{e:constraint2c} \begin{split} & 2 \left(\frac{k}{a} \right)^3 \Phi_k - \kappa^2 \rho \left(\frac{k}{a} \right) (\Delta_k - \pi_k ) + \kappa^2 \Theta \rho q_k \\ &= \frac{1}{16}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \bigg[\kappa^4 \left(-\frac{8}{3} \Theta \rho^2 q_k + \frac{4}{3} \frac{k}{a} \rho^2 \left[(3{\gamma}-2) \pi_k + 2 \Delta_k \right] \right) \\ & + 16 \left(\frac{k}{a} \right) \rho (\Upsilon_k -{\cal P}_k) - 16 \Theta \rho {\cal Q}_k \bigg]\;. \end{split}$$ The propagation equation for the comoving expansion gradient in the CDM frame is $$\label{e:propagation3c} \begin{split} &\dot{{\cal Z}}_k + \frac{1}{3} \Theta \mathcal{Z}_k + \frac{\kappa^2}{2} \frac{a}{k} \left[2 (\rho^{({\gamma})} \Delta^{({\gamma})}_k + \rho^{(\nu)} \Delta^{(\nu)}_k ) + \rho^{(b)} \Delta^{(b)}_k + \rho^{(c)} \Delta^{(c)}_k \right] \\ &= -\frac{1}{12} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \frac{a}{k} \left\{\kappa^4 [(2\rho+3P)\rho\Delta_k + \rho(3\rho^{({\gamma})}\Delta^{({\gamma})}_k +3\rho^{(\nu)}\Delta^{(\nu)}_k + (2+3c_s^2)\rho^{(b)}\Delta^{(b)}_k + 2 \rho^{(c)}\Delta^{(c)}_k)]+12 \rho \Upsilon_k \right\}\;. \end{split}$$ The variables $\Phi_k$ and $\sigma_k$ can be determined from the constraint equations so their propagation equations are not independent of the above set. The propagation equation for $\Phi_k$ is unchanged from Eq. $\eqref{e:propagation1a}$ since that equation was already written in frame-invariant form. 
The propagation equation for the shear in the CDM frame is $$\frac{k}{a} \left(\dot{\sigma}_k + \frac{1}{3} \Theta \sigma_k \right) + \left(\frac{k}{a}\right)^2 \Phi_k - \frac{\kappa^2}{2} \rho \pi_k = \frac{1}{24}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [-(3 {\gamma}- 2) \kappa^4 \rho^2 \pi_k + 12 \rho {\cal P}_k]\;.$$ Finally we have the non-local evolution equations for ${\Upsilon}_k$ and ${\cal Q}_k$ which in the CDM frame become $$\label{e:nonlocal1c} \dot{\Upsilon}_k = \frac{1}{3}(3 {\gamma}-4 )\Theta \Upsilon_k - \frac{4}{3} \frac{{\cal U}}{\rho} \frac{k}{a} {\cal Z}_k + \frac{k}{a} {\cal Q}_k \;,$$ and $$\label{e:nonlocal2c} \dot{{\cal Q}}_k -\frac{1}{3} (3 {\gamma}- 4) \Theta {\cal Q}_k + \frac{1}{3} \frac{k}{a} (\Upsilon_k +2 \mathcal{P}_k)= \frac{\kappa^4}{6} {\gamma}\rho \left[ \Theta q_k + \frac{k}{a} (\pi_k - \Delta_k) \right]\;.$$ Solutions in the radiation-dominated era {#sec:radiation} ======================================== We now use the above equations to extract the mode solutions of the scalar perturbation equations in the radiation-dominated era, ${\gamma}=4/3$. To simplify matters, as well as neglecting the contribution of the baryons and CDM to the background dynamics, we shall only consider those modes for which $D_a \rho^{(b)}$ and $D_a \rho^{(c)}$ make a negligible contribution to the total matter perturbation $D_a \rho$. This approximation allows us to write the total matter perturbations in the form $$(\rho^{({\gamma})} + \rho^{(\nu)})\Delta_k = \rho^{({\gamma})}\Delta^{({\gamma})}_k + \rho^{(\nu)} \Delta^{(\nu)}_k, \quad (\rho^{({\gamma})} + \rho^{(\nu)}) q_k = \rho^{({\gamma})}q^{({\gamma})}_k + \rho^{(\nu)} q^{(\nu)}_k, \label{eq:approx}$$ and effectively removes the back-reaction of the baryon and CDM perturbations on the perturbations of the spacetime geometry. 
We note that in making this approximation we lose two modes corresponding to the baryon and CDM isocurvature (density) modes of general relativity, in which the sub-dominant matter components make significant contributions to the total fractional density perturbation (which vanishes as $t\rightarrow 0$). However, for our purposes the loss of generality is not that important, while the simplifications resulting from decoupling the baryon and photon perturbations are considerable. We also neglect moments of the neutrino distribution function above the dipole (so there is no matter anisotropic stress). This approximation is good for super-Hubble modes, but fails due to neutrino free streaming on sub-Hubble scales. We shall also assume that the non-local energy density ${\cal U}$ vanishes in the background for all energy regimes [@gordon]. Physically, vanishing ${\cal U}$ corresponds to the background bulk being conformally flat and strictly Anti-de Sitter. Note that ${\cal U}=0$ in the background need not imply that the fluctuations in the non-local energy density are zero, i.e. $\Upsilon_a \neq 0$. 
With the above conditions, the following set of equations is obtained: $$\begin{aligned} \label{e:ae1} \left(\frac{k}{a} \right)^2 (\dot{\Phi}_k + H \Phi_k ) + \frac{\kappa^2 \rho}{2} \left(\frac{k}{a} \right) \left( \frac{4}{3} \sigma_k - q_k \right) \left( 1 + \frac{\rho}{\lambda} \right) &= \frac{3}{\kappa^2} \frac{\rho}{\lambda} \left[ \left(\frac{k}{a} \right) {\cal Q}_k + 3 H {\cal P}_k - \dot{{\cal P}}_k \right]\;, \\ \label{e:ae2} \left(\frac{k}{a} \right) (\dot{{\cal Z}}_k + H {\cal Z}_k) + \kappa^2 \rho \left(1+ \frac{3 \rho}{\lambda} \right) \Delta_k &= - \frac{6}{\kappa^2} \frac{\rho}{\lambda} \Upsilon_k \;, \\ \label{e:ae3} \left(\frac{k}{a} \right) (\dot{\sigma}_k + H \sigma_k) + \left(\frac{k}{a} \right)^2 \Phi_k &= \frac{3}{\kappa^2} \frac{\rho}{\lambda} {\cal P}_k\;, \\ \label{e:ae4} \dot{q}_k^{({\gamma})} + \frac{1}{3} \frac{k}{a} \Delta_k^{({\gamma})} &=0\;, \\ \label{e:ae5} \dot{q}_k^{(\nu)} + \frac{1}{3} \frac{k}{a} \Delta_k^{(\nu)} &=0\;, \\ \label{e:ae6} \dot{\Delta}_{k}^{({\gamma})} + \frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{({\gamma})} \right) &=0\;, \\ \label{e:ae7} \dot{\Delta}_{k}^{(\nu)} +\frac{k}{a} \left(\frac{4}{3} {\cal Z}_{k} - q_{k}^{(\nu)} \right) &=0,\end{aligned}$$ where we recall that $H=\Theta/3$. 
For the constraint equations we find $$\begin{aligned} \label{e:ae8} 3 \kappa^2 \left(1 + \frac{\rho}{\lambda} \right) \rho q_k - 2 \left(\frac{k}{a}\right)^2 ({\cal Z}_k - \sigma_k) &= -\frac{18}{\kappa^2} \frac{\rho}{\lambda} {\cal Q}_k\;, \\ \label{e:ae9} 2 \left(\frac{k}{a}\right)^3 \Phi_k + \kappa^2 \rho \left(1 + \frac{\rho}{\lambda} \right) \left[ 3 H q_k - \left(\frac{k}{a} \right) \Delta_k \right] &= \frac{6}{\kappa^2} \frac{\rho}{\lambda} \left[\left( \frac{k}{a} \right) (\Upsilon_k - {\cal P}_k) - 3H {\cal Q}_k \right]\;.\end{aligned}$$ Finally, the non-local evolution equations are found to be: $$\begin{aligned} \label{e:ae10} \dot{\Upsilon}_k &= \frac{k}{a} {\cal Q}_k\;, \\ \label{e:ae11} 9 \dot{{\cal Q}}_k + 3 \left( \frac{k}{a} \right) (\Upsilon_k + 2 {\cal P}_k) &= -2 \kappa^4 \rho \left(\frac{k}{a} \Delta_k - 3H q_k \right)\;.\end{aligned}$$ It is easy to show by propagating the constraint equations that the above set of equations is consistent. By inspection, there is a solution of these equations with $$\begin{aligned} \Phi_k &= 0 , \\ {\cal Z}_k &= \left[3 \dot{H} \left(\frac{a}{k}\right)^2 -1\right]\frac{A}{a} ,\\ \sigma_k &= - \frac{A}{a} , \\ q^{(\gamma)}_k &= - \frac{4}{3} \frac{A}{a} ,\\ q^{(\nu)}_k &= - \frac{4}{3} \frac{A}{a} ,\\ \Delta^{(\gamma)}_k &= -4 H \frac{A}{k} , \\ \Delta^{(\nu)}_k &= -4 H \frac{A}{k} , \\ \Upsilon_k &= 0 , \\ {\cal Q}_k &= 0 , \\ {\cal P}_k &=0,\end{aligned}$$ where $A$ is a constant. This solution describes a radiation-dominated universe that is exactly FRW except that the CDM has a peculiar velocity $\bar{v}_a^{(c)} = (A/a) Q_a^{(k)}$ relative to the velocity of the FRW fundamental observers. \[This form for $\bar{v}_a^{(c)}$ clearly satisfies Eq. (\[eq:cdmvel\]) with $A_a=0$.\] Such a solution is possible since we have neglected the gravitational effect of the CDM (and baryon) perturbations in making the approximations in Eq. (\[eq:approx\]). The same solution arises in general relativity [@challinor2]. 
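The quoted solution can be checked symbolically. The sketch below (editorial; `sympy`) works in the low-energy limit $\rho/\lambda \to 0$ with $\Upsilon_k = {\cal Q}_k = {\cal P}_k = 0$, a spatially flat radiation background $a \propto t^{1/2}$, and illustrative units $\kappa^2 = 1$ so that $\rho = 3H^2$; Eqs. (\[e:ae2\]), (\[e:ae3\]), (\[e:ae4\]) and (\[e:ae6\]) are then satisfied identically:

```python
import sympy as sp

t, A, k = sp.symbols('t A k', positive=True)

a = sp.sqrt(t)                 # radiation background (illustrative units kappa^2 = 1)
H = sp.diff(a, t) / a          # = 1/(2t)
rho = 3 * H**2                 # flat Friedmann equation, low-energy limit

Z     = (3 * sp.diff(H, t) * (a / k)**2 - 1) * A / a
sigma = -A / a
q_g   = -sp.Rational(4, 3) * A / a
D_g   = -4 * H * A / k         # also the total Delta_k (radiation only)

# Eq. (ae3) with Phi_k = 0 and P_k = 0
assert sp.simplify((k / a) * (sp.diff(sigma, t) + H * sigma)) == 0
# Eq. (ae4)
assert sp.simplify(sp.diff(q_g, t) + (k / (3 * a)) * D_g) == 0
# Eq. (ae6)
assert sp.simplify(sp.diff(D_g, t) + (k / a) * (sp.Rational(4, 3) * Z - q_g)) == 0
# Eq. (ae2) with rho/lambda -> 0 and Upsilon_k = 0
assert sp.simplify((k / a) * (sp.diff(Z, t) + H * Z) + rho * D_g) == 0
```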
Including the back-reaction of the CDM perturbations, we would find additional small peculiar velocities in the dominant matter components which compensate the CDM flux. We shall not consider this irregular CDM isocurvature velocity mode any further here. Another pair of solutions is easily found by decoupling the photon/neutrino entropy perturbations. Introducing the photon/neutrino entropy perturbation (up to a constant) $\Delta_2$ and relative flux $q_2$: $$\begin{split} \Delta_2 &= \Delta^{({\gamma})}_k - \Delta^{(\nu)}_k\;, \\ q_2 &= q^{({\gamma})}_k - q^{(\nu)}_k\;, \end{split}$$ the equations for $\Delta_2$ and $q_2$ decouple to give $$\begin{aligned} \dot{\Delta}_2 - \frac{k}{a} q_2 &= 0 \;,\\ \dot{q}_2 + \frac{1}{3} \frac{k}{a} \Delta_2 &=0 \;.\end{aligned}$$ Switching to conformal time ($d \tau = dt /a$) we can solve for $\Delta_2$ and $q_2$ to find $$\begin{aligned} q_2 (\tau) &= B \cos \left(\frac{k \tau}{\sqrt{3}} \right) + C \sin \left(\frac{k \tau}{\sqrt{3}} \right)\;, \\ \Delta_2 (\tau) &= \sqrt{3} \left[ B \sin \left(\frac{k \tau}{\sqrt{3}} \right) - C \cos \left(\frac{k \tau}{\sqrt{3}} \right) \right] \;.\end{aligned}$$ The constants $B$ and $C$ label the neutrino velocity and density isocurvature modes respectively [@challinor2; @bucher], in which the neutrinos and photons initially have mutually compensating peculiar velocities and density perturbations. The perfect decoupling of these isocurvature modes is a consequence of our neglecting anisotropic stresses (and higher moments of the distribution functions) and baryon inertia. Having decoupled the entropy perturbations, we write the remaining equations in terms of the total variables $\Delta_k$ and $q_k$. 
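Substituting back into the decoupled pair, $\Delta_2' = k q_2$ and $q_2' = -(k/3)\Delta_2$ (primes denoting conformal-time derivatives), confirms an acoustic oscillation of the entropy mode at the radiation sound speed $1/\sqrt{3}$ (an editorial `sympy` sketch):

```python
import sympy as sp

tau, k, B, C = sp.symbols('tau k B C', positive=True)

w = k / sp.sqrt(3)                      # acoustic frequency in conformal time
q2 = B * sp.cos(w * tau) + C * sp.sin(w * tau)
D2 = sp.sqrt(3) * (B * sp.sin(w * tau) - C * sp.cos(w * tau))

# the decoupled system in conformal time: D2' = k q2  and  q2' = -(k/3) D2
assert sp.simplify(sp.diff(D2, tau) - k * q2) == 0
assert sp.simplify(sp.diff(q2, tau) + k * D2 / 3) == 0
```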
The propagation equations for the non-local variables $\Upsilon_k$ and ${\cal Q}_k$ are redundant since these variables are determined by the constraint equations (\[e:ae8\]) and (\[e:ae9\]): $$\begin{aligned} \label{eq:cons1} \frac{6}{\kappa^2} \frac{\rho}{\lambda} \Upsilon_k &= 2 \left(\frac{k}{a}\right)^2 \Phi_k + 2H \left(\frac{k}{a} \right) ({\cal Z}_k - \sigma_k) - \kappa^2 \rho \left( 1+ \frac{\rho}{\lambda} \right) \Delta_k + \frac{6}{\kappa^2} \frac{\rho}{\lambda} {\cal P}_k, \\ \frac{3}{\kappa^2} \frac{\rho}{\lambda} {\cal Q}_k &= \frac{1}{3} \left(\frac{k}{a} \right)^2 ({\cal Z}_k - \sigma_k) - \frac{\kappa^2 \rho}{2} \left( 1+ \frac{\rho}{\lambda} \right) q_k. \label{eq:cons2} \end{aligned}$$ Substituting these expressions in the right-hand sides of Eqs. (\[e:ae1\]) and (\[e:ae2\]) we find $$\begin{aligned} \label{e:be1} \left(\frac{k}{a}\right)^2 \left(\dot{\Phi}_k + H \Phi_k \right) + \frac{2\kappa^2 \rho}{3} \left(\frac{k}{a} \right) \left(1+\frac{\rho}{\lambda}\right) \sigma_k - \frac{1}{3} \left(\frac{k}{a}\right)^3 ({\cal Z}_k - \sigma_k) &= \frac{3}{\kappa^2} \frac{\rho}{\lambda} (3H {\cal P}_k - \dot{\cal P}_k) \;,\\ \label{e:be2} \left(\frac{k}{a} \right) \dot{{\cal Z}}_k + H\left(\frac{k}{a} \right) {\cal Z}_k + \kappa^2 \rho \left(\frac{2 \rho}{\lambda} \right) \Delta_k + 2\left(\frac{k}{a}\right)^2 \Phi_k + 2H \left(\frac{k}{a}\right)({\cal Z}_k - \sigma_k) &= - \frac{6 \rho}{\kappa^2 \lambda} {\cal P}_k \;,\\ \label{e:be3} \left(\frac{k}{a} \right) (\dot{\sigma}_k + H \sigma_k) + \left(\frac{k}{a} \right)^2 \Phi_k &= \frac{3}{\kappa^2} \frac{\rho}{\lambda} {\cal P}_k\;, \\ \label{e:be4} \dot{\Delta}_k + \frac{k}{a} \left(\frac{4}{3} {\cal Z}_k - q_k \right) &=0 \;, \\ \label{e:be5} \dot{q}_k + \frac{1}{3} \frac{k}{a} \Delta_k &=0 \;.\end{aligned}$$ These equations describe the evolution of the intrinsic perturbations to the brane. 
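The elimination of $\Upsilon_k$ and ${\cal Q}_k$ can be checked independently. The sketch below (not part of the paper; all numerical values are random and illustrative) solves the constraints (\[e:ae8\]) and (\[e:ae9\]) for the two non-local variables and confirms that Eqs. (\[eq:cons1\]) and (\[eq:cons2\]) are reproduced identically:

```python
import random

random.seed(1)
# random illustrative values (arbitrary units); ka stands for k/a
kappa2, rho, lam, ka, H = (random.uniform(0.5, 2.0) for _ in range(5))
Phi, Z, sigma, q, Delta, P = (random.uniform(-1.0, 1.0) for _ in range(6))

# solve (e:ae8) for Q_k and then (e:ae9) for Upsilon_k
Q = -(kappa2*lam/(18.0*rho))*(3.0*kappa2*(1.0 + rho/lam)*rho*q
                              - 2.0*ka**2*(Z - sigma))
Ups = (kappa2*lam/(6.0*rho*ka))*(2.0*ka**3*Phi
        + kappa2*rho*(1.0 + rho/lam)*(3.0*H*q - ka*Delta)
        + (6.0*rho/(kappa2*lam))*(3.0*H*Q + ka*P))

# compare with the quoted constraints (eq:cons1) and (eq:cons2)
cons2 = ka**2*(Z - sigma)/3.0 - kappa2*rho/2.0*(1.0 + rho/lam)*q
assert abs(3.0*rho/(kappa2*lam)*Q - cons2) < 1e-9

cons1 = (2.0*ka**2*Phi + 2.0*H*ka*(Z - sigma)
         - kappa2*rho*(1.0 + rho/lam)*Delta + 6.0*rho/(kappa2*lam)*P)
assert abs(6.0*rho/(kappa2*lam)*Ups - cons1) < 1e-9
```

The agreement holds for arbitrary values of the gauge-invariant variables, since the two forms of the constraints are related by linear algebra alone.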
The usual general relativistic constraint equations are now replaced by the constraints (\[eq:cons1\]) and (\[eq:cons2\]) which determine two of the non-local variables. The lack of a propagation equation for ${\cal P}_k$ reflects the incompleteness of the 1+3 dimensional description of braneworld dynamics. In the following it will prove convenient to adopt the dimensionless independent variable $$\label{e:trans1} x= \frac{k}{Ha}\;,$$ which is (to within a factor of $2\pi$) the ratio of the Hubble length to the wavelength of the perturbations. Using the (modified) Friedmann equations for the background in radiation domination, and with $\mathcal{U}=0$, we find that $$\frac{dx}{dt} = \frac{k}{a}\left(\frac{2+3\rho/\lambda}{2+\rho/\lambda}\right).$$ The relative importance of the local (quadratic) braneworld corrections to the Einstein equation depends on the dimensionless ratio $\rho/\lambda$. In the low-energy limit, $\rho\ll \lambda$, the quadratic local corrections can be neglected although the non-local corrections $\mathcal{E}_{ab}$ may still be important. In the opposite (high-energy) limit the quadratic corrections dominate over the terms that are linear in the energy-momentum tensor. We now consider these two limits separately. Low-energy regime {#sec:lowenergy} ----------------- In the low-energy regime we have $dx/dt \approx k/a$ and $x \approx k \tau$. The total energy density $\rho$ is proportional to $x^{-4}$. Denoting derivatives with respect to $x$ with a prime, using $\rho \ll \lambda$, and assuming that we can neglect the term involving $(\rho/\lambda)\Delta_k$ in Eq. 
(\[e:be2\]) compared to the other terms, we find $$\begin{aligned} \label{e:le1} 3x^2 \Phi_k'+ 3x \Phi_k + (6+x^2) \sigma_k - x^2 {\cal Z}_k &= \frac{27}{\kappa^4 \lambda} (3 {\cal P}_k - x {\cal P}_k') \\ \label{e:le2} x^2 {\cal Z}_k' + 3 x {\cal Z}_k + 2x^2 \Phi_k - 2x \sigma_k &= - \frac{18}{\kappa^4 \lambda} {\cal P}_k \\ \label{e:le3} x^2 \sigma_k' + x\sigma_k + x^2 \Phi_k &= \frac{9}{\kappa^4 \lambda} {\cal P}_k \\ \label{e:le4} \Delta_k' + \frac{4}{3} {\cal Z}_k - q_k &=0 \\ \label{e:le5} q_k' + \frac{1}{3} \Delta_k &=0 \end{aligned}$$ Combining these equations we find an inhomogeneous, second-order equation for $\Phi_{k}$: $$3x \Phi_k'' + 12 \Phi_k' + x \Phi_k = F_k(x), \label{eq:secondPhi}$$ where $$F_k(x) \equiv -\frac{27}{\kappa^4 \lambda} \left[ {\cal P}_k'' - \frac{{\cal P}_k'} {x} + \left(\frac{2}{x^3} - \frac{3}{x^2} + \frac{1}{x} \right) {\cal P}_k \right].$$ In general relativity the same second-order equation holds for $\Phi_k$ but with $F_k(x)=0$. The presence of terms involving the non-local anisotropic stress on the right-hand side of Eq. (\[eq:secondPhi\]) ensures that $\Phi_k$ cannot be evolved on the brane alone. The resolution of this problem will require careful analysis of the bulk dynamics in five dimensions. In this paper our aims are less ambitious; we shall solve Eq. (\[eq:secondPhi\]) with ${\cal P}_k=0$. Although we certainly do not expect ${\cal P}_{ab}=0$[^4], the solutions of the homogeneous equation may still prove a useful starting point for a more complete analysis. For example, they allow one to construct Green’s functions for Eq. (\[eq:secondPhi\]) which could be used to assess the impact of specific ansätze for ${\cal P}_{ab}$ [@barrow]. With ${\cal P}_k=0$ we can solve Eqs. 
(\[e:le1\])–(\[e:le5\]) analytically to find $$\begin{aligned} \Phi_k &= \frac{c_1}{x^3} \left[3 \sin \left(\frac{x}{\sqrt{3}} \right) - x\sqrt{3} \cos \left(\frac{x}{\sqrt{3}} \right) \right] + \frac{c_2}{x^3} \left[3 \cos \left(\frac{x}{\sqrt{3}} \right)+ x \sqrt{3} \sin \left(\frac{x}{\sqrt{3}} \right) \right] , \\ \sigma_k &= \frac{3}{x^2} \left[c_2 \cos \left(\frac{x}{\sqrt{3}} \right) + c_1 \sin \left(\frac{x}{\sqrt{3}} \right) \right] + \frac{c_3}{x}, \\ {\cal Z}_k &= \frac{c_3 (6+x^2)}{x^3} + \frac{6 \sqrt{3}}{x^3} \left[c_1 \cos\left(\frac{x}{\sqrt{3}} \right) - c_2 \sin \left(\frac{x}{\sqrt{3}} \right)\right] +\frac{6}{x^2} \left[c_2 \cos \left(\frac{x}{\sqrt{3}} \right) + c_1 \sin \left(\frac{x}{\sqrt{3}} \right) \right], \\ \Delta_k &= c_4 \cos\left(\frac{x}{\sqrt{3}}\right) + c_5 \sin \left(\frac{x}{\sqrt{3}}\right) + \frac{4 c_3}{x^2} + \frac{4}{x} \left[c_2 \cos\left(\frac{x}{\sqrt{3}} \right) + c_1 \sin \left(\frac{x} {\sqrt{3}} \right) \right] \notag \\ & \mbox{} + \left(\frac{4 \sqrt{3}}{x^2}-\frac{2}{\sqrt{3}}\right) \left[c_1 \cos \left(\frac{x}{\sqrt{3}} \right) - c_2 \sin\left(\frac{x}{\sqrt{3}} \right) \right] , \\ q_k &= \frac{c_5}{\sqrt{3}} \cos \left(\frac{x}{\sqrt{3}} \right) - \frac{c_4}{\sqrt{3}} \sin\left(\frac{x}{\sqrt{3}} \right) + \frac{4 c_3}{3x} + \frac{4x}{\sqrt{3}} \left[c_1 \cos \left(\frac{x}{\sqrt{3}}\right) - c_2 \sin \left(\frac{x}{\sqrt{3}} \right) \right] \notag \\ & \mbox{} + \frac{2}{3}\left[c_2 \cos \left( \frac{x}{\sqrt{3}}\right) + c_1 \sin \left(\frac{x}{\sqrt{3}}\right) \right].\end{aligned}$$ The mode labelled by $c_3$ is the CDM velocity isocurvature mode discussed earlier. The modes labelled by $c_1$ and $c_2$ are the same as in general relativity; they describe the adiabatic growing and decaying solutions respectively. However, in the low-energy limit we also find two additional isocurvature modes ($c_4$ and $c_5$) that are not present in general relativity. 
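The closed forms for $\Phi_k$ can be spot-checked against the homogeneous version of Eq. (\[eq:secondPhi\]). The sketch below (not part of the paper; the sample points and step size are illustrative) verifies by central finite differences that the $c_1$ and $c_2$ modes satisfy $3x\Phi_k'' + 12\Phi_k' + x\Phi_k = 0$:

```python
import math

def phi1(x):
    # c1-mode of the gravitational potential
    r = x/math.sqrt(3.0)
    return (3.0*math.sin(r) - x*math.sqrt(3.0)*math.cos(r))/x**3

def phi2(x):
    # c2-mode
    r = x/math.sqrt(3.0)
    return (3.0*math.cos(r) + x*math.sqrt(3.0)*math.sin(r))/x**3

def residual(phi, x, h=1e-4):
    # finite-difference evaluation of 3x phi'' + 12 phi' + x phi
    d1 = (phi(x + h) - phi(x - h))/(2*h)
    d2 = (phi(x + h) - 2*phi(x) + phi(x - h))/h**2
    return 3*x*d2 + 12*d1 + x*phi(x)

for x in (1.0, 2.0, 5.0, 10.0):
    assert abs(residual(phi1, x)) < 1e-4
    assert abs(residual(phi2, x)) < 1e-4
```

Up to normalization these are the spherical-Bessel combinations $j_1(x/\sqrt{3})/(x/\sqrt{3})$ and $y_1(x/\sqrt{3})/(x/\sqrt{3})$ familiar from the radiation-era potential in general relativity.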
These arise from the two additional degrees of freedom $\Upsilon_k$ and ${\cal Q}_k$ present in the braneworld model (with ${\cal P}_k=0$). The mode $c_4$ initially has non-zero but compensating gradients in the total matter and non-local densities, and $c_5$ initially has compensated energy fluxes. Formally these isocurvature solutions violate the assumption that the term involving $(\rho/\lambda)\Delta_k$ be negligible compared to the other terms in Eq. (\[e:be2\]) since all other terms vanish. In practice, there will be some gravitational back-reaction onto the other gauge-invariant variables controlled by the dimensionless quantity $\rho/\lambda$, but the general character of these isocurvature modes will be preserved for $\rho/\lambda\ll 1$. High-energy regime ------------------ We now turn to the high-energy regime, where the quadratic terms in the stress-energy tensor dominate the (local) linear terms. In this limit the scale factor $a \propto t^{1/4}$. The modification to the expansion rate leads to an increase in the amplitude of scalar and tensor fluctuations produced during high-energy inflation [@barrow]. With ${\cal U}=0$ in the background, and $\rho \gg \lambda$, the Hubble parameter is approximately $$H^2 \approx \frac{1}{36} \tilde{\kappa}^4 \rho^2,$$ and $dx/dt \approx 3 k/a$. In terms of conformal time $\tau$, $x \approx 3 k \tau$. ### Power series solutions for the high-energy regime {#power} It is convenient to rescale the non-local variables by the dimensionless quantity $\kappa^4 \rho$. Thus we define $$\begin{aligned} \label{e:trans2} \bar{\Upsilon}_k &\equiv \frac{\Upsilon_k}{\kappa^4 \rho}\;, \\ \label{e:trans3} \bar{{\cal Q}}_k &\equiv \frac{{\cal Q}_k}{\kappa^4 \rho}\;, \\ \label{e:trans4} \bar{{\cal P}}_k &\equiv \frac{{\cal P}_k}{\kappa^4 \rho}\;.\end{aligned}$$ The fractional total (effective) density perturbation and energy flux can be written in terms of the barred variables \[e.g. 
$\bar{\Upsilon}_a \equiv \Upsilon_a/ (\kappa^4 \rho)$\] in the high-energy limit as $$\begin{aligned} \frac{a D_a \rho^{\text{tot}}}{\rho^{\text{tot}}} &\approx& 2(\Delta_a + 6 \bar{\Upsilon}_a), \\ q_a^{\text{tot}} &\approx& \frac{2\rho^{\text{tot}}}{\rho} (q_a + 6 \bar{\cal Q}_a ).\end{aligned}$$ Making the high-energy approximation $\rho \gg \lambda$ in Eqs. $\eqref{e:be1}$–$\eqref{e:be5}$, we obtain $$\begin{aligned} \label{e:he2a} 9x^2 \Phi^{'}_k + 3x \Phi_k + (12+x^2) \sigma_k - x^2 {\cal Z}_k &=& 54 \left[ \frac{7 \bar{\cal P}_k}{x} - 3 \bar{\cal P}_k' \right]\;, \\ \label{e:he2b} 3x^2 {\cal Z}^{'}_k + 3x {\cal Z}_k - 2x \sigma_k + 2x^2 \Phi_k + 12 \Delta_k &=& -36 \bar{\cal P}_k\;, \\ \label{e:he2c} 3x \sigma^{'}_k + \sigma_k + x \Phi_k &=& 18 \frac{\bar{{\cal P}}_k}{x}\;, \\ \label{e:he2d} \Delta_k^{'} - \frac{1}{3} q_k + \frac{4}{9} {\cal Z}_k &=&0\;, \\ \label{e:he2e} q_k^{'} + \frac{1}{9} \Delta_k &=&0\;.\end{aligned}$$ The non-local quantities $\bar{\Upsilon}_k$ and $\bar{{\cal Q}}_k$ are determined by the constraints $$\begin{aligned} \bar{\Upsilon}_k &=& \frac{1}{18}x^2 \Phi_k + \frac{1}{18} x ({\cal Z}_k-\sigma_k) - \frac{1}{6} \Delta_k + \bar{{\cal P}}_k, \\ \bar{{\cal Q}}_k &=& \frac{1}{54} x^2 ({\cal Z}_k - \sigma_k) - \frac{1}{6} q_k.\end{aligned}$$ We can manipulate Eqs. 
$\eqref{e:he2a}$–$\eqref{e:he2e}$ to obtain a fourth-order equation for the gravitational potential $\Phi_k$: $$\label{e:order4de} 729 x^2 \frac{{\partial}^4 \Phi_k}{{\partial}x^4} +3888 x \frac{{\partial}^3 \Phi_k}{{\partial}x^3} + (1782+54x^2) \frac{{\partial}^2\Phi_k}{{\partial}x^2} +144x \frac{{\partial}\Phi_k}{{\partial}x} + (90+x^2) \Phi_k = F_k(x)\;,$$ where $$\label{e:p-de1} F_k(x) = -\frac{54}{x^4}\left( 243 x^4 \frac{\partial^4 \bar{\cal P}_k}{ \partial x^4} - 810 x^3 \frac{\partial^3 \bar{\cal P}_k}{\partial x^3} +18x^2(135+2x^2) \frac{\partial^2 \bar{\cal P}_k}{\partial x^2} -30 x(162+x^2) \frac{\partial \bar{\cal P}_k}{\partial x} + [x^4 + 30(162+x^2)]\bar{\cal P}_k\right)\; .$$ Since we do not have an evolution equation for $\bar{\cal P}_k$ we adopt the strategy taken in the low-energy limit and look for solutions of the homogeneous equations ($\bar{{\cal P}}_k=0$). In principle one can use these solutions to construct formal solutions of the inhomogeneous equations with Green’s method. To solve Eq. (\[e:order4de\]) with $\bar{{\cal P}}_k=0$ we construct a power series solution for $\Phi_k(x)$: $$\Phi_k(x) = x^m \sum^{\infty}_{n=0} a_n x^n\;,$$ where $a_0 \neq 0$. The indicial equation for $m$ is $$\label{e:indicial} m (m-1) (3m+5) (3m-4)=0\;.$$ For each value of $m$ we substitute into Eq. (\[e:order4de\]) and solve the resulting recursion relations for the $\{a_n\}$. We then obtain the other gauge-invariant variables by direct integration. The original set of equations (\[e:he2a\])–(\[e:he2e\]) has five degrees of freedom, so we expect one additional solution with $\Phi_k=0$. This solution is the CDM isocurvature solution discussed earlier, and has a finite series expansion: $$\begin{aligned} \Phi_k &=& 0\;, \\ \sigma_k &=& C x^{-\frac{1}{3}}\;, \\ {\cal Z}_k &=& C x^{-\frac{7}{3}} (12 + x^2)\;, \\ \Delta_k &=& 4 C x^{-\frac{4}{3}}\;, \\ q_k &=& \frac{4}{3} C x^{-\frac{1}{3}}\;,\end{aligned}$$ where $C$ is a constant. The non-local variables vanish. 
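The indicial equation can be recovered by inserting $\Phi_k = x^m$ into Eq. (\[e:order4de\]) with $\bar{\cal P}_k=0$ and collecting the coefficient of the lowest power, $x^{m-2}$, which receives contributions only from the $729x^2\Phi_k''''$, $3888x\Phi_k'''$, and $1782\Phi_k''$ terms. The following sketch (not part of the paper) checks in exact rational arithmetic that this coefficient factors as $81\,m(m-1)(3m+5)(3m-4)$, reproducing Eq. (\[e:indicial\]) up to an overall constant:

```python
from fractions import Fraction as F

def indicial(m):
    # coefficient of x^(m-2) after substituting Phi = x^m into Eq. (e:order4de);
    # only the three highest-derivative terms contribute at lowest order
    return (729*m*(m - 1)*(m - 2)*(m - 3)
            + 3888*m*(m - 1)*(m - 2)
            + 1782*m*(m - 1))

# the four roots m = 0, 1, -5/3, 4/3 quoted in the text
for m in (F(0), F(1), F(-5, 3), F(4, 3)):
    assert indicial(m) == 0

# agreement with 81 m (m-1) (3m+5) (3m-4) at generic sample points
for m in (F(2), F(-1), F(1, 2)):
    assert indicial(m) == 81*m*(m - 1)*(3*m + 5)*(3*m - 4)
```

The four roots correspond one-to-one to the $m=0$, $m=1$, $m=-5/3$, and $m=4/3$ modes constructed below.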
The first two terms of the mode with $m=0$ are $$\begin{aligned} \Phi_k &=& b_1 \left(1 - \frac{5}{198} x^2 \right)\;, \\ \sigma_k &=& b_1 \left(-\frac{1}{4} x + \frac{1}{396} x^3 \right)\;, \\ {\cal Z}_k &=& b_1 \left(- \frac{3}{4} x + \frac{5}{864} x^3 \right)\;, \\ \Delta_k &=& b_1 \left(\frac{1}{6} x^2 - \frac{1}{864} x^4 \right)\;, \\ q_k &=& b_1 \left(-\frac{1}{162} x^3 + \frac{1}{38880} x^5 \right)\;, \\ \bar{\Upsilon}_k &=& b_1 \left(-\frac{1}{972} x^4 + \frac{1}{249480} x^6 \right) \;, \\ \bar{{\cal Q}}_k &=& b_1 \left(-\frac{2}{243} x^3 + \frac{1}{17820} x^5 \right)\;,\end{aligned}$$ where $b_1$ is a constant. The form of this solution is similar to the adiabatic growing mode of general relativity. The mode corresponding to $m=1$ is $$\begin{aligned} \Phi_k &=& b_2 \left(x - \frac{13}{1890} x^3 \right)\;, \\ \sigma_k &=& b_2 \left(-\frac{1}{7} x^2 + \frac{1}{1890} x^4 \right)\;, \\ {\cal Z}_k &=& b_2 \left(\frac{72}{7} - \frac{12}{35} x^2 \right)\;, \\ \Delta_k &=& b_2 \left(-\frac{18}{7} x + \frac{1}{15} x^3 \right)\;, \\ q_k &=& b_2 \left(6 + \frac{1}{7} x^2 \right)\;, \\ \bar{\Upsilon}_k &=& b_2 \left(x + \frac{1}{30} x^3 \right)\;, \\ \label{e:Qiso} \bar{{\cal Q}}_k &=& b_2 \left(-1 + \frac{1}{6} x^2 \right)\;,\end{aligned}$$ with $b_2$ a constant. As $t \rightarrow 0$ there are non-zero but compensating contributions to the effective peculiar velocity $q_a^{\text{tot}} /\rho^{\text{tot}}$ from the matter and the non-local energy fluxes. The contributions of these components to the fractional total density perturbation $a D_a \rho^{\text{tot}}/\rho^{\text{tot}}$ vanish as $t \rightarrow 0$. It follows that this solution describes an isocurvature velocity mode where the early time matter and non-local (Weyl) components have equal but opposite peculiar velocities in the CDM frame. The existence of such isocurvature modes was predicted in Refs. [@langlois] and [@gordon] for large-scale density perturbations. 
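The truncated series for the $m=1$ mode can be spot-checked against the two matter equations (\[e:he2d\]) and (\[e:he2e\]). In the sketch below (not part of the paper; $b_2$ is set to unity), exact rational arithmetic shows that the residual of (\[e:he2d\]) vanishes identically at this truncation order, while (\[e:he2e\]) leaves only the expected $O(x^3)$ truncation remainder:

```python
from fractions import Fraction as F

# truncated m = 1 series with b_2 = 1; polynomials stored as {power: coefficient}
Delta = {1: F(-18, 7), 3: F(1, 15)}
q     = {0: F(6),      2: F(1, 7)}
Z     = {0: F(72, 7),  2: F(-12, 35)}

def deriv(p):
    # formal derivative of a polynomial in x
    return {n - 1: n*c for n, c in p.items() if n != 0}

def combine(*terms):
    # sum of (scalar coefficient, polynomial) pairs, dropping zero entries
    out = {}
    for coef, p in terms:
        for n, c in p.items():
            out[n] = out.get(n, F(0)) + coef*c
    return {n: c for n, c in out.items() if c != 0}

# Eq. (e:he2d): Delta' - q/3 + 4Z/9 = 0, exact at this truncation order
res1 = combine((F(1), deriv(Delta)), (F(-1, 3), q), (F(4, 9), Z))
assert res1 == {}

# Eq. (e:he2e): q' + Delta/9 leaves only the O(x^3) remainder x^3/135
res2 = combine((F(1), deriv(q)), (F(1, 9), Delta))
assert res2 == {3: F(1, 135)}
```

The same dictionary bookkeeping extends directly to the remaining equations once the shear and expansion series are included.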
The mode corresponding to $m=-\frac{5}{3}$ is singular as $t \rightarrow 0$ (it is a decaying mode): $$\begin{aligned} \Phi_k &=& b_3 x^{-\frac{5}{3}} \left( 1 - \frac{5}{18} x^2 \right)\;, \\ \sigma_k &=& b_3 x^{-\frac{2}{3}} \left( 1 + \frac{1}{18} x^2 \right)\;, \\ {\cal Z}_k &=& b_3 \left(\frac{14}{99} x^{\frac{4}{3}} - \frac{1217}{1590435} x^{\frac{10}{3}} \right)\;, \\ \Delta_k &=& b_3 \left(-\frac{8}{297} x^{\frac{7}{3}} + \frac{64}{433755} x^{\frac{13}{3}} \right)\;, \\ q_k &=& b_3 \left(\frac{4}{4455} x^{\frac{10}{3}} - \frac{4}{1301265} x^{\frac{16}{3}} \right)\; ,\\ \bar{\Upsilon}_k &=& b_3 \left(-\frac{1}{162} x^{\frac{7}{3}} + \frac{7}{43740} x^{\frac{13}{3}} \right)\;, \\ \bar{{\cal Q}}_k &=& b_3 \left(-\frac{1}{54} x^{\frac{4}{3}} + \frac{7}{4860} x^{\frac{10}{3}} \right)\;.\end{aligned}$$ A similar mode is found in general relativity but there the decay of $\Phi_k$ is more rapid ($\Phi_k \propto x^{-3}$) on large scales. Finally, for $m=\frac{4}{3}$ we have $$\begin{aligned} \Phi_k &=& b_4 x^{\frac{4}{3}} \left(1 - \frac{17}{3150} x^2 \right)\;, \\ \sigma_k &=& b_4 x^{\frac{4}{3}} \left(- \frac{1}{8} x + \frac{17}{44100} x^3 \right)\;, \\ {\cal Z}_k &=& b_4 x^{\frac{1}{3}} \left(\frac{27}{2} - \frac{117}{392} x^2 \right) \;, \\ \Delta_k &=& b_4 x^{\frac{4}{3}} \left(-\frac{9}{2} + \frac{3}{49} x^2 \right)\;, \\ q_k &=& b_4 x^{\frac{4}{3}} \left(\frac{3}{14} x - \frac{1}{637} x^3 \right)\; ,\\ \bar{\Upsilon} (x) &=& b_4 x^{\frac{4}{3}} \left(\frac{3}{2} + \frac{1}{28} x^2 \right)\;, \\ \bar{{\cal Q}} (x) &=& b_4 x^{\frac{4}{3}} \left(\frac{3}{14} x - \frac{29}{9828} x^3 \right)\;.\end{aligned}$$ In this mode the universe asymptotes to an FRW (brane) model in the past as $t \rightarrow 0$. Note that this requires careful cancellation between $a D_a \rho^{\text{tot}} / \rho^{\text{tot}}$ and $q_a^{\text{tot}}$ to avoid a singularity in the gravitational potential $\Phi_k$ (which would diverge as $x^{-2/3}$ without such cancellation). 
Like the velocity isocurvature mode ($m=1$) discussed above, this mode has no analogue in general relativity. A covariant expression for the temperature anisotropy {#sec:aniso} ===================================================== In this section we discuss the line of sight solution to the Boltzmann equation for the scalar contribution to the gauge-invariant temperature anisotropy $\delta_T(e)$ of the CMB in braneworld models. We employ the 1+3 covariant approach, and show that our result is equivalent to that given recently by Langlois et al [@langlois] using the Bardeen formalism. Over the epoch of interest the individual matter constituents of the universe interact with each other under gravity only, except for the photons and baryons (including the electrons), which are also coupled through Thomson scattering. The variation of the gauge-invariant temperature perturbation $\delta_T (e)$, where $e^a$ is the (projected) photon propagation direction, along the line of sight is given by the (linearized) covariant Boltzmann equation (valid for scalar, vector, and tensor modes) [@challinor1]: $$\label{e:temperature1} \begin{split} \delta_T (e)' + \sigma_T n_e \delta_T (e) &= -\sigma_{ab} e^a e^b - A_a e^a - \frac{e^a D_a \rho^{(\gamma)}}{4\rho^{(\gamma)}} - \frac{D^a q^{(\gamma)}_a}{4\rho^{(\gamma)}} \\ & + \sigma_T n_e \left(v^{(b)}_a e^a + \frac{3}{16} \rho^{({\gamma})} \pi_{ab}^{({\gamma})} e^a e^b \right)\;, \end{split}$$ where the prime denotes the derivative with respect to a parameter $\lambda$ defined along the line of sight by $d \lambda = - u_a dx^a$. Following the steps in Ref. [@challinor1], we expand the right-hand side of Eq. (\[e:temperature1\]) in scalar harmonics and integrate along the line of sight from the early universe to the observation point $R$. 
Neglecting effects due to the finite thickness of the last scattering surface, on integrating by parts we find that the temperature anisotropy involves the quantity $$\left(\frac{a}{k} \sigma_k' \right)' + \frac{1}{3} \frac{k}{a} (\sigma_k - {\cal Z}_k) + A_k' - H A_k = - 2 \dot{\Phi}_k + \left(\frac{a}{k} \right)^2 I\; \label{e:temperature10}$$ integrated along the line of sight (after multiplying with $Q^{(k)}$). In simplifying Eq. (\[e:temperature10\]) we have made use of the derivative of the shear propagation equation $\eqref{e:propagation2a}$, substituted for $q_k$ and ${\cal Z}_k$ from equations $\eqref{e:propagation1a}$ and $\eqref{e:constraint1a}$, and finally used equations $\eqref{e:friedmann1}$ and $\eqref{e:raychaudhuri}$. The quantity $I$ is the total sum of all the braneworld corrections: $$I = \left(\frac{a}{k} \right)^2 \left[\dot{I}_1 + \frac{1}{3} \Theta I_1 + I_2 + \frac{1}{3} \left(\frac{k}{a} \sigma_k \right) I_3 + \frac{1}{2} \left(\frac{k}{a} \right) I_4 \right]\;,$$ where $$\begin{split} I_1 &= \frac{1}{24}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [-(3 {\gamma}- 2) \kappa^4 \rho^2 \pi_k + 12 \rho {\cal P}_k]\;, \\ I_2 &= \frac{1}{72} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \bigg\{ -\kappa^4 \bigg[6 \left(\frac{k}{a} \right) {\gamma}\rho^2 \sigma_k - 3(\dot{\rho} + 3 \dot{P}) \rho \pi_k - 3 (3 {\gamma}-2) \rho (\rho \dot{\pi}_k + \dot{\rho} \pi_k) - 6 \left(\frac{k}{a} \right) \rho^2 q_k \\ & - (3{\gamma}-2) \rho^2 \Theta \pi_k \bigg] - 48 \left(\frac{k}{a} \right) {\cal U} \sigma_k - 36 (\dot{\rho} {\cal P}_k + \rho \dot{{\cal P}}_k) + 36 \left(\frac{k}{a} \right) \rho {\cal Q}_k - 12 \rho \Theta {\cal P}_k \bigg\} \;, \\ I_3 &= \frac{1}{12} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 [(3 {\gamma}-1) \kappa^4 \rho^2 + 12 {\cal U}]\;, \\ I_4 &= \frac{1}{18} \sigma_k \tilde{\kappa}^4 \rho^2 + \frac{2}{3} \left(\frac{\tilde{\kappa}}{\kappa}\right)^4 {\cal U} \sigma_k - \frac{1}{24}\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 (4 
\kappa^4 \rho^2 q_k + 24 \rho {\cal Q}_k)\;. \end{split}$$ A lengthy calculation making use of the propagation and constraint equations shows that $I=0$. The final result for the temperature anisotropies is then $$\label{e:temperature2} \begin{split} [\delta_{T}(e) ]_R &= -\sum_k \left[ \left( \frac{1}{4} \Delta_k^{({\gamma})} + \frac{a}{k} \dot{\sigma}_k + A_k \right) Q^{(k)} \right]_A + \sum_k [(v_k^{(b)} - \sigma_k) e^a Q_a^{(k)} ]_A \\ & + \frac{3}{16} \sum_k (\pi_k^{({\gamma})} e^a e^b Q_{ab}^{(k)})_A + 2 \sum_k \int^{\lambda_R}_{\lambda_A} \dot{\Phi}_k Q^{(k)} d\lambda\;, \end{split}$$ where the event $A$ is the intersection of the null geodesic with the last scattering surface. In retrospect, one could re-derive the result for the temperature anisotropy in braneworld models much more simply by retaining the effective stress-energy variables $\rho^{\text{tot}}$, $P^{\text{tot}}$, $q_{a}^{\text{tot}}$ and $\pi_{ab}^{\text{tot}}$ in the propagation and constraint equations used in the manipulation of the left-hand side of Eq. (\[e:temperature10\]), rather than isolating the braneworld contributions. If we adopt the longitudinal gauge, defined by $\sigma_{ab}=0$, we find that the electric part of the Weyl tensor and the acceleration are related by $\Phi_k = -A_k$ if the total anisotropic stress $\pi_{ab}^{\text{tot}}$ vanishes. It follows that in this zero shear frame we recover the result found by Langlois et al [@langlois]. Regarding the imprint of braneworld effects on the CMB, we note several possible sources. Once the universe enters the low-energy regime the dynamics of the perturbations are essentially general relativistic in the absence of non-local anisotropic stress (see Sec. \[sec:lowenergy\]). If ${\cal P}_{ab}$ really were zero, the only imprints of the braneworld on the CMB could arise from modifications to the power spectrum (and cross correlations) between the various low-energy modes. 
Since there are two additional isocurvature modes in the low-energy universe due to braneworld effects, it need not be the case that adiabatic fluctuations produced during high-energy (single-field) inflation give rise to a low-energy universe dominated by the growing, adiabatic, general-relativistic mode. The possibility of exciting the low-energy isocurvature (brane) modes from plausible fluctuations in the high-energy regime is worthy of further investigation. In practice we do not expect ${\cal P}_{ab}=0$. In this case the non-local anisotropic stress provides additional driving terms to the dynamics of the fluctuations, and we can expect a significant manifestation of five-dimensional Kaluza-Klein effects on the CMB anisotropies. Conclusion {#sec:discussion} ========== In this paper we have discussed the dynamics of cosmological scalar perturbations in the braneworld scenario from the viewpoint of brane-bound observers making use of the 1+3 covariant approach. We only considered matter components present in the $\Lambda$CDM model, but it is straightforward to include other components such as hot dark matter. We presented approximate, analytic solutions for the fluctuations in the low-energy universe under the assumption that the non-local anisotropic stress was negligible. We obtained two additional isocurvature modes not present in general relativity in which the additional density gradients or peculiar velocities of the total matter are compensated by fluctuations in the non-local variables. In practice we do not expect the non-local anisotropic stress ${\cal P}_{ab}$ necessarily to be negligible; in this case our solutions to the homogeneous equations should form a useful starting point for the construction of solutions to the driven equations. By adopting a four-dimensional approach our presentation is necessarily limited. In particular, we cannot predict the evolution of ${\cal P}_{ab}$ on the brane. 
However, the four-dimensional approach should be well-suited to a phenomenological description of these five-dimensional Kaluza-Klein modes. A simple possibility is to adopt an ansatz for the evolution of ${\cal P}_{ab}$ [@barrow], and this will be explored further in a future paper. We also presented solutions to the perturbation equations in the high-energy regime where braneworld effects dominate. In this limit the gravitational potential satisfies a fourth-order differential equation which we were unable to solve analytically (even with ${\cal P}_{ab}=0$). Instead we constructed power series solutions for the case where the non-local anisotropic stresses vanish; these should prove useful for setting initial conditions in the high-energy regime when performing a numerical solution of the perturbation equations. We found two additional modes over those present in general relativity, one of which can be described as a (brane) isocurvature velocity mode. We also showed that the adiabatic decaying mode varies less rapidly than in general relativity on large scales. The detailed calculation of braneworld imprints on the CMB (in the phenomenological approach discussed above) will be described in a future paper. Here we showed with the 1+3 covariant approach that the line of sight integral for the CMB temperature anisotropies is unchanged in form from general relativity. We also noted that excitation of the additional isocurvature modes present in the low-energy universe could provide an additional imprint in the CMB, over and above that due to the non-local anisotropic stress. B. L., P. D., and A. L. thank the organisers of the 8th course of the International School of Astrophysics “D. Chalonge” held in Erice, 2000 at which this work was initiated. B. L. 
thanks the Relativity and Cosmology Group, University of Portsmouth for hospitality, and R. Maartens for useful correspondence on issues connected with the non-local anisotropic stress, and for helpful comments on an earlier draft of the paper. B. L. also thanks B. Bassett, C. Gordon, J. C. Hwang, D. Langlois, A. Lewis, Y. L. Loh, C. J. A. P. Martins, J. Soda, D. Wands and C. Van de Bruck for insightful comments. B. L. is supported by an Overseas Research Studentship, the Cambridge Commonwealth Trust and the Lee Foundation, Singapore. P. D. thanks the NRF (South Africa) for financial support and the Cavendish Laboratory for hospitality. A. C. acknowledges a PPARC Postdoctoral Fellowship. Energy frame equations in the radiation-dominated era ===================================================== In this appendix we present a complete set of evolution equations for the total matter variables in the matter energy frame, $q_a = 0$. Note that the four-velocity of the energy frame is not necessarily a timelike eigenvector of the Einstein tensor in the presence of the non-local braneworld corrections to the effective stress-energy tensor. We assume that the matter is radiation dominated, the non-local energy density vanishes in the background, and we ignore local anisotropic stresses. We also assume that the baryons and CDM make a negligible contribution to the fractional gradient in the total matter energy density and to the energy flux, thus excluding the CDM and baryon isocurvature modes. We also give the evolution equations for the non-local density gradient and energy flux in the matter energy frame. 
Denoting the variables in the energy frame by an overbar[^5], the relevant equations for scalar perturbations are $$\begin{aligned} \dot{\bar{\Delta}}_a &=&\frac{1}{3}\Theta\bar{\Delta}_a-\frac{4}{3} \bar{{\cal Z}}_a\;, \\ {\dot{\bar{\cal Z}}}_a &=& -\frac{2}{3}\Theta \bar{\cal Z}_a -\frac{1}{4} D^2 \bar{\Delta}_a -\left(\frac{\tilde{\kappa}}{\kappa}\right)^4 \rho \bar{\Upsilon}_a -\frac{1}{2} \kappa^2 \rho \bar{\Delta}_a \left(1+\frac{5\rho}{\lambda}\right)\;,\\ \dot{\bar{\Upsilon}}_a &=& -\frac{a}{\rho} D^2 {\bar{\cal Q}}_a \;,\\ {\dot{\bar{\cal Q}}}_a &=& - \frac{4}{3}\Theta \bar{\cal Q}_a - \frac{\rho}{3a} \bar{\Upsilon}_a - \frac{2 \kappa^4 \rho^2}{9 a}\bar{\Delta}_a - D^b \bar{\cal P}_{ab}.\end{aligned}$$ Solutions of these equations are related to those in the CDM frame (Sec. \[sec:radiation\]) by linearising the frame transformations given in Ref. [@maartens4]. If the CDM projected velocity is $\bar{v}^{(c)}_a$ in the energy frame, the variables in the CDM frame are given by $$\begin{aligned} \Delta_a &=& \bar{\Delta}_a - \frac{4}{3} a \Theta \bar{v}^{(c)}_a\; ,\\ {\cal Z}_a &=& \bar{{\cal Z}}_a + \frac{1}{a} D_a D^b \bar{v}^{(c)}_b - \frac{2\kappa^2 \rho}{a}\left(1+\frac{\rho}{\lambda}\right)\bar{v}^{(c)}_a \;,\\ \Upsilon_a &=& \bar{\Upsilon}_a \; ,\\ {\cal Q}_a &=& \bar{{\cal Q}}_a \; , \\ q_a &=& - \frac{4}{3}\rho \bar{v}_a^{(c)}\; ,\end{aligned}$$ where we have used ${\cal U}=0$ in the background. The CDM peculiar velocity evolves in the energy frame according to $$\dot{\bar{v}}_a^{(c)} = - \frac{1}{3}\Theta \bar{v}^{(c)}_a + \frac{1}{4a} \bar{\Delta}_a.$$ 
[^1]: If the vorticity is non-vanishing flow-orthogonal hypersurfaces will not exist, and ${\cal R}$ cannot be interpreted as the spatial curvature scalar. [^2]: The notation $O(n)$ is short for $O(\epsilon^n)$ where $\epsilon$ is some dimensionless quantity characterising the departure from FRW symmetry. [^3]: More generally, curvature effects can be ignored for modes with wavelength much shorter than the curvature scale, $k \gg \sqrt{|K|}$, provided the curvature does not dominate the background dynamics. [^4]: We have not investigated the consistency of the condition ${\cal P}_{ab}=0$ with the five-dimensional bulk dynamics in the presence of a perturbed brane. [^5]: This notation should not be confused with our use of the overbar to denote rescaling by $\kappa^4 \rho$ in Sec. \[power\].
--- abstract: 'A modified Dirac equation with general Lorentz- and CPT-violating operators in the electromagnetic field is studied. Constraints on and possible sensitivities to Lorentz-violating coefficients in the nonminimal sector up to mass-dimension six can be obtained by analyzing Penning-trap results involving anomaly frequencies.' address: | Physics Department, Indiana University\ Bloomington, IN 47405, USA author: - Yunhua Ding title: Testing Lorentz and CPT Symmetries in Penning Traps --- Introduction ============ Lorentz and CPT symmetries are fundamental in the Standard Model, which is tremendously successful in describing nature both theoretically and experimentally. However, these symmetries could be violated through spontaneous breaking in an underlying theory that includes quantum gravity, such as strings.[@string] The general framework characterizing such violations is the Standard-Model Extension (SME), which incorporates General Relativity and the Standard Model.[@SME] Experiments over a broad range provide striking constraints on the Lorentz-violating coefficients.[@datatables] The focus of the current work is possible Penning-trap signals arising from the nonminimal fermion sector, including interactions up to mass-dimension six. Theory ====== In the SME framework, a charged Dirac fermion $\psi$ with mass $m$ in the presence of Lorentz violation is described by a modified Dirac equation,[@nonminimal] $$(iD_\mu \gamma^{\mu}-m+\hat{\mathcal{Q}})\psi=0,$$ where $iD_\mu=i\partial_\mu-q A_\mu$, with $A_\mu$ being the electromagnetic four-potential. The quantity $\hat{\mathcal{Q}}$ is a general Lorentz-violating operator involving covariant derivatives $iD_\mu$, with anticommutators associated with coefficients for Lorentz violation affecting propagation and with commutators introducing couplings to the field strength controlled by $F$-type coefficients for Lorentz violation. 
The study of the latter was initiated in the contexts of noncommutative electrodynamics[@chklo] and topological phases.[@be04] We note that not all the coefficients appearing in the general operator $\hat{\mathcal{Q}}$ are observables, because possible field redefinitions can be made to remove certain combinations. For precession measurements in Penning traps, the primary interest involves the difference between energy levels. The trap is idealized as a uniform constant magnetic field, and the energy shifts due to Lorentz violation are calculated using perturbation theory, by taking the expectation value of the perturbative hamiltonian corrected by Lorentz violation with the unperturbed Landau wavefunctions. We can expect the energy shifts to contain terms proportional to the Lorentz-violating coefficients, the fermion mass, the magnetic field, or possible combinations. In a typical Penning trap, the size of the magnetic field is of order 1-10 tesla, which is suppressed relative to the fermion mass by several orders of magnitude. Therefore terms proportional to the magnetic field and associated with propagation effects can be safely ignored during the analysis, and the magnetic field plays a dominant role only for interactions with $F$-type coefficients. Experimental signals ==================== There are two types of energy differences related to measurements in Penning traps, corresponding to the cyclotron and anomaly frequencies. Since the cyclotron motion of a fermion in a Penning trap is created by the presence of the magnetic field in the trap, any signals involving the cyclotron frequencies in principle depend on the magnetic field. This is suppressed relative to the fermion mass, so the cyclotron motion can be safely ignored.
Therefore in this work we focus our analysis on the experimental signals involving anomaly frequencies in Penning traps, e.g., studies of the $g$ factor and magnetic moment of a single fermion and their difference between particles and antiparticles. Another important feature of Lorentz violation in any local laboratory frame is the sidereal variation due to the Earth's rotation. As a consequence, the quantities measured in the laboratory oscillate in sidereal time. We adopt the standard Sun-centered inertial frame[@suncenter] to express our results. Applications and results ======================== Precision measurements involving particles and antiparticles in Penning traps can be used to set bounds on the Lorentz-violation coefficients.[@minimal; @penningtrap] Such experiments involve measurements of the ratio between anomaly and cyclotron frequencies.[@experiments] We point out that since the transformation from the local laboratory frame to the Sun-centered frame depends on the colatitude and on the magnetic-field configuration in the trap, in principle each of these experiments is sensitive to a different combination of coefficients. The results are summarized in Ref. . They extend the range of previous work[@minimal] for the minimal sector by including the dimension-four $g$ coefficient and also by presenting nonminimal results for mass dimensions five and six. Acknowledgments {#acknowledgments .unnumbered} =============== This work was supported in part by the Department of Energy under grant number [DE]{}-SC0010120 and by the Indiana University Center for Spacetime Symmetries. [xx]{} V.A. Kostelecký and S. Samuel, Phys. Rev. D [**39**]{}, 683 (1989); V.A. Kostelecký and R. Potting, Nucl. Phys. B [**359**]{}, 545 (1991); Phys. Rev. D [**51**]{}, 3923 (1995). D. Colladay and V.A. Kostelecký, Phys. Rev. D [**55**]{}, 6760 (1997); Phys. Rev. D [**58**]{}, 116002 (1998); V.A. Kostelecký, Phys. Rev. D [**69**]{}, 105009 (2004). V.A. Kostelecký and N.
Russell, 2016 edition, arXiv:0801.0287v9. V.A. Kostelecký and M. Mewes, Phys. Rev. D [**88**]{}, 096006 (2013). S.M. Carroll, J.A. Harvey, V.A. Kostelecký, C.D. Lane, and T. Okamoto, Phys. Rev. Lett. [**87**]{}, 141601 (2001). H. Belich *et al.*, Eur. Phys. J. C [**41**]{}, 421 (2005); J.B. Araujo, R. Casana, and M.M. Ferreira, Jr., these proceedings. V.A. Kostelecký and M. Mewes, Phys. Rev. D [**66**]{}, 056005 (2002). R. Bluhm, V.A. Kostelecký and N. Russell, Phys. Rev. Lett. [**79**]{}, 1432 (1997); Phys. Rev. D [**57**]{}, 3923 (1998). Y. Ding and V.A. Kostelecký, Phys. Rev. D [**94**]{}, 056008 (2016). R.K. Mittleman *et al.*, Phys. Rev. Lett. [**83**]{}, 2116 (1999); H. Dehmelt *et al.*, Phys. Rev. Lett. [**83**]{}, 4694 (1999); D. Hanneke *et al.*, Phys. Rev. Lett. [**100**]{}, 120801 (2008); S.F. Hoogerheide *et al.*, Rev. Sci. Instrum. [**86**]{}, 053301 (2015); J. DiSciacca *et al.*, Phys. Rev. Lett. [**110**]{}, 130801 (2013); A. Mooser *et al.*, Nature [**509**]{}, 596 (2014); S. Ulmer *et al.*, Nature [**524**]{}, 196 (2015); M. Niemann *et al.*, in V.A. Kostelecký, ed., [*CPT and Lorentz Symmetry VI*]{}, World Scientific, Singapore, 2014.
**Electrical charges in gravitational fields,** **and Einstein’s equivalence principle** [**Gerold Gründler**]{}[^1] Astrophysical Institute Neunhof, Nürnberg, Germany Einstein’s Equivalence Principle (EP) ===================================== Through experimental evaluations and gedanken[-]{}experiments, Galilei demonstrated[@Galilei:discorsi] that all bodies, independent of their materials and independent of their weights, would fall down to earth with equal acceleration, if air friction could be eliminated. In Newton’s theory[@Newton:Principia], Galilei’s finding is a simple consequence of the proportionality \[knbgxmybg\]$$\begin{aligned} m_{\text{inertial}}=\text{constant}\cdot m_{\text{gravitational}} \end{aligned}$$ between inertial and gravitational mass. The constant is independent of the materials and weights of the masses, and can be chosen by definition as $$\begin{aligned} \text{constant}&\equiv 1\\ \Longrightarrow\quad m&\equiv m_{\text{inertial}}=m_{\text{gravitational}} \ . \end{aligned}$$ As a consequence, if the force $$\begin{aligned} \boldsymbol{F}=m\boldsymbol{a} \end{aligned}$$ is acting on each body in a laboratory, then it is impossible to find out by any mechanical measurement inside the laboratory (without looking out of the windows) whether the laboratory is at rest within a homogeneous gravitational field, which is exerting the gravitational acceleration $$\begin{aligned} \boldsymbol{g}=+\boldsymbol{a}=\frac{\boldsymbol{F}}{m} \ ,\label{ksdjdjvjna} \end{aligned}$$ or whether the laboratory is — far[-]{}off any measurable gravitational field — being boosted by a rocket with acceleration $$\begin{aligned} \boldsymbol{a}_{\text{boost}}=-\boldsymbol{a}=-\frac{\boldsymbol{F}}{m}\ .\label{ksdjdjvjnb} \end{aligned}$$ Obviously, $\boldsymbol{g}$ in \eqref{ksdjdjvjna} can be constant (within measurement accuracy) only in sufficiently small laboratories, while in larger laboratories tidal effects and other inhomogeneities will be observable.
Einstein postulated[@Einstein:GravLicht] the perfect equivalence of accelerated laboratories and sufficiently small inertial laboratories in gravitational fields with regard not only to mechanical phenomena, but to *all* physical phenomena — and hence to *all* laws of nature. $$\begin{aligned} \mbox{\parbox{.86\linewidth}{Einstein's Equivalence Principle (EP)\,:\newline{}All laws of nature are identical in an inertial reference system in a homogeneous gravitational field with gravitational acceleration \raisebox{0mm}[0mm][0mm]{$\boldsymbol{g}$}, and in a reference system which is accelerated by \raisebox{0mm}[0mm][0mm]{$\boldsymbol{a}_{\text{boost}}=-\boldsymbol{g}$} in a region of space which is free of measurable gravitation.}}\label{eq:EP} \end{aligned}$$ The EP is one of the main pillars onto which Einstein founded his General Relativity Theory (GRT). Accelerated electrical charges ============================== While the EP is correct beyond doubt for mechanical phenomena (as has been well[-]{}known since Newton’s days), there has been an ongoing dispute for decades whether the EP is correct with regard to accelerated electrical charges, which are (or are not?) emitting electromagnetic radiation. Based on Maxwell’s classical theory of electromagnetism[@Maxwell:treatise], Larmor[@Larmor:radform] computed the electromagnetic power radiated by a particle with charge $q$ and velocity $\boldsymbol{v}$, which is accelerated by $\boldsymbol{\dot{v}}$ (note that SI units[@SI:units] are used throughout this article): \[ksahghsdg\]$$\begin{aligned} P=\frac{2q^2\,\boldsymbol{\dot{v}\cdot\dot{v}}}{3c^3(4\pi\epsilon _0)}\quad\text{if }v\ll c\label{ksahghsdga}\end{aligned}$$ With $m$ being the particle’s rest mass, and $\dif\!
p/\dif\!\tau $ being the derivative of its four[-]{}momentum $(p ^\mu )\equiv (\gamma mc,\gamma m\boldsymbol{v})$ with respect to its proper time $\tau $, the relativistic generalization of \eqref{ksahghsdga} is $$\begin{aligned} P=-\frac{2q^2}{3c^3(4\pi\epsilon _0)m^2}\,{\ensuremath{\frac{\mathrm{d}\hspace{0.1em}p_\nu}{\mathrm{d}\hspace{0.1em}\tau }}}\,{\ensuremath{\frac{\mathrm{d}\hspace{0.1em}p^\nu}{\mathrm{d}\hspace{0.1em}\tau }}}\ . \label{ksahghsdgb}\end{aligned}$$ Both equations are (only) valid in inertial reference systems. The derivation of \eqref{ksahghsdgb} is demonstrated in full detail in [@apin:se90115]. The qualitative and quantitative correctness of \eqref{ksahghsdg} has been confirmed by countless technical applications like radio antennas, X[-]{}ray tubes, and synchrotrons. The contradiction {#sec:contradict} ================= Consider two identical charges, one held at rest some meters above the earth surface, and the other falling down from some meters height to the earth surface. As the Coriolis acceleration and tidal accelerations (by which a reference system fixed to the earth surface differs from a true inertial system) are much smaller than the gravitational acceleration due to the mass of the earth, according to Larmor’s law the falling charge, being accelerated with $\dot{v}\approx 9.81\,\text{m}/\text{s}^2$, should radiate much more strongly than the charge at rest relative to the earth surface. According to Einstein’s equivalence principle \eqref{eq:EP}, however, the charge at rest in the earth’s gravitational field ($g\approx 9.81\,\text{m}/\text{s}^2$) should radiate, because the setup is equivalent to a setup in which the same charge is boosted with $\dot{v}\approx -9.81\,\text{m}/\text{s}^2$ in a region of space free of significant gravitation, while the charge in free fall should not radiate at all, because the setup is equivalent to the same charge not being boosted in a region of space free of significant gravitation. Thus the EP and Larmor’s formula seem to be incompatible.
Apparently at least one of them must be wrong. Below we will argue, however, that the EP is completely correct, and that Larmor’s formula actually is not wrong, but must be interpreted differently than done above. Over the decades, theorists have worked out highly sophisticated constructions, some of them (allegedly) proving that and why free falling charges radiate, while charges supported at rest in gravitational fields do not radiate, others of them (allegedly) proving that and why just the opposite is true. See [@Groen:equivprinc section 4] for a review of the theoretical achievements. In a situation where theorists cannot find consensus for decades, usually the experimentalists should provide the decision. But in this case that’s easier said than done. No help from direct experimental observation {#sec:dirobs} ============================================ If a cloud of $N$ elementary charges is accelerated by $g\approx 9.81\,\text{m}/\text{s}^2$, then according to Larmor’s law the power $$\begin{aligned} P\stackrel{\eqref{ksahghsdga}}{=}N\cdot\frac{2e^2g^2}{3c^3(4\pi\epsilon _0)}\approx N\cdot 5.5\cdot 10^{-52}\,\text{W} \end{aligned}$$ will be radiated. If we could measure a radiated power $\geq 1\,\text{pW}$ (which is a quite ambitious objective), we would need to let a charge as large as $-e\cdot 10^{40}$ fall down to earth, or hold it at rest above the earth surface, to achieve a measurable result, and at the same time we would need to make sure that the charge really is subject only to gravitation, but not to the electrostatic force exerted by the charge of $+e\cdot 10^{40}$, which remained on earth when the opposite charge was prepared. Considering that the earth consists of about $10^{50}$ atoms, and that the attraction between a charge $-e$ and a charge $+e$ at the same distance is about $10^{33}$ times larger than the gravitational attraction between two silicon atoms, that experiment is clearly impossible on earth.
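The figure of $5.5\cdot 10^{-52}\,$W per elementary charge is easy to check numerically; a minimal sketch, assuming standard CODATA values for $e$, $c$ and $\epsilon_0$:

```python
import math

# Larmor power (SI units, v << c) for one elementary charge
# accelerated at g = 9.81 m/s^2:  P = 2 e^2 g^2 / (3 c^3 * 4 pi eps0)
e    = 1.602176634e-19   # elementary charge [C]
c    = 2.99792458e8      # speed of light [m/s]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
g    = 9.81              # gravitational acceleration [m/s^2]

P = 2 * e**2 * g**2 / (3 * c**3 * 4 * math.pi * eps0)
print(f"P = {P:.2e} W")  # ~5.5e-52 W per elementary charge
```

At this rate even $10^{40}$ elementary charges radiate only about $5\,$pW, which is what makes the direct measurement hopeless.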
Alternatively we might try to observe somewhere in the universe charges falling in gravitational fields which are much stronger than the field of the earth. But if we are lucky and observe such radiation, it will hardly be possible to prove that the radiating charge, lightyears away from the observer, really is in free fall, and is not accelerated by some external electromagnetic field. Hence it is impossible today, and will probably (though arguably) stay impossible forever, to decide by direct experimental observation whether charges do or do not radiate when they are in free fall or supported at rest in gravitational fields. But there is some indirect experimental evidence ================================================ The indirect evidence announced here results simply from the analysis of radiating charges in antennas, X[-]{}ray tubes, and synchrotrons. First consider the synchrotron. In storage rings, electrically charged particles are subject to an inertial centrifugal acceleration, which is exactly balanced by the centripetal acceleration exerted by the Lorentz force. Hence the orbiting charges feel weightless, like cosmonauts orbiting around the earth in the International Space Station feel weightless. Described in an inertial reference system, however, the charges are accelerated and should radiate. But how can the orbiting charges know that emission of synchrotron radiation is due, even though they do not feel that acceleration? The obvious answer is: A charged particle indeed does not feel any radial acceleration in a storage ring, but the electromagnetic field emanating from the particle does. The electromagnetic field is not an integral part of the charged particle. Instead it is something external to the particle. There is energy stored in the field. Hence the field is subject to gravity and inertial forces. But it is not subject to the Lorentz force, which the magnetic field of the storage ring is exerting onto the charged particle.
While the Lorentz force is precisely compensating the inertial centrifugal force acting onto the particle, the centrifugal force acting onto the field is not compensated by any centripetal force. Therefore the Lorentz force is producing a *relative acceleration* between the stored elementary particle and its electromagnetic field. In antennas and X[-]{}ray tubes, just the opposite is true: Electrons are accelerated, and feel the acceleration, while their electromagnetic fields are not accelerated. In any case of radiating charges, there is a *relative acceleration* between the charge and its field. Nobody ever observed a charge radiating while there was no relative acceleration between the charge and its field. Hence we arrive at a consistent picture if we assume that it is just the relative acceleration between a charge and its field which is causing the emission of radiation. We therefore clarify the correct interpretation of Larmor's formula by this $$\begin{aligned} \mbox{\parbox{.86\linewidth}{Application note for Larmor's formula \eqref{ksahghsdg}\,:\newline{}$\boldsymbol{\dot{v}}$ is not an arbitrary acceleration, but a relative acceleration between the charge and its electromagnetic field, {\mbox{i.\,e.}\ }an acceleration due to any force but not gravity.}}\label{eq:interpret} \end{aligned}$$ Indeed, if the derivation[@apin:se90115] of Larmor’s formula is carefully reviewed, it becomes obvious that $\boldsymbol{\dot{v}}$ is always understood as an acceleration due to an external force which affects only the charged particle, but not the electromagnetic field emanating from it. Therefore $\boldsymbol{\dot{v}}$ must not be confused with gravitational acceleration. Stress is just another name for relative acceleration. The conviction that stress between a charged particle and its field is the true cause of the radiation is not new.
This approach has been worked out in particular by Soker and Harpaz[@Soker:radchargeerst; @Soker:radcharge]. What we have done here is essentially to point out experimental evidence (synchrotron radiation of charged particles in storage rings, as compared to radiating electrons in antennas and X[-]{}ray tubes) in support of this point of view. Charges supported at rest in gravitational fields {#sec:chargesupport} ================================================= This case is more complex than expected, because the intricate general[-]{}relativistic effect of event[-]{}horizons comes into play. First, we conclude from our previous discussion: A charge, supported at rest in a gravitational field, should radiate, because the support is holding the charge in its place due to electromagnetic forces which exactly balance the gravitational force, while the field emanating from the charge is not supported, but subject to gravity. Hence there is stress between the particle and its field, resulting in radiation. In apparent contradiction, Unnikrishnan and Gillies[@Unnikrishnan:radcharge] cite two convincing arguments demonstrating that an observer supported at rest in the same gravitational field will not observe radiation: First, there is no current and hence no magnetic field in this static setup, which would be an indispensable precondition for electromagnetic radiation. Second, energy conservation would be violated: The energy content of the gravitational field, and in particular the potential energy of the charge at constant height in the gravitational field, is constant while it is radiating. We could surround the radiating charge by antennas, let the received radiation do work, and thus have an inexhaustible source of energy, the perfect perpetuum mobile.
But another observer, in free fall passing by the charge, could see the radiation, as is obvious if we apply the EP: If the support of the charge — far off any measurable gravitational field — is accelerated by a rocket drive, then in an inertial reference system (i.e. in the system in which the free falling observer is at rest) there indeed is a current and hence a magnetic field. And the accelerated support indeed is doing work, supplying the energy for acceleration of the charge and the radiation emitted by the charge. There is no true contradiction, however. In an important article, Rohrlich[@Rohrlich:equivalence] showed that the radiation emitted by an accelerated charge can be observed by a detector at rest in an inertial system (i.e. accelerated relative to the emitting charge), while a detector which is co[-]{}moving with the accelerated charge will only see an electrostatic field. Transformed back into the picture with the charge at rest in the gravitational field, this result says that the co[-]{}moving detector, which as well is at rest in the gravitational field, will observe only an electrostatic field but no radiation, while another detector, which in accelerated movement (i.e. in free fall) is passing by the charge, will observe the radiation. According to GRT, there exists an event horizon — often called Rindler[-]{}horizon — between the charge supported at rest in the gravitational field, and the observer at rest relative to the charge. The observer cannot see the radiation, which is beyond his event[-]{}horizon. The effect of event horizons has been evaluated in detail by Boulware[@Boulware:acccharge]. For a particularly clear treatment, see the article by deAlmeida and Saa[@Almeida:horizon]. Hence the perpetuum mobile, with its antennas at rest relative to the charge, will not work.
And antennas passing the charge at rest in free fall wouldn’t violate energy conservation, because they lose during their fall much more potential energy than they can gain due to absorption of radiation. Rohrlich’s result has been rejected by Soker and Harpaz[@Soker:radcharge], who argue that emission and absorption of radiation are objective events, which cannot simply disappear by whatever transformation of time[-]{}space coordinates. Even if the co[-]{}moving observer cannot see the radiation, he can of course note that a free falling observer is receiving energy. For example, the free falling observer could use the received energy to shine a light, and the co[-]{}moving observer, at rest relative to the charge, could see the light. But according to the observation of the co[-]{}moving observer, no energy has been extracted from the electrostatic field of the charge. Isn’t energy conservation violated? Clearly this question deserves a more detailed analysis. Let’s build a receiver device, consisting of an antenna and a battery, into which all the electromagnetic energy is stored which is received by the antenna. $M$ is the mass of the receiver when the battery is empty. To carry this receiver from the earth surface up to a tower of height $H_0$ above the earth surface, the work $$\begin{aligned} W_{\text{start}}=M\!{\int \limits }_0^{H_0}\!\dif\! h\,g(h) \end{aligned}$$ must be done. At height $H_0/2$ a charge is fixed to the tower. Now we let the receiver fall down. Being in free fall, the receiver will see the charge radiating, pick up the radiation energy $\Delta E$, and then crash onto the earth surface. The energy $\Delta E$ can be extracted from the battery, and can be used to do some work. From the point of view of the observer who is at rest relative to the charge, no energy has been extracted from the electrostatic field of the charge. Therefore in his reference system, the energy is at the end $$\begin{aligned} W_{\text{end}}=\Delta E +W_{\text{crash}}\ .
\end{aligned}$$ Energy conservation requires $$\begin{aligned} W_{\text{start}}=W_{\text{end}}\ . \end{aligned}$$ Unfortunately this setup is not suited to check energy conservation, because the energy $W_{\text{crash}}$ is dissipated as uncontrolled heat into the environment. In a gedanken[-]{}experiment, we can improve the setup and get rid of the crash[^2]: Where the receiver crashed to earth, we dig a hole right down to the center of the earth, and straight further to the antipode surface. In a first experimental run, we remove the charge, and let the receiver fall down from the tower (height $H_0$) into the hole. As air friction is negligible in this gedanken[-]{}experiment, the receiver will swing through the earth, turn at height $H_0$ above the antipode earth surface, and swing back to our surface up to height $H_0$.[^3] The battery is empty, of course, because the charge has been removed. In a second experimental run, two identical charges are placed on the earth surface diametrically opposed right and left nearby the hole entrance. Thus we make sure that the receiver will not get any transversal momentum when it absorbs the radiation. Again we let the device fall down (with empty battery) from height $H_0$. When it has returned from its round[-]{}trip through the earth, the energy $E_{\text{battery}}>0$ can be extracted from the battery. As — from the point of view of the observer who is at rest relative to the charges — no energy has been extracted from the electrostatic field of the charges, energy conservation requires that the energy stored in the battery must come from the gravitational field. Hence we expect that the returning receiver will not reach height $H_0$ any more, but only a height $H_1<H_0$, which is determined by $$\begin{aligned} E_{\text{battery}}=M\!{\int \limits }_{H_1}^{H_0}\!\dif\!
h\, g(h)\ .\label{ishngsjdg} \end{aligned}$$ A simple mechanical model, which doesn’t require relativistic treatment and consideration of event horizons, may serve to make this result plausible: The charges are replaced by guns, which fire a bullet each into the receiver when it passes by. Let $v_{\text{a}}$ be the velocity of the receiver just before it is hit by the bullets on its way from the tower down into the hole. $v_{\text{b}}$ is its velocity immediately after it absorbed the bullets. When returning after the round[-]{}trip through the earth to the surface, the velocity of the receiver again is $v_{\text{b}}$. Then again two bullets are fired onto it, and $v_{\text{c}}$ is its velocity immediately after it absorbed the bullets. With horizontal alignment of the guns, momentum conservation gives $$\begin{aligned} Mv_{\text{a}}=(M+2m_{\text{bullet}})v_{\text{b}}=(M+4m_{\text{bullet}})v_{\text{c}}\ .\label{mnsngnbs} \end{aligned}$$ After the receiver was hit a second time by two bullets, it moves up to height $H_1$, converting all its kinetic energy into potential energy: $$\begin{aligned} (M+&4m_{\text{bullet}})\!{\int \limits }_0^{H_1}\!\dif\! h\,g(h)\, =\notag\\ &=\frac{1}{2}\,(M+4m_{\text{bullet}})v_{\text{c}}^2\stackrel{\eqref{mnsngnbs}}{=}\frac{1}{2}\,\frac{M}{1+4m_{\text{bullet}}/M}\, v_{\text{a}}^2=\notag\\ &=M\!{\int \limits }_0^{H_0}\!\dif\! h\,g(h)\ -2m_{\text{bullet}}v_{\text{a}}^2+\mathcal{O}(m_{\text{bullet}}^2/M^2) \end{aligned}$$ From this equation we read $$\begin{aligned} 4m_{\text{bullet}}\!{\int \limits }_0^{H_1}\!\dif\! h\,g(h)&=M\!{\int \limits }_{H_1}^{H_0}\!\dif\! h\,g(h)\ -\notag\\ &-2m_{\text{bullet}}v_{\text{a}}^2+\mathcal{O}(m_{\text{bullet}}^2/M^2)\ .\label{ksmngvnsdngb} \end{aligned}$$ The potential gravitational energy of the bullets in the mechanical model — i.e. the left side of this equation — resembles the electromagnetic energy $E_{\text{battery}}$ stored in the battery.
The first term on the right side is the energy extracted from the gravitational field. In \eqref{ishngsjdg}, the gravitational field is the only source for the energy gain of the receiver. But in the mechanical model there is a further source — the second line in \eqref{ksmngvnsdngb} — from which the receiver has harvested energy. This further source of energy is the guns (i.e. the chemical reactions in their cartridges), which add kinetic energy to the receiver: $$\begin{aligned} \Delta E_{\text{kin}}&=\frac{1}{2}\, (M+4m_{\text{bullet}})\, v_{\text{c}}^2-\frac{1}{2}\, Mv_{\text{a}}^2=\notag\\ &\stackrel{\eqref{mnsngnbs}}{=}\frac{1}{2}\,\Big(\frac{M}{(1+4m_{\text{bullet}}/M)}-M\Big)\, v_{\text{a}}^2=\notag\\ &=-2m_{\text{bullet}}v_{\text{a}}^2+\mathcal{O}(m_{\text{bullet}}^2/M^2) \end{aligned}$$ In contrast, the radiation energy of the charges must come exclusively from the gravitational field. Besides this difference, \eqref{ksmngvnsdngb} may be considered a confirmation of \eqref{ishngsjdg}. Thus there is no conflict with energy conservation. Still the observer (at rest relative to the charges) may wonder what is going on: As he can — in this gedanken[-]{}experiment of classical physics — perform measurements with arbitrary accuracy, he will note that the receiver is decelerated a little bit each time it passes by the charges, see \eqref{mnsngnbs}. The observer will conclude that kinetic energy is transformed into electromagnetic energy, which he finds after the experiment in the battery. The kinetic energy of the receiver again results from its potential energy in the earth’s gravitational field. In total, gravitational energy is converted into electromagnetic energy, which is stored in the battery of the falling receiver. The amount of energy conversion is inversely proportional to the square of the distance between the charges and the receiver. If the charges are removed, no energy conversion happens at all.
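The $\mathcal{O}(m_{\text{bullet}}/M)$ bookkeeping of the mechanical model is easy to verify numerically; a minimal sketch with illustrative values for $M$, $m_{\text{bullet}}$ and $v_{\text{a}}$ (not taken from the text):

```python
# Mechanical model: receiver of mass M absorbs two bullets of mass m, twice
# (perfectly inelastic), so momentum conservation gives
#   M v_a = (M + 2m) v_b = (M + 4m) v_c.
M, m, v_a = 1.0, 1e-4, 1.0           # illustrative values, m << M
v_c = M * v_a / (M + 4 * m)

# Kinetic-energy change after both absorptions: exact vs. leading order.
dE_exact  = 0.5 * (M + 4 * m) * v_c**2 - 0.5 * M * v_a**2
dE_approx = -2 * m * v_a**2          # the -2 m_bullet v_a^2 term of the text
print(dE_exact, dE_approx)           # agree up to O(m^2/M^2)
```

The residual difference between the two numbers scales as $(m/M)^2$, consistent with the $\mathcal{O}(m_{\text{bullet}}^2/M^2)$ remainder in the equations.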
The presence of the charges obviously is indispensable to make the conversion of gravitational energy into electromagnetic energy happen, but still neither the charges nor their electrostatic fields seem to be affected by the process. The charges resemble the catalysts which are applied in some chemical reactions. The catalysts don’t seem to be involved in the chemical process, but still their presence is indispensable to bring about the reaction. At this point of our considerations, we see that we have not only reached, but actually already slightly violated the limits of the application range of classical physics. First, note that *all* energy radiated by charges at rest in gravitational fields must be absorbed by some accelerated receiver. If radiation were to disappear unabsorbed to infinity, then the radiated energy could not be replenished by gravitational energy due to deceleration of a receiver, and energy conservation would be violated. Second, note: Energy radiated by the charges needs a finite time to propagate to the receivers. But extraction of energy from the gravitational field starts only when the radiation is being absorbed by the receiver. In other words: The radiation reaction force, which is supplying the energy of the radiation, here is perplexingly working on the receiver, but not on the emitter of the radiation. Hence energy conservation is violated for the time interval between emission and absorption of radiation. Odd timing of energy transfer is a well[-]{}known problem of the classical model of radiating charged particles, and the issue of radiation reaction never found a satisfying solution within classical physics. When Dirac considered the acceleration of an electron due to an electromagnetic force which is acting only for a short moment (i.e. a pulse), he found[@Dirac:classelectron equation (35)] that the electron is already accelerated *before* the pulse arrives at the particle’s position.
This is often called pre[-]{}acceleration in the literature. Wrong timing of energy transfer is a general problem of the classical radiation model, not restricted to charges in gravitational fields. Wheeler and Feynman tried, in their absorber theory[@Wheeler:absorber], to cope with both problems. They assumed that *all* radiation ever emitted in the universe is absorbed within finite time by an absorber. And they assumed that accelerated charges and absorbers interact both in a retarded and in an advanced manner, to overcome the timing problem. Their absorber theory suffers from severe flaws[@Gruendler:absorber], however, and does not overcome the shortcomings of the classical theory of radiating charged particles. As we knew upfront that the classical theory is not able to treat correctly the radiation reaction force acting onto charged particles which are radiating due to acceleration by electromagnetic forces, we could not reasonably expect that this problem would magically disappear when gravitation comes into play. While it is somewhat disturbing that the gravitational radiation reaction force is working not onto the emitter but onto the receiver of the radiation, this is not worse than other oddities we are used to encountering with the classical model of radiating charges. Hence it would not be reasonable now to shift the problem of radiation reaction to GRT and to doubt the validity of the EP. Instead, clearly classical electrodynamics is responsible for this problem. We know that electromagnetic energy is emitted and absorbed not in the form of continuous waves, but in the form of photons. Hence quantum electrodynamics, not classical electrodynamics, is appropriate to handle any questions regarding emission and absorption of radiation.
If we want — in spite of the mentioned reservations — to stick to the classical model, then we can summarize our conclusions regarding a charge supported at rest in a gravitational field as follows: With the last sentence, we adopted the point of view of quantum theory, which discusses observable phenomena, but not the point of view of classical physics, which discusses alleged objective facts even if they are trivially not observable due to the lack of an observer. Free falling charges ==================== As an immediate consequence of Larmor’s formula with the application note, a charged particle does not radiate if it is free falling in a gravitational field, because gravitation exerts the same acceleration on the particle and its field, and hence there is no stress between the particle and its field. We remark that inhomogeneities of the gravitational field (like tidal effects or anisotropy) will be negligible in the immediate vicinity of a charged particle (the near[-]{}field region), and will thus not induce radiation of free falling charges. The consistency and beauty of theories ====================================== The theories of physics must comply with all observed phenomena (this is the requirement of correctness), they must provide an appropriate description of all observable phenomena (this is the requirement of completeness), and there must be no contradictions between various parts of physical theories (this is the requirement of consistency). The apparent conflict between GRT (or rather its integral part, the EP) and classical electrodynamics, described in section \[sec:contradict\], was an inconsistency between theories. The conflict could be removed by a very slight adjustment of electrodynamics, i.e. the clarification of Larmor’s formula by the application note, while GRT stayed untouched. That’s not surprising.
Remember for example that the conflict between GRT and quantum field theory regarding the energy of the vacuum (the cosmological constant problem) could be settled[@Gruendler:ccp] by a slight modification of quantum field theory, while again GRT stayed untouched. If GRT is in conflict with some other theory, then most likely GRT will win, while the other theory will need amendment. The exceptional strength of GRT is caused by the exceptional clarity and simplicity of its premises. We could say: The exceptional strength of GRT is caused by its beauty. In the search for truth, the beauty of physical theories is a reliable guideline. [^1]: <gerold.gruendler@astrophys-neunhof.de>, [www.astrophys-neunhof.de](http://www.astrophys-neunhof.de/) [^2]: I thank Noam Soker (Technion, Haifa), who suggested this crash-free setup in private communication. [^3]: Probably almost every physicist has computed this little exercise in the first months of his/her studies.
--- abstract: 'We discuss the counting of minimal geodesic ball coverings of $n$-dimensional riemannian manifolds of bounded geometry, fixed Euler characteristic and Reidemeister torsion in a given representation of the fundamental group. This counting bears relevance to the analysis of the continuum limit of discrete models of quantum gravity. We establish the conditions under which the number of coverings grows exponentially with the volume, thus allowing for the search of a continuum limit of the corresponding discretized models. The resulting entropy estimates depend on representations of the fundamental group of the manifold through the corresponding Reidemeister torsion. We discuss the sum over inequivalent representations both in the two-dimensional and in the four-dimensional case. Explicit entropy functions as well as significant bounds on the associated critical exponents are obtained in both cases.' author: - | C. Bartocci, U. Bruzzo $\flat$, M. Carfora $\flat$,$\natural$, A. Marzuoli $\sharp$,$\natural$\  Dipartimento di Matematica, Università di Genova\ Via L.B. Alberti 4, 16132 Genova, Italy\ $\flat$ International School for Advanced Studies, SISSA-ISAS\ Via Beirut 2-4, 34014 Trieste, Italy\ $\natural$ Istituto Nazionale di Fisica Nucleare, Sezione di Pavia, Italy\ $\sharp$ Dipartimento di Fisica Nucleare e Teorica dell’Università di Pavia\ Via Bassi 6, I-27100 Pavia, Italy title: | ENTROPY OF RANDOM COVERINGS\ AND\ 4-D QUANTUM GRAVITY --- [**SISSA Ref.
97/94/FM**]{} e-mail addresses: bartocci@matgen.dima.unige.it, bruzzo@sissa.it, Carfora@pavia.infn.it, Marzuoli@pavia.infn.it Introduction ============ Dynamical triangulations, \[ADF\], \[D2\], \[Ka\], \[We\], have recently attracted much interest as a computationally manageable method for the investigation of discrete models of quantum gravity. This approach deals with a variant of Regge calculus \[R\], \[Wi\] where, as an alternative to the standard usage, the edge lengths of the triangulated manifolds are kept fixed and set equal to some minimal short-distance cut-off, whereas the underlying combinatorial structure of the triangulations takes the role of a statistical variable, varying in some ensemble of manifolds contributing to the model. A dynamical content is thus given to the connectivity of the triangulation, in such a way that each choice of a triangulation corresponds to a choice of metric by Regge calculus. This particular prominence given to the enumeration of triangulations gives dynamically triangulated gravity the seemingly simple flavour of a combinatorial theory. However, it must be stressed that this simplicity is largely apparent rather than actual, since at the classical level, and at variance with standard Regge calculus, dynamical triangulations do not afford a simple procedure for recovering the Einstein-Hilbert action out of its combinatorial counterpart, diffeomorphism invariance being now completely lost. The possible advantages in the use of dynamical triangulations are rather related to the different way in which one realizes, in this approach, the sampling of inequivalent Riemannian structures. This is obtained by choosing a representative metric (by fixing the edge lengths) and by ergodically varying the combinatorial structure of the triangulation.
We do not know of a proof which explicitly shows a correspondence between this procedure and a suitable continuous way of parametrizing the set of inequivalent riemannian structures. Perhaps the Gromov-Hausdorff topology discussed below provides such a correspondence. In any case, it is more or less tacitly assumed that in this way one sweeps a much larger set of riemannian structures as compared to the Regge case, where the formalism is, in this respect, less flexible owing to the constraints expressed by the triangular inequalities. These constraints tend to localize the edge-length-varying triangulations used in Regge calculus in a neighborhood of the riemannian structure corresponding to the triangulation originally given, whereas one expects that the set of discretized manifolds considered in the dynamically triangulated approach is uniformly distributed over the space of all riemannian structures. This is very appealing for discussing the phase structure in the space of the coupling constants of the theory: the cosmological constant and the gravitational coupling constant. By defining the regularized partition function as a sum over topologically equivalent triangulations, results for continuum quantum gravity can be extracted by looking for critical points, in the space of coupling constants, where the observables of the model, such as the average number of simplexes, diverge and obey scaling relations. This scaling behavior allows for a renormalization of the couplings in terms of the given edge length of the simplexes, so as to obtain finite values for the volume and other simple geometrical quantities characterizing the extended configurations dominating the theory in the continuum limit. In other words, one looks for the onset of a regime where the details of the simplicial approximation become irrelevant and a continuum theory can be constructed. There is a general comment that should be made at this stage.
In order to provide general entropy estimates for discretized manifolds, we find it expedient to introduce yet another kind of discretization, besides dynamical triangulations and Regge calculus. This discretization is associated with metric ball coverings of given radius. While not so useful from a numerical point of view, it provides a good analytical handle on discrete quantum gravity. It blends the simple combinatorial structure of dynamical triangulations with the deep geometrical content of Regge calculus. We feel that such a variety of possible models should be considered with a positive attitude, by taking advantage of their respective good properties rather than emphasizing the drawbacks, as is often done. Thus, even if in what follows we emphasize dynamical triangulations versus Regge calculus, this does not mean that we wish to privilege that formalism with respect to the other. The issue we address, the counting of the number of topologically equivalent discretizations of an $n$-manifold of given volume ($n\geq 3$), is present in both cases (see \[Fro\]), but it has recently been mostly emphasized for dynamical triangulations. As is well known, the main development of discrete models of quantum gravity, and in particular of dynamically triangulated gravity, has resulted from their role in providing a method for regularizing non-critical bosonic string theory (see [*e.g.*]{} \[FRS\] for a review). The latter can be seen as two-dimensional quantum gravity interacting with $D$ scalar fields, where $D$ is the dimension of the space where the string is embedded. The associated dynamically triangulated models correctly reproduce, in the continuum limit, the results obtained by conformal field theory.
In particular, they are consistent with the computation \[KPZ\], in the context of the Liouville model, of the entropy of closed surfaces with Euler characteristic $\chi$, area $A$ and interacting with matter fields with central charge $c\leq 1$, [*viz.*]{}, $$\begin{aligned} S_{\chi}(A) \simeq ({\Lambda})^{A} {A}^{\frac{{\chi}(\Sigma)}{2}({\gamma}_{str}- 2)-1} \label{Superficie}\end{aligned}$$ where $\Lambda$ is a suitable constant and ${\gamma}_{str}$, the [*string exponent*]{}, is given as a function of the central charge by $$\begin{aligned} {\gamma}_{str}=\frac{1}{12}(c-1-\sqrt{(25-c)(1-c)})\end{aligned}$$ The above expression for ${\gamma}_{str}$ is valid as long as $c\leq{1}$, and it appears to make sense only in the weak coupling phase corresponding to $c$ (or equivalently $D$) smaller than $1$. For $c>1$, conformal field theory becomes unstable, and the above expression for the string exponent is no longer reliable (recently, an extension of KPZ scaling to the $c>1$ case has been proposed by M. Martellini, M. Spreafico and K. Yoshida, \[MSY\]). Roughly speaking, it is believed that in this regime the surfaces develop spikes and long tubes, and as seen from a large distance the surface is no longer a two-dimensional object. It collapses into a branched polymer configuration, \[DFJ\]. It is important to stress that two-dimensional dynamically triangulated models are well defined also in these cases, where conformal field theory is no longer trustworthy, and they provide a technique accessible to computer simulations. A natural question concerns the possibility of extending the techniques and some of the general results of the two-dimensional case to dimensions three and four. This research program has been undertaken by various groups by performing extensive computer simulations of three- and four-dimensional triangulated manifolds.
Although these simulated systems have a rather small size as compared to the simulations used for 2D-gravity (typically one puts together $10^4$ four-simplexes, whereas in the two-dimensional case triangulations with $10^7$ triangles are not unusual \[Ag\]), interesting results about critical phenomena already emerge (see \[D1\] for an excellent review). Such results are qualitatively similar in the $3D$ and $4D$ cases, \[Ag\], \[Aj\], \[Va\], in the sense that the phase diagram of the theory as a function of the cosmological constant and the gravitational coupling constant shows the existence of a critical point. Here, the configurations dominating the statistical sum change from being crumpled non-extended objects to extended, finite Hausdorff-dimensional objects. In three dimensions there is rather strong evidence that this change is associated with a first order transition, indicating the absence of a continuum limit, whereas in four dimensions computer simulations indicate that the transition between the crumpled and the extended phases may be of a continuous nature. There is increasing evidence for the soundness of this picture, and at least from a general foundational point of view, dynamically triangulated gravity seems to be now well established also in dimensions three and four. However, there still remain some outstanding problems. The most obvious one is to obtain explicit analytic control on the theory (here we do not consider as dynamically triangulated models the formulations of 3D-gravity à la Ponzano-Regge). It is not yet known if it is possible to obtain such control, and the best results at the moment come from an interplay between computer simulations and the general analytic properties of the various models considered ([*e.g.*]{}, the choice of the most appropriate measure on the set of triangulated manifolds, \[BM\]).
The experience with the two-dimensional case shows that the delicate point here is to ascertain whether the number of dynamically triangulated $n$-manifolds ($n>2$) of given volume and fixed topology grows with the volume at most at an exponential rate. This is a basic entropy bound necessary for having the correct convergence properties of the partition function defining the model. In the case of surfaces, the required entropy bounds, such as (\[Superficie\]), are provided either by direct counting arguments, or by quantum field theory techniques \[BIZ\], \[FRS\] as applied to graph enumeration, a technique that has found utility in a number of far-reaching applications in surface theory \[Wt1\], \[Ko\], \[Pe\]. In higher dimensions, the natural generalizations of such approaches are not viable, even if numerical as well as some analytical evidence \[Am\], \[ADF\], \[Ag\] shows that exponential bounds do hold in simple situations (typically for manifolds with $n$-sphere topology). Recently, it has even been argued, on the basis of some numerical evidence, that an exponential bound may fail to hold in dimension four \[CKR\], but this analysis is quite controversial \[AJ\]. Conversely, Boulatov has provided a nice argument proving that for a dynamically triangulated homotopy three-sphere there is an exponential bound, \[Bou\] (the constants in the estimates are not characterized, however). Thus, a systematic method for providing explicit entropic bounds relating topology to the number of topologically equivalent triangulations appears as a major open issue in higher dimensional dynamically triangulated gravity \[D1\]. Without any control on the topology of manifolds, there is no hope in the search for an exponentially bounded entropy function for the number of equivalent triangulations.
For instance, it can be shown \[Am\] that the number of distinct triangulations on (three-)manifolds with given volume $V$ and arbitrary topological type grows at least factorially with $V$. Thus suitable constraints on the class of riemannian manifolds considered are necessary for having exponential growth of the number of equivalent triangulations. By analogy with the two-dimensional case, one may simply fix the topology a priori ([*e.g.*]{}, an $n$-sphere topology, $n=3$, $n=4$). This is a pragmatic point of view. It has the advantage of simplicity, but it has the serious drawback that it does not easily allow one to deal with fluctuating topologies, either because it is difficult to know a priori what kind of topological invariants are going to enter the entropy estimates in dimension $n\geq 3$, or because a topological classification of the relevant class of manifolds is often lacking, [*e.g.*]{}, in the case of three-manifolds. The point of view implicit in the approach above is also motivated by the assumption that the topology of a manifold is apparently not under control in terms of the geometrical invariants characterizing the size of a manifold (and hence its entropy), namely the volume or other simple geometrical elements such as the diameter, and bounds on curvatures. However, the experience with recent developments in riemannian geometry may suggest a change of this restrictive viewpoint. Such an indication comes from a basic theorem due to Cheeger (see [*e.g.*]{} \[Ch\] for a readable account of such finiteness theorems), according to which, for any given dimension, there are a [*finite number of homeomorphism types*]{} in the set of compact riemannian manifolds with volume bounded below, diameter bounded above and sectional curvature bounded in absolute value. Further finiteness results of this type, even under weaker control on the size of the manifolds, have recently been obtained \[Pt\], \[GPW\].
A typical example in this sense is afforded by considering, for arbitrary $r\in {\bf R}$, $v\in {\bf R}^+$, $D\in {\bf R}^+$ and integers $n\geq 3$, the set of closed connected Riemannian $n$-manifolds $M$ whose sectional curvatures satisfy $sec(M)\geq r$, whose volume satisfies $Vol(M)\geq v$, and whose diameter is bounded above by $D$, $diam(M)\leq D$. This is an infinite-dimensional collection of riemannian structures, with different underlying topologies. A huge space, for which one can prove finiteness of the homotopy types (in any dimension), finiteness of the homeomorphism types (in dimension $n=4$), and finiteness of the diffeomorphism types (in any dimension $n\geq 5$). Even more generally, one may consider the set of all metric spaces (smooth manifolds, and more general spaces, [*e.g.*]{}, negatively curved polyhedra) of Hausdorff dimension bounded above and for which a (Toponogov) comparison theorem for geodesic triangles locally holds (Aleksandrov spaces with curvature bounded below \[BGP\], \[Per\]). On the strictly geometrical side, we wish to stress that these are the spaces which arise naturally if one wishes to consider simplicial approximations to riemannian manifolds. It must be stressed that the imposition of (lower) bounds on sectional curvatures does not seem to be fully consistent with the generic triangulations considered in dynamically triangulated models of quantum gravity. A simple two-dimensional example is afforded by noticing that the local contribution to curvature corresponding to a given vertex is ${\pi}/3(6-d)$, where $d$ is the order of the vertex ([*i.e.*]{}, the number of edges meeting at it). A priori, when considering dynamical triangulations, there is no natural bound on the order $d$, and the local curvature may grow arbitrarily large. Thus spaces of bounded geometry may appear quite unsuitable as an arena for discussing dynamically triangulated models.
The fact is that the use of spaces of bounded geometry should be considered simply as a technical step needed in order to get definite mathematical control on the enumerative problems raised by dynamical triangulations. In particular, once the entropy estimates are obtained, we should remove the dependence on the cutoffs artificially introduced. A priori, this removal would call for a rather delicate (inductive) limiting procedure, [*viz.*]{}, considering the behaviour of the sequence of entropy estimates on the nested collection of spaces of bounded geometry obtained by letting the lower bound on the curvature go to (minus) infinity. Actually, the entropy bounds obtained by us turn out not to be sensitive to the cutoffs, and the potential shortcomings of the use of spaces of bounded geometry do not appear. The possibility of getting some mathematical control on the entropy problem by using spaces of bounded geometry is suggested by the topological finiteness results recalled above. To clarify this assertion somewhat, let us recall that in any given dimension the set of manifolds which satisfy the hypotheses of these finiteness theorems has a compact closure in a Hausdorff-like topology \[Gr1\]. This topology is naturally adapted to the coarse-grained point of view implicit in the discrete approaches to quantum gravity; thus one may reasonably assume that partition functions associated with such discrete models are continuous in this topology. Since the configuration space is compact and the partition functions are continuous, it follows that out of the sequence of bounded partition functions corresponding to finer and finer triangulations we can extract a converging subsequence. This implies the corresponding existence of well-behaved entropy bounds. Obviously, this is a heuristic argument, which however may serve as a guiding principle.
Indeed, following this viewpoint, we proved \[CM4\] that, [*up to a sum over inequivalent orthogonal representations of the fundamental group*]{}, it is possible to explicitly provide the entropy function counting the topologically equivalent ways of covering and packing, with metric balls of given radius, $n$-manifolds of bounded geometry, for any $n\geq 3$ (notice that here topological equivalence stands for simple-homotopy equivalence). Strictly speaking, this is not the entropy function for dynamical triangulations of the given manifold. However, it is easily seen (see the following paragraph) that with a dynamically triangulated manifold there is naturally associated a metric ball covering, and that the number of topologically equivalent metric ball coverings of given radius is not smaller than the number of corresponding dynamical triangulations. Thus, the entropy function determined in \[CM4\] is an upper bound to the entropy function for dynamical triangulations (for manifolds of bounded geometry). This argument is useful for establishing that one has exponential bounds on the number of equivalent triangulations. However, it is important to stress that [*it does not allow one to determine the critical exponents for dynamically triangulated models (for $n\geq{3}$)*]{}. As a matter of fact, already for $n=2$, critical exponents for geodesic ball coverings can be quite different from the ${\gamma}_{str}$ appearing in (\[Superficie\]). This may be seen as a rather obvious consequence of the intuitive fact that there are many more states accessible to coverings than to triangulations, since the latter are combinatorially more rigid. The analysis in \[CM4\] was rather incomplete; in particular, we did not attempt any explicit determination of the critical exponents for geodesic ball coverings, and the connection between this type of discretization and the more familiar ones, like Regge calculus and dynamical triangulations, was quite unclear.
Here we carry out an important step in this direction by explicitly providing an entropy estimate for geodesic ball coverings of four-dimensional manifolds and by determining bounds on the corresponding critical exponent. In passing, we also discuss the two-dimensional case, again by explicitly determining entropy estimates and bounds for the critical exponents. Summary of the results ---------------------- The results obtained can be summarized as follows. [*Entropy estimates in a given representation of the fundamental group*]{} Let $M$ be an $n$-dimensional manifold ($n\geq{2}$) of given fundamental group ${\pi}_1(M)$, and let $[\theta]\in Hom({\pi}_1(M),G)/G$ denote a conjugacy class of representations of ${\pi}_1(M)$ into a Lie group endowed with an Ad-invariant, symmetric, non-degenerate bilinear form ([*i.e.*]{}, with an Ad-invariant metric). We think of $M$ as generated by a configuration of $\lambda$ metric balls, $\{B(i)\}$, of fixed radius $\epsilon$, in such a way that the $\epsilon$-balls cover $M$ while the $\frac{\epsilon}{2}$-balls are disjoint. Moreover, at most $d$ balls are allowed to mutually overlap (such a $d$ depends on the geometry of the underlying manifold, but it is otherwise independent of $\epsilon$). We refer to the set of balls with radius $\epsilon/2$ as an [**${\epsilon}/2$-geodesic ball packing**]{} of $M$, while the same set of balls with radius $\epsilon$ defines the corresponding [**$\epsilon$-geodesic ball covering**]{} of $M$. A priori, the balls are topologically non-trivial, namely neither the balls themselves nor their mutual intersections are assumed to be contractible (this allows for arbitrarily large positive curvature in the underlying manifold). Explicitly, the non-trivial topology of the balls is described by their twisted cohomology groups $H^*_{\frak g}$ with coefficients in a certain (adjoint) flat bundle associated with the representation $\theta$.
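The covering/packing configuration just described ($\epsilon$-balls that cover while $\epsilon/2$-balls stay disjoint) is precisely what a maximal $\epsilon$-separated set of centers provides. The following is a minimal sketch of ours, not from the paper, using points in the Euclidean unit square rather than geodesic balls on a riemannian manifold:

```python
import itertools
import math
import random

def greedy_epsilon_net(points, eps):
    """Greedily select a maximal eps-separated subset of `points`.
    By construction the centers are pairwise at distance >= eps (so the
    eps/2-balls are disjoint: a packing), and every input point lies
    within eps of some center (so the eps-balls cover)."""
    centers = []
    for p in points:
        if all(math.dist(p, c) >= eps for c in centers):
            centers.append(p)
    return centers

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(2000)]
eps = 0.2
centers = greedy_epsilon_net(pts, eps)

# covering property: every sampled point is within eps of a center
assert all(min(math.dist(p, c) for c in centers) < eps for p in pts)
# packing property: centers pairwise >= eps apart
assert all(math.dist(a, b) >= eps
           for a, b in itertools.combinations(centers, 2))
print(len(centers), "balls in the covering")
```

The same greedy argument, run with geodesic distance, is the standard way to produce the ball configurations used in the text.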
Roughly speaking, such groups provide [*colours*]{} for the balls of the covering, and it is assumed that there are $\lambda$ inequivalent colours to distribute over the $\lambda$ balls. Any two such colourings are considered combinatorially inequivalent if the resulting patterns of the balls belong to distinct orbits of the action of the symmetric group acting on the (centers of the) balls. We prove that such combinatorially inequivalent colourings can be used to construct, in the given representation $\theta\colon{\pi}_1(M)\to{G}$, the [*distinct minimal geodesic ball coverings*]{} of $M$, and thus, according to the previous remarks, they can also be used to enumerate the topologically equivalent triangulations (definitions of what we mean by distinct coverings and distinct dynamical triangulations of a given manifold $M$ are given in section 2.1). To be more precise about the meaning of topological equivalence adopted here, it must be stressed that we are actually counting equivalent triangulations having a common simple homotopy type. This latter remark may need a few words of explanation. A good counting function of utility for simplicial quantum gravity should provide the number of geodesic ball coverings in manifolds which are piecewise-linearly (PL) equivalent. But according to the finiteness theorems recalled above, asking for such a counting function is too much. In dimension three we do not yet have control of the enumeration of the homeomorphism types, while in dimension four no elementary enumeration of the PL types is available (by Cerf’s theorem we know that every PL 4-manifold carries a unique differentiable structure; there can be only countably many differentiable structures on a compact topological 4-manifold, while there are uncountably many diffeomorphism classes of 4-manifolds homeomorphic to ${\Bbb R}^4$; in this sense counting PL structures is directly connected with the enumeration of differential structures).
Thus in the physically significant dimensions there is no obvious enumerative criterion for PL structures. The necessary compromise between what can be counted and what is of utility for quantum gravity brings into evidence a particular equivalence relation in homotopy known as [*simple homotopy equivalence*]{}. Two polyhedra are simple-homotopy equivalent if they have PL homeomorphic closed regular neighborhoods in some ${\Bbb R}^n$. This notion of topological equivalence associated with simple homotopy may seem too weak for our enumerative purposes, but as we shall see it is sufficient for providing a detailed exponential bound on the enumeration of dynamical triangulations. It is also important to stress that even if the balls are topologically trivial ([*i.e.*]{}, if they are contractible), the labelling associated with the use of the twisted cohomology $H^*_{\frak g}$ is non-trivial. In such a case, $H^*_{\frak g}$ reduces to the assignment of the flat bundle, over the corresponding ball, associated with the representation $\theta$. If all balls are contractible, all such bundles are isomorphic, but, obviously, not canonically. Thus, $H^*_{\frak g}$ can still be used as a non-trivial label for counting purposes. The explicit counting of the inequivalent orbits, under permutations of the balls, associated with such colourings is obtained by means of Pólya’s enumeration theorem, \[Bo\]. More precisely, Pólya’s theorem is used for counting geodesic ball packings, so as to avoid the unwieldy complications arising from the intersections of the balls when they cover the manifold. The counting is then extended by a simple argument (relying, however, on a deep compactness theorem by Gromov) to the geodesic ball coverings associated with the packings.
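As a toy instance of the counting principle invoked here (a brute-force Burnside-lemma check of ours, not the paper's Pólya-theorem computation; the function name is our own), one can verify that the number of colourings of $n$ balls with $k$ colours, up to permutations of the balls, equals the multiset count $\binom{n+k-1}{n}$:

```python
from itertools import permutations, product
from math import comb

def orbits_under_symmetric_group(n, k):
    """Count colourings of n labelled balls with k colours, up to
    permutations of the balls, via Burnside's lemma: the orbit count
    is the average number of colourings fixed by each permutation."""
    group = list(permutations(range(n)))
    total = 0
    for g in group:
        # a colouring is fixed by g iff it is constant on each cycle of g
        total += sum(1 for col in product(range(k), repeat=n)
                     if all(col[i] == col[g[i]] for i in range(n)))
    return total // len(group)

# Orbits are just multisets of colours, counted by C(n+k-1, n)
for n, k in [(3, 2), (4, 3)]:
    assert orbits_under_symmetric_group(n, k) == comb(n + k - 1, n)
print("Burnside counts match the multiset formula")
```

Pólya's theorem refines this average into a generating function over cycle types, which is what makes the asymptotics in $\lambda$ tractable.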
From this enumeration we get that, [*in the given representation $\theta$*]{}, the number, $B_{Cov}({\Delta}^{\frak g},\lambda)$, of distinct geodesic ball coverings with $\lambda$ balls that can be introduced on the manifold $M$ is bounded above, for large $\lambda$, by $$\begin{aligned} B_{Cov}({\Delta}^{\frak g},\lambda)\leq \frac{1}{\sqrt{2\pi}{\Delta}^{\frak g}(M)} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}{\tilde w} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right) \label{summasi}\end{aligned}$$ where $n$ denotes the dimension of $M$, ${\Delta}^{\frak g}(M)$ is the Reidemeister torsion of $M$ in the [*given representation*]{} $\theta\colon{\pi}_1(M)\to{G}$, and where ${\tilde w}$ is, roughly speaking, the Reidemeister torsion of the [*dominant*]{} twisted cohomology group of the balls. Recall that, given a manifold $M$ and a representation of its fundamental group ${\pi}_1(M)$ in a flat bundle ${\frak g}_{\theta}$, the Reidemeister torsion is a generalized volume element constructed from the twisted cohomology groups $H^i(M,{\frak g}_{\theta})$. In even dimension, if $M$ is compact, orientable, and without boundary, it can be shown by Poincaré duality that ${\Delta}^{\frak g}(M)=1$. However, this latter result does not hold for the balls of the covering, since they have a boundary. In such a case, the corresponding torsion ${\tilde w}$ depends non-trivially on the metric of the ball, too. Topologically speaking, (\[summasi\]) estimates the number of geodesic ball coverings on a manifold of given [*simple homotopy*]{} type (for a given ${\pi}_1(M)$ and a given representation $\theta$, this simple homotopy type is characterized by the torsion).
If one is interested in counting coverings (and triangulations) [*just*]{} on a manifold of given fundamental group, then (\[summasi\]) reduces to $$\begin{aligned} \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n+2}{n+1}} {\left[ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}\right] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right)\end{aligned}$$ which no longer depends on the representation $\theta\colon{\pi}_1(M)\to{G}$, and provides a significant exponential bound on the number of distinct coverings that one can introduce on $M$. In particular, the number of distinct geodesic ball coverings, with $\lambda$ balls, that can be introduced on a surface $\Sigma$ of given topology turns out to be asymptotically bounded by $$\begin{aligned} \frac{2}{\sqrt{6\pi}}{\left [ \frac{4^4}{3^3} \right ] }^{\lambda}{\lambda}^{-1/2}\end{aligned}$$ This bound is perfectly consistent with the classical result of W. Tutte \[BIZ\], according to which the number of distinct triangulations, with $\lambda$ vertices, of a surface (with the topology of the sphere) is asymptotically $$\begin{aligned} \frac{1}{64\sqrt{6\pi}}{\left [ \frac{4^4}{3^3} \right ] }^{\lambda}{\lambda}^{-7/2}\end{aligned}$$ The finer entropy estimates (\[summasi\]) do depend on the particular representation $\theta$; thus a more interesting object to discuss is their average over all possible inequivalent representations in the given group $G$, obtained by integrating (\[summasi\]) over the representation variety $Hom({\pi}_1(M),G)/G$.
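As a quick numerical aside (our evaluation, not part of the paper's argument), the covering bound for surfaces and Tutte's triangulation count share the same exponential growth rate $4^4/3^3 = 256/27 \approx 9.48$ and differ only in the power of $\lambda$; the analogous rate in dimension four is $6^6/5^5 \approx 14.93$:

```python
def growth_rate(n):
    """Leading exponential growth factor (n+2)^(n+2)/(n+1)^(n+1)
    appearing in the covering bound for n-manifolds."""
    return (n + 2) ** (n + 2) / (n + 1) ** (n + 1)

print(f"n = 2: {growth_rate(2):.4f}")  # 4^4/3^3 = 256/27
print(f"n = 4: {growth_rate(4):.4f}")  # 6^6/5^5 = 46656/3125
```

The subexponential prefactors ($\lambda^{-1/2}$ for coverings versus $\lambda^{-7/2}$ for triangulations) carry the critical-exponent information, which is why the two counts can agree at the exponential level while their exponents differ.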
[*Entropy estimates at fixed $\lambda$, and $n=2$*]{} Denoting by $\theta$ the dominant representations (in a formal saddle point evaluation of the integral over inequivalent representations), we get for the entropy estimate, up to some inessential constants, $$\begin{aligned} \int_{Hom({\pi}_1(M),G)/G}B_{Cov}({\Delta}^{\frak g},\lambda) \leq\nonumber\end{aligned}$$ $$\begin{aligned} \sum_{\theta\in Hom_0} \frac{2}{\sqrt{6\pi}{\Delta}_{\theta}^{\frak g}(M)} {\left [ \frac{4^4}{3^3}{\tilde w}_{\theta} \right ] }^{\lambda} {\lambda}^{[-\frac{(h-1)}{2}dim(G)- \frac{dim(z(\theta))}{2}-\frac{1}{2}]} \left( 1+\ldots \right) \label{summary1}\end{aligned}$$ where $Hom_0$ denotes the (finite) set of representations contributing to the saddle point evaluation, $h$ denotes the genus of the surface $M$, and $z(\theta)$ denotes the centralizer of ${\theta}({\pi}_1(M))$ in the Lie group $G$. We define the critical exponent ${\eta}(G)$ associated with the entropy function $B_{Cov}({\Delta}^{\frak g},\lambda)$ by means of the relation $$\begin{aligned} \int_{Hom({\pi}_1(M),G)/G} B_{Cov}({\Delta}^{\frak g},\lambda)\equiv Meas{\left(\frac{Hom({\pi}_1(M),G)}{G}\right) } \exp[c\lambda] {\lambda}^{{\eta}(G)-3}\end{aligned}$$ where $c$ is a suitable constant (depending on $G$). Then (\[summary1\]) also provides a bound for ${\eta}(G)$, given by (for a given $\theta\in Hom_0$) $$\begin{aligned} {\eta}(G)\leq 2+(1-h)\frac{dim(G)}{2}+\frac{1}{2}(1-dim(z(\theta))) \label{duecritico}\end{aligned}$$ For instance, for $G=U(1)$, we get $$\begin{aligned} {\eta}(G)\leq 2+\frac{1}{2}(1-h)\end{aligned}$$ which is consistent with KPZ scaling. This bound is an equality in the obvious case $h=1$, while it is sharp in the remaining cases. It is likely that (\[duecritico\]) holds also in the case where there is a strong coupling of 2D-gravity with matter, namely in the regime where KPZ scaling breaks down.
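The $U(1)$ bound can be tabulated explicitly (our arithmetic; the function names are ours), together with a sanity check of the KPZ string exponent quoted in the introduction at the classical values $c = 0, \tfrac{1}{2}, 1$:

```python
import math
from fractions import Fraction

def gamma_str(c):
    """KPZ string exponent for central charge c <= 1,
    gamma_str = (c - 1 - sqrt((25 - c)(1 - c))) / 12."""
    return (c - 1 - math.sqrt((25 - c) * (1 - c))) / 12

def eta_bound_u1(h):
    """U(1) bound on the critical exponent: eta <= 2 + (1 - h)/2,
    with h the genus of the surface."""
    return Fraction(2) + Fraction(1 - h, 2)

# Classical KPZ values: pure gravity, Ising, free boson
assert abs(gamma_str(0.0) - (-1 / 2)) < 1e-12
assert abs(gamma_str(0.5) - (-1 / 3)) < 1e-12
assert abs(gamma_str(1.0) - 0.0) < 1e-12

for h in (0, 1, 2):
    print(f"genus h = {h}: eta(U(1)) <= {eta_bound_u1(h)}")
```

At $h = 1$ the bound gives $\eta \le 2$, the case where the text notes it is an equality.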
1 cm [*Entropy estimates at fixed $\lambda$, and $n=4$*]{} 0.5 cm In the four-dimensional case we obtain, again through a formal saddle point evaluation, and up to some inessential factors $$\begin{aligned} \sum_{\theta\in Hom_0} \frac{\sqrt{6}}{\sqrt{10\pi}{\Delta}_{\theta}^{\frak g}(M)} {\left [ \frac{6^6}{5^5}{\tilde w}_{\theta} \right ] }^{\lambda} {\lambda}^{[dim(G){\chi}(M)/8-b(2)/8 -1/2]} \left( 1+\ldots \right)\end{aligned}$$ where ${\chi}(M)$ is the Euler-Poincaré characteristic of $M$ and $b(2)$ is the second Betti number associated with $H^*_{\frak g}(M)$. Notice that in the above expressions we can set ${\Delta}_{\theta}^{\frak g}({\cal O}_h)=1$, (the torsion being trivial in even dimensions for a closed, orientable manifold). 0.5 cm The bound on the critical exponent corresponding to this entropy estimate is (for a given $\theta\in Hom_0$), $$\begin{aligned} {\eta}(G)\leq\frac{5}{2}+ \frac{dim(G){\chi}(M)}{8}-\frac{b(2)}{8}\end{aligned}$$ 0.5 cm This exponent, evaluated for the four-sphere, takes on the value $\frac{11}{4}$ which is larger than the corresponding exponent obtained through numerical simulations, (see [*e.g.*]{}, \[Va\]). In this latter case, the available values of this exponent are typically affected by a large uncertainty. Nonetheless, numerical evidence seems to indicate a rough value around the figures $0.40$, $0.57$, thus our bound is strict and likely not optimal. 0.5 cm We are perfectly aware that this work is incomplete in many respects. In particular, it is annoying that one does not get an entropy estimate directly for triangulated four-manifolds but rather for geodesic ball covered manifolds. However, this estimate is sufficient for controlling the number of topologically (in the simple-homotopical sense) equivalent dynamical triangulations on four-manifolds of bounded geometry, and it is, we believe, a good starting point for a further understanding of discrete models of four-dimensional quantum gravity. 
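The four-dimensional bound can likewise be evaluated exactly. The sketch below (hypothetical name) assumes $G=U(1)$, so $\dim(G)=1$, and uses $\chi(S^4)=2$ and $b(2)=0$ for the four-sphere, recovering the quoted value $\frac{11}{4}$:

```python
from fractions import Fraction

def eta_bound_4d(dim_G, chi, b2):
    # eta(G) <= 5/2 + dim(G) * chi(M) / 8 - b(2) / 8, kept as an exact
    # rational via Fraction.
    return Fraction(5, 2) + Fraction(dim_G * chi, 8) - Fraction(b2, 8)
```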
0.5 cm We now turn to a more extensive discussion of our subject. 0.5 cm Metric ball coverings and triangulated manifolds ================================================= As recalled in the introductory remarks, in order to regain a smooth geometric perspective when dealing with a dynamically triangulated manifold ${\cal T}$, we have to move our observation point far away from ${\cal T}$, (for rather different reasons this same point of view, which is the essence of a scaling limit, is advocated in geometric group theory \[Gr2\]). In this way, and under suitable re-scaling of the coupling constants of the theory, the details of the triangulation ${\cal T}$ may fade away at criticality, and the simplexes of ${\cal T}$ coalesce into extended objects, generalized metric manifolds representing the [*spacetime*]{} manifolds (or, more correctly, a Euclidean version of them) dominating the statistical sum of the model considered. Technically speaking, this limiting procedure appeals here to a topology on the set of metric spaces coming along with a Hausdorff-type metric. This was rather explicitly suggested in 1981 by J.Fröhlich \[Fro\] in his unpublished notes on Regge’s model. For completely different reasons, and more or less in the same period, this notion of topology was made precise by M.Gromov \[Gr1\], and used by him very effectively to discuss the compactness properties of the space of riemannian structures. A detailed analysis is presented in \[Gr1\],\[CM2\], and instead of repeating it here we give the intuition and a few basic definitions. The rough idea is that, given a length cut-off $\epsilon$, two riemannian manifolds are to be considered near in this topology, (one is the $\epsilon$-Gromov-Hausdorff approximation of the other), if their metric properties are similar at length scales $L\geq{\epsilon}$. This intuition can be made more precise as follows.
0.5 cm Consider two riemannian manifolds $M_1$ and $M_2$, (or, more generally, any two compact metric spaces), let $d_{M_1}(\cdot,\cdot)$ and $d_{M_2}(\cdot,\cdot)$ respectively denote the corresponding distance functions, and let $\phi\colon M_1\to M_2$ be a map between $M_1$ and $M_2$, (this map is not required to be continuous). If $\phi$ is such that: [*(i)*]{}, the $\epsilon$-neighborhood of ${\phi}(M_1)$ in $M_2$ is equal to $M_2$, and [*(ii)*]{}, for each $x$, $y$ in $M_1$ we have $$\begin{aligned} |d_{M_1}(x,y)-d_{M_2}({\phi}(x),{\phi}(y))|<\epsilon\end{aligned}$$ then $\phi$ is said to be an $\epsilon$-[*Hausdorff approximation*]{}. The [*Gromov-Hausdorff distance*]{} between the two riemannian manifolds $M_1$ and $M_2$, $d_G(M_1,M_2)$, is then defined according to \[Gr1\]: $d_G(M_1,M_2)$ is the greatest lower bound of the positive numbers $\epsilon$ such that there exist $\epsilon$-Hausdorff approximations from $M_1$ to $M_2$ and from $M_2$ to $M_1$. \[miauno\] 0.5 cm The notion of $\epsilon$-Gromov-Hausdorff approximation is the weakest large-scale equivalence relation between metric spaces of use in geometry, and is manifestly adapted to the needs of simplicial quantum gravity, (think of a manifold and of a simplicial approximation to it). Notice that $d_G$ is not, properly speaking, a distance, since it does not satisfy the triangle inequality; it rather gives rise to a metrizable uniform structure in which the set of isometry classes of all compact metric spaces, (not just riemannian structures), is Hausdorff and complete. This enlarged space naturally contains topological (metric) manifolds and curved polyhedra. As stressed in \[Pt\], the importance of this notion of distance lies not so much in the fact that we have a distance function, but in that we have a way of measuring when riemannian manifolds, (or more general metric spaces), look alike.
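For finite metric spaces, the two defining conditions of an $\epsilon$-Hausdorff approximation can be checked directly. The sketch below (hypothetical names; open $\epsilon$-neighborhoods assumed) is a direct transcription of conditions (i) and (ii):

```python
def is_eps_hausdorff_approx(X, Y, dX, dY, phi, eps):
    # phi: X -> Y is an eps-Hausdorff approximation if
    # (i)  the eps-neighborhood of phi(X) in Y is all of Y, and
    # (ii) |dX(x,y) - dY(phi(x),phi(y))| < eps for all x, y in X.
    image = [phi(x) for x in X]
    if any(min(dY(q, p) for p in image) >= eps for q in Y):
        return False
    return all(abs(dX(x, y) - dY(phi(x), phi(y))) < eps
               for x in X for y in X)
```

For instance, the inclusion of $\{0,1\}$ into $\{0,0.4,1\}$ on the real line is an $\epsilon$-Hausdorff approximation exactly when $\epsilon$ exceeds $0.4$, the distance from the uncovered point to the image.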
0.5 cm In order to provide the entropy of four-dimensional triangulated manifolds, we need to use Gromov-Hausdorff topology quite superficially. Explicitly, it only appears in the ensemble of manifolds for which we characterize the entropy function: For $r$ a real number, $D$ and $V$ positive real numbers, and $n$ a natural number, let $\Ricco$ denote the Gromov-Hausdorff closure of the space of isometry classes of closed connected n-dimensional riemannian manifolds $(M,g)$ with sectional curvature bounded below by $r$, [*viz.*]{}, $$\begin{aligned} \inf_{x\in M}\{\inf \{ g_x(Riem_x(u,v)u;v) \colon u,v \in T_xM,orthonormal\}\}\geq r \nonumber\end{aligned}$$ and diameter bounded above by $D$, $$\begin{aligned} diam(M)\equiv\sup_{(p,q)\in M\times M}d_M(p,q)\leq D\nonumber\end{aligned}$$ and volume bounded below by $V$. \[miadue\] 0.5 cm The point in the introduction of $\Ricco$, or of more general classes of metric spaces with a lower bound on a suitably defined notion of curvature, is that for any manifold (or metric space) $M$ in such a class one gets packing information which is most helpful in controlling the topology in terms of the metric geometry. In the case of $\Ricco$ this packing information is provided by suitable coverings with geodesic (metric) balls yielding a coarse classification of the riemannian structures occurring in $\Ricco$, (notice that these coverings can be introduced under considerably less restrictive conditions, in particular it is sufficient to have a lower bound on the Ricci tensor, and an upper bound on the diameter, \[GP\]). 0.5 cm In order to define such coverings \[GP\], let us parametrize geodesics on $M\in \Ricco$ by arc length, and for any $p\in M$ let us denote by ${\sigma}_p(x)\equiv d_M(x,p)$ the distance function of the generic point $x$ from the chosen point $p$. Recall that ${\sigma}_p(x)$ is a smooth function away from $\{p\}\cup C_p$, where $C_p$, a closed nowhere dense set of measure zero, is the cut locus of $p$.
Recall also that a point $y\not= p$ is a critical point of ${\sigma}_p(x)$ if for all vectors ${\bf v}\in TM_y$, there is a minimal geodesic, $\gamma$, from $y$ to $p$ such that the angle between ${\bf v}$ and $\dot{\gamma}(0)$ is not greater than $\frac{\pi}{2}$. For any manifold $M\in \Ricco$ and for any given $\epsilon >0$, it is always possible to find an ordered set of points $\{p_1,\ldots,p_N\}$ in $M$, so that, \[GP\] [*(i)*]{} the open metric balls, (the [*geodesic balls*]{}), $B_{M}(p_{i},\epsilon) = \{x \in M \vert d(x, p_{i})< \epsilon\}$, $i=1,\ldots,N$, cover $M$; in other words the collection $$\begin{aligned} {\{p_1,\ldots,p_N\}}\end{aligned}$$ is an $\epsilon$-net in $M$. [*(ii)*]{} the open balls $B_{M}(p_{i},{\epsilon\over 2})$, $i=1,\ldots,N$, are disjoint, [*i.e.*]{}, $\{p_1,\ldots,p_N\}$ is a [*minimal*]{} $\epsilon$-net in $M$. Similarly, upon considering the higher order intersection patterns of the set of balls $\{B_{M}(p_{i},\epsilon)\}$, we can define the two-skeleton ${\Gamma}^{(2)}(M)$, and eventually the nerve ${\cal N}\{B_i\}$ of the geodesic balls covering of the manifold $M$: Let $\{B_i(\epsilon)\}$ denote a minimal $\epsilon$-net in $M$. The geodesic ball nerve ${\cal N}\{B_i\}$ associated with $\{B_i(\epsilon)\}$ is the polytope whose $k$-simplexes $p_{i_1i_2 \ldots i_{k+1}}^{(k)}$, $k=0,1,\ldots$, are defined by the collections of $k+1$ geodesic balls such that $B_{i_1} \cap {B}_{i_2} \cap \ldots \cap {B}_{i_{k+1}} \not= \emptyset$. Thus, for instance, the vertices $p_i^{(0)}$ of ${\cal N}\{B_i\}$ correspond to the balls $B_i(\epsilon)$; the edges $p_{ij}^{(1)}$ correspond to pairs of geodesic balls $\{B_i(\epsilon),B_j(\epsilon)\}$ having a non-empty intersection $B_i(\epsilon) \cap B_j(\epsilon) \not= \emptyset$; and the faces $p_{ijk}^{(2)}$ correspond to triples of geodesic balls with non-empty intersection $B_i(\epsilon)\cap {B}_j(\epsilon)\cap {B}_k(\epsilon) \not= \emptyset$.
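On a finite sample of a metric space, a minimal $\epsilon$-net and the $1$-skeleton of its nerve can be produced by a standard greedy sweep; the sketch below (hypothetical names) keeps a point as a new center only if it lies at distance $\geq\epsilon$ from all previous centers, so the $\epsilon/2$-balls around centers are disjoint, while maximality forces the $\epsilon$-balls to cover:

```python
def minimal_eps_net(points, d, eps):
    # Greedy maximal eps-separated subset: centers are pairwise >= eps
    # apart (disjoint eps/2-balls), and every point lies within eps of
    # some center (covering), since otherwise it would have been added.
    centers = []
    for p in points:
        if all(d(p, c) >= eps for c in centers):
            centers.append(p)
    return centers

def nerve_edges(points, centers, d, eps):
    # 1-skeleton of the nerve: an edge {i, j} whenever the open eps-balls
    # around centers i and j share at least one sample point.
    edges = set()
    for q in points:
        near = [i for i, c in enumerate(centers) if d(q, c) < eps]
        for a in range(len(near)):
            for b in range(a + 1, len(near)):
                edges.add((near[a], near[b]))
    return edges
```

On eleven equispaced points of a segment with $\epsilon=2$ the sweep selects every other point, and the nerve $1$-skeleton is the path graph on the chosen centers.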
Notice that, in general, this polytope has a dimension which is greater than the dimension $n$ of the underlying manifold. However, as $\epsilon\to 0$, such dimension cannot grow arbitrarily large, being bounded above by a constant depending only on $r$, $n$, and $D$, (see below). Minimal geodesic ball coverings provide a means for introducing a short-distance cutoff as for a dynamical triangulation, while hopefully maintaining a more direct connection with the geometry and in particular with the topology of the underlying manifold. The basic observation here is that such coverings are naturally labelled, (or coloured), by the fundamental groups of the balls. Indeed, according to the properties of the distance function, (see, for instance \[Ch\]), given ${\epsilon}_1<{\epsilon}_2\leq\infty$, if in ${\bar B_i({\epsilon}_2)}\backslash B_i({\epsilon}_1)$ there are no critical points of the distance function ${\sigma}_i$, then this region is homeomorphic to ${\partial}B_i({\epsilon}_1)\times[{\epsilon}_1,{\epsilon}_2]$, and ${\partial}B_i({\epsilon}_1)$ is a topological submanifold without boundary. One defines a [*criticality radius*]{}, ${\epsilon}_i$, for each ball $B_i(\epsilon)$, as the largest $\epsilon$ such that $B_i(\epsilon)$ is free of critical points. Corresponding to such a value of the radius $\epsilon$, the ball $B_i({\epsilon})$ is homeomorphic to an arbitrarily small open ball with center $p_i$, and thus it is homeomorphic to a standard open ball. It can be easily checked, through direct examples, that the criticality radius of geodesic balls of manifolds in $\Ricco$ can be arbitrarily small, (think of the geodesic balls drawn near the rounded off tip of a cone), thus arbitrarily small metric balls in manifolds of bounded geometry are not necessarily contractible, and therefore, in general, the $B_i(\epsilon)$ are not homeomorphic to a standard open ball.
0.5 cm Connections with dynamical triangulations ----------------------------------------- Since the geodesic ball coverings are to play an important role in our development, a few remarks about the connection between such coverings and dynamical triangulations are in order. 0.5 cm As recalled in the introductory remarks, a dynamical triangulation of a (pseudo)-manifold can be used to produce a metric on that manifold, by declaring all the simplexes in the triangulation isometric to the standard simplex of the appropriate dimension, and by assuming that the edge lengths are all equal to some fundamental length. An $n$-dimensional dynamical triangulation is actually constructed by successively gluing pairs of such flat $n$-simplexes along some of their $(n-1)$-faces, until one gets a complex without boundary. This gives a collection of compatible metrics, on pieces of the resulting pseudo-manifold, which can be extended to a genuine metric, since between any two points there is a path minimizing the distance, (one speaks of pseudo-manifolds since, for $n>2$, the complex constructed by this gluing procedure may have some vertices whose neighborhood is not homeomorphic to the standard euclidean ball). 0.5 cm We identify two dynamical triangulations of the same underlying manifold $M$ if there is a one-to-one mapping of vertices, edges, faces, and higher dimensional simplexes of one onto vertices, edges, faces, and higher dimensional simplexes of the other which preserves incidence relations. If no such mapping exists, the two dynamical triangulations are said to be [*distinct*]{}. Notice that sometimes one says that such dynamical triangulations are combinatorially distinct. Since this may be a source of confusion, (in dynamical triangulation theory the notion of combinatorial equivalence is synonymous with PL-equivalence, see below), we carefully avoid the use of the qualifier “combinatorial” in this context.
0.5 cm On a dynamical triangulation so constructed, one can define metric balls, and consider minimal geodesic ball coverings. Actually, it is clear that in a generic metric space there are many distinct ways of introducing minimal geodesic ball coverings with a given radius of the balls. As a simple example, consider a portion of Euclidean three-space, (one may wish to identify boundaries so as to obtain a flat three-torus). It is well known that a portion of Euclidean three-space can be packed and covered, with small spheres of a given radius, in many inequivalent ways, to the effect that in the limit, for ${\bf R}^3$, there are uncountably many such coverings. 0.5 cm As for dynamical triangulations, we identify two geodesic ball coverings, $\{B_i\}_1$ and $\{B_k\}_2$, of the same underlying manifold $M$ if there is a one-to-one mapping of vertices, edges, faces, and higher dimensional simplexes of the nerve of $\{B_i\}_1$ onto vertices, edges, faces, and higher dimensional simplexes of the nerve of $\{B_k\}_2$, which preserves incidence relations. If no such mapping exists, the two geodesic ball coverings are said to be [*distinct*]{}. 0.5 cm Generally, given a manifold triangulated with $n$-dimensional simplexes with a given edge length, we can always introduce a minimal geodesic ball covering whose properties are closely connected with the properties of the underlying triangulation. This can be done according to the following Let $(M,T)$ denote a manifold (a compact polyhedron) triangulated with fixed edge length equilateral simplexes, and let $\epsilon$ denote the length of the edges. With each vertex $p_i$ belonging to the triangulation, we associate the largest open metric ball contained in the open star of $p_i$. Then the metric ball covering of $(M,T)$ generated by such balls $\{B_i\}$ is a minimal geodesic ball covering. It defines the geodesic ball covering associated with the dynamically triangulated manifold $(M,T)$.
0.5 cm It is immediate to see that the set of balls considered indeed defines a minimal geodesic ball covering. The open balls obtained from $\{B_i\}$ by halving their radius are disjoint, being contained in the open stars of $\{p_i\}$ in the barycentric subdivision of the triangulation. The balls with doubled radius cover $(M,T)$, since they are the largest open balls contained in the stars of the vertices $\{p_i\}$ of $T$. 0.5 cm In order to connect the enumeration of [*distinct*]{} geodesic ball coverings with the enumeration of [*distinct*]{} triangulations, we recall that any two dynamical triangulations are said to be [*Combinatorially Equivalent*]{} if the two triangulations can be subdivided into the same finer triangulation. In other words, if they correspond to triangulations $T_1$ and $T_2$ of the same abstract compact polyhedron $P$. This last remark follows since any two triangulations of a compact polyhedron have a common subdivision. Notice that quite often, when considering a particular triangulation $(M,T)$, it is standard usage to identify the abstract polyhedron $M$ with $|T|$, the union of the cells of $T$, (the underlying polyhedron associated with $T$). The more so when dealing with dynamical triangulations, where the emphasis is on the actual construction of $T$. This identification is a source of confusion in enumerative problems and we shall keep the abstract polyhedron $M$ distinct from $|T|$. 0.5 cm The relation between a dynamically triangulated manifold and the associated geodesic ball covering implies the following 0.5 cm If $(M,T_1)$ and $(M,T_2)$ are any two distinct combinatorially equivalent fixed edge-length triangulations, then the corresponding geodesic ball coverings $\{B_i\}_1$ and $\{B_i\}_2$ are distinct. 0.5 cm [*Proof*]{}. This amounts to proving that the nerve associated with the geodesic ball covering corresponding to a fixed edge-length triangulation is isomorphic, as a simplicial complex, to the given triangulation.
If this were not the case, then there would be at least one $k$-simplex in the nerve, $p_{i_1\ldots i_{k+1}}^{k}$, associated with the mutual intersections of $k+1$ balls, for some $k>1$, such that the vertices of such $k$-simplex correspond to vertices of the triangulation not connected by links. But then the corresponding balls, $B_i$, cannot mutually intersect, since they are contained in the disjoint open stars of the respective vertices. Thus, there cannot be such a simplex $p_{i_1\ldots i_{k+1}}^{k}$ to begin with. 0.5 cm In general, by choosing a different prescription for geodesic ball coverings associated with fixed edge-length triangulations, ([*e.g.*]{}, by choosing differently the centers of the balls $B_i$), we get a nerve which is not necessarily isomorphic to the dynamical triangulation itself. And, as already stressed, the dimension of the nerve is, in general, larger than the dimension of the underlying manifold, and even if we restrict our attention, say, to the four-skeleton, we get a complex which is not the triangulation of a four-manifold. 0.5 cm If we combine this remark with lemma 1 we get the following For a given minimal short-distance cut-off, $\epsilon$, the number of distinct geodesic ball coverings is not smaller than the number of corresponding dynamical triangulations. 0.5 cm Incidentally, by means of the above construction of a geodesic ball covering associated with a given fixed edge-length triangulation, we can also explain, in terms of dynamical triangulations, the origin of the possible non-trivial topology of the balls. Recall that in an $n$-dimensional simplicial manifold each vertex has a sufficiently small neighborhood which is homeomorphic to the standard $n$-dimensional euclidean ball. And, in such a case, the above minimal geodesic ball covering is necessarily generated by contractible balls.
Thus, non-contractible balls are present if we allow for dynamical triangulations associated with simplicial pseudo-manifolds. And this is the typical case, at least in dimension $n>2$, since pseudo-manifolds are the natural outcome of the process of gluing $n$-simplexes along their $(n-1)$-faces. 0.5 cm Homotopy and geodesic ball coverings ------------------------------------ 0.5 cm The above remarks suggest that one should be careful in understanding in what sense, for $\epsilon$ sufficiently small, the geodesic ball nerve gives rise to a polytope whose topology approximates the topology of the manifold $M\in\Ricco$. This is a natural consequence of the fact that the criticality radius for the geodesic balls is not bounded below. In full generality, the geodesic ball nerve controls only the homotopy type of the manifold \[GPW\]. This follows by noticing that the inclusion of sufficiently small geodesic balls into suitably larger balls is homotopically trivial, and the geodesic ball nerve is thus a polytope which is [*homotopically dominating*]{} the underlying manifold, [*viz.*]{}, there exist maps $f\colon{M}\to{\cal{N}}(B_i)$, and $g\colon{\cal{N}}(B_i)\to{M}$, with $g\cdot{f}$ homotopic to the identity mapping in $M$. It may appear rather surprising, but this homotopical control is more than sufficient for yielding the entropic estimates we are looking for. 0.5 cm On the geometrical side, there is a wealth of good properties of geodesic ball coverings which make them particularly appealing for applications in simplicial quantum gravity. As a good start, we can notice that the equivalence relation defined by manifolds with (combinatorially) isomorphic geodesic ball one-skeletons partitions $\Ricco$ into disjoint equivalence classes whose finite number can be estimated in terms of the parameters $n$, $r$, $D$.
Each equivalence class of manifolds is characterized by the abstract (unlabelled) graph ${\Gamma}_{(\epsilon)}$ defined by the $1$-skeleton of the $L(\epsilon)$-covering. The order of any such graph ([*i.e.*]{}, the number of vertices) defines the [*filling function*]{} $N_{(\epsilon)}^{(0)}$, while the structure of the edge set of ${\Gamma}_{(\epsilon)}$ defines the (first order) intersection pattern $I_{(\epsilon)}(M)$ of $(M,\{B_i(\epsilon)\})$. 0.5 cm It is important to remark that on $\Ricco$ neither the filling function nor the intersection pattern can be arbitrary. The filling function is always bounded above for each given $\epsilon$, and the best filling, with geodesic balls of radius $\epsilon$, of a riemannian manifold of diameter $diam(M)$, and Ricci curvature $Ric(M)\geq (n-1)H$, is controlled by the corresponding filling of the geodesic ball of radius $diam(M)$ on the space form of constant curvature given by $H$, the bound being of the form \[Gr1\] $N^{(0)}_{\epsilon}\leq N(n,H(diam(M))^2,(diam(M))/\epsilon)$. 0.5 cm The multiplicity of the first intersection pattern is similarly controlled through the geometry of the manifold to the effect that the average degree, $d(\Gamma)$, of the graph ${\Gamma}_{(\epsilon)}$, ([*i.e.*]{}, the average number of edges incident on a vertex of the graph), is bounded above by a constant as the radius of the balls defining the covering tends to zero, ([*i.e.*]{}, as $\epsilon \to 0$). Such a constant is independent of $\epsilon$ and can be estimated \[GP\] in terms of the parameters $n$, and $H(diam(M))^2$, (it is this boundedness of the order of the geodesic ball coverings that allows for the control of the dimension of the geodesic ball nerve). 0.5 cm As expected, the filling function can also be related to the volume $v=Vol(M)$ of the underlying manifold $M$.
This follows by noticing that \[Zh\] for any manifold $M\in\Ricco$ there exist constants $C_1$ and $C_2$, depending only on $n$, $r$, $D$, $V$, such that, for any $p\in M$, we have $$\begin{aligned} C_1{\epsilon}^n\leq Vol(B_{\epsilon}(p))\leq C_2{\epsilon}^n\end{aligned}$$ with $0\leq \epsilon \leq D$, (actually, here and in the previous statements a lower bound on the Ricci curvature suffices). Explicitly, the constants $C_1$ and $C_2$ are provided by $$\begin{aligned} C_1\equiv \frac{V}{Vol^r(B(D))}{\inf}_{0\leq{\epsilon}\leq{D}} \frac{1}{{\epsilon}^n}\int_0^{\epsilon}\left(\frac{\sinh(\sqrt{-r}\,t)}{\sqrt{-r}}\right)^{n-1}dt\end{aligned}$$ and $$\begin{aligned} C_2\equiv {\sup}_{0\leq{\epsilon}\leq{D}} \frac{1}{{\epsilon}^n}\int_0^{\epsilon}\left(\frac{\sinh(\sqrt{-r}\,t)}{\sqrt{-r}}\right)^{n-1}dt\end{aligned}$$ where $Vol^r(B(D))$ denotes the volume of the geodesic ball of radius $D$ in the (simply connected) space form of constant curvature $r$, and $D$, $r$, $V$, $n$ are the parameters characterizing the space of bounded geometries $\Ricco$ under consideration. Thus, if $v$ is the given volume of the underlying manifold $M$, by the Bishop-Gromov relative comparison volume theorem we obtain that there exists a function ${\rho}_1(M)$, depending on $n$, $r$, $D$, $V$, and on the actual geometry of the manifold $M$, with $C_1\leq ({\rho}_1(M))^{-1}\leq C_2$, and such that, for $\epsilon$ sufficiently small, we can write $$\begin{aligned} N^{(0)}_{\epsilon}(M) =v{\rho}_1(M){\epsilon}^{-n} \label{volumemme1}\end{aligned}$$ 0.5 cm We conclude this section by recalling the following basic finiteness results. They provide the topological rationale underlying the use of spaces of bounded geometries in simplicial quantum gravity.
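The factor $\frac{1}{\epsilon^n}\int_0^{\epsilon}\big(\sinh(\sqrt{-r}\,t)/\sqrt{-r}\big)^{n-1}dt$ entering $C_1$ and $C_2$ is easy to evaluate numerically. The sketch below (hypothetical names, simple midpoint quadrature, $r<0$ assumed) confirms that it tends to $1/n$ as $\epsilon\to 0$ and increases with $\epsilon$, so the infimum sits at small radii and the supremum at $\epsilon=D$:

```python
import math

def snh(t, r):
    # Comparison-space Jacobi field in constant curvature r < 0:
    # sinh(sqrt(-r) * t) / sqrt(-r).
    s = math.sqrt(-r)
    return math.sinh(s * t) / s

def volume_factor(eps, n, r, steps=2000):
    # f(eps) = eps^{-n} * integral_0^eps snh(t, r)^{n-1} dt  (midpoint rule).
    h = eps / steps
    total = sum(snh((k + 0.5) * h, r) ** (n - 1) for k in range(steps)) * h
    return total / eps ** n

def c_bounds(n, r, D, samples=50):
    # Sampled inf and sup of f over (0, D]; f is monotone increasing for
    # r < 0, with f(eps) -> 1/n as eps -> 0.
    vals = [volume_factor(D * (k + 1) / samples, n, r) for k in range(samples)]
    return min(vals), max(vals)
```

For $n=4$, $r=-1$, $D=1$ the infimum is close to $1/4$ while the supremum equals $\int_0^1\sinh^3 t\,dt\approx 0.348$.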
We start with a result expressing finiteness of homotopy types of manifolds of bounded geometry \[GPW\] For any dimension $n \geq 2$, and for $\epsilon$ sufficiently small, manifolds in $\Ricco$ with the same geodesic ball $1$-skeleton ${\Gamma}_{(\epsilon)}$ are homotopically equivalent, and the number of different homotopy-types of manifolds realized in $\Ricco$ is finite and is a function of $n$, $V^{-1}D^n$, and $rD^2$. (Two manifolds $M_1$ and $M_2$ are said to have the same homotopy type if there exist continuous maps $\phi$ of $M_1$ into $M_2$ and $f$ of $M_2$ into $M_1$, such that both $f \cdot \phi$ and $\phi \cdot f$ are homotopic to the respective identity mappings, $I_{M_1}$ and $I_{M_2}$. Obviously, two homeomorphic manifolds are of the same homotopy type, but the converse is not true). 0.5 cm Notice that in dimension three one can replace the lower bound of the sectional curvatures with a lower bound on the Ricci curvature \[Zh\]. Actually, a more general topological finiteness theorem can be stated under a rather weak condition of local geometric contractibility. Recall that a continuous function ${\psi}\colon [0,\alpha)\to {\bf R}^+$, $\alpha >0$, with ${\psi}(0)=0$, and ${\psi}(\epsilon)\geq \epsilon$, for all $\epsilon\in [0,\alpha)$, is a local geometric contractibility function for a riemannian manifold $M$ if, for each $x\in M$ and $\epsilon\in (0,\alpha)$, the open ball $B(x,\epsilon)$ is contractible in $B(x,{\psi}(\epsilon))$ \[GrP\], (which says that a small ball is contractible relative to a bigger ball). Given a local geometric contractibility function one obtains the following \[GrP\] Let ${\psi}\colon [0,\alpha)\to {\bf R}^+$, $\alpha >0$, be a continuous function with ${\psi}(\epsilon)\geq \epsilon$ for all $\epsilon\in [0,\alpha)$ and such that, for some constants $C$ and $k\in (0,1]$, we have the growth condition ${\psi}(\epsilon)\leq C{\epsilon}^k$, for all $\epsilon\in [0,\alpha)$.
Then for each $V_0>0$ and $n\in {\bf R}^+$ the class ${\cal C}(\psi, V_0,n)$ of all compact $n$-dimensional Riemannian manifolds with volume $\leq V_0$ and with $\psi$ as a local geometric contractibility function contains [*(i)*]{} finitely many simple homotopy types (all $n$), [*(ii)*]{} finitely many homeomorphism types if $n=4$, [*(iii)*]{} finitely many diffeomorphism types if $n=2$ or $n\geq 5$. Actually the growth condition on $\psi$ is necessary in order to control the dimension of the limit spaces resulting from Gromov-Hausdorff convergence of a sequence of manifolds in ${\cal C}(\psi, V_0,n)$. As far as homeomorphism types are concerned, this condition can be removed \[Fe\]. Note moreover that infinite-dimensional limit spaces cannot occur in the presence of a lower bound on sectional curvature as for manifolds in $\Ricco$. Finiteness of the homeomorphism types cannot be proved in dimension $n=3$ as long as the Poincaré conjecture is not proved. If there were a fake three-sphere then one could prove \[Fe\] that a statement such as [*(ii)*]{} above is false for $n=3$. Finally, the statement on finiteness of [*simple homotopy*]{} types, in any dimension, is particularly important for the applications in quantum gravity we discuss in the sequel. Roughly speaking the notion of simple homotopy is a refinement of the notion of homotopy equivalence, and it may be thought of as an intermediate step between homotopy equivalence and homeomorphism. 0.5 cm The machinery needed to characterize the entropy function for geodesic ball coverings in four-dimensional manifolds of bounded geometry is now at hand. Topology and Entropy of metric ball coverings ============================================= The combinatorial structure associated with minimal geodesic ball coverings appears more complex than the combinatorial structure of dynamical triangulations. 
However, the counting of all possible distinct minimal geodesic ball coverings of given radius, on a manifold of bounded geometry, is more accessible than the counting of distinct dynamical triangulations. This fortunate situation arises because we can label the balls $B_{\epsilon}(p_i)$ with their non-trivial fundamental group ${\pi}_1(B_{\epsilon}(p_i);p_i)$, (obviously, since we are interested in the distinct classes of coverings, we need to factor out the trivial labelling associated with the centers $p_i$ of the balls). Thus the counting problem we face is basically the enumeration of such inequivalent topological labellings of the balls of the covering. Such an enumeration is not yet very accessible. As it stands, there are constraints on the fundamental groups ${\pi}_1(B_{\epsilon}(p_i);p_i)$, expressed by Seifert-VanKampen’s theorem, which express the match between the intersection pattern of the balls and the homomorphisms ${\pi}_1(\cap_iB_{\epsilon}(p_i);x_0)\to{\pi}_1(M;x_0)$ associated with the injection of clusters of mutually intersecting balls into $M$, ($x_0$ being a base point in ${\cap}_iB_{\epsilon}(p_i)$). Such difficulties can be circumvented by using as labels, rather than the fundamental groups themselves, a cohomology with local coefficients in representations of ${\pi}_1(B_{\epsilon}(p_i);p_i)$ into a Lie group $G$. Roughly speaking, this means that we are using [*flat bundles corresponding to the representation $\theta$*]{} as non-trivial labels for the balls. This construction gives the counting problem of inequivalent geodesic ball coverings an unexpected interdisciplinary flavour which blends in a nice way riemannian geometry ([*the metric properties of the balls*]{}), topology ([*the action of the fundamental group on homology*]{}), and representation theory ([*the structure of the space of inequivalent representations of the fundamental group of an $n$-dimensional manifold, $n\geq{3}$, into a Lie group $G$*]{}).
0.5 cm We wish to stress that a similar approach may be suitable also for a direct enumeration of dynamical triangulations since flat bundles on the simplexes, (again associated with representations of the fundamental group of the underlying PL-manifold), do provide a natural topological labelling of the simplexes. It is true that such simplexes have no non-trivial topology, (they are contractible), and that a flat bundle, (associated with the given representation $\theta$), over one simplex is isomorphic to the flat bundle over any other simplex. However, such isomorphism is not canonical, as is obviously shown by the fact that one may get a non-trivial flat bundle by gluing such local bundles if the underlying manifold has a non-trivial fundamental group, (we wish to thank J. Ambjørn and B. Durhuus for discussions that drew our attention to this further possibility). 0.5 cm Cohomology with local coefficients and representation spaces ------------------------------------------------------------ In order to describe both the topological aspects and the basic properties of the representation spaces mentioned above, which play a prominent role in our entropy estimate, it will be convenient to recall some basic facts about cohomology with local coefficients. Details of the theory summarized here can be found in \[DNF\],\[RS\], \[JW\]. 0.5 cm Let $(M,\{B_{\epsilon}(p_i)\})\in\Ricco$ be a manifold of bounded geometry endowed with a minimal geodesic ball covering, and thought of as a cellular or simplicial complex, (for instance by associating with $(M,\{B_{\epsilon}(p_i)\})$ the corresponding nerve ${\cal N}$; in what follows we tacitly exploit the fact that a sufficiently fine minimal geodesic ball covering has the same homotopy type as the underlying manifold). We let ${\pi}_1(M)$ denote the fundamental group of $(M,\{B_{\epsilon}(p_i)\})$. Such ${\pi}_1(M)$ is finitely generated, and can be assumed to be finitely presented.
0.5 cm Let $\hat{M}\to{M}$ denote the universal covering of $M$, on which ${\pi}_1(M)$ acts by deck transformations. Let us introduce the homology complex $C_*(\hat{M})=\bigoplus_{i\in{\Bbb N}}C_i(\hat{M})$ where the chains in $C_i(\hat{M})$ are of the form $\sum_{j,\gamma}{\lambda}_{j\gamma}a_j(\hat{\sigma}^i_{\gamma})$ where ${\lambda}_{j\gamma}$ are integers, $a_j\in {\pi}_1(M)$, and $\hat{\sigma}^i_{\gamma}$ are a set of chosen $i$-cells in $\hat{M}$. This is tantamount to saying that the chains in $C_i(\hat{M})$ have coefficients in the group ring ${\bf Z}{\pi}_1(M)$, [*i.e.*]{}, in the set of all finite formal sums $\sum{n}_ia_i$, $n_i\in{\bf Z}$, $a_i\in{\pi}_1(M)$, with the natural definition of addition and multiplication. 0.5 cm Let ${\theta}\colon{\pi}_1(M)\to{G}$ be a representation of ${\pi}_1(M)$ in a Lie group $G$ whose Lie algebra ${\frak g}$ carries an $Ad$-invariant, symmetric, non-degenerate bilinear form, ([*i.e.*]{}, a metric). The representation $\theta$ defines a flat bundle, which we denote by ${\frak g}_{\theta}$. This bundle is constructed by exploiting the adjoint representation of $G$ on its Lie algebra ${\frak g}$, [*i.e.*]{}, $Ad\colon{G}\to{End({\frak g})}$, and by considering the action of ${\pi}_1(M)$ on ${\frak g}$ generated by composing the adjoint action and the representation $\theta$: $$\begin{aligned} {\frak g}_{\theta}=\hat{M}\times{\frak g}/{\pi}_1\otimes[Ad({\theta}(\cdot))]^{-1}\end{aligned}$$ where ${\pi}_1\otimes[Ad({\theta}(\cdot))]^{-1}$ acts, through ${\pi}_1(M)$, by deck transformations on $\hat{M}$ and by $[Ad({\theta}(\cdot))]^{-1}$ on the Lie algebra ${\frak g}$.
More explicitly, if $\hat{\sigma}_1$, $\hat{\sigma}_2$ are cells in $\hat{M}$, and $g_1$, $g_2$ are elements of ${\frak g}$, then $$\begin{aligned} (\hat{\sigma}_1,g_1)\sim (\hat{\sigma}_2,g_2)\end{aligned}$$ if and only if $$\begin{aligned} \hat{\sigma}_2=\hat{\sigma}_1a\end{aligned}$$ and $$\begin{aligned} g_2=[Ad({\theta}(a))]^{-1}g_1\end{aligned}$$ for some $a\in{\pi}_1(M)$. 0.5 cm In this way we can define a cellular chain complex $C_*(M,{\frak g}_{\theta})$ with coefficients in the flat bundle ${\frak g}_{\theta}$. First we consider chains with coefficients in the Lie algebra ${\frak g}$, [*viz.*]{}, $$\begin{aligned} \sum_jg_j\hat{\sigma}^i_j\end{aligned}$$ with $g_j\in{\frak g}$, and then quotient the resulting chain complex $C_*(\hat{M})\otimes{\frak g}$ by the action of ${\pi}_1\otimes[Ad({\theta}(\cdot))]^{-1}$. This yields an action of ${\pi}_1(M)$ on the above chains expressed by $$\begin{aligned} a(\sum_jg_j\hat{\sigma}^i_j)\to \sum_j([Ad({\theta}(a))]^{-1}g_j)a(\hat{\sigma}^i_j)\end{aligned}$$ for any $a\in{\pi}_1(M)$, ([*i.e.*]{}, we are considering ${\frak g}$ as a ${\pi}_1(M)$-module). This action commutes with the boundary operator, and as a consequence of the definition of the flat bundle ${\frak g}_{\theta}$, the resulting homology $H_*(M,{\frak g}_{\theta})$ can be thought of as a homology with local coefficients in the flat bundle ${\frak g}_{\theta}$. By dualizing one defines the cohomology $H^*(M,{\frak g}_{\theta})$, which enjoys the usual properties of a cohomology theory. Sometimes, for ease of notation, we shall denote $H_*(M,{\frak g}_{\theta})$ and $H^*(M,{\frak g}_{\theta})$ by $H^{{\frak g}}_*(M)$ and $H_{{\frak g}}^*(M)$, respectively. 0.5 cm Let $B_{\epsilon}(p_h)$ be the generic ball of the covering $(M,\{B_{\epsilon}(p_i)\})$.
If we denote by ${\phi}_h\colon{\pi}_1(B_{\epsilon}(p_h);p_h)\to{\pi}_1(M;p_h)$ the homomorphism induced by the obvious inclusion map, then together with $\theta$, we may also consider the representations $$\begin{aligned} {\theta}_h\colon {\pi}_1(B_{\epsilon}(p_h);p_h)\to{\pi}_1(M;p_h) \to{G}\end{aligned}$$ obtained by composing $\theta$ with the various homomorphisms ${\phi}_h$ associated with the balls of the covering. Notice that since arbitrarily small metric balls in manifolds $M\in\Ricco$ can be topologically rather complicated, it cannot be excluded a priori that the image ${\phi}_h[{\pi}_1(B_{\epsilon}(p_h);p_h)]$ in ${\pi}_1(M)$, (or more generally in the fundamental group of a larger, concentric ball), has an infinite number of generators. However, as follows from a result of S.-h. Zhu, in order to avoid such troubles it is sufficient to choose the radius of the balls small enough. \[Zh\] There are constants $R_0$, ${\epsilon}_0$ and $C$ depending only on $n$, $r$, $D$, $V$, such that for any manifold $M\in\Ricco$, $p\in{M}$, $\epsilon\leq{\epsilon}_0$, if $i\colon{B_{\epsilon}(p)}\to{B_{R_0\epsilon}(p)}$ is the inclusion, then any subgroup $K$ of ${\phi}_i({\pi}_1(B_{\epsilon}(p)))$ satisfies $order(K)\leq{C}$. Thus in particular, there is no element of infinite order in ${\phi}_i({\pi}_1(B_{\epsilon}(p)))$ whenever $\epsilon\leq{\epsilon}_0$. 0.5 cm According to this latter result, by choosing $\epsilon\leq{\epsilon}_0$ and by using the representations ${\theta}_h$, we may define the cohomologies $H^*_{{\frak g}}(B_{\epsilon}(p_h))$ with local coefficients in the corresponding flat bundles ${\frak g}_{\theta}|(B_{\epsilon}(p_h))$ defined over the balls $B_{\epsilon}(p_h)$. As labels, these cohomology groups are easier to handle than the fundamental groups ${\pi}_1(B_{\epsilon}(p_i))$.
This is so because the constraints we have to implement on the intersections of the balls, relating $\{H^*_{{\frak g}}(B_{\epsilon}(p_i))\}_i$ to $H^*_{{\frak g}}(M)$, are simply obtained by iterating the cohomology long exact Mayer-Vietoris sequence obtained from the short exact sequences connecting the cochains $C^*_{{\frak g}}(B_{\epsilon}(p_i))$, $C^*_{{\frak g}}(\cup{ B_{\epsilon}(p_i}))$, and $C^*_{{\frak g}}(\cap{B_{\epsilon}(p_i)})$. For instance, given any two intersecting balls $B(p_i)$ and $B(p_h)$, we get $$\begin{aligned} 0\to C^j_{{\frak g}}(B(p_i)\cup B(p_h))\to C^j_{\frak{g}}(B(p_i))\oplus C^j_{\frak{g}}(B(p_h))\to C^j_{\frak{g}}(B(p_i)\cap B(p_h))\to 0\end{aligned}$$ whose corresponding cohomology long exact sequence reads $$\begin{aligned} \ldots\to H^j_{\frak{g}}(B(p_i)\cup B(p_h))&\to H^j_{\frak{g}}(B(p_i))\oplus H^j_{\frak{g}}(B(p_h))\to\nonumber\\ \to H^j_{\frak{g}}(B(p_i)\cap B(p_h))&\to H^{j+1}_{\frak{g}}(B(p_i)\cup B(p_h))\to\ldots\end{aligned}$$ Similar expressions can be worked out for any cluster $\{B_{\epsilon}(p_i)\}_{i=1,2,\ldots}$ of intersecting geodesic balls, (see section 3.3), and they can be put to work for our counting purposes by introducing the Reidemeister torsion, a graded version of the absolute value of the determinant of an isomorphism of vector spaces. 0.5 cm Torsions -------- Let us start by recalling that, by hypothesis, $\frak{g}$ is endowed with an $Ad$-invariant, symmetric, non-degenerate bilinear form, ([*i.e.*]{}, with a metric); thus we can introduce orthonormal bases, $\{X_k\}_{k=1,\ldots,dim(G)}$, for the Lie algebra $\frak{g}$.
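As an elementary illustration of how such exact sequences constrain admissible labellings, the following sketch (ours, not from the text; the function name `mv_compatible` and the sample dimension vectors are purely hypothetical) tests the necessary condition that the alternating sum of all dimensions occurring in a long exact sequence vanishes.

```python
def mv_compatible(h_union, h_A, h_B, h_int):
    """Necessary Mayer-Vietoris condition on candidate cohomology dimensions:
    in any long exact sequence the alternating sum of all dimensions vanishes,
    which here reads sum_j (-1)^j [h^j(A)+h^j(B)-h^j(AuB)-h^j(AnB)] = 0."""
    top = max(len(h_union), len(h_A), len(h_B), len(h_int))
    dim = lambda h, j: h[j] if j < len(h) else 0
    s = sum((-1) ** j * (dim(h_A, j) + dim(h_B, j)
                         - dim(h_union, j) - dim(h_int, j))
            for j in range(top))
    return s == 0

# two contractible balls glued along a contractible overlap (still contractible)
print(mv_compatible([1], [1], [1], [1]))        # True
# two arcs covering a circle S^1 (h = (1,1)), overlapping in two points
print(mv_compatible([1, 1], [1], [1], [2]))     # True
```

This is of course only a necessary condition; the full constraint involves the maps in the sequence, as discussed above.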
Since the adjoint representation is an orthogonal representation of $G$ on $\frak{g}$, we can introduce a volume element on the cochain complex $C^*(M,\frak{g}_{\theta})$, by exploiting such orthonormal bases: by identifying $C^i(M,\frak{g}_{\theta})$ with a direct sum of a copy of $\frak{g}$ for each $i$-cell $\hat{\sigma}^i_j$ in $\hat{M}$, we can take $\hat{\sigma}^i_j\otimes{X_k}$ as an orthonormal basis of $C^i(M,\frak{g}_{\theta})$ and define the space of volume forms as the determinant line $detline|C^*(M,{\frak g}_{\theta})|\equiv \prod_{i}(detline|C^i(M,{\frak g}_{\theta})|)^{(-1)^i}$, where $detline|C^i(M,{\frak g}_{\theta})|$ denotes the line of volume elements on $C^i(M,{\frak g}_{\theta})$ generated by all possible choices of the orthonormal bases $\hat{\sigma}^i_j\otimes{X_k}$. Explicitly, if on each $C^i(M,{\frak g}_{\theta})$ we choose the volume forms $t_i$, then the corresponding volume element is obtained by setting $$\begin{aligned} t({\frak g}_{\theta})=\prod_i(t_i)^{(-1)^i}\in detline|C^*(M,{\frak g}_{\theta})|\end{aligned}$$ 0.5 cm Let $d_i\colon{C^{i}_{\frak g}}\to{C^{i+1}_{\frak g}}$ be the coboundary operator in $C^*_{\frak g}$, and as usual let us denote by $Z_{\frak g}^i\equiv ker(d_i)$, $B^i_{\frak g}\equiv Im(d_{i-1})$.
From the short exact sequences $$\begin{aligned} 0\to{Z^{i}_{\frak{g}}}\to{C^{i}_{\frak{g}}}\to{B^{i+1}_{\frak{g}}} \to{0}\end{aligned}$$ and $$\begin{aligned} 0\to{B^{i}_{\frak{g}}}\to{Z^{i}_{\frak{g}}}\to{H^{i}_{\frak{g}}} \to{0}\end{aligned}$$ we respectively get that there are natural isomorphisms $$\begin{aligned} {\Lambda}^{dim(Z^i)}{Z^{i}_{\frak{g}}}\otimes {\Lambda}^{dim(B^{i+1})}{B^{i+1}_{\frak{g}}}\to {\Lambda}^{dim(C^i)}{C^{i}_{\frak{g}}}\end{aligned}$$ and $$\begin{aligned} {\Lambda}^{dim(B^i)}{B^{i}_{\frak{g}}}\otimes {\Lambda}^{dim(H^{i})}{H^{i}_{\frak{g}}}\to {\Lambda}^{dim(Z^i)}{Z^{i}_{\frak{g}}}\end{aligned}$$ where ${\Lambda}^{dim(\cdot)}$ denotes the top dimensional exterior power on the vector space considered, (recall that if $\ldots\to{V_i}\to{V_{i+1}}\to\ldots$ is a finite exact sequence, then there is a canonical isomorphism $\otimes_{i-even}{\Lambda}^{dim(V_i)}V_i= \otimes_{i-odd}{\Lambda}^{dim(V_i)}V_i$). It follows that there is an isomorphism $$\begin{aligned} {\Lambda}^{dim(B^i)}{B^{i}_{\frak{g}}}\otimes {\Lambda}^{dim(H^{i})}{H^{i}_{\frak{g}}} \otimes{\Lambda}^{dim(B^{i+1})}{B^{i+1}_{\frak{g}}}\to {\Lambda}^{dim(C^i)}{C^{i}_{\frak{g}}}\end{aligned}$$ This isomorphism is explicitly realized by fixing orthonormal bases ${\bf h}^{(i)}$ and ${\bf b}^{(i)}$ for ${H^{i}_{\frak{g}}}$ and ${B^{i}_{\frak{g}}}$, respectively. Thus, if we denote by ${\nu}_i\equiv{\wedge}_q^{dim(H)^i}h^{(i)}_q$ the corresponding volume form in ${H^{i}_{\frak{g}}}$ (lifted to ${C^{i}_{\frak{g}}}$), we can write $$\begin{aligned} [{\wedge}_q^{dim(B)^i}b^{(i)}_q]\wedge [{\wedge}_q^{dim(B)^{i+1}}{d}b^{(i+1)}_q]\wedge [{\wedge}_q^{dim(H)^i}h^{(i)}_q]=t_i({\nu}_i) [{\wedge}^{dim(C)^i}\hat{\sigma}^i_j\otimes{X_k}]\end{aligned}$$ for some scalar $t_i({\nu}_i)\not= 0$.
0.5 cm With these remarks out of the way, and setting, for notational convenience, ${\mu}_i\equiv{\wedge}^{dim(C)^i}\hat{\sigma}^i_j\otimes{X_k}$, we can define the Reidemeister torsion associated with the cochain complex $C^*_{\frak{g}}$ according to the following definition. For a given choice of volume elements ${\nu}_i$ in ${H^*_{\frak{g}}}$, the torsion of the cochain complex $C^*_{\frak{g}}$ is the volume element $$\begin{aligned} {\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})\equiv t({\frak g}_{\theta})=\prod_i[t_i({\nu}_i)]^{(-1)^i}\in detline|C^*(M,{\frak g}_{\theta})|\end{aligned}$$ 0.5 cm Notice that we have selected a particular definition out of many naturally equivalent ones, (see \[RS\] for a more detailed treatment). 0.5 cm As the notation suggests, it is easily checked that ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ is independent of the particular choice of the bases $\{\bf b\}^{(i)}$ for the ${B^{i}_{\frak{g}}}$. Moreover, if the complex ${C^{*}_{\frak{g}}}$ is acyclic then ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ is also independent of the choice of a volume element in ${H^*_{\frak{g}}}$, (recall that the cochain complex ${C^{*}_{\frak{g}}}$ is said to be acyclic if ${H^{i}_{\frak{g}}}=0$ for all $i$). 0.5 cm We could equally well have worked in homology $H_*^{\frak g}$, obtaining ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ as an element of $detline|C_*(M,{\frak g}_{\theta})|$ depending now on a choice of volume elements ${\nu}^i$ in the homology groups $H_i^{\frak g}$. 0.5 cm It is important to stress that if the complex $C^*(M,{\frak g}_{\theta})$ is not acyclic then ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ is not a scalar but a volume element in $detline|H^*(M,{\frak g}_{\theta})|$, under the natural identification between this latter line bundle and $detline|C^*(M,{\frak g}_{\theta})|$.
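For a finite based acyclic complex over the reals the definition above can be made completely concrete. The following sketch is our own minimal implementation, not code from the text: it assumes acyclicity (so the harmonic part ${\bf h}^{(i)}$ of the basis is empty), takes the standard bases as the preferred volumes ${\mu}_i$, and obtains the preimages $\tilde b$ from a singular value decomposition.

```python
import numpy as np

def torsion(d_maps, dims):
    """Torsion prod_i t_i^{(-1)^i} of a based acyclic cochain complex over R.
    d_maps[i] is the matrix of d_i : C^i -> C^{i+1}; dims[i] = dim C^i.
    t_i = |det D_i|, where D_i expresses the basis (d b~^(i), b~^(i+1))
    of C^i in the preferred (cell) basis."""
    n = len(dims)
    pre = []                                   # pre[i]: b~^(i+1), columns in C^i
    for i in range(n - 1):
        r = np.linalg.matrix_rank(d_maps[i])
        _, _, Vt = np.linalg.svd(d_maps[i])
        pre.append(Vt[:r].T)                   # images under d_i span B^{i+1}
    pre.append(np.zeros((dims[-1], 0)))        # nothing above the top level
    tau = 1.0
    for i in range(n):
        cols = [pre[i]]
        if i > 0:
            cols.insert(0, d_maps[i - 1] @ pre[i - 1])   # a basis of B^i
        D = np.hstack(cols)
        assert D.shape[0] == D.shape[1], "complex is not acyclic"
        tau *= abs(np.linalg.det(D)) ** ((-1) ** i)
    return tau

# 0 -> R --(2)--> R -> 0 : torsion 1/2 in this convention
print(torsion([np.array([[2.0]])], [1, 1]))
# acyclic complex 0 -> R -> R^2 -> R -> 0 with standard maps: torsion 1
print(torsion([np.array([[1.0], [0.0]]), np.array([[0.0, 1.0]])], [1, 2, 1]))
```

Note that rescaling the chosen preimages $\tilde b$ changes the individual $t_i$ but not the alternating product, mirroring the base-independence stated above.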
0.5 cm The torsion is an interesting combinatorial invariant of a complex, since it is invariant under subdivision of $M$ and it is deeply related to homotopy theory. In particular, given a homotopy equivalence $f\colon(M_1,{\cal N}_1)\to(M_2,{\cal N}_2)$ between two cellular complexes, there is a correspondence between the flat bundles over $M_1$ and the flat bundles over $M_2$ induced by the isomorphism ${\pi}_1(M_1)\to{\pi}_1(M_2)$ and by the representation ${\theta}$ of such groups into the Lie group $G$. However, the corresponding torsions are not necessarily equal, this being the case if and only if $f$ is (homotopic to) a Piecewise-Linear (PL) equivalence between the complexes in question. 0.5 cm Also notice that if the manifold $M$ underlying the complex is an orientable, even dimensional manifold [*without boundary*]{} and the cochain complex $C_*(M,{\frak g}_{\theta})$ is acyclic, then ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})=1$, (see [*e.g.*]{}, \[Ch\]). Thus it would seem that calling such an invariant into play, for counting geodesic ball coverings over a four dimensional manifold, is useless. However, there are three reasons which show that the role of torsion is not so trivial for our counting purposes. First, we shall deal with the torsions of the geodesic balls, which are four dimensional manifolds with a non-trivial three-dimensional boundary. Moreover, the complexes we need to use are not acyclic, and the behavior of the volume elements ${\bf{\nu}}$ in cohomology will play a basic role. Finally, the fact that we are in dimension four will be imposed only in the final part of our paper, when estimating the dimension of the tangent space to the set of all conjugacy classes of representations of the fundamental group, (see below). In this connection, we wish to stress that [*the analysis which follows holds for any $n$-dimensional manifold $M\in\Ricco$ with $n\geq 2$*]{}.
0.5 cm We now examine the dependence of ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ on the particular representation ${\theta}\colon{\pi}_1(M)\to{G}$. To this end, let $\frac{Hom({\pi}_1(M),G)}{G}$ denote the set of all conjugacy classes of representations of the fundamental group ${\pi}_1(M)$ into the Lie group $G$. Notice that if $\theta$ and $F{\theta}F^{-1}$ are two conjugate representations of ${\pi}_1(M)$ in $G$, then through the map $Ad(F)\colon{\frak g}\to{\frak g}$ we get a natural isomorphism between the groups $H_{i}(M,{\frak g})$ and $H_{i}(M,F{\frak g}F^{-1})$. It follows that the torsion corresponding to the representation $\theta$ and the torsion corresponding to the conjugate representation $F{\theta}F^{-1}$ are equal, so that ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ is actually well defined on the conjugacy class of representations $[\theta]\in\frac{Hom({\pi}_1(M),G)}{G}$. 0.5 cm When defining the Reidemeister torsion, one of the advantages of using the homology $H_*(M,{\frak g})$ with local coefficients in the bundle ${\frak g}_{\theta}$ lies in the fact that the corresponding cohomology is closely related to the structure of the representation space $\frac{Hom({\pi}_1(M),G)}{G}$. This point is quite important, since we are interested in understanding the dependence of ${\Delta}^{\frak g}(M;{\bf{\mu}},{\bf{\nu}})$ when deforming the particular representation ${\theta}\colon{\pi}_1(M)\to{G}$ through a differentiable one-parameter family of representations ${\theta}_t$, with ${\theta}_0=\theta$, which are not tangent to the $G$-orbit of $\theta\in{Hom({\pi}_1(M),G)}$. 0.5 cm To this end, let us rewrite, for $t$ near $0$, the given one-parameter family of representations ${\theta}_t$ as \[Go\], \[Wa\] $$\begin{aligned} {\theta}_t(a)=\exp[tu(a)+O(t^2)]{\theta}(a)\end{aligned}$$ where $a\in {\pi}_1(M)$, and where $u\colon{\pi}_1(M)\to{\frak g}$.
In particular, given $a$ and $b$ in ${\pi}_1(M)$, if we differentiate the homomorphism condition ${\theta}_t(ab)={\theta}_t(a){\theta}_t(b)$, we get that $u$ actually is a one-cocycle of ${\pi}_1(M)$ with coefficients in the ${\pi}_1(M)$-module ${\frak g}_{\theta}$, [*viz.*]{}, $$\begin{aligned} u(ab)=u(a)+ [Ad({\theta}(a))]u(b)\end{aligned}$$ Moreover, any $u$ verifying the above cocycle condition leads to a map ${\theta}_t\colon{\pi}_1(M)\to{G}$ which, to first order in $t$, satisfies the homomorphism condition. This remark implies that the (Zariski) tangent space to $Hom{({\pi}_1(M),G)}$ at $\theta$ can be identified with $Z^1(M,{\frak g}_{\theta})$. In a similar way, it can be shown that the tangent space to the $Ad$-orbit through ${\theta}$ is $B^1(M,{\frak g}_{\theta})$. Thus, the (Zariski) tangent space to $\frac{Hom({\pi}_1(M),G)}{G}$ corresponding to the conjugacy class of representations $[\theta]$ is $H^1(M,{\frak g}_{\theta})$. As is usual in deformation theory, this is the formal tangent space to the representation space. 0.5 cm It must be emphasized that, in general, there are obstructions \[Go\] that do not allow the identification of the Zariski tangent space with the actual tangent space to $\frac{Hom({\pi}_1(M),G)}{G}$. Typically, troubles arise at reducible representations. Since the tangent space to the isotropy group of the representation $\theta$ is $H^0(M,{\frak g}_{\theta})$, it follows that $H^0(M,{\frak g}_{\theta})\not= 0$ precisely when there are reducible representations. Further obstructions to identifying $H^1(M,{\frak g}_{\theta})$ with the actual tangent space lie in $H^2(M,{\frak g}_{\theta})$.
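Returning to the cocycle condition, it can be verified numerically. In the sketch below (ours; the choice $G=SO(3)$, the random data, and the second-order truncation of $\exp$ are illustrative assumptions), setting $u(ab)=u(a)+[Ad(\theta(a))]u(b)$ makes ${\theta}_t(ab)={\theta}_t(a){\theta}_t(b)$ hold to first order in $t$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q * np.linalg.det(q)               # an element of G = SO(3)

def random_skew():
    a = rng.standard_normal((3, 3))
    return a - a.T                            # an element of the Lie algebra so(3)

def approx_exp(X):
    return np.eye(3) + X + X @ X / 2          # exp(X) up to O(X^3)

theta_a, theta_b = random_rotation(), random_rotation()
u_a, u_b = random_skew(), random_skew()
# the cocycle condition: u(ab) = u(a) + Ad(theta(a)) u(b)
u_ab = u_a + theta_a @ u_b @ theta_a.T        # Ad acts by conjugation; theta_a is orthogonal
t = 1e-5
lhs = approx_exp(t * u_ab) @ (theta_a @ theta_b)                  # theta_t(ab)
rhs = (approx_exp(t * u_a) @ theta_a) @ (approx_exp(t * u_b) @ theta_b)
print(np.linalg.norm(lhs - rhs) < 1e-7)       # discrepancy is O(t^2)
```

The residual discrepancy is of order $t^2$ (a commutator term), consistent with the statement that the homomorphism condition holds only to first order.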
In deformation theory it is well known that this space contains the obstructions to extending a formal deformation to a finite deformation, ([*i.e.*]{}, in a language more familiar to relativists, $H^2(M,{\frak g}_{\theta})$ is associated with a linearization instability around the given representation in $\frac{Hom({\pi}_1(M),G)}{G}$). The triviality of this space at a (conjugacy class of a) representation $\theta$ is a necessary condition for ${\theta}$ to be a regular point of the representation space $\frac{Hom({\pi}_1(M),G)}{G}$, and for identifying $H^1(M,{\frak g}_{\theta})$ with $T_{\theta}[\frac{Hom({\pi}_1(M),G)}{G}]$. 0.5 cm We shall be ignoring the singularities produced by reducible representations by restricting our considerations to the set of irreducible representations ${\cal S}\subset\frac{Hom({\pi}_1(M),G)}{G}$; yet, in general, we do not assume that $H^2(M,{\frak g}_{\theta})=0$. Not considering reducible representations is certainly not topologically justified in general, but it is not yet clear how to circumvent the difficulties associated with them. Moreover, the results we obtain are well-defined in considerable generality and do not seem to suffer too much from such restrictions. 0.5 cm Recall that $\Hom(\pi_1(M),G)^{\hbox{irr}}$ is a smooth analytic submanifold of $G^m$, for some $m$. This provides ${\S}$ with an analytic structure, possibly outside some singular points. Let $\S_0$ denote the smooth locus of ${\S}$, and let $d$ be its dimension. To be definite, we set $G=U(n)$. If we assume $M$ to be oriented, the space $\S_0$, regarded as the space of gauge equivalence classes of flat connections $\nabla$ on ${\frak g}_{\theta}$, sits inside both $\M_+$ and $\M_-$, these two spaces being the moduli spaces of selfdual and anti-selfdual irreducible instantons on $M$.
Since $$\begin{aligned} \dim\M_-=\dim G(b_1-b_+-1)\,,\qquad \dim\M_+=\dim G(b_1-b_--1)\,,\end{aligned}$$ we get the inequalities $$\begin{aligned} d\leq \dim G(b_1-b_+-1),\qquad d\leq\dim G(b_1-b_--1)\,; \label{uno}\end{aligned}$$ by summing the two inequalities we get $$\begin{aligned} d\leq -\unmezzo\dim G\chi(M)\,.\end{aligned}$$ We stress that $d$ is the dimension of the representation variety in the neighborhood of smooth points. In general $d$ is different from the Zariski dimension as computed by the cohomology $H^*(M,{\frak g}_{\theta})$. Let us define $b(k)=\dim H^k(M,{\frak g}_{\theta})$. Then $b(0)=b(4)=0$ by irreducibility (and due to Poincaré duality), while $b(1)=b(3)$. Recall that the space $H^1(M,{\frak g}_{\theta})$ can be thought of as the Zariski tangent space to ${\S}$ at $\lbrack\theta\rbrack$; let us denote $d_Z(\theta)=b(1)$. So $d_Z=d$ at a smooth point, while $d_Z\ge d$ in general, (see \[GM\]). Indeed, a non-vanishing $H^2(M,{\frak g}_{\theta})$ may represent an obstruction to the identification of the Zariski tangent space with the smooth tangent space at the point $[\theta]$. The Zariski dimension $d_Z$ may be computed by using the Atiyah-Singer index theorem. Let ${\Omega}^p({\frak g}_{\theta})$ denote the space of all ${\frak g}_{\theta}$-valued exterior $p$-forms on $M$. We may consider the elliptic complex $$\begin{aligned} 0 \to \op0 \to \op1 \to \op2 \to \op3 \to \op4 \to 0 \label{tre}\end{aligned}$$ whose cohomology is isomorphic to $H^*(M,{\frak g}_{\theta})$. The index of the complex (\[tre\]), $\hbox{ind}=2d_Z(\theta)-b(2)$, may be computed explicitly, getting $\hbox{ind}=-\dim G\chi(M)$, so that $$\begin{aligned} d_Z(\theta)=-\unmezzo\dim G\chi(M)+\unmezzo b(2) \label{quattro}\end{aligned}$$ where the $\theta$ dependence is implicit in the twisted Betti number $b(2)=\dim H^2(M,{\frak g}_{\theta})$.
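The two moduli-space bounds and the index computation above reduce to elementary arithmetic, which can be checked directly; in the sketch below the Betti numbers $b_1$, $b_\pm$, the choice $G=U(2)$, and the value of the twisted $b(2)$ are purely illustrative assumptions.

```python
# illustrative (assumed) data: an oriented 4-manifold and G = U(2)
b1, b_plus, b_minus, dimG = 3, 1, 1, 4
chi = 2 - 2 * b1 + b_plus + b_minus               # Euler characteristic
bound_minus = dimG * (b1 - b_plus - 1)            # d <= dim M_-
bound_plus = dimG * (b1 - b_minus - 1)            # d <= dim M_+
# summing the two bounds gives 2d <= -dimG * chi
assert bound_minus + bound_plus == -dimG * chi
# index theorem: ind = 2 d_Z - b(2) = -dimG * chi, hence eq. (quattro)
b2_twisted = 2                                    # assumed dim H^2(M, g_theta)
d_Z = (-dimG * chi + b2_twisted) / 2
assert 2 * d_Z - b2_twisted == -dimG * chi
print(chi, d_Z)                                   # -2 5.0
```

The first assertion is an identity in $b_1$, $b_\pm$, since $\chi(M)=2-2b_1+b_++b_-$ for an oriented 4-manifold; the sample numbers merely instantiate it.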
Counting minimal coverings ========================== It is known that, under suitable hypotheses, the Reidemeister torsion can count closed (periodic) orbits of a flow on a (hyperbolic) riemannian manifold \[RS\]. Our purpose is to show that it can also count inequivalent geodesic ball coverings. 0.5 cm This result is basically a consequence of a cardinality law satisfied by the torsion. Let $A$ and $B$ denote subcomplexes of the manifold $M$, (as usual thought of as a cellular or a simplicial complex), with $M=A\cup B$, and let us consider a representation $\theta\in Hom({\pi}_1(M),G)$. Let us assume that, for every $i$, volume elements ${\mu}_i(A)$, ${\mu}_i(B)$, and ${\nu}_i(A)$, ${\nu}_i(B)$ are chosen for the cochain complexes $C^i_{\frak g}(A)$, $C^i_{\frak g}(B)$, and the corresponding cohomology groups $H^i_{\frak g}(A)$, $H^i_{\frak g}(B)$, respectively. Let us further assume that such volume elements determine the volume elements on $C^i_{\frak g}(M)$ and $H^i_{\frak g}(M)$. Corresponding to this choice of volumes let us denote by ${\Delta}^{\frak g}(M|A)$, ${\Delta}^{\frak g}(M|B)$, and ${\Delta}^{\frak g}(M|A\cap B)$ the Reidemeister-Franz torsions associated with the subcomplexes $A$, $B$, and $A\cap B$ respectively. Then $$\begin{aligned} {\Delta}^{\frak g}({H}_{A,B}) {\Delta}^{\frak g}(M|A\cup B){\Delta}^{\frak g}(M|A\cap B)= {\Delta}^{\frak g}(M|A){\Delta}^{\frak g}(M|B) \label{long}\end{aligned}$$ where ${H}_{A,B}$ is the long exact cohomology sequence associated with the short exact sequence generated by the complexes $C_{\frak g}^*(A\cup{B})$, $C_{\frak g}^*(A)\oplus{C_{\frak g}^*(B)}$, and $C_{\frak g}^*(A\cap{B})$, (the correction term ${\Delta}^{\frak g}({H}_{A,B})$, associated with the twisted cohomology groups of the above three cochain complexes, disappears when the representation is acyclic). 
0.5 cm In order to exploit this cardinality law, let us consider all possible minimal geodesic $\epsilon$-ball coverings of a manifold of bounded geometry $M\in\Ricco$ with a given filling function $\lambda=N^{(0)}_{\epsilon}(M)$. Given a sufficiently small $\epsilon >0$, (in particular, smaller than the ${\epsilon}_0$ provided by Zhu’s theorem), and given a representation $\theta\colon{\pi}_1(M)\to{G}$, still denoting by $\theta$ its restrictions to representations of the various ${\pi}_1(B_{\epsilon}(p_i))$, we can consider the cohomologies with local coefficients in ${\frak g}_{\theta}$, $H^*_{\frak g}({B_{\epsilon}(p_i)})$ for $i=1,\ldots,\lambda$. We can use them as labels to be distributed over the unlabelled balls $\{B_{\epsilon}(p_i)\}$. Obviously, the [*coordinate*]{} labelling of the balls arising from the centers $\{p_i\}$ is to be factored out, to the effect that the balls $\{B_{\epsilon}(p_i)\}$ are considered as a collection of $\lambda=N^{(0)}_{\epsilon}(M)$ empty boxes over which to distribute the colours $H^*_{\frak g}({B_{\epsilon}(p_i)})$. This must be done according to the constraint expressed by the Mayer-Vietoris sequence, associated with the intersection pattern of the covering, so as to reproduce $H^*_{\frak g}(\cup{B_{\epsilon}(p_i)})\simeq H^*_{\frak g}(M)$. 0.5 cm We formalize these remarks as follows. 0.5 cm Let us assume that $M\in\Ricco$ has diameter $diam(M)$, ($diam(M)\leq D$), and Ricci curvature $Ric(M)\geq{r}$. Let us consider the generic ball ${B_{\epsilon}(p_i)}\subset{M}$ as a riemannian manifold with boundary, with metric tensor ${g_{\epsilon}(p_i)}$. According to the coarse-grained point of view, we can assume that such a geodesic ball is obtained, by a re-scaling, from a corresponding ball ${\tilde{B}}_a(diam(M))$ of radius $diam(M)$ in a space form ${\tilde M}^r_a$ of constant curvature $r$.
Notice that different balls, say $B_{\epsilon}(p_i)$ and $B_{\epsilon}(p_k)$, with $i\not={k}$, may arise from different space forms, resulting from different quotients of the simply connected space of dimension $n$ and constant curvature $r$. Thus, for $\epsilon$ sufficiently small, all balls are locally isometric, but possibly with different underlying topologies. In particular, as far as the metric properties are concerned, we assume that $$\begin{aligned} g_{\epsilon}(p_i)=\frac{2{\epsilon}^2}{diam(M)^2} {\tilde g}_r \label{modello}\end{aligned}$$ for every ball ${B_{\epsilon}(p_i)}$ with ${\epsilon}$ sufficiently small, and where ${\tilde g}_r$ denotes the constant curvature metric on the space form ${\tilde M}^r_a$. In terms of the filling function $N^{(0)}_{\epsilon}(M)=\lambda(\epsilon)$, it is straightforward to check that (\[modello\]) can be equivalently rewritten as $$\begin{aligned} g_{\epsilon}(p_i)={\rho}(M)^{-2/n}{\lambda(\epsilon)}^{-2/n} {\tilde g}_r \label{fillmetric}\end{aligned}$$ where ${\rho}(M)$ is a suitable function, depending on the parameters $n$, $r$, $D$, $V$, and on the actual geometry of the manifold $M$. 0.5 cm For later use, it is also convenient to introduce the deformation parameter $$\begin{aligned} t(\epsilon)\equiv \ln[{\rho}(M)^{-2/n}{\lambda(\epsilon)}^{-2/n}]\end{aligned}$$ so that we can describe the re-scaling (\[modello\]) as obtained through a smooth one-parameter family of conformal deformations $$\begin{aligned} g_t(p_i)=e^{t(\epsilon)}{\tilde g}_r \label{scaling}\end{aligned}$$ interpolating between ${\tilde g}_r$, (corresponding to $t=0$), and the actual $g_{\epsilon}(p_i)$. 0.5 cm As far as topology is concerned, since the ball $B_{\epsilon}(p_i)$ comes from the re-scaling of the reference ball ${\tilde{B}}_a(diam(M))$ in the space form ${\tilde M}^r_a$, we can write $H^*_{\frak g}(B_{\epsilon}(p_i)) \simeq{H^*_{\frak g}({\tilde{B}}_a(diam(M)))}$.
Thus, with the balls $B_{\epsilon}(p_1),B_{\epsilon}(p_2),\ldots, B_{\epsilon}(p_{\lambda})$ we can associate the cohomology groups $H^*_{\frak g}(B_{\epsilon}(p_i))= H^*_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$, where $a(i)$ labels the possibly inequivalent space forms ${\tilde M}^r_a$ after which the balls $\{B_{\epsilon}(p_i)\}$ are modelled. Notice that in general $a(i)=a(k)$ may hold for some pair $i\not={k}$, since the balls $B_{\epsilon}(p_i)$ and $B_{\epsilon}(p_k)$ may be modelled after the same space form ${\tilde M}^r_a$. 0.5 cm Scaling of torsions ------------------- 0.5 cm At this stage, there is an important point we wish to stress, namely that even if the twisted cohomology of each ball is not affected by the dilation of the ball, the corresponding volume elements in cohomology do change. In particular, let $$\begin{aligned} {\bar{\mu}}_q(i)\equiv{\mu}_q({\tilde{B}}_{a(i)}(diam(M)))\end{aligned}$$ and $$\begin{aligned} {\bar{\nu}}_q(i)\equiv{\nu}_q({\tilde{B}}_{a(i)}(diam(M)))\end{aligned}$$ respectively denote chosen (reference) volume elements for the cochain complex $C^q_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$, and for the cohomology group $H^q_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$, associated with the reference ball ${\tilde{B}}_{a(i)}(diam(M))$ corresponding to $B_{\epsilon}(p_i)$.
The effect, on the above reference volumes, of scaling the radius of such a ball to $\epsilon$ is described by the following lemma. Let $\lambda=N^{(0)}_{\epsilon}(M)$ denote the value of the filling function as a function of $\epsilon$; then, as the radius of the reference ball varies from $diam(M)$ to its actual value, the volume elements ${\bar{\mu}}_q(i)$ and ${\bar{\nu}}_q(i)$ scale, as a function of $\lambda$, according to $$\begin{aligned} \frac{{\nu}_q(i)}{{\mu}_q(i)}(\lambda(\epsilon))= \frac{{\bar{\nu}}_q(i)}{{\bar{\mu}}_q(i)} {\lambda(\epsilon)}^{-\frac{2}{n} (q-\frac{n}{2})b(q)}\end{aligned}$$ where the Betti number $b(q)$, (in real singular homology), is the dimension of the cohomology group $H^q_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$, and where ${\mu}_q(i)$ and ${\nu}_q(i)$ respectively denote the volume elements for the cochain complex $C^q_{\frak g}(B_{\epsilon}(p_i))$, and for the cohomology group $H^q_{\frak g}(B_{\epsilon}(p_i))$, associated with the given ball $B_{\epsilon}(p_i)$. 0.5 cm This result provides a basic [*anomalous scaling*]{} relation satisfied by the ratio of the volume elements ${\bar{\mu}}_q(i)$ and ${\bar{\nu}}_q(i)$ as the radius, $diam(M)$, of the generic reference geodesic ball is shrunk to $\epsilon$. To prove this lemma we first evaluate $d/dt({\nu}_q(i)/{\mu}_q(i))$, corresponding to the deformation (\[scaling\]), and then integrate (in $t$) the resulting expression between $0$ and $t$. This can be done by an obvious extension of a construction discussed in the paper by Ray and Singer, \[RS\], whereby we consider the ratio of volume elements $({\nu}_q(i)/{\mu}_q(i))$ as generated by a proper choice of a base in $C^q_{\frak g}({B_{\epsilon}(p_i)})$. 0.5 cm [*Proof*]{}.
Let us denote by ${\cal D}^k_{\frak g}({B_{\epsilon}(p_i)})$ the space of $C^{\infty}$-differential forms on ${B_{\epsilon}(p_i)}$ with values in the flat bundle ${\frak g}_{\theta}|{B_{\epsilon}(p_i)}$, and which satisfy relative boundary conditions at each point of the boundary $\partial{B_{\epsilon}(p_i)}$, (for a definition of such boundary conditions see Ray-Singer, [*ibidem*]{}, p.162). Corresponding quantities are similarly defined also for the reference ball ${\tilde{B}}_{a(i)}(diam(M))$. 0.5 cm Let ${\bf h}^q(t)\in {\cal H}^q$ be an orthonormal base of harmonic $q$-forms (with coefficients in ${\frak g}_{\theta}$), in the space ${\cal H}^q\subset{{\cal D}^k_{\frak g}({B_{\epsilon}(p_i)})}$ of harmonic forms associated with the metric $g_t$. Let $A^q\colon{\cal H}^q\to{C^q_{\frak g}({B_{\epsilon}(p_i)})}$ denote the twisted deRham map $$\begin{aligned} A^q{\bf h}(\xi\otimes\sigma)=\int_{\sigma}tr({\xi},{\bf h})\end{aligned}$$ where ${\sigma}$ is a $q$-cell in ${B_{\epsilon}(p_i)}$, $\xi\in{\frak g}_{\theta}$, and $tr(\cdot,\cdot)$ denotes the inner product in ${\frak g}_{\theta}$. Since $A^q$ is an injective map of ${\cal H}^q$ onto a linear space of cocycles representing $H^q_{\frak g}({B_{\epsilon}(p_i)})$, we may use it as part of a base for $C^q_{\frak g}({B_{\epsilon}(p_i)})$. Choose a base ${\bf b}^q=\{b^q_j\}$ for the space of coboundaries $B^q_{\frak g}({B_{\epsilon}(p_i)})$ and for each $b^{q+1}_j$ take an element ${\tilde b}^{q+1}_j$ of $C^q_{\frak g}({B_{\epsilon}(p_i)})$ such that $d{\tilde b}^{q+1}_j=b^{q+1}_j$. Both $b^q_j$ and ${\tilde b}^{q+1}_j$ can be chosen independently of the metric $g_t$. Thus $(b^q_j,{\tilde b}^{q+1}_j,A^q(h^q_j))$ is a base for $C^q_{\frak g}({B_{\epsilon}(p_i)})$ depending on the metric $g_t$ only through the base of harmonic forms $h^q_j$.
Following Ray-Singer, we denote by $D^q$ the matrix providing the transformation from the base ${\hat \sigma}^q_j\otimes{X_k}$ of $C^q_{\frak g}({B_{\epsilon}(p_i)})$, generated by the cells of $C^q({B_{\epsilon}(p_i)})$ and an orthonormal base of ${\frak g}_{\theta}$, to the base $(b^q_j,{\tilde b}^{q+1}_j,A^q(h^q_j))$ introduced above. Thus $$\begin{aligned} \frac{{\nu}_q(i)}{{\mu}_q(i)}= |\det D^q|\end{aligned}$$ The computation of the derivative of the determinant of $D^q$ is carried out in Ray-Singer, \[RS\], where it is explicitly applied to the discussion of the behavior of the Reidemeister torsion as the metric varies, (see their Theorem 7.6). Explicitly, we get $$\begin{aligned} \frac{d}{dt}\ln\frac{{\nu}_q(i)}{{\mu}_q(i)} =\sum_{j=1}^{b(q)}(h^q_j,\frac{d}{dt}h^q_j)_{L^2} \label{derivative}\end{aligned}$$ where $b(q)$ is the Betti number, (in real singular homology), of $H^q_{\frak g}({B_{\epsilon}(p_i)})$, and $(\cdot,\cdot)_{L^2}$ denotes the global $L^2$-inner product in the space of ${\frak g}_{\theta}$-twisted harmonic $q$-forms ${\cal H}^q$; namely, for any two such forms ${\bf f}$ and ${\bf g}$, $$\begin{aligned} ({\bf f},{\bf g})_{L^2}= \int_{B_{\epsilon}(p_i)}tr({\bf f}\wedge{*{\bf g}})\end{aligned}$$ where $*$ denotes the Hodge-duality operator, and $tr$ stands for the inner product in ${\frak g}_{\theta}$. 0.5 cm The global inner product $(h^q_j,\frac{d}{dt}h^q_j)_{L^2}$ is easily evaluated corresponding to the conformal deformation (\[scaling\]). Indeed, we may rewrite $(h^q_j,\frac{d}{dt}h^q_j)_{L^2}$ as $(h^q_j,*^{-1}\frac{d*}{dt}h^q_j)_{L^2}$, (see [*e.g.*]{}, proposition 6.4 of Ray-Singer, \[RS\]). A straightforward computation shows that the derivative of the Hodge map $*_t$, associated with the $t$-flow of metrics $g_t$ defined by (\[scaling\]), is provided by $$\begin{aligned} \frac{d*_t}{dt}|_{t=0}{\bf f}= [q-\frac{1}{2}n]*{\bf f}\end{aligned}$$ for any given $q$-form ${\bf f}$ with values in the flat bundle ${\frak g}_{\theta}$.
Thus $$\begin{aligned} (h^q_j,\frac{d}{dt}h^q_j)_{L^2}|_{t=0}= [q-\frac{1}{2}n] \int_{B_{{\epsilon}/2}(p_i)}tr(h^q_j\wedge{*h^q_j})= [q-\frac{1}{2}n]\end{aligned}$$ since the base $h^q_j$ is orthonormal. 0.5 cm Introducing this latter expression in (\[derivative\]) we get $$\begin{aligned} \frac{d}{dt}\ln\frac{{\nu}_q(i)}{{\mu}_q(i)} =b(q)[q-\frac{1}{2}n] \label{timederivative}\end{aligned}$$ We integrate (\[timederivative\]) with the initial condition $$\begin{aligned} \frac{{\nu}_q(i)}{{\mu}_q(i)}(t=0)= \frac{{\bar{\nu}}_q(i)}{{\bar{\mu}}_q(i)}\end{aligned}$$ where ${\bar{\nu}}_q(i)$ and ${\bar{\mu}}_q(i)$ respectively refer to the original unscaled measures on the cochain complex $C^q_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$ and on the cohomology group $H^q_{\frak g}({\tilde{B}}_{a(i)}(diam(M)))$. With this initial condition, taking into account the explicit expression of $t$ in terms of the filling function $\lambda$, we get $$\begin{aligned} \frac{{\nu}_q(i)}{{\mu}_q(i)}(t(\lambda))= \frac{{\bar{\nu}}_q(i)}{{\bar{\mu}}_q(i)} [{\rho}^{-2/n} {\lambda}^{-2/n}]^{(q-n/2)b(q)}\end{aligned}$$ Thus, we eventually get $$\begin{aligned} \frac{{\nu}_q(i)}{{\mu}_q(i)}(\lambda(\epsilon))= \frac{{\bar{\nu}}_q(i)}{{\bar{\mu}}_q(i)} {\lambda(\epsilon)}^{-\frac{2}{n} (q-\frac{n}{2})b(q)}\end{aligned}$$ where we have traded the term $[{\rho}^{-2/n}]^{(q-n/2)b(q)}$ for a redefinition of the given original unscaled measures ${\bar{\nu}}_q(i)$ and ${\bar{\mu}}_q(i)$. This completes the proof of the stated lemma. 0.5 cm Corresponding to this re-scaling of volume elements, we can evaluate the relation between the Reidemeister torsion of the generic geodesic ball $B_{\epsilon}(p_i)$ expressed in terms of the scaled measures ${\nu}_q(i)$, ${\mu}_q(i)$ and the one expressed in terms of the unscaled measures ${\bar{\nu}}_q(i)$, ${\bar{\mu}}_q(i)$.
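For the reader's convenience, the integration step can be spelled out; the identification $e^{t(\lambda)}=\rho^{-2/n}\lambda^{-2/n}$ below is an assumption on the form of $t(\lambda)$ (given earlier in the text), chosen so as to reproduce the displayed result:

```latex
% Integrating d/dt ln(nu/mu) = b(q)(q - n/2) from the stated initial
% condition; the last equality assumes e^{t(lambda)} = (rho*lambda)^{-2/n}.
\begin{aligned}
\ln\frac{\nu_q(i)}{\mu_q(i)}(t) &=
\ln\frac{\bar\nu_q(i)}{\bar\mu_q(i)} + b(q)\Big(q-\frac{n}{2}\Big)t,\\
\frac{\nu_q(i)}{\mu_q(i)}(t(\lambda)) &=
\frac{\bar\nu_q(i)}{\bar\mu_q(i)}\,
e^{(q-\frac{n}{2})b(q)\,t(\lambda)}
= \frac{\bar\nu_q(i)}{\bar\mu_q(i)}\,
\big[\rho^{-2/n}\lambda^{-2/n}\big]^{(q-\frac{n}{2})b(q)}.
\end{aligned}
```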
A straightforward computation yields $$\begin{aligned} {\Delta}^{\frak g}(B_{\epsilon}(p_i);{\mu}(i),{\nu}(i))= {\Delta}^{\frak g}(B_{\epsilon}(p_i); {\bar{\mu}}(i),{\bar{\nu}}(i)) {\lambda}^{-\frac{2}{n} \sum_q(-1)^q(q-\frac{n}{2})b(q)} \label{storsion}\end{aligned}$$ Notice that the exponent of $\lambda$, [*viz.*]{}, ${\sum_q(-1)^q(1-\frac{2}{n}q)b(q)}$ vanishes, by Poincaré duality, if the ball is compact and without boundary, (in particular it vanishes when $\epsilon\to{diam(M)}$, namely when the ball $B_{\epsilon}(p_i)$ is expanded so as to cover the whole manifold $M$). In this sense, it is a measure of the presence of the boundary. If we set $$\begin{aligned} {\alpha}(i)\equiv [dim(G)]^{-1}\sum_q(-1)^qqb(q)\end{aligned}$$ and recall that $\sum_q(-1)^qb(q)=dim(G){\chi}(i)$, where ${\chi}(i)\equiv{\chi(B_{\epsilon}(p_i))}$ is the Euler-Poincaré characteristic of the given ball $B_{\epsilon}(p_i)$, then we can rewrite (\[storsion\]) as $$\begin{aligned} {\Delta}^{\frak g}(B_{\epsilon}(p_i);{\mu}(i),{\nu}(i))= {\Delta}^{\frak g}(B_{\epsilon}(p_i); {\bar{\mu}}(i),{\bar{\nu}}(i)) {\lambda}^{dim(G)\frac{2}{n} [\frac{n}{2}{\chi}(i)-{\alpha}(i)]} \label{anomalo}\end{aligned}$$ 0.5 cm Distinct coverings in a given representation of ${\pi}_1(M)$ ------------------------------------------------------------ 0.5 cm With these preliminary remarks out of the way, our strategy is to construct, out of the sequence of $\lambda$ balls $\{B_{\epsilon}(p_i)\}$, each endowed with the metric $g_t(p_i)$, all possible geodesic ball coverings providing an $\epsilon$-Hausdorff approximation to the original $M$. In order to do so, we need to consider explicitly the generalized Mayer-Vietoris sequence associated with the covering $\{B_{\epsilon}(p_i)\}$, (see \[BT\] for details). To simplify the notation, we shall denote by $B(i)$, with $i=1,\ldots,\lambda$, the generic open ball $B_{\epsilon}(p_i)$.
Similarly, we denote the pairwise intersections $B(i)\cap{B(j)}$ by $B(i,j)$, triple intersections $B(i)\cap{B(j)}\cap{B(k)}$ by $B(i,j,k)$, and so on. Recall that for a manifold of bounded geometry, the number of mutually intersecting balls is bounded above by a constant $d$, depending on the parameters $n$, $r$, $D$ characterizing $\Ricco$, but otherwise independent of $\epsilon$. Thus, independently of $\epsilon$, the largest cluster of mutually intersecting balls which can occur for any $M\in\Ricco$ is $B(i_1,i_2,\ldots,i_d)$. 0.5 cm As usual \[BT\], we denote by $\partial_{\eta}$ the inclusion map which ignores the $i_{\eta}$-th open ball $B(i_{\eta})$ in the generic cluster $B(i_1,\ldots,i_{\eta},\ldots)$. For instance $$\begin{aligned} \partial_i\colon\coprod_{i}B(i,j,k)\to{B(j,k)}\end{aligned}$$ By considering the cochain complexes $C^*_{\frak g}(B(i,j,\ldots))$ associated with the intersections $B(i,j,\ldots)$, one can consider the restriction map ${\delta}_{\eta}$ defined by the image of the cocycles under the pullback map induced by the inclusion $\partial_{\eta}$.
For instance, corresponding to the above inclusion we get $$\begin{aligned} {\delta}_i\colon{C^*_{\frak g}(B(j,k))}\to \prod_{i}C^*_{\frak g}(B(i,j,k))\end{aligned}$$ 0.5 cm Thus, associated with any given minimal geodesic ball covering $\{B(i)\}$, there is a sequence of inclusions relating the intersections $B(i,j,k,\ldots)$ with the packing $\coprod_{i}B(i)$ $$\begin{aligned} \ldots\to\coprod_{i<j<k}B(i,j,k)\to\coprod_{i<j}B(i,j) \to\coprod_{i}B(i)\to{M} \label{inclusions}\end{aligned}$$ and a corresponding sequence of restrictions $$\begin{aligned} C^*_{\frak g}(M)\to\prod_{i}{C^*_{\frak g}(B(i))}\to \prod_{i<j}{C^*_{\frak g}(B(i,j))}\to\prod_{i<j<k} {C^*_{\frak g}(B(i,j,k))}\to\ldots \label{restrictions}\end{aligned}$$ If in this latter sequence we replace the restriction maps with the corresponding difference operator $\delta\colon\prod {C^*_{\frak g}(B(i_1,\ldots,i_{\eta}))}\to \prod{C^*_{\frak g}(B(i_1,\ldots,i_{\eta},i_{\gamma}))}$ defined by the alternating difference ${\delta}_1-{\delta}_2+\cdots\pm{\delta}_{\eta}\mp{\delta}_{\gamma}$, then we get the generalized Mayer-Vietoris exact sequence $$\begin{aligned} 0\to{C^*_{\frak g}(M)}\to\prod_{i}{C^*_{\frak g}(B(i))}\to \prod_{i<j}{C^*_{\frak g}(B(i,j))}\to\prod_{i<j<k} {C^*_{\frak g}(B(i,j,k))}\to\ldots \label{GMV}\end{aligned}$$ 0.5 cm The sequences (\[inclusions\]), (\[restrictions\]), (\[GMV\]) intermingle the combinatorics of the geodesic ball packings and of the corresponding coverings with the topology of the underlying manifold $M$. 0.5 cm The function that associates with a manifold of bounded geometry the number of distinct geodesic ball packings extends continuously, (in the Gromov-Hausdorff topology), through the Mayer-Vietoris sequence (\[GMV\]). Thus, our strategy will be to enumerate all possible ${\epsilon}/2$-geodesic ball packings, modulo a permutation of their centers $\{p_i\}$, and then extend by continuity the resulting counting function to the corresponding coverings.
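The alternating difference above is the usual Čech coboundary, and $\delta\circ\delta=0$ can be checked on a toy instance. The sketch below uses constant real coefficients on a cover of three mutually intersecting balls; the indices and values are illustrative only, whereas the complex of the text carries ${\frak g}_{\theta}$-coefficients and non-trivial restriction maps:

```python
from itertools import combinations

def coboundary(c, indices, k):
    """Alternating difference delta = d_1 - d_2 + ... on k-cochains,
    with trivial (constant-coefficient) restriction maps."""
    out = {}
    for simplex in combinations(indices, k + 2):
        out[simplex] = sum((-1) ** j * c[simplex[:j] + simplex[j + 1:]]
                           for j in range(k + 2))
    return out

balls = (1, 2, 3)                           # three mutually intersecting balls
c0 = {(i,): float(10 * i) for i in balls}   # a 0-cochain on the B(i)
c1 = coboundary(c0, balls, 0)               # values on the overlaps B(i,j)
c2 = coboundary(c1, balls, 1)               # value on B(1,2,3)
assert c2[(1, 2, 3)] == 0.0                 # delta o delta = 0
```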
0.5 cm In order to view a manifold of given fundamental group ${\pi}_1(M)$ as generated by packing and gluing metric balls we must choose base points and arcs connecting these points. Only in this way will we be able to consider curves in the balls either as elements of the fundamental groups of the balls themselves or as elements of the fundamental group of the manifold $M$. So we choose as base points in the balls their respective $\lambda$ centers $p_1,p_2,\ldots,p_{\lambda}$. One of these centers (say $p_1$) is then chosen as a base point in $M$. Next we need to choose arcs $L_{ij}$ connecting the points $p_i$ and $p_j$. This amounts to giving a [*reference*]{} intersection pattern for the geodesic ball coverings, namely a [*reference one-skeleton*]{} ${\Gamma}^{(1)}_{\epsilon}(M;ref)$. If $L_i$ is a path in $M$, corresponding to a path in the graph ${\Gamma}^{(1)}_{\epsilon}(M;ref)$, connecting $p_1$ with $p_i$, and $C_i$ is a curve in the ball $B(i)$, then ${\hat C}_i\equiv L_i*C_i*L^{-1}_i$ is an equivalence class in ${\pi}_1(M)$. In this connection, it is particularly helpful that isomorphic, (in the combinatorial sense), one-skeleton graphs correspond to manifolds with a same homotopy type. 0.5 cm The remarks above imply that in order to enumerate all possible coverings, we [*need*]{} to start by giving a [*reference*]{} covering $Cov_{ref}$ $$\begin{aligned} \ldots\to\coprod_{i<j<k}B(i,j,k)\to\coprod_{i<j}B(i,j) \to\coprod_{i}B(i)\to{M} \label{refcov}\end{aligned}$$ specifying the homotopy type of the manifolds $M$ in $\Ricco$ we are interested in. We wish to stress that this reference covering is common to many topologically distinct manifolds, for we are not specifying a priori the topology of each ball.
Recall that according to Gromov’s coarse grained point of view, two manifolds $M_1$ and $M_2$ in $\Ricco$, having the same $\epsilon$-geodesic ball covering, define an $\epsilon$-Hausdorff approximation of a same manifold, without necessarily being homeomorphic to each other. For $\epsilon$ small enough, such approximating manifolds only share the homotopy type, (and hence have isomorphic fundamental groups). Thus the reference covering $Cov_{ref}$, (\[refcov\]), may be considered as a bookkeeping device for fixing the homotopy type, (and in particular the fundamental group), of the class of manifolds we are interested in. 0.5 cm From a combinatorial point of view, $Cov_{ref}$ labels the intersection pattern of centers $\{p_i\}$ of the balls in a given order. The strategy is to determine the number of different ways of associating with such centers the actual balls, $\{{\tilde B}_a,H^*_{\frak g}({\tilde B}_a)\}$, after which the geodesic balls are modelled, [*i.e.*]{}, we have to fill the [*reference*]{} balls $B(i)$ with some topology. Any two such correspondences between centers and model balls are considered equivalent if they can be obtained one from the other through the action of the symmetric group acting on the centers. In this way we avoid counting as distinct the re-labellings of the centers of a same pattern of model balls. We prove that in this way we can obtain all possible coverings. 0.5 cm Let $Perm$ denote the group of permutations of the collection of balls $\{B(i)\}\in {Cov}_{ref}$, namely the symmetric group $S_{\lambda}$ acting on the $\lambda$ centers $\{p_1,\ldots,p_{\lambda}\}$. Also let $\{C^*_{\frak g}(a)\}$, with $a=1,2,\ldots,|C^*_{\frak g}|$, denote the set of possible cochain groups for the model balls $\{{\tilde B}_a\}$, where $|C^*_{\frak g}|$ denotes the cardinality of $\{C^*_{\frak g}(a)\}$. We are tacitly assuming that different balls may have the same $C^*_{\frak g}(a)$.
In the final result, however, we shall allow $|C^*_{\frak g}|\to (n+1)\lambda$. As often emphasized, $\{C^*_{\frak g}(a)\}$ is the typical set of [*colours*]{} for the balls $B(i)$ coming from the model balls $\{{\tilde B}_a\}$ in the space forms ${\tilde M}^r_a$. Similarly, we denote by $\{C^*_{\frak g}(a,b)\}\subset \{C^*_{\frak g}(a)\}$ the set of possible cochain groups for the pairwise intersections $B(i,j)$; by $\{C^*_{\frak g}(a,b,c)\}\subset \{C^*_{\frak g}(a)\}$ the possible cochain groups for the triple intersections $B(i,j,k)$, etc. All such groups are assumed to be related by a sequence of restrictions analogous to (\[restrictions\]), namely $$\begin{aligned} \prod_{a}{C^*_{\frak g}(a)}\to \prod_{a<b}{C^*_{\frak g}(a,b)}\to\prod_{a<b<c} {C^*_{\frak g}(a,b,c)}\to\ldots \label{modrest}\end{aligned}$$ 0.5 cm It is important to stress that even if the balls, (and their intersections), are topologically trivial, (namely if they are contractible), the labels associated with the $C^*_{\frak g}(a)$ are non-trivial. Indeed, for a contractible ball we get $$\begin{aligned} H^0_{\frak g}(B(i))\simeq {\frak g}_{{\theta}_i}\end{aligned}$$ while the remaining twisted cohomology groups all vanish. Thus in this case, the label is provided by the local flat bundles over $B(i)$ associated with the representation $\theta$. Since there is no canonical isomorphism between these flat bundles over the balls $B(i)$, we have to assume that the labels $C^*_{\frak g}(a)$ are distinct.
0.5 cm Let us consider the set of all functions, $f\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)$, compatible with the morphisms of the two complexes (\[inclusions\]) and (\[modrest\]), where $$\begin{aligned} f_{(i_1\ldots{i_p})}\colon\{B(i_1,\ldots,i_p)\} \to\{C^*_{\frak g}(a_1,\ldots,a_p)\}\end{aligned}$$ is the function which associates with the generic mutual intersection of balls $B(i,j,\ldots)$ the corresponding cochain group $C^*_{\frak g}(B(i,j,\ldots))=C^*_{\frak g}(a_1,a_2,\ldots)$ out of the possible ones $\{C^*_{\frak g}(a,b,\ldots)\}$. 0.5 cm Let $\sigma\in{Perm}$ be a permutation acting on the balls $\{B(i)\}$. Any such $\sigma$ can be made to act on the set of functions $f$, by defining $$\begin{aligned} ({\sigma}^*f)(B(i_1,\ldots,i_k))= f({\sigma}B(i_1,\ldots,i_k))\end{aligned}$$ for any $1\leq{k}\leq{d}$, and where $$\begin{aligned} {\sigma}B(i_1,\ldots,i_k)\equiv {\sigma}B(i_1)\cap\ldots\cap{\sigma}B(i_k)\end{aligned}$$ Thus, the equivalence class of configurations $f$ under the action of $Perm$ is well defined; it is the [*Combinatorial Pattern*]{} of the geodesic ball covering $f(Cov_{ref})$ in the representation $[\theta]$. 0.5 cm Notice that if we assume that the reference covering $Cov_{ref}$ is explicitly realized on a given manifold $M$, then the orbit of the map $$\begin{aligned} f^{(ref)}_{(i_1\ldots{i_p})}\colon\{B(i_1,\ldots,i_p)\} \to\{C^*_{\frak g}(M;i_1,\ldots,i_p)\}\end{aligned}$$ which allocates the balls $\{B(i)\}$ of the reference covering on their centers, corresponds to the given reference covering $Cov_{ref}$ and to all isomorphic coverings that can be obtained from the reference covering by relabelling the centers of the balls.
Not all possible maps $f_{(i_1\ldots{i_p})}$ belong to the orbit of $f^{(ref)}_{(i_1\ldots{i_p})}$, and in general we can prove the following: In a given conjugacy class of (irreducible) representations $[\theta]\in \frac{Hom({\pi}_1(M),G)}{G}$, and given a set of possible colours (\[modrest\]), any two minimal geodesic ${\epsilon}$-ball coverings (\[inclusions\]) are distinct if and only if they correspond to distinct orbits of the permutation group $Perm$ acting on the set of all functions $f\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)$. 0.5 cm [*Proof*]{}. Let $M\in\Ricco$ be a given manifold. Let $Cov_1$ and $Cov_2$ be $\epsilon$-geodesic ball coverings of $M$ having the same number of balls. They are [*isomorphic*]{} if there is an injective mapping $h$ of the balls of $Cov_1$ onto those of $Cov_2$ which satisfies the following condition: [*(i)*]{} Any two distinct balls $B_{\alpha}$ and $B_{\beta}$ of $Cov_1$ mutually intersect each other if and only if their images $h(B_{\alpha})$ and $h(B_{\beta})$ mutually intersect each other in $Cov_2$. This condition is extended to the mutual intersection of any number ($\leq d$) of balls, and can be rephrased in terms of the nerves associated with the coverings $Cov_1$ and $Cov_2$, by saying that vertices of ${\cal N}(Cov_1)$ define a $k$-simplex if and only if their images under $h$ define a $k$-simplex in ${\cal N}(Cov_2)$, (see section 2.1). 0.5 cm Let $f^{(1)}\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)^{(1)}$ and $f^{(2)}\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)^{(2)}$ be two functions which are in distinct orbits of the symmetric group. Let us assume that they give rise to two isomorphic geodesic ball coverings according to the definition recalled above. Then there is a mapping $h$ of the balls of the covering $f^{(1)}(Cov_{ref})$ onto the balls of the covering $f^{(2)}(Cov_{ref})$ such that the corresponding nerves are isomorphic. We can use the map $h$ to relabel the vertices of $f^{(2)}(Cov_{ref})$.
Thus $f^{(2)}$ and $f^{(1)}$ do necessarily belong to the same orbit of the symmetric group, and we get a contradiction. Conversely, let us assume that $f^{(1)}\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)^{(1)}$ and $f^{(2)}\equiv(f_{(i)},f_{(ij)},f_{(ij\ldots)},\ldots)^{(2)}$ are in the same orbit of the symmetric group. Then the permutation which maps $f^{(1)}$ to $f^{(2)}$ is an injective mapping of the covering defined by $f^{(1)}$ onto the covering defined by $f^{(2)}$, and the two coverings are isomorphic. 0.5 cm Since the functions $f$ must be compatible with the morphisms of the complexes (\[inclusions\]) and (\[modrest\]), and the action of the symmetric group extends naturally through (\[inclusions\]), there is no need to consider all functions $f_{(i)}$, $f_{(ij)}$, $f_{(ij\ldots)}$ as varying independently. To generate a geodesic ball covering it suffices to assign the set of all functions, $f\equiv\{f_{(i)}\}$, $$\begin{aligned} f_{(i)}\colon\{B(i)\} \to\{C^*_{\frak g}(a)\}\end{aligned}$$ which associate with the generic ball $B(i)$ the corresponding cochain group $C^*_{\frak g}(B(i))=C^*_{\frak g}(a)$ out of the possible ones $\{C^*_{\frak g}(a)\}$. The remaining functions $f_{(ij\ldots)}$ are then determined by the given reference pattern (\[inclusions\]). This circumstance simply corresponds to the fact that the assignment of a [*geodesic ball packing*]{}, [*i.e.*]{}, of $f_{(i)}$, characterizes a corresponding geodesic ball covering, ([*viz.*]{}, the one obtained by doubling the radius of the balls), and if we estimate the number of distinct geodesic ball packings we can also estimate the number of the corresponding geodesic ball coverings. 0.5 cm Thus we need to count the number of the distinct patterns associated with the orbits of $f_{(i)}$ under the symmetric group. This can be accomplished through the use of Pólya’s enumeration theorem \[Bo\]. 
0.5 cm Entropy function in a given representation of ${\pi}_1(M)$ ---------------------------------------------------------- 0.5 cm Let us write the generic permutation $\sigma\in{Perm}$ as a product of disjoint cyclic permutations acting on the set of balls $\{B(i)\}$. Denote by $j_k(\sigma)$ the number of cyclic permutations (cycles) of $\sigma$ of length $k$. Recall that the [*cycle sum*]{} of $Perm$ is the polynomial with integer coefficients in the indeterminates $\{t_k\}=t_1,t_2,\ldots,t_{\lambda}$ given by $$\begin{aligned} C(Perm;t_1,\ldots,t_{\lambda})= \sum_{\sigma\in{Perm}}\prod_{k=1}^{\lambda}t_k^{j_k(\sigma)}.\end{aligned}$$ Since $Perm$ is in our case the symmetric group $S_{\lambda}$ acting on $\lambda$ objects, we get $$\begin{aligned} C(S_{\lambda};t_1,\ldots,t_{\lambda})= \sum\frac{{\lambda}!}{\prod_{k=1}^{\lambda}k^{j_k}j_k!} t_1^{j_1}t_2^{j_2}\ldots{t_{\lambda}^{j_{\lambda}}}\end{aligned}$$ where the summation is over all partitions $j_1+2j_2+\ldots+{\lambda}j_{\lambda}=\lambda$. 0.5 cm In order to apply Pólya’s theorem we need to introduce a function $w\colon\{C^*_{\frak g}(a)\}\to{E}$, where $E$ is an arbitrary commutative ring. Such a $w$ is meant to provide the weight of the possible twisted cochain groups $\{C^*_{\frak g}(a)\}$. In this way, one can define the weight of a configuration $f$ of such groups over the packing as $$\begin{aligned} w(f)=\prod_{i}w(f(B(i)))\end{aligned}$$ Any two configurations that are equivalent under the action of $Perm=S_{\lambda}$ have the same weight, and the weight of the pattern associated with the $S_{\lambda}$-orbit, ${\cal O}_h$, of an $f$ is just $w({\cal O}_h)=w(f)$. By summing over all distinct orbits ${\cal O}_h$, with $h=1,\ldots,l$, one gets the pattern sum $$\begin{aligned} S=\sum_{h=1}^lw({\cal O}_h)\end{aligned}$$ where ${\cal O}_1,{\cal O}_2,\ldots,{\cal O}_l$ are the distinct patterns of the geodesic ball packings we wish to enumerate.
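The cycle sum of $S_\lambda$ can be computed mechanically from the partition formula above. A minimal sketch (the helper names are ours):

```python
from math import factorial, prod

def partitions(n, least=1):
    """Yield the cycle types j_1 + 2 j_2 + ... = n as dicts {k: j_k}."""
    if n == 0:
        yield {}
        return
    for part in range(least, n + 1):
        for rest in partitions(n - part, part):
            d = dict(rest)
            d[part] = d.get(part, 0) + 1
            yield d

def cycle_sum(lam, t):
    """C(S_lam; t_1,...,t_lam): sum over cycle types of
    lam!/(prod_k k^{j_k} j_k!) * prod_k t_k^{j_k}."""
    return sum(
        factorial(lam)
        // prod(k ** j * factorial(j) for k, j in js.items())
        * prod(t[k] ** j for k, j in js.items())
        for js in partitions(lam)
    )

# setting every t_k = 1 counts each permutation once, so C = lam!
lam = 5
assert cycle_sum(lam, {k: 1 for k in range(1, lam + 1)}) == factorial(lam)
```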
0.5 cm Pólya’s enumeration theorem, (see [*e.g.*]{}, \[Bo\]), relates the above pattern sum to the cycle sum, namely $$\begin{aligned} |Perm|S=C(Perm;s_1,\ldots,s_{\lambda})\end{aligned}$$ where $|Perm|$ is the order of the group of permutations, $Perm$, considered, (thus, $|Perm|={\lambda}!$ in our case), and $s_k$ is the $k$-th figure sum $$\begin{aligned} s_k=\sum_{a}(w(C^*_{\frak g}(a)))^k \label{figure}\end{aligned}$$ where the sum extends to all cochain complexes in $\{C^*_{\frak g}(a)\}$. 0.5 cm Given the generic cochain $C^*_{\frak g}(a)$, a natural candidate for the weight $w(C^*_{\frak g}(a))$ is its corresponding torsion $$\begin{aligned} w(C^*_{\frak g}(a,\bar{\mu},\bar{\nu})) \equiv {\Delta}^{\frak g}(a)\end{aligned}$$ where ${\Delta}^{\frak g}(a)$ is the Reidemeister-Franz torsion of the cochain complex $C^*_{\frak g}(a)$ evaluated with respect to the unscaled reference volume elements $\bar{\mu}$ and $\bar{\nu}$ induced by those for the cochain complex $C^*_{\frak g}(a)$ and the corresponding cohomology $H^*_{\frak g}(a)$. 0.5 cm It is preferable to have these weights expressed in terms of the reference volumes $\bar{\mu}$ and $\bar{\nu}$ rather than the $\epsilon$-scaled volume elements $\mu$ and $\nu$; otherwise, according to (\[anomalo\]), we would get $$\begin{aligned} w(C^*_{\frak g}(a;\mu,\nu)) ={\Delta}^{\frak g}(a;\bar{\mu},\bar{\nu}) {\lambda}^{dim(G)\frac{2}{n} [\frac{n}{2}{\chi}(C^*_{\frak g}(a))- {\alpha}(C^*_{\frak g}(a))]}\end{aligned}$$ where we have set $$\begin{aligned} {\alpha}(C^*_{\frak g}(a))= [dim(G)]^{-1}[\sum_q(-1)^qqb(q;a)]\end{aligned}$$ with ${\chi}(C^*_{\frak g}(a))$ and $b(q;a)$ respectively denoting the Euler-Poincaré characteristic and the $q$-th Betti number of $C^*_{\frak g}(a)$.
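Pólya's identity $|Perm|\,S=C(Perm;s_1,\ldots,s_{\lambda})$ is easy to verify by brute force on a small instance. In the sketch below $\lambda=3$, and two hypothetical colours with weights $2$ and $3$ stand in for the torsions ${\Delta}^{\frak g}(a)$:

```python
from itertools import product, permutations
from math import factorial, prod

lam, weights = 3, (2, 3)   # 3 balls, two hypothetical colours

# left side: pattern sum S, one term per S_lam-orbit of colourings f,
# each orbit weighted by w(f) = prod_i w(f(B(i)))
orbits = {min(tuple(f[i] for i in p) for p in permutations(range(lam)))
          for f in product(range(len(weights)), repeat=lam)}
S = sum(prod(weights[a] for a in f) for f in orbits)

# right side: cycle sum with the figure sums s_k = sum_a w_a^k substituted
def cycle_lengths(p):
    seen, lengths = set(), []
    for start in range(len(p)):
        if start not in seen:
            count, j = 0, start
            while j not in seen:
                seen.add(j)
                j = p[j]
                count += 1
            lengths.append(count)
    return lengths

s = {k: sum(w ** k for w in weights) for k in range(1, lam + 1)}
C = sum(prod(s[k] for k in cycle_lengths(p))
        for p in permutations(range(lam)))

assert factorial(lam) * S == C   # Polya: |Perm| S = C(Perm; s_1,...,s_lam)
```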
0.5 cm Such a choice for the weight enhances the effect of the boundaries of the balls, as follows from the presence of the anomalous scaling term $$\begin{aligned} {dim(G)\frac{2}{n} [\frac{n}{2}{\chi}(C^*_{\frak g}(a))- {\alpha}(C^*_{\frak g}(a))]} \label{boundaries}\end{aligned}$$ The entropic contribution of these boundaries to the enumeration of packings can be easily seen to be $$\begin{aligned} {\lambda}^{dim(G)\frac{2}{n} [\frac{n}{2}{\chi}(C^*_{\frak g}(a))- {\alpha}(C^*_{\frak g}(a))]\lambda}\end{aligned}$$ thus it is of a factorial nature, and as such quite problematic when controlling the thermodynamic limit of the theory. As stressed, its origin lies in the fact that by using as reference measures the $\epsilon$-scaled $\mu$ and $\nu$, we are implicitly providing an intrinsic topological labelling also for the boundaries of the balls, (indeed (\[boundaries\]) would vanish, by Poincaré duality, if the ball were closed and without boundary). Such boundary terms are not relevant if we are interested in coverings, and thus the weight $w(C^*_{\frak g}(a;\mu,\nu))$ is too detailed for our enumerative purposes. The proper choice is rather $w(C^*_{\frak g}(a;\bar{\mu},\bar{\nu}))$. 0.5 cm The remarks above are an example of the typical strategy inherent in Pólya’s theorem. Indeed, it is exactly the proper choice of the weight to be associated with the colours that allows one to select the details of interest in the patterns we wish to enumerate. 0.5 cm With these remarks out of the way, it can be easily verified that the weight of a configuration $f$ of the cochain complexes $\{C^*_{\frak g}(a)\}$ over the packing $\{B(i)\}$ is nothing but the Reidemeister torsion, in the given representation $[\theta]$ and with respect to the product measures $\prod_i{\bar\mu}_i$, $\prod_i{\bar\nu}_i$, of the disjoint union $\coprod_i\{B(i)\stackrel{f}{\rightarrow}{C^*_{\frak g}(a)}\}$, (this is an immediate application of the cardinality law (\[long\]) for the torsion).
0.5 cm We can write down the generic $k$-th figure sum (\[figure\]), and a standard application of Pólya’s theorem would provide, at least in principle, the required enumeration of the distinct coverings. However, for large values of the filling function $\lambda$ explicit expressions are extremely difficult to obtain. Even for small values of $\lambda$ the evaluation of the cycle sum corresponding to the $k$-th figure sums is unwieldy, owing to the non-trivial structure of the weight we are using. Nonetheless, a useful estimate of the number of distinct coverings can be easily extracted from Pólya’s theorem. This estimate will be sufficient to characterize in a geometrically significant way the rate of growth, with $\lambda$, of the number of geodesic ball packings. 0.5 cm According to Pólya’s theorem, we get that $$\begin{aligned} &\sum_{h=1}^l{\Delta}^{\frak g}({\cal O}_h)=\nonumber\\ &\sum_{\sigma} \frac{1}{J_1(\sigma)!\ldots J_{\lambda}(\sigma)!} \left( \frac{\sum_{a}w(C^*_{\frak g}(a))}{1} \right) ^{J_1(\sigma)} \ldots\left( \frac{\sum_{a} w(C^*_{\frak g}(a))^{\lambda}} {\lambda}\right) ^{J_{\lambda}(\sigma)} \label{pollo}\end{aligned}$$ where the summation is over all partitions $J_1(\sigma)+2J_2(\sigma)+\ldots+{\lambda}J_{\lambda}(\sigma)= \lambda$. 0.5 cm Since we are interested in the large $\lambda$ behavior of the above expression, it is convenient to rewrite the figure sums in (\[pollo\]) in a slightly different way.
Let ${\tilde w}(C^*_{\frak g}(a))$ denote the value of $w(C^*_{\frak g}(a))$ at which the torsion ${\Delta}^{\frak g}(C^*_{\frak g}(a))$ attains its maximum over the set of possible colours $\{C^*_{\frak g}(a)\}$, [*viz.*]{}, $$\begin{aligned} {\tilde w}(C^*_{\frak g})=\max_{a}\left\{ {\Delta}^{\frak g}(C^*_{\frak g}(a)) \right\}\end{aligned}$$ Thus, we can write $$\begin{aligned} s_k=\sum_{a} w^k(C^*_{\frak g}(a;{\mu},{\nu})) \leq |C^*_{\frak g}| {\tilde w}^k(C^*_{\frak g}(a;{\mu},{\nu})) \label{figurasy}\end{aligned}$$ where $|C^*_{\frak g}|$ denotes the number of inequivalent cochain groups $C^*_{\frak g}(a)$ providing the possible set of colours of the balls $B(i)$. 0.5 cm The generating identity determining the cycle sum for the symmetric group is $$\begin{aligned} \sum_{j=0}^{\infty}C(t_1,t_2,\ldots,t_j)u^j/j!= \exp\left( ut_1+u^2t_2/2+u^3t_3/3+\ldots \right) \label{generating}\end{aligned}$$ where $u$ is a generic indeterminate. For notational convenience, let us set $$\begin{aligned} \tau\equiv {\tilde w}(C^*_{\frak g}(a;{\mu},{\nu}))\end{aligned}$$ If we replace in (\[generating\]) $t_k$ with the bound, (\[figurasy\]), for the figure sum $s_k$, [*viz.*]{}, $$\begin{aligned} t_k= |C^*_{\frak g}|{\tau}^k\end{aligned}$$ then, in the sense of generating functions, we get $$\begin{aligned} \sum_{j=0}^{\infty}C(t_1,t_2,\ldots,t_j)u^j/j!= \exp\left[ |C^*_{\frak g}| (u\tau+(u\tau)^2/2+(u\tau)^3/3+\ldots) \right]= (1-u\tau)^{-|C^*_{\frak g}|}\end{aligned}$$ Thus $$\begin{aligned} C(t_1,t_2,\ldots,t_{\lambda})= \frac{(|C^*_{\frak g}|+\lambda-1)!}{(|C^*_{\frak g}|-1)!} {\tilde w}^{\lambda}\end{aligned}$$ and according to Pólya’s enumeration theorem we get that the pattern sum over all distinct orbits of the permutation group, acting on the $\{C^*_{\frak g}(a,b,\ldots)\}$ coloured covering (\[inclusions\]), is bounded by $$\begin{aligned} \sum_{h=1}^l{\Delta}^{\frak g}({\cal O}_h) \leq \frac{(|C^*_{\frak g}|+\lambda-1)!}{{\lambda}!(|C^*_{\frak g}|-1)!}
{\tilde w}^{\lambda} \label{somma}\end{aligned}$$ Notice that the combinatorial factor in the above expression is exactly the number of $\lambda$-combinations with repetition of $|C^*_{\frak g}|$ distinct objects. 0.5 cm The colour of each ball $B(i)$ has a degeneracy, (the possible [*shades*]{}), equal to $n+1$, where $n$ is the dimension of the manifold $M$. Indeed, since each ball $B(i)$ is topologically non-trivial, its cohomology (with local coefficients) $H^*_{\frak g}$ is generated by $n+1$, a priori distinct, groups $H^{l}_{\frak g}$, with $l=0,1,\ldots,n$. Since there are $\lambda$, a priori distinct, balls we shall set in general $$\begin{aligned} |C^*_{\frak g}|=(n+1)\lambda;\end{aligned}$$ with this assumption, and for $\lambda>>1$, we get, by applying Stirling’s formula $$\begin{aligned} {\left[ \frac{(|C^*_{\frak g}|+\lambda-1)!}{{\lambda}! (|C^*_{\frak g}|-1)!}\right] }_{|C^*_{\frak g}|=(n+1)\lambda } \simeq \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right)\end{aligned}$$ 0.5 cm It follows from the above results that the asymptotics of the counting function, enumerating the distinct geodesic ball packings with a torsion ${\Delta}^{\frak g}({\cal O}_h)$ in a given representation $[\theta]$, and with respect to the product measures $\prod_i{\bar\mu}_i$, $\prod_i{\bar\nu}_i$, can be read off from the bound $$\begin{aligned} \sum_{h=1}^l{\Delta}^{\frak g}({\cal O}_h) \leq \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}{\tilde w} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right) \label{asintoto}\end{aligned}$$ 0.5 cm Explicitly, let $B_{pack}({\Delta}^{\frak g};\lambda)$ denote the number of distinct geodesic ball packings with $\lambda$ balls and with Reidemeister torsion ${\Delta}^{\frak g}$.
In terms of $B_{pack}({\Delta}^{\frak g};\lambda)$ we can write $$\begin{aligned} \sum_{h=1}^l{\Delta}^{\frak g}({\cal O}_h)= \sum_{\Delta} B_{pack}({\Delta}^{\frak g};\lambda){\Delta}^{\frak g}\end{aligned}$$ 0.5 cm Since the bound (\[asintoto\]) is [*a fortiori*]{} true for each separate term appearing in the sum, we get an estimate of the asymptotics of the number of distinct geodesic ball packings with torsion ${\Delta}^{\frak g}$ $$\begin{aligned} B_{pack}({\Delta}^{\frak g};\lambda)\leq \frac{1}{\sqrt{2\pi}{\Delta}^{\frak g}({\cal O}_h)} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}{\tilde w} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right) \label{packings}\end{aligned}$$ 0.5 cm Rather unexpectedly, this asymptotics is of an exponential nature, whereas one would have guessed that, (allowing for repetitions), there would be a factorial number of ways of distributing $(n+1)\lambda$ distinct labels, (the cohomologies $H^*_{\frak g}$), over $\lambda$ [*empty*]{} balls. This latter is obviously a correct guess, but it does not take into account the action of the symmetric group on the coordinate labellings of the centers of the balls. Since we are interested in packings which are distinct under re-labellings, we have to factor out this action, and this reduction is responsible for the transition from a factorial to an exponential growth in (\[packings\]). 0.5 cm Another relevant aspect of (\[packings\]) lies in its dependence on the Reidemeister torsion. At this stage, this is simply a consequence of the choice we made for the weight in applying Pólya’s theorem.
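The exponential growth rate appearing in the Stirling estimate can be checked numerically: with $|C^*_{\frak g}|=(n+1)\lambda$ the combinatorial factor is the binomial coefficient $\binom{(n+2)\lambda-1}{\lambda}$, and its ratio at consecutive values of $\lambda$ approaches the base $(n+2)^{n+2}/(n+1)^{n+1}$. A small sketch (the choice $n=2$ is illustrative):

```python
from math import comb

# With |C*| = (n+1)*lam the combinatorial factor in the bound equals the
# number of lam-combinations with repetition, i.e. comb((n+2)*lam - 1, lam).
n = 2                                     # illustrative dimension
base = (n + 2) ** (n + 2) / (n + 1) ** (n + 1)

def factor(lam):
    # ((n+1)lam + lam - 1)! / (lam! ((n+1)lam - 1)!)
    return comb((n + 2) * lam - 1, lam)

assert factor(1) == n + 1                 # sanity check at lam = 1
lam = 200
ratio = factor(lam + 1) / factor(lam)     # exponential growth rate
assert abs(ratio / base - 1) < 0.02       # ~ base, up to O(1/lam) corrections
```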
Had we chosen $w(C^*_{\frak g}(a))=1$, we would have obtained in place of (\[packings\]) the estimate 0.5 cm $$\begin{aligned} \frac{1}{\sqrt{2\pi}} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right)\end{aligned}$$ 0.5 cm This gives a bound on all possible $\epsilon$-geodesic ball packings on a manifold of given fundamental group ${\pi}_1(M)$, which is consistent with the data coming from numerical simulations, and in dimension $n=2$ is in remarkable agreement with the known analytical estimates, \[BIZ\]. 0.5 cm The use of the torsion as weight allows for a finer bound, where we can distinguish between different packings, (each packing being labelled by the corresponding torsion ${\Delta}^{\frak g}({\cal O}_h;{\bar\mu},{\bar\nu})$). For packings this resolution is not particularly significant, since the torsion of a packing does not have any distinguished topological meaning. However, as we pass from the geodesic ball packing $\coprod_iB(i,\epsilon/2)$ to the corresponding covering $\cup_iB(i,\epsilon)$, the torsion, now evaluated for the covering, gets identified with the torsion of the underlying manifold. Correspondingly, the bound (\[packings\]) can be extended by continuity to geodesic ball coverings too. The explicit passage from packings to the corresponding coverings is an elementary application of Gromov’s compactness for the space $\Ricco$, and we get the following 0.5 cm Let $M\in\Ricco$ denote a manifold of bounded geometry with fundamental group ${\pi}_1(M)$, and let $\theta\colon{\pi}_1(M)\to{G}$ be an irreducible representation of ${\pi}_1(M)$ into a (semi-simple) Lie group G.
For $\epsilon>0$ sufficiently small, let $\{B_M(p_i,\epsilon)\}$ denote the generic minimal geodesic ball covering of $M$, whose balls are labelled by the flat bundles ${\frak g}_{\theta}(B_M(p_i,\epsilon))$ associated with the restrictions of $\theta$ to $B_M(p_i,\epsilon)$. If we denote by $N^{(0)}_{\epsilon}(M)\equiv\lambda$ the filling function of the covering, then, for $\lambda\gg 1$, the number, $B_{cov}({\Delta}^{\frak g};\lambda)$, of such distinct geodesic ball coverings is bounded above by $$\begin{aligned} B_{cov}({\Delta}^{\frak g};\lambda)\leq \frac{1}{\sqrt{2\pi}{\Delta}^{\frak g}(M)} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}{\tilde w} \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right) \label{asintoto2}\end{aligned}$$ where ${\Delta}^{\frak g}(M)$ is the Reidemeister torsion of $M$ in the representation $\theta$. [*Proof*]{}. From a combinatorial point of view, the injection of the possible ${\epsilon}/2$-geodesic ball packings $\coprod_i\{B(i)\}$ into the possible $\epsilon$-geodesic ball coverings, $\cup_i\{B(i)\}$, is a continuous map in the Gromov-Hausdorff topology. It is also consistent with the generalized Mayer-Vietoris sequence (\[GMV\]) associated with the possible coverings. Thus, corresponding to this injection, the torsions ${\Delta}^{\frak g}({\cal O}_h)$ of the possible distinct ${\epsilon}/2$-geodesic ball packings naturally extend to the torsions of the underlying coverings $\cup_i\{B(i)\}$. For a given $\lambda$, the bound (\[asintoto\]) depends explicitly on the topology of the packing only through these torsions, and the set of possible packings of a manifold of bounded geometry is compact (indeed a finite set) in the Gromov-Hausdorff topology.
It immediately follows, by the Tietze extension theorem, that (\[asintoto\]) has a continuous extension to the counting of all inequivalent geodesic ball coverings of the manifold $M\in\Ricco$ in the given representation $[\theta]$. Notice that ${\Delta}^{\frak g}(M)$ plays in (\[asintoto2\]) the role of a normalization factor. Since there are $\lambda$ balls $B(i)$ in $M$, and since the Reidemeister torsion is multiplicative, ${\tilde w}^{\lambda}/{\Delta}^{\frak g}(M)$ would be of order 1 if the balls were disjoint, (recall that ${\tilde w}$ is the typical torsion of the generic ball). Thus, roughly speaking, the torsion-dependent factor in (\[asintoto2\]) is a measure of the [*gluing*]{} of the balls of the covering. The dependence on the representation $\theta$ in (\[asintoto2\]) can be made more explicit. To this end, let us assume that each ball is contractible; then from the cardinality formula for the torsion we get that $$\begin{aligned} {\tilde w}={\Delta}^{\frak g}(B(i))=\sqrt{{\Delta}^{\frak g}(S^1)}\end{aligned}$$ where ${\Delta}^{\frak g}(S^1)$ is the torsion of the circle $S^1$ in the given representation $\theta$. Let $A(\theta)$ be the holonomy of a generator of ${\pi}_1(S^1)$ in the given representation $\theta$.
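When $I-A(\theta)$ is invertible, this holonomy determines the circle torsion as $|\det(I-A(\theta))|$. A small numerical sketch (the helper names are ours), with the $U(1)$ holonomy realized as a $2\times2$ real rotation matrix:

```python
import numpy as np

def circle_torsion(A):
    """Torsion of S^1 with holonomy A: |det(I - A)|, defined when I - A is invertible."""
    A = np.asarray(A, dtype=float)
    return abs(np.linalg.det(np.eye(A.shape[0]) - A))

def u1_holonomy(phi):
    # U(1) holonomy of a generator of pi_1(S^1), in the 2x2 real representation
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])
```

For a rotation by $\phi$ this gives $2-2\cos\phi$; the torsion vanishes at $\phi=0$, where $I-A(\theta)$ fails to be invertible and the bundle is no longer acyclic.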
If the matrix $I-A(\theta)$ is invertible, then the flat bundle ${\frak g}_{\theta}$, restricted to the generic ball $B(i)$, is acyclic and $$\begin{aligned} {\Delta}^{\frak g}(S^1)=|\det(I-A(\theta))|\end{aligned}$$ Thus, (\[asintoto2\]) can be written explicitly as $$\begin{aligned} B_{cov}({\Delta}^{\frak g};\lambda)\leq \frac{1}{\sqrt{2\pi}{\Delta}^{\frak g}(M)} \sqrt{\frac{n+2}{n+1}} {\left [ \frac{(n+2)^{n+2}}{(n+1)^{n+1}}|\det(I-A(\theta))| \right ] }^{\lambda} {\lambda}^{-\frac{1}{2}} \left( 1+O({\lambda}^{-\frac{3}{2}}) \right) \label{asintoto3}\end{aligned}$$ Summing over coverings and the volume of the space of Riemannian structures =========================================================================== The dependence on the representation $\theta$ in (\[asintoto2\]) and (\[asintoto3\]) comes from the fact that we are counting distinct coverings on a manifold $M$ endowed with a given flat bundle ${\frak g}_{\theta}$. We can interpret this in an interesting way by saying that (\[asintoto2\]) is a functional associating to an equivalence class of representations $[\theta]$, (or, which is the same, to each flat bundle or to each gauge equivalence class of flat connections), a statistical weight which in a sense counts the inequivalent riemannian structures $M$ can carry. As a matter of fact, from a geometric point of view, the term ${\tilde w}^{\lambda}/{\Delta}^{\frak g}(M)$ is related to a measure density on the representation variety $Hom({\pi}_1(M),G)/G$, and as such it can be used to define an integration on $Hom({\pi}_1(M),G)/G$. The total measure on $Hom({\pi}_1(M),G)/G$ defined by (\[asintoto2\]) is the actual entropy function for geodesic ball coverings. By summing this entropy function over all $\lambda$ we get an expression that can be considered as providing the measure of the set of all riemannian structures of arbitrary volume and of given fundamental group.
In order to elaborate on this point, let us recall that the torsion ${\Delta}^{\frak g}$ is a generalized volume element in $detline(H^*_{\frak g})$. Similarly, we may consider the product ${\tilde w}^{\lambda}$ as an element of $detline(H^*_{\frak g})$ obtained by pull-back from $\oplus{H^*_{\frak g}(B(i))}$ to $H^*_{\frak g}(M)$ according to the Mayer-Vietoris sequence (\[GMV\]). Thus, the ratio ${\tilde w}^{\lambda}/{\Delta}^{\frak g}$ can be thought of as a density to be integrated over the representation variety. As recalled in section $3.2$, (see also \[JW\]), the choice of a representation $\theta$ in the equivalence class $[\theta]$ identifies the twisted cohomology group $H^1_{\frak g}$ with the Zariski tangent space at $[\theta]$ to the representation variety $Hom({\pi}_1(M),G)/G$. Thus, given a choice of a volume element $\nu$ in $H^1_{\frak g}$, we may think of $\frac{{\tilde w}^{\lambda}}{{\Delta}^{\frak g}}\nu$ as providing a measure on (the dense open set of irreducible representations in) $Hom({\pi}_1(M),G)/G$. This construction is actually very delicate, since the representation variety $Hom({\pi}_1(M),G)/G$ is not smooth, and consequently the density bundle may be ill-defined. The singularities come from the reducible representations, and, given a representation $\theta$, the tangent space to the isotropy group of such a $\theta$ is $H^0_{\frak g}$, (again see section $3.2$ or \[JW\]). As already stressed, we shall ignore the singularities of the representation variety in the general setting. One can make an exception for the two-dimensional case, where the structure of $Hom({\pi}_1(M),G)/G$ is better understood. Given the reference measure $\nu$ on $Hom({\pi}_1(M),G)/G$, the associated measure $\frac{{\tilde w}^{\lambda}}{{\Delta}^{\frak g}}\nu$ is ill-behaved as $\lambda\to\infty$.
In order to take care of this problem, we introduce as a damping term the Gibbs factor $\exp[-a\lambda]$, which provides a discretized version of the (exponential of the) volume of the manifold $M$, with $a$ the (bare) cosmological constant. In this way we arrive at the natural setting for providing a measure on the space of riemannian structures of given fundamental group induced by the counting function $B_{cov}({\Delta}^{\frak g};\lambda)$: $$\begin{aligned} Meas(RIEM(M),{\pi}_1(M))=\sum_{\lambda}^{\infty} \int_{Hom({\pi}_1(M),G)/G} B_{cov}({\Delta}^{\frak g},\lambda) \exp[-a\lambda]d\nu([\theta]) \label{moduli}\end{aligned}$$ The volume (\[moduli\]) of the corresponding space of riemannian structures depends on the bare cosmological constant, here in the role of a chemical potential controlling the average number of geodesic balls. It is the partition function, in the $\lambda\to\infty$ limit, of a discrete model of quantum gravity based on geodesic ball coverings, (at least when the action can be reduced to the cosmological term). All this is strongly reminiscent of the interplay between two-dimensional quantum Yang-Mills theory and the intersection pairings on moduli spaces of flat connections on a two-dimensional surface, \[Wt2\]. In this connection it is worth stressing that the representation variety $Hom({\pi}_1(M),G)/G$ has a more direct geometrical meaning which illuminates the connection with quantum gravity better than the usual interpretation as a moduli space of flat connections. In dimension $2$ it is known that, by taking $G\simeq PSL(2,{\Bbb R})$, the representation variety has a connected component homeomorphic to the Teichmüller space of the surface.
Analogous Teichmüller components can be characterized for other choices of the group $G$, (see [*e.g.*]{}, \[Go\], \[Hi\]); thus, considering the representation variety in place of the moduli space of complex structures, as is done for $2D$-gravity, means that we are considering an extension of $2D$-gravity. In dimensions larger than two, the representation variety $Hom({\pi}_1(M),G)/G$ can be interpreted as the deformation space of local $G$-structures on $M$, \[Go\]. For instance, if $G=O(n)$ is the orthogonal group, then $Hom({\pi}_1(M),O(n))/O(n)$ is the moduli space of locally flat Euclidean structures on $M$. This last remark thus explains why it is natural to sum over $Hom({\pi}_1(M),G)/G$. Indeed, since counting coverings can be thought of as an approximation to computing integrals over the space of riemannian structures, the sum over the representation variety $Hom({\pi}_1(M),G)/G$ is needed in order to take into account the [*size*]{} of the set of metrics realizing such $G$-structures in the space of all riemannian structures, (of bounded geometry). Bounds on the critical exponents -------------------------------- The relation between the counting function $B_{cov}({\Delta}^{\frak g};\lambda)$ and the measures on the representation variety $Hom({\pi}_1(M),G)/G$ allows us to provide bounds on the critical exponents associated with $B_{cov}({\Delta}^{\frak g};\lambda)$ by (formal) saddle-point estimation. A sounder application of this technique would require a deeper discussion of the properties of the measure $\nu([\theta])$ on $Hom({\pi}_1(M),G)/G$; in particular, one needs to understand in detail the extension of a measure from the set of irreducible representations to the reducible ones corresponding to the singular points of the representation variety. We are not able to address this interesting question here. Nevertheless, we venture some formal estimates, since the results obtained may be helpful. Let us fix our attention on the two-dimensional case first.
To begin with, let us be more specific on the choice of the group $G$ into which we are considering representations of ${\pi}_1(M)$. A natural example is provided by $G=U(1)$. In such a case, the $U(1)$ conjugation action on $Hom({\pi}_1(M),U(1))/U(1)$ is trivial, and $Hom({\pi}_1(M),U(1))$ is just the Jacobian variety of the Riemann surface generated by the covering considered. Moreover, regardless of the complex structure, one has that, topologically, $Hom({\pi}_1(M),U(1))\simeq U(1)^{2h}$, where $h$ is the genus of the surface, (see [*e.g.*]{}, \[Go\]). We can consider the average of (\[asintoto\]), for $n=2$, as the representation $\theta$ runs over $Hom({\pi}_1(M),U(1))$. Namely $$\begin{aligned} \frac{2}{\sqrt{6\pi}}{\left [\frac{4^4}{3^3}\right ] }^{\lambda} \int_{Hom({\pi}_1(M),U(1))}\frac {({\tilde w})^{\lambda}} {{\Delta}^{\frak g}(M)} {\lambda}^{-1/2} \left( 1+O({\lambda}^{-3/2}) \right)d\nu([\theta]) \label{repaverage}\end{aligned}$$ On applying the Laplace method, and denoting by $Hom_0$ the finite set in $Hom({\pi}_1(M),U(1))$ where the differential of $\log{\tilde w}$ vanishes and where the corresponding Hessian is a non-degenerate quadratic form, we can estimate the above integral in terms of ${\lambda}^{1/2}$, (which is the power of $\lambda$ characterizing the subleading asymptotics in (\[asintoto\])), and obtain the bound $$\begin{aligned} \int_{Hom({\pi}_1(M),G)/G} B_{Cov}({\Delta}^{\frak g},\lambda)\leq\nonumber\end{aligned}$$ $$\begin{aligned} \frac{2(2\pi)^h}{\sqrt{6\pi}}{\left [\frac{4^4}{3^3}\right ] }^{\lambda} \sum_{\theta\in Hom_0}\sqrt{a_{\theta}} \frac {({\tilde w}_{\theta})^{\lambda}} {{\Delta}_{\theta}^{\frak g}(M)} {\lambda}^{-\frac{h}{2}-\frac{1}{2}} \left( 1+O({\lambda}^{-3/2}) \right) \label{prima}\end{aligned}$$ where $a_{\theta}$ is the inverse of the determinant of the Hessian of $\log{\tilde w}$.
As recalled in the introductory remarks, we define the critical exponent ${\eta}(G)$ associated with the entropy function $B_{Cov}({\Delta}^{\frak g},\lambda)$ by means of the relation $$\begin{aligned} \int_{Hom({\pi}_1(M),G)/G} B_{Cov}({\Delta}^{\frak g},\lambda)\equiv Meas{\left(\frac{Hom({\pi}_1(M),G)}{G}\right) } \exp[c\lambda] {\lambda}^{{\eta}_{sup}-3}\end{aligned}$$ where $c$ is a suitable constant, (depending on $G$). Thus, corresponding to (\[prima\]), we get the following upper bound for the critical exponent $\eta(G)$: $$\begin{aligned} \eta(G=U(1)) \leq 2+\frac{1}{2}(1-h) \label{Bcritical}\end{aligned}$$ One may wish to compare this bound with the exact critical exponent associated with (\[Superficie\]), namely $$\begin{aligned} {\eta}_{Sup}=2+(1-h)\left( \frac{c-25- \sqrt{(25-c)(1-c)}}{12}\right) \label{Scritical}\end{aligned}$$ It follows that (\[Bcritical\]) correctly reproduces the KPZ scaling in the case $h=1$, (notice however that $\eta =2$ is not a good testing ground, since this value of the critical exponent holds for genus $h=1$ surfaces regardless both of the presence of matter and of the fluctuations of the metric geometry \[D1\]). The bound (\[Bcritical\]) is strict both for genus $h=0$ and $h>1$, and it remains consistent with KPZ scaling. One may suspect that it is also consistent with a strong coupling of $2D$-gravity with matter, namely in the regime where KPZ is believed not to be reliable. As a matter of fact, conformal field theory has not been used in deriving our entropy estimates. To discuss this point further, let us extend the above analysis to representations in more general groups. Recall that the group $G$ is endowed with an Ad-invariant, symmetric, nondegenerate bilinear form.
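As a quick numerical cross-check of (\[Bcritical\]) against the KPZ exponent (\[Scritical\]), the following sketch (function names ours) evaluates both; at $h=1$ they agree, while at $h=0$, $c=0$ the bound lies strictly above the KPZ value:

```python
import math

def eta_bound_u1(h):
    # Upper bound (Bcritical): eta(U(1)) <= 2 + (1 - h)/2
    return 2.0 + 0.5 * (1 - h)

def eta_kpz(h, c):
    # Exact KPZ exponent (Scritical) for central charge c <= 1
    gamma = (c - 25 - math.sqrt((25 - c) * (1 - c))) / 12.0
    return 2.0 + (1 - h) * gamma
```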
This metric induces \[Go\], for $n=2$, a symplectic structure on $Hom({\pi}_1(M),G)/G$, which can be used to give meaning to the integration of (\[asintoto\]) over $Hom({\pi}_1(M),G)/G$, similarly to what was done in (\[repaverage\]). In more detail, if we denote by $z(\theta)$ the centralizer of ${\theta}({\pi}_1(M))$ in $G$, then the dimension of the Zariski tangent space, $H^1_{\frak g}(M)$, to $Hom({\pi}_1(M),G)/G$ at $\theta$ is given by \[Go\],\[Wa\] $$\begin{aligned} (2h-2)dim(G)+2dim(z(\theta))\end{aligned}$$ Thus, again on formal application of the Laplace method, we get the bound (up to the usual exponential factor $(\frac{4^4}{3^3})^{\lambda}$), $$\begin{aligned} \sum_{\theta\in Hom_0}(2\pi)^{(h-1)dim(G)+ dim(z(\theta))} \sqrt{a_{\theta}} \frac {{\tilde w}_{\theta}^{\lambda}} {{\Delta}_{\theta}^{\frak g}({\cal O}_h)} {\lambda}^{[-\frac{1}{2}(h-1)dim(G)- \frac{1}{2}dim(z(\theta))-\frac{1}{2}]} \left( 1+\ldots \right)\end{aligned}$$ with the obvious meaning of $Hom_0$, and where $\ldots$ stands for terms of order $O({\lambda}^{[-\frac{1}{2}(h-1)dim(G)- \frac{1}{2}dim(z(\theta))-\frac{3}{2}]})$. The corresponding bound on the critical exponent is (for a given $\theta\in Hom_0$) $$\begin{aligned} {\eta}(G)\leq 2+(1-h)\frac{dim(G)}{2}+\frac{1}{2}(1-dim(z(\theta)))\end{aligned}$$ As in the $G=U(1)$ case, the structure of this critical exponent is consistent with KPZ scaling, and it may be a good starting point for discussing a strong-coupling regime between matter and $2D$-gravity. The four-dimensional case ------------------------- The four-dimensional case can be readily discussed along the same lines as the two-dimensional case.
By (formally) integrating (\[asintoto\]) over $Hom({\pi}_1(M),G)/G$, and again applying the Laplace method, we get the asymptotics $$\begin{aligned} \sqrt{\frac{6}{5}} {\left [ \frac{6^6}{5^5} \right ] }^{\lambda} \sum_{\theta\in Hom_0} (2\pi)^{-dim(G){\chi}(M)/4+b(2)/4-1/2} \sqrt{a_{\theta}} \frac {{\tilde w}_{\theta}^{\lambda}} {{\Delta}_{\theta}^{\frak g}({\cal O}_h)} {\lambda}^{[dim(G)\frac{{\chi}(M)}{8}-\frac{b(2)}{8} -\frac{1}{2}]} \left( 1+\ldots \right) \label{asyquattro}\end{aligned}$$ where $\ldots$ stands for terms of order $O({\lambda}^{[dim(G){\chi}(M)/8-b(2)/8 -3/2]})$. As usual, $Hom_0$ denotes the finite set in $Hom({\pi}_1(M),G)$ where the differential of $\log{\tilde w}$ vanishes and where the corresponding Hessian is a non-degenerate quadratic form. Notice that in the above asymptotics we used (\[quattro\]), providing the formal dimension of the Zariski tangent space to $Hom({\pi}_1(M),G)/G$. Notice also that in the above expression we can set ${\Delta}_{\theta}^{\frak g}({\cal O}_h)=1$, (the torsion being trivial in dimension four for a closed manifold; see the remarks in section 3.2; the same holds in dimension two). The bound on the critical exponent corresponding to the estimate (\[asyquattro\]) is (for a given $\theta\in Hom_0$) $$\begin{aligned} {\eta}(G)\leq \frac{5}{2}+ \frac{dim(G){\chi}(M)}{8}-\frac{b(2)}{8}\end{aligned}$$ As recalled in the introductory remarks, this bound is fully consistent with the (limited) numerical evidence at our disposal. In our opinion, a more careful treatment of the integration over the representation variety may considerably improve these bounds, (also in the two-dimensional case). We will not address these interesting questions any further here. In particular, one needs to understand in considerable detail the geometry of $Hom({\pi}_1(M),G)/G$ for $n\geq 3$.
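The four-dimensional exponent bound above is simple enough to tabulate directly; a minimal sketch (names and sample values ours, for illustration only):

```python
def eta_bound_4d(dim_g, chi, b2):
    # Four-dimensional bound: eta(G) <= 5/2 + dim(G)*chi(M)/8 - b(2)/8
    return 2.5 + dim_g * chi / 8.0 - b2 / 8.0

# e.g. a group with dim(G) = 3 on a manifold with chi(M) = 2, b(2) = 2
example = eta_bound_4d(3, 2, 2)
```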
For instance, the rather naive approach to integration over the representation variety adopted above is not suitable for the three-dimensional case. In dimension $3$ the Reidemeister torsion is not trivial, and integration over $Hom({\pi}_1(M),G)/G$ is rather delicate, (see [*e.g.*]{}, \[JW\] for a remarkable analysis), and a separate study is needed to discuss the three-dimensional case in full detail. A look at (\[asintoto\]) shows that the entropy estimates, for $n=3$, have exactly the structure one would expect in this case. Indeed, the integration of the (Ray-Singer) torsion over a moduli space of flat connections, (our $Hom({\pi}_1(M),G)/G$), is the basic ingredient in Witten’s approach to $3D$-gravity \[Sc\]. Details on this case will be presented in a forthcoming paper. [**Acknowledgements**]{} One of the authors (M.C.) would like to thank J. Ambjørn and B.Durhuus for many interesting discussions on entropy estimates in quantum gravity which motivated several improvements. He is also indebted to P.Petersen V for a very informative discussion on recent finiteness results in riemannian geometry. This work was completed while the first author was visiting the State University of New York at Stony Brook, and the second was visiting the Victoria University of Wellington, New Zealand. They thank the respective Departments of Mathematics for their hospitality. This work was supported by the National Institute for Nuclear Physics (INFN), by the EEC-Contract [*Constrained Dynamical System*]{}, (Human Capital and Mobility Programme) n. CHRX-CT93-0362, and by the research project “Metodi geometrici e probabilistici in fisica matematica” of M.U.R.S.T. References {#references .unnumbered} ========== M.E.Agishtein, A.A.Migdal, Mod.Phys.Lett. [**A 6**]{} (1991)1863; Nucl.Phys. [**B350**]{}, (1991)690 J.Ambjørn, Nucl.Phys.[**B**]{}(Proc.Suppl.)[**25A**]{} (1992)8; J.Ambjørn, B.Durhuus, J.Fröhlich, Nucl.Phys.
[**B257**]{}, (1985)433 J.Ambjørn, J.Jurkiewicz, [*On the exponential bound in four dimensional simplicial gravity*]{}, Preprint NBI-HE-94-29 (1994). D.Bessin, C.Itzykson, J.B.Zuber, Adv.Appl.Math. [**1**]{},109(1980); E.Brézin, C.Itzykson, G.Parisi, J.B.Zuber, Commun.Math.Phys. [**59**]{}, 35 (1978); D.Bessin, Commun.Math.Phys. [**69**]{}, 143 (1979); W.J.Tutte, Canad.Journ.Math. [**14**]{}, 21 (1962). B.Bollobás, [*Graph theory an introductory course*]{}, Springer Verlag, [**GTM 63**]{}, New York (1979). R.Bott, L.W.Tu, [*Differential forms in algebraic topology*]{}, Springer Verlag, [**GTM 82**]{}, New York (1982). D.V.Boulatov, [*On entropy of $3$-dimensional simplicial complexes*]{}, Preprint NBI-HE-94-37 (1994). B.Brügmann, E.Marinari, Phys.Rev.Lett. [**70**]{}, (1993)1908. Yu.Burago, M.Gromov, G.Perel’man, [*A.D.Alexandrov spaces with curvature bounded below*]{}, Uspekhi Mat.Nauk [**47:2**]{}, 3-51, 1992, (Russ.Math.Surv. [**47:2**]{}, 1-58, 1992). S.Catterall, J.Kogut, R.Renken, [*On the absence of an exponential bound in four dimensional simplicial gravity*]{}, Preprint CERN-TH-7197/94 (1994). M.Carfora, A.Marzuoli, Class.Quantum Grav. [**9**]{}(1992)595; Phys.Rev.Lett. [**62**]{} (1989)1339. M.Carfora, A.Marzuoli [*Finiteness theorems in Riemannian geometry and lattice quantum gravity*]{}, Contemporary Mathematics [**132**]{}, 171-211 (proceedings of AMS research meeting [*Mathematical Aspects of Classical Field Theory* ]{}, Am.Math.Soc. Eds. M.J.Gotay, J.E.Marsden, V.Moncrief (1992). M.Carfora, A.Marzuoli, Intern.Journ.Modern Phys.A, [**8**]{}, 1933-1980 (1993). M.Carfora, A.Marzuoli, [*Entropy estimates for simplicial quantum gravity*]{} Preprint NSF-ITP-93-59 to appear in Jour. Geom. Physics J.Cheeger [*Critical points of distance functions and applications to Geometry*]{} in [*Geometric Topology: Recent Developments*]{}, P.de Bartolomeis, F.Tricerri eds. Lect.Notes in Math. [**1504**]{}, 1-38, (1991). 
M.M.Cohen, [*A course in simple homotopy theory*]{}, GTM 10, (Springer Verlag, New York, 1973); see also J.W.Milnor, Bull.Amer.Math.Soc. [**72**]{} 358 (1966); C.P.Rourke, B.J.Sanderson [*Introduction to Piecewise-Linear Topology*]{}, (Springer Verlag, New York, 1982). F.David, [*Simplicial Quantum Gravity and Random Lattices*]{}, Lect. given at Les Houches Nato A.S.I., [*Gravitation and Quantizations*]{}, (1992), Saclay Prep. T93/028 F.David, Nucl.Phys. [**B257**]{}, (1985)45. B.Doubrovin, S.Novikov, A.Fomenko [*Géométrie contemporaine*]{} (MIR, Moscou, 1987). B.Durhuus, J.Fröhlich, T.Jónsson, Nucl.Phys. [**B240**]{}, (1984)453. R.Fernandez, J.Fröhlich, A.Sokal, [*Random walks, critical phenomena, and triviality in quantum field theory*]{},TMP (Springer-Verlag, Berlin Heidelberg 1992) S.C.Ferry, [*Finiteness theorems for manifolds in Gromov-Hausdorff space*]{}, Preprint SUNY at Binghamton, (1993); S.C.Ferry, [*Counting simple homotopy types in Gromov-Hausdorff space*]{}, Preprint SUNY at Binghamton, (1991). J.Fröhlich, [*Regge calculus and discretized gravitational functional integrals*]{} Preprint IHES (1981), reprinted in [*Non-perturbative quantum field theory-Mathematical aspects and applications*]{}, Selected Papers of J.Fröhlich, (World Sci. Singapore 1992); W.Goldman, Contemp.Math. [**74**]{} (1988) 169. W.M.Goldman, J.J.Millson [*Deformations of flat bundles over Kähler manifolds*]{} in Geometry and Topology eds. C.McCrory and T.Shifrin, Lect. Notes Pure and Appl. Math. 105 M.Dekker (1987), 129-145. R.Greene, P.Petersen, [*Little topology, big volume*]{}, Duke Math.Journ., [**67**]{}, 273-290, 1992. M.Gromov, [*Structures métriques pour les variétés Riemanniennes*]{} (Conception Edition Diffusion Information Communication Nathan, Paris, 1981);see also S.Gallot, D.Hulin, J.Lafontaine [*Riemannian Geometry*]{}, (Springer Verlag, New York,1987). 
A particularly clear account of the results connected with Gromov-Hausdorff convergence of riemannian manifolds is provided by the paper of K.Fukaya [*Hausdorff convergence of riemannian manifolds and its applications*]{}, Advanced Studies in Pure Math. $18$-I, [*Recent topics in differential and analytic geometry*]{}, 143-238, (1990). M.Gromov, London Math. Soc. Lecture Notes $182$. K.Grove, P.V.Petersen, Annals of Math. [**128**]{}, 195 (1988). K.Grove, P.V.Petersen, J.Y.Wu, Bull.Am.Math.Soc. [**20**]{}, 181 (1989); Invent.Math. [**99**]{}, 205 (1990) (and its Erratum). N.J.Hitchin, Topology [**31**]{}, 449-473 (1992). L.C.Jeffrey, J.Weitsman, [*Half density quantization of the moduli space of flat connections and Witten’s semiclassical manifold invariants*]{} Preprint IASSNS-HEP-91/94. V.A.Kazakov, Mod. Phys.Lett.A [**4**]{},2125 (1989). M.Kontsevitch, Commun.Math.Phys. [**147**]{} 1 (1992). M.Martellini, M.Spreafico, K.Yoshida, Mod.Phys.Lett. [**A7**]{}, (1992)1667; see also by the same authors, [*A generalized model for two dimensional quantum gravity and dynamics of random surfaces for $d>1$*]{}, and [*A continuous approach to $2D$ quantum gravity for $c>1$*]{}, Preprints (february 1994). K.Morrison, Contemp.Math. [**74**]{} (1988) 220. R.Penner, Bull.Amer.Math.Soc. [**15**]{} 73 (1986); see also: J.Harer,D.Zagier, Invent.Math. [**185**]{} 457 (1986) G.Perel’man, [*Alexandrov spaces with curvature bounded below II*]{}, Preprint LOMI, 1991. P.Petersen V, [*A finiteness theorem for metric spaces*]{}, Journal of Diff.Geom. [**31**]{}, 387-395, 1990; P.Petersen V, [*Gromov-Hausdorff convergence of metric spaces*]{}, Proc. Symp.in Pure Math., 1990 Summer Institute on Differential Geometry, [**54**]{}, Part 3, 489-504 (1993) D.B.Ray, I.M.Singer, Advances in Math. [**7**]{}(1971)145; D.B.Ray, Advances in Math. [**4**]{}(1970)109; D.Fried, [*Lefschetz formulas for flows*]{}, Contemp.Math. [**58**]{}, Part III, 19-69, (1987); J.Cheeger, Ann.Math.[**109**]{}(1979)259. 
T.Regge, Il Nuovo Cimento [**19**]{}, 558 (1961); A.Schwarz, Lett.Math.Phys.[**2**]{}(1978)247; E.Witten, Commun.Math.Phys. [**117**]{}(1988)353. S.Varsted, Nucl.Phys. [**B412**]{}, (1994)406. K.Walker, [*An extension of Casson’s invariant*]{}, Princeton Univ.Press, Princeton, New Jersey (1992). D.Weingarten, Phys.Lett. [**90B**]{}, (1980)285 R.Williams, Class.Quantum Grav. [**9**]{} 1409-1422 (1992) (for a very comprehensive bibliography and review); see also H.W. Hamber, R.M. Williams, Phys.Rev.D [**47**]{} 510-532 (1993). E.Witten, Surveys in Diff.Geom. [**1**]{} 243-310 (1991). E.Witten, Journ. Geom. Phys. [**9**]{} 303-368 (1992). S.Zhu, [*A finiteness theorem for Ricci curvature in dimension three*]{}, Preprint IAS, 1992; see also S.Zhu, Bull.Amer.Math.Soc., Oct.1990; and S.Zhu, [*Bounding topology by Ricci curvature in dimension three*]{}, Ph.D. Thesis SUNY at Stony Brook, (1990).
--- abstract: 'Dimensionality reduction and classification play a critical role in pattern recognition and machine learning. In this work, we present a quantum neighborhood preserving embedding and a quantum local discriminant embedding for dimensionality reduction and classification. These two algorithms have an exponential speedup over their respective classical counterparts. Along the way, we propose a variational quantum generalized eigenvalue solver (VQGE) that finds the generalized eigenvalues and eigenvectors of a matrix pencil $(\mathcal{G},\mathcal{S})$ with coherence time $O(1)$. We successfully conduct numerical experiments solving a problem of size $2^5\times2^5$. Moreover, our results offer two optional outputs, in quantum or classical form, which can be directly applied in another quantum or classical machine learning process.' author: - 'Jin-Min Liang' - 'Shu-Qian Shen' - Ming Li - 'Lei Li' title: Variational Quantum Algorithms for Dimensionality Reduction and Classification --- Introduction ============ Dimensionality reduction is significant to many algorithms in pattern recognition and machine learning. It is intuitively regarded as a process of projecting high-dimensional data to a lower-dimensional representation which preserves some information of interest in the data set [@Sarveniazi2014; @Sorzano2014]. The technique of dimensionality reduction has been applied in a wide range of topics such as regression [@Hoffmann2009], classification [@Vlachos2002], and feature selection [@Chizi2010]. Broadly speaking, all of these techniques can be divided into two classes: linear and non-linear methods. The two most popular methods for linear dimensionality reduction are principal component analysis (PCA) and linear discriminant analysis (LDA). PCA is an orthogonal projection that minimizes the average projection cost, defined as the mean squared distance between the data points and their projections [@PCA].
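A minimal classical sketch of the PCA projection just described (not from the paper; it uses the standard eigendecomposition of the sample covariance, with helper names of our choosing):

```python
import numpy as np

def pca(X, d):
    """Project the rows of X onto the top-d principal components."""
    Xc = X - X.mean(axis=0)              # center the data
    cov = Xc.T @ Xc / len(X)             # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :d]             # top-d eigenvectors = principal axes
    return Xc @ W, W

X = np.array([[2.0, 0.1], [4.0, -0.1], [6.0, 0.2], [8.0, -0.2]])
Y, W = pca(X, 1)                         # projected data and projection matrix
```

Keeping the eigenvectors with the largest eigenvalues is exactly what minimizes the mean squared projection cost mentioned above.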
The purpose of LDA is to maximize the between-class variance and minimize the within-class scatter when the data is associated with class labels [@LDA]. Perhaps the most popular approach to non-linear dimensionality reduction is manifold learning [@Cayton2005]. A manifold learning algorithm aims to reconstruct an unknown nonlinear low-dimensional data manifold embedded in a high-dimensional space [@ML2012]. A number of algorithms have been proposed for manifold learning, including Laplacian eigenmaps [@LE2001], locally linear embedding (LLE) [@LLE2000], and isomap [@Iso2000]. Manifold learning has been successfully applied to video-to-video face recognition [@Hadid2009]. These nonlinear methods consider the structure of the manifold on which the data may reside, in contrast with kernel-based techniques (e.g., kernel PCA and kernel LDA). We are witnessing rapid development of quantum computation and quantum hardware. The discovery of quantum algorithms for factoring [@Shor1994], database searching [@Grover1996] and matrix inversion [@HHL2009] has shown that quantum algorithms are capable of outperforming their existing classical counterparts. Recently, quantum information has combined ideas from artificial intelligence and deep learning to form a new field: quantum machine learning (QML) [@QML2017]. For classification and regression, QML algorithms [@QSVM2014; @Wiebe2015; @LR2016; @LR2017; @Aimeur2013] have also shown advantages over their classical machine learning counterparts. However, many algorithms rely on a large-scale, fault-tolerant, universal quantum computer, which may only be achieved in the distant future. Specifically, these algorithms will require an enormous number of qubits and a large circuit depth to achieve quantum supremacy. Fortunately, noisy intermediate-scale quantum (NISQ) devices are thought of as a significant step toward more powerful quantum computers [@NISQ2018]. This NISQ technology will be available in the near future.
In this setting, hybrid algorithmic approaches may demonstrate quantum supremacy in the NISQ era. This hybridization reduces the quantum resources required, including qubit counts, numbers of gates, circuit depth and numbers of measurements [@Larose2019]. Variational hybrid quantum-classical algorithms aim to tackle complex problems using a classical computer together with a near-term quantum computer. The classical computer finds the optimal parameters by minimizing the expectation value of an objective function which is calculated entirely on the quantum computer. The first class of variational quantum algorithms was proposed for preparing the ground state of a Hamiltonian [@VQE2014]. For a Hamiltonian $\mathcal{H}$ which is too large to diagonalize, one can approximate its ground state using the Rayleigh-Ritz variational method. After parameterizing the trial quantum states, one performs an optimization subroutine to find the optimal state by tuning the parameters. Variational methods have also been applied to obtain the excited states of a Hamiltonian [@Higgott2019; @Jones2019] and to diagonalize a quantum state [@Larose2019]. Another class of hybrid algorithms is designed to find applications in machine learning, including the quantum approximate optimization algorithm (QAOA) [@QAOA2014] and variational quantum algorithms for nonlinear partial differential equations [@Lubasch2019] and linear systems of equations [@Xu2019; @An2019; @Huang2019; @Carlos2019]. Inspired by the significant advantages of quantum algorithms, some authors have designed quantum algorithms to reduce the dimension of a large data set in a high-dimensional space. Quantum principal component analysis (qPCA) [@QPCA2014] and quantum linear discriminant analysis (qLDA) [@QLDA2016] are two potential candidates capable of compressing a high-dimensional data set while reducing the runtime to be logarithmic in the number of input vectors and their dimensions.
These two protocols yield mappings for linear dimensionality reduction, but they produce the projected vectors only in quantum form. Thus, costly quantum state tomography [@Nielsen2000] is needed if one wants to read out the projected vectors completely. Motivated by manifold learning and quantum computation, a natural question arises: is there a quantum algorithm for dimensionality reduction and pattern classification that preserves the local structure of the original data space? To tackle this question, we present two variational quantum algorithms. The first is quantum neighborhood preserving embedding (qNPE), which defines a map on both the training set and the test set. The core of qNPE is a variational quantum generalized eigensolver (VQGE) based on the Rayleigh quotient, a variant of the variational quantum eigensolver (VQE) [@VQE2014], which prepares the generalized eigenpairs $(\lambda, x)$ of the generalized eigenvalue problem $Ax=\lambda Bx$. Based on the presented VQGE, we propose a quantum version of local discriminant embedding [@LDE2005] for pattern classification of high-dimensional data. We show that these two algorithms achieve an exponential speedup over their classical counterparts. The organization of the paper is as follows. In Section II, we give a quantum neighborhood preserving embedding (qNPE) for dimensionality reduction. Numerical experiments using $5$ qubits are conducted to demonstrate the correctness of VQGE in subsection E of Section II. In Section III, we introduce the quantum local discriminant embedding (qLDE) for the classification problem in detail. A summary and discussion are included in Section IV.

Quantum Neighborhood Preserving Embedding
=========================================

Locally linear embedding (LLE) [@LLE2000] is an unsupervised method for nonlinear dimensionality reduction, but it does not provide a map that can be evaluated on novel test data points [@NPE2005].
Neighborhood preserving embedding (NPE) can be viewed as a linear approximation to the LLE algorithm [@NPE2005]. NPE seeks a projection suited to both the training set and the test set. Unlike other linear dimensionality reduction methods (PCA and LDA), which aim at preserving the global Euclidean structure, NPE preserves the local manifold structure of the data space. We assume that each region appears locally linear when the neighborhood is small and the manifold is sufficiently smooth. Experiments on face recognition have demonstrated the effectiveness of NPE [@NPE2005]. Here, we introduce a quantum neighborhood preserving embedding (qNPE). Given a set of points $\{x_i\}_{i=0}^{M-1}\subset\mathcal{M}$, where $\mathcal{M}$ is a nonlinear manifold embedded in a $D$-dimensional real space $\mathcal{R}^{D}$, qNPE attempts to retain the neighborhood structure of the manifold by representing each $x_i$ as a convex combination of its nearest neighbors. In particular, qNPE finds a transformation matrix $A$ that maps these $M$ points and a test point $x_{test}$ into a set of points $y_0,y_1,\cdots,y_{M-1},y_{test}\in\mathcal{R}^d$ in a lower-dimensional manifold space, where $y_i=A^{\dag}x_i$, $y_{test}=A^{\dag}x_{test}$, and the superscript $\dag$ denotes the conjugate transpose. In the quantum setting, a state preparation routine is necessary to construct the quantum states $\{|x_i\rangle\}_{i=0}^{M-1}$ corresponding to the vectors $\{x_i\}_{i=0}^{M-1}$. Assume that we are given oracles for the data set $\{x_i|x_i\in\mathcal{R}^D\}_{i=0}^{M-1}$ that return the quantum states $\{|x_i\rangle\}_{i=0}^{M-1}$. Mathematically, an arbitrary $D$-dimensional vector $\vec{x}_i=\{x_{i0},x_{i1},\cdots,x_{i(D-1)}\}$ is encoded into the $D$ amplitudes $x_{i0},x_{i1},\cdots,x_{i(D-1)}$ of an $O(\log D)$-qubit quantum system, $|x_i\rangle=\sum_{j=0}^{D-1}x_{ij}|j\rangle$, where $\{|j\rangle\}$ is the computational basis [@Schuld2016].
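As a concrete illustration of this encoding, the sketch below builds the amplitude vector of $|x_i\rangle$ classically with numpy; the padding to the next power of two and the function name are our own illustrative choices, not part of the qNPE construction.

```python
import numpy as np

def amplitude_encode(x):
    """Encode a real vector into the amplitudes of a log2(D)-qubit state.

    The vector is padded to the next power of two and normalized, since
    quantum amplitudes must have unit Euclidean norm.
    """
    x = np.asarray(x, dtype=float)
    dim = 1 << int(np.ceil(np.log2(len(x))))   # next power of two
    state = np.zeros(dim)
    state[: len(x)] = x
    norm = np.linalg.norm(state)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return state / norm                         # amplitudes x_ij / ||x_i||

# A 3-dimensional point becomes a 2-qubit state (D padded to 4).
psi = amplitude_encode([3.0, 0.0, 4.0])         # -> [0.6, 0.0, 0.8, 0.0]
```

Note that the encoding only stores the direction of $x_i$; the norm information is lost unless it is kept separately.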
Find the $K$-nearest neighbors
------------------------------

The first step of qNPE is the construction of a neighborhood graph from the given data set. The construction of an adjacency graph $G$ with $M$ nodes relies on the $K$ nearest neighbors of each $x_i$. If $x_j$ is one of the $K$ nearest neighbors of $x_i$, then a directed edge is drawn from the $i$th node to the $j$th node; otherwise, there is no edge. To preserve the local structure of the data set, we first develop an algorithm (Algorithm 1) to search for the $K$ nearest neighbors of a point $x_i$. Some notation is needed to understand Algorithm 1. Let $\{f(i)|i\in[0,1,\cdots,M-1]\}$ be an unsorted table of $M$ items. We would like to find the index set $\mathcal{N}=\{j_{1},j_{2},\cdots,j_K\}$ of the $K$ elements such that $f(j_1)\leq f(j_2)\leq\cdots\leq f(j_K)\leq f(j)$ for all $j\in[0,1,\cdots,M-1]$ with $j\notin \mathcal{N}$. We call this procedure quantum $K$-nearest-neighbor search; it is a direct generalization of the quantum algorithm for finding the minimum [@Durr1996]. One of our results is the following theorem. *Theorem 1.* Let $\{|x_i\rangle\}_{i=0}^{M-1}$ be a given set of quantum states and let $\epsilon$ denote the estimation error of the inner products. Let $[0,1,\cdots,M-1]$ be an unsorted database of $M$ items, each holding an inner product value. Algorithm 1 finds the indices of the $K$ smallest values with probability at least $\frac{1}{2}$ in runtime $$O\Bigg(\frac{M(M-1)}{2}\epsilon^{-1}\log D\Bigg),$$ with query complexity $O(KM\sqrt{M}).$ *Proof.* Quantum $K$-nearest-neighbor search finds the $K$ smallest values of an unsorted data set. In step 1, given the state set $$\{|x_i\rangle=\sum_{j=0}^{D-1}x_{ij}|j\rangle\}_{i=0}^{M-1},$$ we first estimate the squared inner products $|\langle x_i|x_k\rangle|^2$ over all pairs of data points, $i,k=0,1,\cdots,M-1$, via the swap test, each estimate running in time $O(\epsilon^{-1}\log D)$ for a given tolerated error $\epsilon$ [@swaptest].
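The swap-test subroutine invoked here can be mimicked classically by sampling the ancilla statistics: the ancilla reads $|0\rangle$ with probability $p_0=(1+|\langle a|b\rangle|^2)/2$, so inverting the sampled frequency recovers the squared overlap. A numpy sketch under these assumptions (function name and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def swap_test_estimate(a, b, shots=4000):
    """Estimate |<a|b>|^2 from simulated swap-test measurement statistics.

    p0 = (1 + |<a|b>|^2) / 2 is the probability of reading the ancilla in
    |0>; the sampled frequency gives the squared overlap up to a
    statistical error of order O(1/sqrt(shots)).
    """
    p0 = 0.5 * (1.0 + abs(np.vdot(a, b)) ** 2)
    zeros = rng.binomial(shots, p0)             # simulated ancilla outcomes
    return 2.0 * zeros / shots - 1.0

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2)
est = swap_test_estimate(a, b)                  # true squared overlap is 0.5
```

The $O(\epsilon^{-1})$ dependence in the runtime above corresponds to the number of repetitions needed to push the statistical error below $\epsilon$.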
The number of swap tests performed is $$T_{swap}=\sum_{i=0}^{M-1}i=\frac{M(M-1)}{2}.$$ Thus, the overall runtime of estimating all squared inner products is $O(\frac{M(M-1)}{2}\epsilon^{-1}\log D)$. In steps 2-4, we find the index set $\mathcal{N}$ of the $K$ nearest neighbors of one point $|x_i\rangle$. By setting $s=s-1$, the index set $T_s$ discards one element in each round. We repeat this $K$ times on the updated index set $T_s$ to obtain the $K$ indices $\mathcal{N}$ corresponding to the $K$ smallest values. Durr et al. [@Durr1996] have shown that the query complexity of finding the minimum value is $O(\sqrt{M})$. In our algorithm, the query complexity of finding the $K$ nearest neighbors of one state $|x_i\rangle$ is $$O\Bigg(\sum_{k=1}^{K}\sqrt{M-(k-1)}\Bigg)<O\Bigg(K\sqrt{M}\Bigg),$$ which has the upper bound $O(K\sqrt{M})$. Thus, the overall query complexity of traversing the whole state set has the upper bound $Q=O(KM\sqrt{M})$. If the entire algorithm is implemented on a classical computer, the time complexity is $O(\frac{MD(M-1)}{2})$, which is exponentially larger in its dependence on the dimension $D$, and the query complexity is $O(KM^2)$. Hence, the quantum $K$-nearest-neighbor search achieves an exponential speedup in the dimension of the quantum states.$\hfill\blacksquare$ *step 1*: Estimate all squared inner products via the swap test with error at most $\epsilon$ in time $O(\frac{M(M-1)}{2}\epsilon^{-1}\log D)$.\ **Repeat the following steps $K$ times:**\ *step 2*: Define an index set $T_s=[0,1,\cdots,s-1]$, where $s$ is initialized as $M$.\ *step 3*: Apply the minimum searching algorithm [@Durr1996] and output a minimum index $j$ in runtime $O(\sqrt{M})$ with probability at least $\frac{1}{2}$.\ *step 4*: Reset $s=s-1$.\ **Outputs:** $\mathcal{N}=\{j_1,j_2,\cdots,j_K\}$. The query complexity of the presented algorithm can be further reduced to $O(M\sqrt{KM})$ using the ideas of [@Durr2006; @Miyamoto2019]. Durr et al.
[@Durr2006] transformed the problem of finding the $d$ smallest values into finding the positions of the $d$ zeros in a Boolean matrix with a single 0 in every row, which can be seen as part of a graph algorithm. Differently from [@Durr2006], Miyamoto and Iwamura [@Miyamoto2019] first find a good threshold by quantum counting and then find all $d$ indices via amplitude amplification; the values at these $d$ indices are less than the value at the threshold index. In summary, Algorithm 1 finds the $K$ nearest neighbors $\mathcal{N}_i=\{x_0^i,x_1^i,\cdots,x_{K-1}^i\}$ of a quantum state $|x_i\rangle$ [@nearest]. The presented algorithm is based on two subroutines: minimum finding and the swap test. First, we reformulate the algorithm for finding the $K$ indices by updating the search set. Second, we explicitly analyze its time complexity and query complexity. For the implementation of the quantum $K$-nearest-neighbor search, only one free parameter, $K$, has to be chosen. The threshold $K$ affects the performance of qNPE, and it remains unclear how to select it in a principled manner. qNPE loses its nonlinear character and behaves like traditional PCA if $K$ is too large, since in that case the entire data space is seen as a single local neighborhood. In particular, if the threshold $K$ is larger than the dimension of the data points, the loss function (2) described in subsection B has infinitely many solutions and the optimization problem becomes ill-posed.

Obtain the weight matrix
------------------------

Let $W$ denote the weight matrix whose element $\omega_j^i$ is the weight of the edge from node $i$ to node $j$, and $0$ if there is no such edge. To maintain the local structure of the adjacency graph, we assume that each data node can be approximated by a linear combination of its local neighbor nodes. It is the weight matrix that characterizes the relationship between the data points.
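Classically, the neighbor sets $\mathcal{N}_i$ that feed this construction can be read off the pairwise squared overlaps that Algorithm 1 estimates: for normalized states, a larger $|\langle x_i|x_j\rangle|^2$ means a smaller distance, so the $K$ nearest neighbors are the $K$ largest overlaps. A minimal numpy sketch (function name is ours):

```python
import numpy as np

def k_nearest_by_overlap(states, i, K):
    """Return the K indices j != i whose states are closest to state i.

    Mirrors Algorithm 1 classically: the K nearest neighbors are the K
    largest squared overlaps (equivalently, the K smallest values of
    1 - |<x_i|x_j>|^2 found by repeated minimum finding).
    """
    overlaps = np.array([abs(np.vdot(states[i], s)) ** 2 for s in states])
    overlaps[i] = -np.inf                       # exclude the point itself
    return list(np.argsort(-overlaps)[:K])      # indices of K largest

# Three normalized 2-d states: state 2 is far closer to state 0 than state 1.
states = [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([0.99, 0.141]) / np.linalg.norm([0.99, 0.141])]
nbrs = k_nearest_by_overlap(states, 0, 1)       # -> [2]
```

This classical loop costs $O(M^2 D)$ inner products, which is the baseline the quantum search improves upon in the $D$-dependence.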
The weights can be calculated by the following convex optimization problem, $$\begin{aligned} &\min\quad \Phi(\omega_j^i)=\sum_{i=0}^{M-1}\|x_i-\sum_{j=0}^{K-1}\omega_j^ix_j^i\|^2\\ &s.t.\quad \sum_{j=0}^{K-1}\omega_j^i=1,i=0,1,\cdots,M-1. \end{aligned}$$ Using a Lagrange multiplier to enforce the constraint $\sum_{j}\omega_j^i=1$, the optimal weights are given by $$\begin{aligned} \omega_i&=(\omega_1^i,\omega_2^i,\cdots,\omega_K^i)^{\dag}=\frac{G_i^{-1}\vec{1}}{\vec{1}^\dag G_i^{-1}\vec{1}}, \end{aligned}$$ where the covariance matrix is defined as $G_i=A_i^{\dag}A_i$, with $A_i=X_i-N_i\in\mathcal{R}^{D\times K}$, $X_i=(x_i,x_i,\cdots,x_i)\in\mathcal{R}^{D\times K}$, $\vec{1}=(1,1,\cdots,1)^\dag\in\mathcal{R}^{K}$, $N_i=(x_1^i,x_2^i,\cdots,x_K^i)\in\mathcal{R}^{D\times K}$. The column vectors $x_j^i\in\mathcal{N}_i$ of $N_i$ are the $K$ nearest data points of $x_i$; each of them lies in the $D$-dimensional real space $\mathcal{R}^D$. Our goal is to find a weight quantum state $|\omega_i\rangle$ that satisfies $$\begin{aligned} |\omega_i\rangle\propto\frac{|G_i^{-1}\vec{1}\rangle}{|\vec{1}^TG_i^{-1}\vec{1}|}, \end{aligned}$$ where $|G_i^{-1}\vec{1}\rangle=G_i^{-1}\vec{1}/\sqrt{\langle G_i^{-1}\vec{1}|G_i^{-1}\vec{1}\rangle}$. A key idea is to compute the inverse of the matrix $G_i$ with quantum techniques. In the following, we make use of the matrix inversion algorithm of [@HHL2009; @QSVD2018] to prepare the quantum state $|\omega_i\rangle$. Let the singular value decomposition (SVD) of $A_i$ be $A_i=U\Sigma V^{\dag}=\sum_j\sigma_j^i|u_j^i\rangle\langle v_j^i|$; then the eigenvalue decomposition of the covariance matrix $G_i$ [@Explanation1] is $$\begin{aligned} G_i=\sum_{j=0}^{K-1}(\sigma_j^i)^2|v_j^i\rangle\langle v_j^i|.
\end{aligned}$$ Thus, $|G_i^{-1}\vec{1}\rangle$ can be reexpressed as $$\begin{aligned} |G_i^{-1}\vec{1}\rangle=\sqrt{\frac{1}{\sum_{j=0}^{K-1}|\beta_j^i|^2/|\sigma_j^i|^4}}\sum_{j=0}^{K-1}\frac{\beta_j^i}{(\sigma_j^i)^2}|v_j^i\rangle, \end{aligned}$$ where $\beta_j^i=\langle v_j^i|\vec{1}\rangle$. Assume that we are given a matrix oracle $O_i$ which accesses the elements $A_{mn}^i$ of the matrix $A_i$: $$|m\rangle |n\rangle|0\cdots0\rangle\mapsto|m\rangle |n\rangle|A_{mn}^i\rangle=|m\rangle |n\rangle|x_m^i-x_{nm}^i\rangle.$$ This oracle $O_i$ can be provided by a quantum random access memory (qRAM) using $O(KD)$ storage space with $O(\log^2 \max(K,D))$ operations [@qRAM2008]. With these preparations, we are able to efficiently simulate the unitary $U_i=e^{\imath\hat{A}_i}$ and prepare the weight state $|\omega_i\rangle$, where $$\hat{A}_i=\begin{pmatrix}0 & A_i\\ A_i^{\dag} & 0\end{pmatrix}.$$ To make our algorithm easier to follow, we give some details below. First of all, we perform the quantum singular value decomposition (QSVD) of the matrix $A_i$ on an initial state $|0\cdots0\rangle|\vec{1}\rangle$ to obtain the state $\sum_j\beta_j^i|\sigma_j^i\rangle|v_j^i\rangle$ containing the singular values and right singular vectors of $A_i$. The first register stores the singular values, and the second register carries the decomposition of $|\vec{1}\rangle$ in the space spanned by the right singular vectors of $A_i$. The vector $\vec{1}=(1,1,\cdots,1)^{\dagger}$ corresponds to the quantum state $|\vec{1}\rangle=\frac{1}{\sqrt{K}}(1, 1, \cdots, 1)^{\dagger}$, where $\sqrt{K}$ is the normalization constant. The quantum state $|\vec{1}\rangle=\sum_{j=0}^{K-1}\frac{1}{\sqrt{K}}|j\rangle$ can be easily prepared by applying $O(\log K)$ Hadamard gates to the $O(\log K)$ qubits $|0^{\otimes\log K}\rangle$.
Mathematically, $$\begin{aligned} &H^{\otimes\log K}|0^{\otimes\log K}\rangle=\frac{1}{(\sqrt{2})^{\log K}}(|0\rangle+|1\rangle)^{\otimes\log K}\\ &=\frac{1}{\sqrt{K}}(|0\rangle+|1\rangle)\otimes\cdots\otimes(|0\rangle+|1\rangle)\\ &=\frac{1}{\sqrt{K}}(|00\cdots0\rangle+|00\cdots1\rangle+\cdots+|11\cdots1\rangle)\\ &=\sum_{j=0}^{K-1}\frac{1}{\sqrt{K}}|j\rangle. \end{aligned}$$ Now, we apply a unitary transformation taking $\sigma_j^i$ to $\frac{C_i}{|\sigma_j^i|^2}\sigma_j^i$, where $C_i$ is a normalization constant. This rotation can be realized by applying $R_y(\sin^{-1}\frac{C_i}{|\sigma_j^i|^2})$ to the ancilla qubit $|0\rangle$, $$\begin{aligned} &\sum_{j=0}^{K-1}\beta_j^i|\sigma_j^i\rangle|v_j^i\rangle|0\rangle\stackrel{R_y}{\longrightarrow}\\ &\sum_{j=0}^{K-1}\beta_j^i|\sigma_j^i\rangle|v_j^i\rangle\Bigg(\frac{C_i}{|\sigma_j^i|^2}|1\rangle+\sqrt{1-\frac{C_i^2}{|\sigma_j^i|^4}}|0\rangle\Bigg). \end{aligned}$$ Next, we uncompute the singular value register and measure the ancilla qubit; conditioned on outcome 1, the system is left in a state proportional to $$\begin{aligned} |\omega_i\rangle&\propto\sqrt{\frac{1}{\sum_{j=0}^{K-1}|C_i\beta_j^i|^2/|\sigma_j^i|^4}}\sum_{j=0}^{K-1}\frac{C_i\beta_j^i}{|\sigma_j^i|^2}|v_j^i\rangle. \end{aligned}$$ Obviously, the weight states $\{|\omega_i\rangle\}_{i=0}^{M-1}$ can be prepared by repeating the above process $M$ times separately, with the number of Hadamard gates scaling as $O(M\log K)$. However, since the extraction of the embedding vectors requires the reconstructed weight matrix, we introduce an improved approach which achieves a parallel speedup in the preparation of the weight matrix. We reconstruct the weight matrix $W=(|\omega_0\rangle,|\omega_1\rangle,\cdots,|\omega_{M-1}\rangle)$ by preparing the entangled state $|\psi_W\rangle=\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle$. Theorem 2 shows that the gate resources can be further reduced.
*Theorem 2.* For a given quantum state set $\{|x_i\rangle\}_{i=0}^{M-1}$, the task of preparing $|\psi_W\rangle=\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle$ with error at most $\epsilon$ has runtime $$T_W=O\Bigg(\frac{\log^2(K+D)}{\epsilon^3}\sum_{i=0}^{M-1}\|A_i\|_{max}^2\Bigg).$$ The required gate resources are $O(\log MK)$. *Proof.* We add an ancillary $M$-dimensional system which determines the applied unitary operator. The initial state is $|\vec{1}\rangle_1|0^{\otimes\log K}\rangle_2|0^{\otimes\log M}\rangle_3|0\rangle_4$, where $|\vec{1}\rangle=\frac{1}{\sqrt{K}}(1,1,\cdots,1)^{\dagger}$. Register 3 indexes the data set. After performing $O(\log MK)$ Hadamard gates on registers 2 and 3, we apply the conditional Hamiltonian evolution $e^{\imath\hat{A}_it}\otimes|i\rangle\langle i|$ to the state $$|\vec{1}\rangle_1\sum_{i=0}^{M-1}\sum_{j=0}^{K-1}|j\rangle_2|i\rangle_3|0\rangle_4,$$ which achieves the following transformation, $$\begin{aligned} &|\vec{1}\rangle_1\sum_{i=0}^{M-1}\sum_{j=0}^{K-1}|j\rangle_2|i\rangle_3|0\rangle_4\mapsto\\ &|\vec{1}\rangle_1\sum_{i=0}^{M-1}\sum_{j=0}^{K-1}(e^{\imath\hat{A}_it}\otimes|i\rangle\langle i|)|j\rangle_2|i\rangle_3|0\rangle_4\\ &=\sum_{i=0}^{M-1}\sum_{j=0}^{K-1}\beta_j^i|\sigma_j^i\rangle_1|v_j^i\rangle_2|i\rangle_3|0\rangle_4. \end{aligned}$$ We then rotate the singular values by applying $R_y(\sin^{-1}\frac{C_i}{|\sigma_j^i|^2})$ to the ancilla qubit $|0\rangle_4$. The system state becomes $$\begin{aligned} \sum_{i=0}^{M-1}\sum_{j=0}^{K-1}\beta_j^i|\sigma_j^i\rangle_1|v_j^i\rangle_2|i\rangle_3 \Bigg(\sqrt{1-\frac{C_i^2}{|\sigma_j^i|^4}}|0\rangle_4+\frac{C_i}{|\sigma_j^i|^2}|1\rangle_4\Bigg).
\end{aligned}$$ Finally, we uncompute the first register and measure the fourth register; conditioned on outcome 1, we obtain the state $$\begin{aligned} \sum_{i=0}^{M-1}\sqrt{\frac{1}{\sum_{j=0}^{K-1}|C_i\beta_j^i|^2/|\sigma_j^i|^4}}\sum_{j=0}^{K-1}\frac{C_i\beta_j^i}{|\sigma_j^i|^2}|v_j^i\rangle|i\rangle \end{aligned}$$ which is proportional to the entangled state $\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle$. The runtime of preparing the state $\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle$ is dominated by the quantum singular value estimation of $A_i\in\mathcal{R}^{D\times K}$. In this process, we consider an extended matrix $\hat{A}_i\in\mathcal{R}^{(K+D)\times(K+D)}$ and obtain the eigenvalues of $\hat{A}_i$ by quantum phase estimation. According to [@QSVD2018], we prepare the state $|\omega_i\rangle$ with accuracy $\epsilon$ in runtime $O(\|A_i\|_{max}^2\log^2(K+D)/\epsilon^3)$, where $\|A_i\|_{max}$ is the maximal absolute value of the matrix elements of $A_i$. Therefore, the entangled state $\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle$ is prepared in runtime $$T_W=O\Bigg(\frac{\log^2(K+D)}{\epsilon^3}\sum_{i=0}^{M-1}\|A_i\|_{max}^2\Bigg).$$ Overall, only $O(\log MK)$ Hadamard gates are required along the way. Thus, the number of Hadamard gates is reduced to $O(\log MK)$ rather than $O(M\log K)$.$\hfill\blacksquare$

Variational quantum generalized eigenvalue solver
-------------------------------------------------

In this subsection, we obtain the projection matrix by minimizing the following cost function based on the locally linear reconstruction errors: $$\begin{aligned} \Phi(y)=\sum_{i=0}^{M-1}\Bigg(y_i-\sum_{j=0}^{K-1}\omega_j^iy_j\Bigg)^2. \end{aligned}$$ Here, the fixed weights $\omega_j^i$ characterize the intrinsic geometric properties of each neighborhood. Each high-dimensional data point $x_i\in\mathcal{R}^D$ is mapped to a low-dimensional point $y_i\in\mathcal{R}^d$. The embedding vector $y_i$ is found by minimizing the cost function (15) over $y_i$.
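Before the reduction of (15) to an eigenproblem, the ingredients can be checked classically: the sketch below evaluates the closed-form weights $\omega_i=G_i^{-1}\vec{1}/(\vec{1}^{\dag}G_i^{-1}\vec{1})$ for a toy point and confirms that the reconstruction residual vanishes when $x_i$ is an affine combination of its neighbors. The small ridge term is our own regularization for the ill-posed case $K>D$ noted earlier, not part of the text's derivation.

```python
import numpy as np

def npe_weights(x_i, neighbors, reg=1e-6):
    """Optimal reconstruction weights w_i = G^{-1} 1 / (1^T G^{-1} 1).

    G = A^T A with A = X_i - N_i is the local covariance from the text; a
    small ridge term keeps G invertible when it is singular.
    """
    N = np.column_stack(neighbors)              # D x K neighbor matrix
    A = x_i[:, None] - N                        # columns x_i - x_j^i
    G = A.T @ A + reg * np.eye(N.shape[1])
    w = np.linalg.solve(G, np.ones(N.shape[1]))
    return w / w.sum()                          # enforce sum_j w_j = 1

x = np.array([0.5, 0.0])
nbrs = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
w = npe_weights(x, nbrs)
residual = x - sum(wj * nj for wj, nj in zip(w, nbrs))
# x is the midpoint of its neighbors, so w is close to [0.5, 0.5] and the
# reconstruction error ||x - sum_j w_j x_j^i||^2 is essentially zero.
```

The same residual, summed over all points with the weights held fixed, is exactly the quantity that (15) minimizes over the embeddings $y_i$.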
Following some matrix computations [@LLE2000; @LDE2005], the cost function can be reduced to the generalized eigenvalue problem: $$\begin{aligned} XQX^{\dag}a=\lambda XX^{\dag}a, \end{aligned}$$ where $X=(x_0,x_1,\cdots,x_{M-1}),Q=(I-W)^{\dag}(I-W),I=\textrm{diag}(1,\cdots,1)$. The detailed derivation is shown in Appendix A. *step 1*. Design a quantum circuit, controlled by a set of experimental parameters $\{\theta_i\}$, which can prepare states $|\varphi\rangle=|\varphi(\{\theta_i\})\rangle$.\ *step 2*. Define an objective function $f(\{\theta_i\})=\frac{\langle\varphi|\mathcal{H}_\mathcal{G}|\varphi\rangle}{\langle\varphi|\mathcal{H}_\mathcal{S}|\varphi\rangle}$, which maps the parameters to the Rayleigh quotient of $|\varphi\rangle$ provided $\langle\varphi|\mathcal{H}_\mathcal{S}|\varphi\rangle\neq0$.\ *step 3*. Find all the generalized eigenvalues and the corresponding generalized eigenstates.\ (a) Compute the expectations $\langle\mathcal{H}_{\mathcal{G}(\mathcal{S}),1}\rangle,\langle\mathcal{H}_{\mathcal{G}(\mathcal{S}),2}\rangle,\cdots$ on $|\varphi_n\rangle=|\varphi_n(\{\theta_i\})\rangle$ for all terms of $\mathcal{H}_{\mathcal{G}(\mathcal{S})}$ by quantum expectation estimation [@VQE2014], where $n$ denotes the number of iterations of Step 3.\ (b) Sum these values with the appropriate weights, $h_{\mathcal{G}(\mathcal{S})}$, to obtain $$f_n=\frac{\langle\varphi_n|\mathcal{H}_\mathcal{G}|\varphi_n\rangle}{\langle\varphi_n|\mathcal{H}_\mathcal{S}|\varphi_n\rangle}.$$\ (c) Apply a classical minimization algorithm (e.g. gradient descent) to minimize $f_n$ and determine the new parameters $\{\theta_i^{n}\}$.\ (d) Use step 1 to generate the state $|\varphi_{n}\rangle$.\ *step 4*.
Update the Hamiltonian:\ (a) if $\mathcal{H}_{\mathcal{G}}$ commutes with $\mathcal{H}_{\mathcal{S}}$, set $\mathcal{H}_{\mathcal{G}}=(\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}})^2$, $\mathcal{H}_{\mathcal{S}}=(\mathcal{H}_{\mathcal{S}})^2$; else go to (b).\ (b) $\mathcal{H}_{\mathcal{G}}^{'}=\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}}$, $\mathcal{H}_{\mathcal{S}}^{'}=\mathcal{H}_{\mathcal{S}}$, where $\tau$ is a parameter.\ *step 5*. Perform Step 3 for each searched parameter $\tau$.\ **Output:** eigenstates $|\varphi_{1}\rangle,|\varphi_{2}\rangle,\cdots$ with eigenvalues $0\neq \lambda_1=f_1\leq \lambda_2=f_2\leq\cdots\leq \lambda_n=f_n$. The generalized eigenvalue problem, $\mathcal{G}x=\lambda \mathcal{S}x$, is an important challenge in scientific and engineering applications. Although Cong and Duan [@QLDA2016] have presented a Hermitian chain product to solve the generalized eigenvalue problem by replacing $\mathcal{S}^{-1}$ with $\mathcal{S}^{-1/2}$, the computation of a matrix inverse is extremely difficult on a classical computer. Alternatively, quantum phase estimation (QPE) is a natural candidate, but the simulation of $e^{i\mathcal{S}^{-1}\mathcal{G}}$ remains a fundamental challenge. Even if one could efficiently perform QPE, it would still require fully coherent evolution. Due to these circumstances, Theorem 3 gives a variational quantum generalized eigenvalue solver (VQGE) for the generalized eigenvalue problem. Like the variational quantum eigensolver (VQE) [@VQE2014], our VQGE can be run on near-term noisy devices. *Theorem 3.* For a Hermitian matrix pencil $(\mathcal{G},\mathcal{S})$ with invertible matrix $\mathcal{S}$, let $\epsilon>0$ be a precision parameter.
Algorithm 2 has coherence time $O(1)$ and outputs all generalized eigenstates of the generalized eigenvalue problem $$\mathcal{G}|\varphi\rangle=\lambda \mathcal{S}|\varphi\rangle,$$ requiring $O(1/\epsilon^2)$ repetitions, where $\mathcal{G},\mathcal{S}\in\mathcal{R}^{n\times n}$ and $|\varphi\rangle$ is the generalized eigenstate corresponding to the generalized eigenvalue $\lambda$. *Proof.* We first briefly review the subroutine quantum expectation estimation (QEE) [@VQE2014] used in step 2. The QEE algorithm calculates the expectation value of a given Hamiltonian $\mathcal{H}$ for a quantum state $|\varphi\rangle$. Any Hamiltonian can be rewritten as a sum of $M$ terms [@Berry2007; @Childs2011; @VQE2014], for real parameters $h_{12\cdots}^{ij\cdots}$: $$\begin{aligned}\label{equ:Hamiltonian} \mathcal{H}&=\mathcal{H}^1+\mathcal{H}^2+\cdots\\ &=\sum_{i1}h_1^i\sigma_1^i+\sum_{ij12}h_{12}^{ij}\sigma_1^i\otimes\sigma_2^j+\cdots, \end{aligned}$$ where the Roman indices $i,j,\cdots$ denote the subsystem on which the operator acts, and $1,2,\cdots$ identify the Pauli operator. Each term $\mathcal{H}^{m}$ is a tensor product of Pauli operators. According to Eq. (17), the expectation value is $$\begin{aligned} \langle\mathcal{H}\rangle&=\langle\mathcal{H}^1\rangle+\langle\mathcal{H}^2\rangle+\cdots\\ &=\sum_{i1}h_1^i\langle\sigma_1^i\rangle+\sum_{ij12}h_{12}^{ij}\langle\sigma_1^i\otimes\sigma_2^j\rangle+\cdots. \end{aligned}$$ As a result, each expectation $\langle\mathcal{H}^m\rangle$ can be estimated directly using fermionic simulations [@Ortiz2001] or statistical sampling [@Romero2019]. In step 1, given a parameter vector $\theta$, the quantum circuit $U$ is defined as $$\begin{aligned} U(\theta)&=U_L(\theta_L)U_{L-1}(\theta_{L-1})\cdots U_1(\theta_1) \end{aligned}$$ with $L$ components.
Mathematically, after preparing an $N$-qubit initial quantum state $|0\rangle^{\otimes N}$, the generated quantum state is $$\begin{aligned} |\varphi\rangle=\Pi_{i=1}^LU_i(\theta_i)|0\rangle^{\otimes N}. \end{aligned}$$ Note that the number of parameters and $N$ are logarithmically proportional to the dimension of the generated state $|\varphi\rangle$ [@HardwareVQE2017; @PQC20181; @PQC20182]. Such parameterized quantum circuits have shown significant potential in generative adversarial learning [@GAN20181; @GAN20182] and quantum circuit Born machines [@BornMachine2018]. In step 3, we show how to obtain the generalized eigenstates and the corresponding generalized eigenvalues. Our result relies on the fact that the Rayleigh quotient [@Parlett1998] $$\begin{aligned} R(|\varphi\rangle;\mathcal{G},\mathcal{S})=\frac{\langle\varphi|\mathcal{G}|\varphi\rangle}{\langle\varphi|\mathcal{S}|\varphi\rangle}, \langle\varphi|\mathcal{S}|\varphi\rangle\neq0 \end{aligned}$$ is stationary at $|\varphi\rangle\neq0$ if and only if $(\mathcal{G}-\lambda \mathcal{S})|\varphi\rangle=0$ for some scalar $\lambda$. Let $\mathcal{H_\mathcal{G}}=\mathcal{G}$ and $\mathcal{H_\mathcal{S}}=\mathcal{S}$, which admit decompositions of the form (17). The first iteration yields the generalized eigenstate with the lowest generalized eigenvalue. To find all eigenstates of $\mathcal{S}^{-1}\mathcal{G}$, we update the Hamiltonians as $\mathcal{H}_{\mathcal{G}}=(\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}})^2$, $\mathcal{H}_{\mathcal{S}}=(\mathcal{H}_{\mathcal{S}})^2$, where $\tau$ is a parameter close to the energy of the sought generalized eigenstate; this turns the generalized eigenvalues into the ground-state energy of the updated Hamiltonian pair ($\mathcal{H}_{\mathcal{G}},\mathcal{H}_{\mathcal{S}}$). The following derivation ensures that this modification provides all generalized eigenvalues.
For the generalized eigenvalue problem $\mathcal{G}|\varphi\rangle=\lambda \mathcal{S}|\varphi\rangle$, we have $$\begin{aligned} (\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}})^2|\varphi\rangle&=(\mathcal{H}_{\mathcal{G}}^2+\tau^2\mathcal{H}_{\mathcal{S}}^2 -2\tau\mathcal{H}_{\mathcal{G}}\mathcal{H}_{\mathcal{S}})|\varphi\rangle\\ &=(\lambda+\frac{\tau^2}{\lambda}-2\tau)\mathcal{H}_{\mathcal{G}}\mathcal{H}_{\mathcal{S}}|\varphi\rangle\\ &=(\lambda-\tau)^2\frac{1}{\lambda}\mathcal{H}_{\mathcal{G}}\mathcal{H}_{\mathcal{S}}|\varphi\rangle\\ &=(\lambda-\tau)^2\mathcal{H}_{\mathcal{S}}^2|\varphi\rangle. \end{aligned}$$ The second equality uses the assumption that $\mathcal{G}$ commutes with $\mathcal{S}$ ($\mathcal{H}_{\mathcal{G}}\mathcal{H}_{\mathcal{S}}=\mathcal{H}_{\mathcal{S}}\mathcal{H}_{\mathcal{G}}$). Therefore, the Rayleigh quotient is $$\begin{aligned} R_1=\frac{\langle\varphi|(\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}})^2|\varphi\rangle}{\langle\varphi|\mathcal{H}_{\mathcal{S}}^2|\varphi\rangle} =(\lambda-\tau)^2. \end{aligned}$$ Since the Rayleigh quotient is a quadratic function of the variable $\tau$, the ground generalized eigenstate of the updated Hamiltonian is found at its unique minimum point. However, the above approach fails in the general situation where $\mathcal{G}\mathcal{S}\neq\mathcal{S}\mathcal{G}$. An alternative approach is now presented. We update the Hamiltonian to the following form $$\mathcal{H}_{\mathcal{G}}^{'}=\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}},\mathcal{H}_{\mathcal{S}}^{'}=\mathcal{H}_{\mathcal{S}}.$$ The updated Hamiltonian induces the cost function $$\begin{aligned} R_1^{'}=\Bigg(\frac{\langle\varphi|(\mathcal{H}_{\mathcal{G}}-\tau\mathcal{H}_{\mathcal{S}})|\varphi\rangle} {\langle\varphi|\mathcal{H}_{\mathcal{S}}|\varphi\rangle}\Bigg)^2 =(\lambda-\tau)^2.
\end{aligned}$$ The classical computer then minimizes the cost functions (23), (24) and obtains the optimal parameters $\theta$. The optimal values of (23), (24) are nearly zero for a suitable variable $\tau$. For example, if the scanned $\tau$ lies inside an energy gap, minimization of $R(|\varphi\rangle;(\mathcal{G}-\tau\mathcal{S})^2,\mathcal{S}^2)$ yields the generalized eigenstate energies of $\mathcal{S}^{-2}(\mathcal{G}-\tau\mathcal{S})^2$. The search method is similar to the ideas of [@Wang1994; @Shen2017]. Finally, we sort the generalized eigenvalues and output all eigenstates via the unitary circuit in step 1. The time the quantum computer must remain coherent is $O(1)$, determined by the extra depth of the circuit used for preparing the parameterized state. If the desired error is at most $\epsilon$, the cost of the expectation estimation of a local Hamiltonian term $\mathcal{H}^m$ is $O(|\max\{h_{12\cdots}^{ij\cdots}\}|^2/\epsilon^2)$ repetitions of the preparation and measurement procedure. All generalized eigenstates can be prepared via $n$ queries to the parameterized quantum circuit with $M$ Hamiltonian terms. Thus, we require $$O(nM|\max\{h_{12\cdots}^{ij\cdots}\}|^2/\epsilon^2)$$ samples from the parameterized circuit with coherence time $O(1)$.$\hfill\blacksquare$ With the assistance of Theorem 3, by simply replacing $\mathcal{G}$ and $\mathcal{S}$ with $XQX^{\dag}$ and $XX^{\dag}$, we can find the $d$ lowest eigenstates $\{|a_i\rangle\}_{i=0}^{d-1}$ as the columns of the projection matrix $A$ with runtime $O(1/\epsilon^2)$. Note that $XQX^{\dag}$ and $XX^{\dag}$ are positive definite matrices in $\mathcal{R}^{D\times D}$. One can first calculate these two Hermitian matrices by a matrix multiplication algorithm [@MM1990]. Assuming that these two matrices can be regarded as row-computable Hamiltonians, Berry et
al. [@Berry2007] have shown that $XQX^{\dag}$ and $XX^{\dag}$ can be decomposed as a sum of at most $O(6D^2)$ $1$-sparse matrices, each of which can be efficiently simulated with $O(\log D)$ queries to the Hamiltonian.

Extract the lower-dimensional manifold
--------------------------------------

We now extract the low-dimensional manifold based on the projection matrix $A$. First, a qRAM returns an equal superposition state $\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i\rangle$ on $O(\log d)$ qubits. This state is used to prepare in quantum parallel the state $|A\rangle=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|a_{i}\rangle|i\rangle$ in $O(\log Dd)$ runtime [@qRAM2008]. The state $|A\rangle$ encodes the information of the projection matrix $A$. The embedding state is given as $$\begin{aligned} &|x_i\rangle\mapsto |y_i\rangle=A^{\dag}|x_i\rangle,\\ &A=(|a_{0}\rangle,|a_{1}\rangle,\cdots,|a_{d-1}\rangle), \end{aligned}$$ where $|y_i\rangle$ is a $d$-dimensional vector and $A$ is a $D\times d$ matrix. Our qNPE maps an arbitrary high-dimensional vector to a lower-dimensional vector. Thus, if one is given a test vector $|x_{test}\rangle$, the embedding vector is $|y_{test}\rangle=A^{\dag}|x_{test}\rangle$. Here, we propose two optional methods for the extraction of the embedding states. One of them is based on QSVD. As in [@QSVD2018], an extended matrix is considered as $$\tilde{A}=\begin{pmatrix}0 & A\\A^{\dag}&0\end{pmatrix}.$$ Assume that $\tilde{A}$ has the eigenvalue decomposition $$\tilde{A}=\sum_{j}\sigma_j\big(|\tilde{u}_{+}\rangle\langle\tilde{u}_{+}|-|\tilde{u}_{-}\rangle\langle\tilde{u}_{-}|\big)$$ with singular value decomposition $A=\sum_{j}\sigma_j|u_j\rangle\langle v_j|$, where $|\tilde{u}_{\pm}\rangle=\frac{1}{\sqrt{2}}\big(|u_j\rangle|0\rangle\pm|v_j\rangle|1\rangle\big)$.
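The spectral structure assumed here is easy to verify numerically: the nonzero eigenvalues of the extended matrix $\tilde{A}$ come in $\pm\sigma_j$ pairs, where $\sigma_j$ are the singular values of $A$. A small numpy check (not part of the quantum procedure; the matrix is an illustrative choice):

```python
import numpy as np

# Extended Hermitian embedding of a rectangular A: its eigenvalues come in
# +/- sigma_j pairs (plus zeros), where sigma_j are the singular values of A.
A = np.array([[3.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])                      # D x d with singular values 3, 2
A_tilde = np.block([[np.zeros((3, 3)), A],
                    [A.T, np.zeros((2, 2))]])
eigs = np.sort(np.linalg.eigvalsh(A_tilde))     # -> [-3, -2, 0, 2, 3]
```

This is the same trick used for $\hat{A}_i$ in Theorem 2: it turns a singular-value problem into a Hermitian eigenvalue problem suitable for phase estimation.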
We then perform QPE on the initial state $|0,x_i\rangle_1|0,\cdots,0\rangle_2|0\rangle_3$ and obtain a state $$\sum_{j}\pm\alpha_j^{\pm}(-1)^{f_j}|\tilde{u}_{\pm}\rangle_1|\frac{\sigma_j}{d+D}\rangle_2|0\rangle_3,$$ where $\alpha_j^{\pm}=\pm\frac{\langle u_j|x_i\rangle}{\sqrt{2}}$. The third register holds the flag qubit. If the eigenvalue is non-negative, then $f_j=0$; otherwise, $f_j=1$. Performing a Pauli operator $\sigma_z$ on the flag qubit and applying $R_y(2\arcsin\sigma_j)$ to an ancilla qubit $|0\rangle$, we generate the state $$\sum_{j}\alpha_j^{+}|\tilde{u}_{\pm}\rangle_1\Bigg(\frac{\sigma_j}{d+D}|0\rangle+\sqrt{1-(\frac{\sigma_j}{d+D})^2}|1\rangle\Bigg).$$ To this end, we project onto the $|v_j\rangle$ part and measure the final qubit; conditioned on outcome 0, we obtain the state $$\sum_{j}\frac{\sigma_j}{d+D}\alpha_j^{+}|v_{j}\rangle\propto V\Sigma U^{\dag}|x_i\rangle=A^{\dag}|x_i\rangle.$$ Repeating the above process $M$ times, the embedding states $|y_0\rangle,|y_1\rangle,\cdots,|y_{M-1}\rangle$ are prepared with error $\epsilon$ in time $O(M\log^2(D+d)/\epsilon^3)$. Alternatively, another approach is based on the well-known swap test [@swaptest]. Since the embedded low-dimensional data vector is $$|y_i\rangle=A^{\dag}|x_i\rangle=(\langle a_{0}|x_i\rangle,\langle a_{1}|x_i\rangle,\cdots,\langle a_{d-1}|x_i\rangle)^{\dag},$$ we convert formula (25) into the computation of the inner product terms $\langle a_{k}|x_i\rangle$. The swap test [@swaptest] estimates the square of an inner product as the expectation of an operator, but here the magnitude and the sign of these inner products are both required. Fortunately, the inner product itself can be estimated with $O(\log D)$ measurements [@Liu2018; @zhao2019]. The embedded low-dimensional vectors can thus be computed using resources scaling as $O(Md\log D)$.
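Classically, this inner-product form of the embedding is immediate; the sketch below applies formula (25) with illustrative stand-in columns $a_k$ (our own toy data, not the output of VQGE):

```python
import numpy as np

def embed(x, projection_cols):
    """Map x in R^D to y in R^d via y_k = <a_k, x>, as in formula (25).

    projection_cols play the role of the columns a_0..a_{d-1} of the
    projection matrix A; here they are illustrative stand-ins.
    """
    return np.array([np.vdot(a, x) for a in projection_cols])

# Project a 4-d vector onto a 2-d subspace spanned by two basis directions.
a0 = np.array([1.0, 0.0, 0.0, 0.0])
a1 = np.array([0.0, 1.0, 0.0, 0.0])
y = embed(np.array([3.0, -2.0, 5.0, 1.0]), [a0, a1])   # -> [3., -2.]
```

Each of the $d$ entries costs one inner product, which is where the $O(Md\log D)$ scaling for all $M$ points comes from when the inner products are estimated quantumly.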
In summary, our algorithm outputs the embedding vectors in quantum (classical) form, which can be directly applied in other quantum (classical) machine learning processes. Numerical simulations and performance analysis ---------------------------------------------- In this subsection, we present a numerical experiment to simulate the proposed VQGE. The source code and the selected parameters of our numerical experiment can be accessed from [@code]. For the implementation, we consider the following two $32\times32$ matrices (using $5$ qubits). *Example 1:* $$\begin{aligned} &\mathcal{G}_1=\mathds{1}+0.2\sigma_1^1\otimes\sigma_2^2+0.2\sigma_1^1,\\ &\mathcal{S}_1=\mathds{1}+0.0741\sigma_1^1\otimes\sigma_2^2+0.3939\sigma_1^1, \end{aligned}$$ which has four distinct generalized eigenvalues $\lambda_1=0.7577,\lambda_2=0.9537,\lambda_3=1.1278,\lambda_4=1.4702$. *Example 2:* $$\begin{aligned} &\mathcal{G}_2=\mathds{1}+0.2\sigma_1^1\otimes\sigma_3^2+0.2\sigma_1^1+0.2\sigma_3^1,\\ &\mathcal{S}_2=\mathds{1}+0.1741\sigma_1^1\otimes\sigma_3^2+0.2981\sigma_1^1, \end{aligned}$$ where $\sigma_i^j$ denotes the Pauli operator $\sigma_i$ acting on the $j$th subsystem. Example 2 gives a general case with $\mathcal{G}_2\mathcal{S}_2\neq\mathcal{S}_2\mathcal{G}_2$, which also has four distinct generalized eigenvalues $\lambda_1=0.7780,\lambda_2=0.7987,\lambda_3=1.2533,\lambda_4=1.2891$. Here, the parameterized state is generated via the unitary circuit $U(\vec{\theta})$: $$\begin{aligned} |\varphi(\vec\theta)\rangle&=U(\vec\theta)|0\rangle^{\otimes5}\\ &=R_y(\theta_5)R_y(\theta_4)R_y(\theta_3)R_y(\theta_2)R_y(\theta_1)|0\rangle^{\otimes5}\\ &=\bigotimes_{k=1}^{5}\Bigg(\cos\frac{\theta_k}{2}|0\rangle+\sin\frac{\theta_k}{2}|1\rangle\Bigg), \end{aligned}$$ where the $k$th rotation acts on the $k$th qubit. The vector $\vec\theta$ is defined as $\vec\theta=(\theta_1,\theta_2,\cdots,\theta_5)^{\dag}$ and the rotation operator is $R_y=e^{-i\theta Y/2}$. The experimental results of the VQGE implementation are shown in Fig. 1. As shown in Fig.
1(a), the expectation values attain a minimum, which yields the generalized ground state of a matrix pencil $(\mathcal{G},\mathcal{S})$. In Fig. 1(b,c,d), we plot the expectation values (cost function) of the updated Hamiltonian against the optimization step. The generalized ground state of the updated Hamiltonian is always nearly zero. In these cases, the generalized eigenvalues are reduced by the control parameter $\tau$. Finally, once all optimal parameters are determined, we obtain the generalized eigenvalues via the expectation values of the different Hamiltonians. ![The required resource complexity of quantum and classical methods.](SPEEDUP.pdf){width="3in"} Fig. 2 shows the required resources of quantum and classical methods. Classically, performing Theorem 2 takes time $O((K+D)^3M)$, and the generalized eigenvalue problem is of order $O(n^3)$ on classical computation devices [@Golub2012]. Since every step of qNPE has an exponential speedup, our qNPE clearly outperforms classical NPE. Quantum Local Discriminant Embedding ==================================== In this section, based on the variational quantum generalized eigenvalue solver (VQGE), we develop a quantum algorithm for pattern classification which preserves the local manifold. This algorithm is a quantum version of local discriminant embedding [@LDE2005] (qLDE). The task is to classify a high-dimensional vector into one class, given $M$ data points of the form $\{(x_i,y_i):x_i\in\mathcal{R}^{D},y_i\in\{1,2,\cdots,P\}\}_{i=0}^{M-1}$, where $y_i$ is the label of the class to which $x_i$ belongs. Fig. 3 shows the expected effect of local discriminant embedding. After finding an associated submanifold of each class, qLDE separates the embedded data points into a multi-class lower-dimensional Euclidean space. First of all, one needs to construct two neighborhood graphs: the intrinsic graph $G_{w}$ (within-class graph) and the penalty graph $G_{b}$ (between-class graph).
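Classically, the two graphs can be built directly from pairwise distances; the sketch below (plain numpy on synthetic two-class data) mirrors the construction that Theorem 1 accelerates on the quantum side:

```python
import numpy as np

def knn_graphs(X, labels, K, K_prime):
    """Adjacency of the within-class graph G_w (K same-label neighbors)
    and the between-class graph G_b (K' different-label neighbors)."""
    M = len(X)
    Gw = np.zeros((M, M), dtype=bool)
    Gb = np.zeros((M, M), dtype=bool)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    for i in range(M):
        same = np.where((labels == labels[i]) & (np.arange(M) != i))[0]
        diff = np.where(labels != labels[i])[0]
        for j in same[np.argsort(dist[i, same])][:K]:
            Gw[i, j] = Gw[j, i] = True      # edge if x_j in N_{w,i,K}
        for j in diff[np.argsort(dist[i, diff])][:K_prime]:
            Gb[i, j] = Gb[j, i] = True      # edge if x_j in N_{b,i,K'}
    return Gw, Gb

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(4, 1, (10, 3))])
labels = np.array([0] * 10 + [1] * 10)
Gw, Gb = knn_graphs(X, labels, K=3, K_prime=2)
```

By construction, $G_w$ never connects points with different labels, and $G_b$ never connects points with the same label.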
For each data point $x_i$, we define a subset $\mathcal{N}_{w,i,K}$ ($\mathcal{N}_{b,i,K^{'}}$) which contains the $K$ ($K^{'}$) neighbors having the same (a different) class label as $x_i$. For graph $G_{w}$, we consider each pair of $x_i$ and $x_j$ with $y_i=y_j$, and an edge is added between $x_i$ and $x_j$ if $x_j\in\mathcal{N}_{w,i,K}$. To construct $G_{b}$, likewise, we consider each pair of $x_i$ and $x_j$ with $y_i\neq y_j$, and an edge is added if $x_j\in\mathcal{N}_{b,i,K^{'}}$. Theorem 1 can help us finish the construction of $G_{w}$ and $G_{b}$ by finding the $K$ ($K^{'}$) neighbors. ![The expected effect of LDE [@Dornaika2013]. (a) The point $x_1$ has four neighbors. Points with the same color and shape belong to the same class. The within-class graph connects nearby points with the same label; the between-class graph connects nearby points with different labels. (b) After LDE, the local margins between different classes are maximized, and the distances between local homogeneous samples are minimized.](LDE.pdf){width="2.5in"} Next, we determine the weight matrix $W_{w(b)}=(W_{w(b),ij})$ of graph $G_w$ ($G_b$) by the following convex optimization formulation: $$\begin{aligned} &\min\quad \sum_i\|x_i-\sum_jW_{w(b),ij}x_j\|^2\\ &s.t.\quad \sum_{j}W_{w(b),ij}=1,\quad i=1,2,\cdots,M. \end{aligned}$$ Theorem 2 prepares two weight states $$|\psi_{W_w}\rangle=\sum_{i=0}^{M-1}|\omega_{wi}\rangle|i\rangle,\qquad|\psi_{W_b}\rangle=\sum_{i=0}^{M-1}|\omega_{bi}\rangle|i\rangle,$$ with error at most $\epsilon$ in runtime $$O\Bigg(\frac{\log^2(K(K^{'})+D)}{\epsilon^3}\sum_{i=0}^{M-1}\|A_i\|_{max}^2\Bigg).$$ The required gate resource count is $O(\log MK(K^{'}))$. We next turn to finding a transformation matrix $A$ that maximizes the local margins among different classes and pushes the homogeneous samples closer to each other [@Dornaika2013]. The overall process corresponds to the following optimization: $$\begin{aligned} &\min_{A}\frac{1}{2} \sum_{ij}\|A^\dag(x_i-x_j)\|^2W_{w(b),ij}.
\end{aligned}$$ After simple matrix algebra (see [@Dornaika2013] for details), the columns of the optimal $A$ are the generalized eigenvectors with the $l$ largest eigenvalues in $$\begin{aligned} T_b|a\rangle=\lambda T_w|a\rangle, \end{aligned}$$ where $T_{w(b)}=X(I_{w(b)}-W_{w(b)})X^{\dag}$, $X=(x_0,x_1,\cdots,x_{M-1})$ and $I_{w(b)}$ is a diagonal matrix with $I_{w(b),ii}=\sum_{j}W_{w(b),ij}$. Then, we apply Theorem 3 to obtain the $l$ generalized eigenvectors with the $l$ largest eigenvalues of (37). Once we have learned the projection matrix $A$ using qLDE, the quantum nearest-neighbor algorithm [@Wiebe2015] can be directly applied to multi-class classification tasks by computing the distance metrics between the test point $|y_{test}\rangle$ and the training points with known class labels. For example, given two clusters $\{U\}$ and $\{V\}$, if $$\begin{aligned} \min_{u\in \{U\}}\quad D(|y_{test}\rangle,|u\rangle)\leq\min_{v\in \{V\}}\quad D(|y_{test}\rangle,|v\rangle), \end{aligned}$$ then we assign $|y_{test}\rangle$ to cluster $\{U\}$, where $D$ denotes the trace distance. The classification achieves exponential resource reductions compared with classical methods [@Wiebe2015]. Conclusions and discussion ========================== In conclusion, this work presented qNPE and qLDE for dimensionality reduction and classification. Both of them preserve the local structure of the manifold space in the process of dimensionality reduction. We demonstrated that qNPE achieves an exponential advantage over the classical case, since every step of qNPE has an exponential speedup. The performance of qLDE on classification tasks is also competitive with its classical analogue. Along the way, we developed two useful subroutines in machine learning and scientific computation. The first one is quantum $K$ nearest neighborhood search, which finds the $K$ lowest values in an unordered set in $O(K\sqrt{N})$ time. It may help us sort an unordered list with an upper bound of $O(N\sqrt{N})$.
Another subroutine is a variational hybrid quantum-classical algorithm for solving the generalized eigenvalue problem. In electronic structure calculations, for instance, the electron density can be computed by obtaining the eigenpairs $(E_m,\Psi_m)$ of the Schrödinger-type eigenvalue problem $\mathcal{H}\Psi_m=E_m\mathcal{S}\Psi_m$ with different discrete energies $E_m$, where $\mathcal{H}$ denotes the Hamiltonian matrix and $\mathcal{S}$ is a symmetric positive-definite matrix [@Polizzi2009]. Our variational quantum generalized eigenvalue solver can obtain the eigenpairs $(E_m,\Psi_m)$ in runtime $O(1/\epsilon^2)$ with error $\epsilon$, independently of the size of the Hamiltonian. In addition, our VQGE can also determine the ground state and other excited states of an updated Hamiltonian. The presented method uses neither Hamiltonian simulation, nor amplitude amplification, nor phase estimation. We have performed numerical experiments solving generalized eigenvalue problems of size $2^5\times2^5$. Furthermore, our results provide two different output forms for further use: the output may be quantum or classical, depending on the computation of interest. The quantum form encodes the full information in a quantum state, while the classical form directly outputs a classical discrete vector by quantum techniques. These two output forms can be embedded into other large-scale quantum or classical machine learning algorithms. Although we have presented two algorithms for dimensionality reduction and classification, some questions still need further study.
For example, how to construct the Hamiltonian $XQX^{\dag}$ ($XX^{\dag}$) from the entanglement state $$|\psi_W\rangle=\sum_{i=0}^{M-1}|\omega_i\rangle|i\rangle.$$ Finally, as with the application of artificial neural networks to the quantum many-body problem [@Carleo2017], it would be interesting to investigate whether our algorithms can also reduce the exponential complexity of the many-body wave function down to a tractable computational form. *Acknowledgments* The authors thank the anonymous referees and editor for useful feedback on the manuscript. This work is supported by NSFC (11775306) and the Fundamental Research Funds for the Central Universities (18CX02035A, 18CX02023A, 19CX02050A). The derivation of Eq. (16) ========================== In Appendix A, we give a detailed derivation of (16). The method can be found in the paper [@NPE2005]. The cost function is $$\begin{aligned} \Phi(y)=\sum_{i}\Bigg(y_i-\sum_j\omega_j^iy_j\Bigg)^2. \end{aligned}$$ The fixed weights $\omega_j^i$ characterize the intrinsic geometric properties of each neighborhood. Each high-dimensional data point $x_i\in\mathcal{R}^D$ is mapped to a low-dimensional data point $y_i\in\mathcal{R}^d$. The weight matrix $W=(\omega_j^i)_{i,j=0}^{M-1}$ is an $M\times M$ sparse matrix. Suppose the transformation is linear ($y^{\dag}=a^{\dag}X$), where the $i$-th column vector of $X$ is $x_i$. We define $$\begin{aligned} z_i=y_i-\sum_j\omega_j^iy_j, \end{aligned}$$ which can be rewritten in vector form as $$\begin{aligned} z=y-Wy=(I-W)y. \end{aligned}$$ Thus, (A1) becomes $$\begin{aligned} \Phi(y)&=z^{\dag}z=y^{\dag}(I-W^{\dag})(I-W)y\\ &=a^{\dag}X(I-W^{\dag})(I-W)X^{\dag}a=a^{\dag}XQX^{\dag}a. \end{aligned}$$ Obviously, the matrix $XQX^{\dag}$ is symmetric and positive semi-definite. To remove an arbitrary scaling factor, we impose the constraint $$\begin{aligned} y^{\dag}y=1\Rightarrow a^{\dag}XX^{\dag}a=1.
\end{aligned}$$ Finally, the minimization problem reduces to $$\begin{aligned} &\arg\min_a\quad a^{\dag}XQX^{\dag}a\\ &\emph{s.t.}\quad a^{\dag}XX^{\dag}a=1. \end{aligned}$$ Using Lagrange multipliers and setting the derivative to zero, we obtain the transformation vector $a$ from the following generalized eigenvalue problem: $$\begin{aligned} XQX^{\dag}a=\lambda XX^{\dag}a. \end{aligned}$$ [99]{} A. Sarveniazi, [Am. J. Comput. Math, **4**, 55 (2014)](http://dx.doi.org/10.4236/ajcm.2014.42006). C. O. S. Sorzano, J. Vargas, and A. P. Montano, [arXiv:1403.2877](http://arxiv.org/abs/arXiv:1403.2877). H. Hoffmann, S. Schaal, and S. Vijayakumar, [Neural Process Lett. **29**, 109 (2009)](https://doi.org/10.1007/s11063-009-9098-0). M. Vlachos, C. Domeniconi, D. Gunopulos, G. Kollios, and N. Koudas, in *Proc. ACM Int. Conf. Knowl. Discovery Data Mining*, [**645** (2002)](https://doi.org/10.1145/775047.775143). B. Chizi and O. Maimon, in *Data mining and knowledge discovery handbook*, [**83** (2010)](https://doi.org/10.1007/978-0-387-09823-4_5). K. Pearson, *The London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, Sixth Series*, [**2**, 559 (1901)](https://doi.org/10.1080/14786440109462720). K. Fukunaga, *Introduction to Statistical Pattern Recognition* (Academic Press, New York, 1972). L. Cayton, *Univ. of California at San Diego Tech. Rep*, **12**, 1 (2005). A. J. Izenman, [Comput. Stat. **4**, 439 (2012)](https://doi.org/10.1002/wics.1222). M. Belkin and P. Niyogi, in *Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic* (MIT Press, Cambridge, 2001), Vol. 14, pp. 585-591. S. T. Roweis and L. K. Saul, [Science **290**, 2323 (2000)](https://doi.org/10.1126/science.290.5500.2323). J. B. Tenenbaum, V. de Silva, and J. C. Langford, [Science **290**, 2319 (2000)](http://dx.doi.org/10.1126/science.290.5500.2319). A. Hadid and M. Pietikainen.
*European Workshop on Biometrics and Identity Management*. Springer, Berlin, Heidelberg, 2009. P. Shor, in *Symposium on Foundations of Computer Science* (IEEE, Piscataway, NJ, 1994), pp. 124-134. L. K. Grover, [Phys. Rev. Lett. **79**, 325 (1997)](https://doi.org/10.1103/PhysRevLett.79.325). A. W. Harrow, A. Hassidim, and S. Lloyd, [Phys. Rev. Lett. **103**, 150502 (2009)](https://doi.org/10.1103/PhysRevLett.103.150502). J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, [Nature **549**, 195 (2017)](https://doi.org/10.1038/nature23474). P. Rebentrost, M. Mohseni, and S. Lloyd, [Phys. Rev. Lett. **113**, 130503 (2014)](https://doi.org/10.1103/PhysRevLett.113.130503). N. Wiebe, A. Kapoor, and K. M. Svore, [Quantum Inf. Comput. **15**, 316 (2015)](https://www.microsoft.com/en-us/research/publication/quantum-nearest-neighbor-algorithms-for-machine-learning). M. Schuld, I. Sinayskiy, and F. Petruccione, [Phys. Rev. A **94**, 022342 (2016)](https://doi.org/10.1103/PhysRevA.94.022342). G. Wang, [Phys. Rev. A **96**, 012335 (2017)](https://doi.org/10.1103/PhysRevA.96.012335). E. Aimeur, G. Brassard, and S. Gambs, [Mach. Learn. **90**, 261 (2013)](https://doi.org/10.1007/s10994-012-5316-5). J. Preskill, [Quantum **2**, 79 (2018)](https://doi.org/10.22331/q-2018-08-06-79). R. LaRose, A. Tikku, É. O'Neel-Judy, L. Cincio, and P. J. Coles, [npj Quantum Inform. **5**, 57 (2019)](https://doi.org/10.1038/s41534-019-0167-6). A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, [Nat. Commun. **5**, 4213 (2014)](https://doi.org/10.1038/ncomms5213). O. Higgott, D. Wang, and S. Brierley, [Quantum **3**, 156 (2019)](https://doi.org/10.22331/q-2019-07-01-156). T. Jones, S. Endo, S. McArdle, X. Yuan, and S. C. Benjamin, [Phys. Rev. A **99**, 062304 (2019)](https://doi.org/10.1103/PhysRevA.99.062304). E. Farhi, J. Goldstone, and S. Gutmann, [arXiv:1411.4028 (2014)](https://arxiv.org/abs/1411.4028). M.
Lubasch, J. Joo, P. Moinier, M. Kiffner, D. Jaksch, [arXiv:1907.09032 (2019)](https://arxiv.org/abs/1907.09032). X. Xu, J. Sun, S. Endo, Y. Li, S. C. Benjamin, and X. Yuan, [arXiv:1909.03898 (2019)](https://arxiv.org/abs/1909.03898). D. An and L. Lin, [arXiv:1909.05500 (2019)](https://arxiv.org/abs/1909.05500). H.-Y. Huang, K. Bharti, P. Rebentrost, [arXiv:1909.07344 (2019)](https://arxiv.org/abs/1909.07344). C. Bravo-Prieto, R. LaRose, M. Cerezo, Y. Subasi, L. Cincio, and P. J. Coles, [arXiv:1909.05820 (2019)](https://arxiv.org/abs/1909.05820). S. Lloyd, M. Mohseni, and P. Rebentrost, [Nat. Phys. **10**, 631 (2014)](https://doi.org/10.1038/nphys3029). I. Cong and L. Duan, [New J. Phys. **18**, 073011 (2016)](http://dx.doi.org/10.1088/1367-2630/18/7/073011). M. A. Nielsen, I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge Univ. Press, 2000). H.-T. Chen, H.-W. Chang, and T.-L. Liu, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2005). X. He, D. Cai, S. Yan, and H. J. Zhang, in *Proceedings of the Tenth IEEE International Conference on Computer Vision*, pp. 1208-1213, 2005. M. Schuld, I. Sinayskiy, and F. Petruccione, [Phys. Rev. A **94**, 022342 (2016)](https://doi.org/10.1103/PhysRevA.94.022342). C. Durr and P. Hoyer, [arXiv:quant-ph/9607014](http://arxiv.org/abs/arXiv:quant-ph/9607014). H. Buhrman, R. Cleve, J. Watrous, and R. de Wolf, [Phys. Rev. Lett. **87**, 167902 (2001)](https://doi.org/10.1103/PhysRevLett.87.167902). C. Durr, M. Heiligman, P. Hoyer, and M. Mhalla, [SIAM J. Comput. **35**, 1310 (2006)](https://doi.org/10.1137/050644719). K. Miyamoto, M. Iwamura, and K. Kise, [arXiv:1907.03315 (2019)](http://arxiv.org/abs/arXiv:1907.03315). The set $\mathcal{N}_i$ contains $K$ vector states which also reside in the $M$ quantum states set. P. Rebentrost, A. Steffens, I. Marvian, and S. Lloyd, [Phys. Rev. A **97**, 012327 (2018)](https://doi.org/10.1103/PhysRevA.97.012327).
Here, we decompose the covariance matrix $G_i$ via the singular value decomposition of the matrix $A_i$. In this situation, one can perform QSVD of $A_i$ to obtain the corresponding singular values. Alternatively, one can directly apply quantum phase estimation to $G_i$. However, an extra computational expense for the elements of $G_i$ is then paid via the matrix multiplication algorithm [@MM1990] on a classical computer. Thus, our decomposition of the covariance matrix $G_i$ is a computationally friendly scheme. V. Giovannetti, S. Lloyd, and L. Maccone, [Phys. Rev. Lett. **100**, 160501 (2008)](https://doi.org/10.1103/PhysRevLett.100.160501). D. W. Berry, G. Ahokas, R. Cleve, and B. C. Sanders, [Commun. Math. Phys. **270**, 359 (2007)](https://doi.org/10.1007/s00220-006-0150-x). A. M. Childs and R. Kothari, in *Theory of Quantum Computation, Communication, and Cryptography: 5th Conference, TQC 2010, Leeds, UK, April 13-15, 2010, Revised Selected Papers,* edited by W. van Dam, V. M. Kendon, and S. Severini (Springer, Berlin, 2011), pp. 94-103. G. Ortiz, J. E. Gubernatis, E. Knill, and R. Laflamme, [Phys. Rev. A **64**, 022319 (2001)](https://doi.org/10.1103/PhysRevA.64.022319). J. Romero, R. Babbush, J. R. McClean, C. Hempel, P. J. Love, and A. Aspuru-Guzik, [Quantum Sci. Technol. **4**, 014008 (2019)](https://doi.org/10.1088/2058-9565/aad3e4). A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, [Nature **549**, 242 (2017)](https://doi.org/10.1038/nature23879). Y. Du, M.-H. Hsieh, T. Liu, and D. Tao, <arXiv:1810.11922>. K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, [Phys. Rev. A **98**, 032309 (2018)](https://doi.org/10.1103/PhysRevA.98.032309). S. Lloyd and C. Weedbrook, [Phys. Rev. Lett. **121**, 040502 (2018)](https://doi.org/10.1103/PhysRevLett.121.040502). P.-L. Dallaire-Demers and N. Killoran, [Phys. Rev. A **98**, 012324 (2018)](https://doi.org/10.1103/PhysRevA.98.012324). J.-G. Liu and L. Wang, [Phys. Rev.
A **98**, 062324 (2018)](https://doi.org/10.1103/PhysRevA.98.062324). B. N. Parlett, *The Symmetric Eigenvalue Problem* (Society for Industrial and Applied Mathematics, Philadelphia, 1998). Y. Shen, X. Zhang, S. Zhang, J.-N. Zhang, M.-H. Yung, and K. Kim, [Phys. Rev. A **95**, 020501 (2017)](https://doi.org/10.1103/PhysRevA.95.020501). L.-W. Wang and A. Zunger, [J. Chem. Phys. **100**, 2394 (1994)](https://doi.org/10.1063/1.466486). D. Coppersmith and S. Winograd, [J. Symb. Comput. **9**, 251 (1990)](http://dx.doi.org/10.1016/s0747-7171(08)80013-2). N. Liu and P. Rebentrost, [Phys. Rev. A **97**, 042315 (2018)](https://doi.org/10.1103/PhysRevA.97.042315). Z. Zhao, J. K. Fitzsimons, and J. F. Fitzsimons, [Phys. Rev. A **99**, 052331 (2019)](https://doi.org/10.1103/PhysRevA.99.052331). VQGE source code. <https://github.com/jmliang24/VQGE-source-code-EXAMPLE1.git>. G. H. Golub and C. F. Van Loan, *Matrix computations,* Vol. 3 (JHU Press, 2012). F. Dornaika and A. Bosaghzadeh, [IEEE T. Cybern. **43**, 921 (2013)](https://doi.org/10.1109/tsmcb.2012.2218234). E. Polizzi, [Phys. Rev. B **79**, 115112 (2009)](https://doi.org/10.1103/PhysRevB.79.115112). G. Carleo and M. Troyer, [Science **355**, 602 (2017)](https://doi.org/10.1126/science.aag2302).
--- abstract: | Performance of machine learning approaches depends strongly on the choice of misfit penalty, and on the correct choice of penalty parameters, such as the threshold of the Huber function. These parameters are typically chosen using expert knowledge, cross-validation, or black-box optimization, which are time consuming for large-scale applications. We present a principled, data-driven approach to simultaneously learn the model parameters and the misfit penalty parameters. We discuss theoretical properties of these joint [inference]{} problems, and [develop]{} algorithms for their solution. We show synthetic examples of automatic parameter tuning for piecewise linear-quadratic (PLQ) penalties, and use the approach to develop a self-tuning [robust PCA]{} formulation for background separation. author: - | Peng Zheng zhengp@uw.edu\ Department of Applied Mathematics\ University of Washington\ Seattle, WA 98195-4322, USA Aleksandr Y. Aravkin saravkin@uw.edu\ Department of Applied Mathematics\ University of Washington\ Seattle, WA 98195-4322, USA Karthikeyan Natesan Ramamurthy knatesa@us.ibm.com\ IBM T.J. Watson Research Center\ Yorktown Heights, NY USA bibliography: - 'nips.bib' title: Shape Parameter Estimation --- Introduction ============ ![image](lr_qh-eps-converted-to.pdf){width="6cm"} When designing machine learning formulations, the choice of penalty plays a key role in the accuracy of the inferred model and the robustness of the learning procedure. Consider Figure \[1D example\], where data from a simple linear regression have been contaminated with asymmetric outliers. The data generating mechanism is shown in black. The linear regression model for the data $\{y_i, a_i\}$ is given by $$\label{eq:LG} y_i = {\left\langle a_i,x \right\rangle} + \epsilon_i,$$ with $\epsilon_i$ assumed i.i.d. Gaussian variables.
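A minimal synthetic stand-in for the setting of Figure \[1D example\] can be sketched as follows (the contamination mechanism and the hand-fixed slope $\tau$ below are illustrative assumptions; the point of the paper is that $\tau$ can be learned from data rather than set by hand):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, x_true = 200, 2.0
a = rng.uniform(0, 2, m)
# Asymmetric errors: small Gaussian noise plus one-sided (positive) outliers
eps = rng.normal(0, 0.1, m) + rng.exponential(2.0, m) * (rng.random(m) < 0.3)
y = a * x_true + eps

# Least squares (Gaussian MLE) is pulled away by the one-sided outliers
x_ls = (a @ y) / (a @ a)

# Quantile (asymmetric 1-norm) fit, slope convention of Figure [fig:Qhub]:
# slope -tau on negative residuals, 1-tau on positive residuals.
tau = 0.65   # fixed by hand to roughly match the contamination level
def qloss(x):
    r = y - a * x
    return np.sum(np.where(r >= 0, (1 - tau) * r, -tau * r))
x_q = minimize(qloss, x0=[1.0], method="Nelder-Mead").x[0]

assert abs(x_q - x_true) < abs(x_ls - x_true)
```

The asymmetric penalty discounts the positive residuals produced by the outliers, so its fit tracks the generating mechanism far more closely than least squares.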
The maximum likelihood formulation is equivalent to the least-squares problem $$\min_x \frac{1}{2}\|Ax - y\|^2.$$ This assumption is violated in Figure \[1D example\]; the data are corrupted with asymmetric errors, and contain outliers. The least squares fit, shown in blue dash, fails to detect the true data generating mechanism. To learn effectively in these cases, we consider a parameterized family of penalties $\rho(x;\theta)$, where $x$ are model parameters and $\theta$ controls the shape of the penalty. The family is rich enough to allow the kinds of errors in Figure \[1D example\], and we learn $x$ and $\theta$ simultaneously using an extended statistical model. Two immediate examples of $\theta$ are the [robustness threshold]{} $\kappa$ in the Huber penalty, and the slope $\tau$ in the asymmetric quantile penalty, see Figure \[fig:Qhub\]. Selecting the appropriate $\theta$ is important. The quantile Huber case was considered by [@ramamurthy2015automatic]. [Figure \[fig:Qhub\]: (a) the Huber penalty with threshold $\kappa$; (b) the quantile penalty with slopes $-\tau$ and $1-\tau$.]{} For example, the fit with the correctly set quantile penalty is shown in red dash in Figure \[1D example\]. The value of $\tau$ was obtained automatically from the data using a statistical model detailed in Section \[sec:model\], and did not require cross-validation. The main focus of this paper is data-driven approaches for simultaneously selecting $\theta$ and solving for $x$, without cross-validation or prior/additional information. Related work ------------ Meta-parameters are classically estimated using cross-validation or grid search. These methods typically require multiple solutions of any given learning problem, where a held-out dataset is used to evaluate each configuration.
More recently, Bayesian optimization [@snoek; @hutter; @bergstra; @fastBO] and random search [@randSearch; @randSA] have come to the forefront as two leading techniques that can be used to obtain meta-parameters in a very wide range of contexts. All of these techniques can also be used for the problem class we consider. However, applying these approaches is always more computationally expensive than solving a single instance of a learning problem; both random search and Bayesian optimization require many instance evaluations. In contrast, for the narrower context of shape parameter estimation, we solve a [*single*]{} extended problem to simultaneously fit $x$ and $\theta$. The most relevant works related to this paper focus on the relation between the quantile penalty and the asymmetric Laplace distribution (ALD) [@yu2001bayesian; @Tu2017; @bera2016asymmetric]. [@bera2016asymmetric] jointly estimate the model and the shape parameters for the quantile penalty, and [@Tu2017] infer the joint posterior distribution of these parameters. Contributions ------------- We develop a maximum-likelihood approach to simultaneously learn both the model and shape parameters for a broad class of penalties, of which the quantile is one example. The likelihood is obtained by interpreting each penalty as a statistical density, with normalization constant depending on the shape parameters $\theta$. The modeling innovation is to systematically incorporate the log of the normalization constant into the joint inference problem: $$\min_{x,\theta\in\mathcal{D}} \rho(x;\theta) + g(x) + l(\theta). \label{eq:mainFormulation}$$ Here, $g(x)$ is any regularization term on $x$, while $l(\theta)$ is the log of the normalization constant that arises from the statistical model, and ensures the model remains statistically valid as $\rho$ is adapted. Our second contribution is algorithmic. We consider first-order schemes, and show how to apply the PALM [@bolte2014proximal] algorithm to problem .
The PALM algorithm is limited to penalties $\rho$ that are smooth in $(x,\theta)$, and so we design a new second-order interior point algorithm for problems with non-smooth coupling. The approach and algorithms are illustrated using synthetic and real data. Roadmap ------- The paper proceeds as follows. In Section \[sec:model\] we derive the maximum likelihood model for joint inference in $x$ and $\theta$ and characterize theoretical properties of the resulting objectives from an optimization perspective. In Section \[sec:algo\], we consider first- and second-order algorithms for the structured but generally nonconvex and nonsmooth objective . Section \[sec:synthetic\] illustrates the convergence rates of the methods, as well as the behavior of the shape-tuned estimates, using synthetic data. In Section \[sec:real\], we develop self-tuning RPCA approaches, and apply them to real data. Statistical Model and Properties of Joint Objective {#sec:model} =================================================== Penalties in learning formulations carry underlying statistical assumptions. In this section we first review the relationship between penalties and the corresponding residual distributions. We then use this relationship to develop a [joint]{} maximum likelihood approach for [model and]{} shape parameter inference, and characterize properties of the resulting objective function. Statistical view ---------------- Recall the quantile penalty in Figure \[fig:Qhub\]. If we choose $\tau$ close to 1, we penalize the negative errors much more than the positive ones. Equivalently, we assume that the distribution of the errors $\epsilon_i$ is biased towards positive errors. The relationship between penalties and associated densities can be made precise. Given a penalty $\rho(r; \theta)$, we assume the errors $\epsilon_i$ are i.i.d.
samples from the distribution with density $$\label{eq:penalty-density} p(r;\theta) = \frac{1}{n_c(\theta)}\exp[-\rho(r;\theta)],\quad \text{where} \quad n_c(\theta) = \int_\mathbb{R} \exp[-\rho(r;\theta)]\,dr.$$ The term $n_c(\theta)$ is a normalization constant that ensures that $\rho(r,\theta)$ can be interpreted as a density as in . We can now formulate the [*joint*]{} maximum likelihood problem in $(x,\theta)$, or equivalently minimize its negative log: $$\label{eq:obj} \min_{x,\theta\in\mathcal{D}} \sum_{i=1}^m \rho(y_i - {\left\langle a_i,x \right\rangle};\theta) + g(x) + m\log[n_c(\theta)].$$ The parameter $\theta$ may be restricted to a domain $\mathcal{D}$; for example, the slope parameter $\tau$ must be between $0$ and $1$ (see Figure \[fig:Qhub\]). The term $g(x)$ is an optional regularization function, e.g. $\lambda \|x\|_1$ or the indicator of $x \in \mathcal{C}$ for some set $\mathcal{C}$. The objective in the quantile example used to obtain the penalty-tuned fit in Figure \[1D example\] is given by $$\label{eq:quantile} \min_{x, \tau \in [0,1]} q_{\tau} (Ax-y) + m\log\left(\frac{1}{\tau} + \frac{1}{1-\tau}\right),$$ with $q_\tau$ the asymmetric 1-norm, and $m$ the length of the residual. In this special case, $\log(n_c)$ is available in closed form, is smooth in the interior of its domain, and acts as a barrier function for the interval $[0,1]$ that favors $\tau = 0.5$. It is also a strongly convex function, but has [*no global quadratic upper bound*]{}, violating a key assumption often required by optimization algorithms. In the remainder of this section, we characterize theoretical properties of the objective . Theoretical properties ---------------------- Smoothness, convexity, and quadratic upper bounds are at the center of algorithm design, and understanding these properties guides the choice of algorithms for . \[asp:smoothness\] To ensure the validity of the statistical viewpoint, we require $\rho$ to satisfy: 1.
$\rho(r;\theta)\ge0$, for every $\theta\in{\mathcal{D}}$ and $r\in{\mathbb{R}}$ ([**non-negativity**]{}) 2. For any $\theta\in\mathcal{D}$, $n_c(\theta) = \int_\mathbb{R} \exp[-\rho(r;\theta)]\,dr < \infty$ ([**integrability**]{}) 3. For any $\theta_0\in\mathcal{D}$, $\rho(r;\theta)$ is $C^2$ around $\theta_0$ for almost every $r\in\mathbb{R}$ ([**smoothness in $\theta$**]{}) Under these assumptions, we can obtain formulas for the first and second derivatives of $n_c(\theta)$. \[th:smoothness\] For $n_c(\theta)$ in , suppose Assumption \[asp:smoothness\] holds and that for $\theta_0\in\mathcal{D}$ there exist functions $g_k(r)$, $k=1,2$, such that 1. for any unit vector $v$, $|{\left\langle \nabla_\theta\exp[-\rho(r;\theta)],v \right\rangle}|\le g_1(r)$ for any $\theta$ around $\theta_0$, 2. for any unit vector $v$, $\left|{\left\langle \nabla_\theta^2\exp[-\rho(r;\theta)]v,v \right\rangle}\right|\le g_2(r)$ for any $\theta$ around $\theta_0$, 3. $\int_\mathbb{R} g_k(r)\,dr < \infty$, $k=1,2$. Then $n_c(\theta)$ is $C^2$ around $\theta_0$ and $$\label{eq:nc_form} \nabla n_c(\theta_0) = \int_\mathbb{R} \nabla_\theta\exp[-\rho(r;\theta_0)]\,dr,\quad\nabla^2 n_c(\theta_0) = \int_\mathbb{R}\nabla_\theta^2\exp[-\rho(r;\theta_0)]\,dr.$$ The proof is straightforward, and included in the supplementary materials. The derivative formulas  are used for first- and second-order methods to infer $x$ and $\theta$. The parametrization conditions in $\theta$ are satisfied by all commonly used piecewise linear-quadratic (PLQ) examples [@JMLR:v14:aravkin13a], including the Huber and quantile penalties in Figure \[fig:Qhub\]. The theorem applies more generally to densities that are not log-concave. For example, the Student’s $t$ density and associated penalty satisfy Assumption \[asp:smoothness\] and the other assumptions of Theorem \[th:smoothness\] for $\nu > 1$. In the quantile case , the term $\log[n_c(\theta)]$ is convex.
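For the quantile penalty, both the closed form of $n_c$ and the convexity of $\log n_c$ can be checked numerically; the sketch below uses the slope convention of Figure \[fig:Qhub\] ($-\tau$ on the left, $1-\tau$ on the right):

```python
import numpy as np
from scipy.integrate import quad

def rho_quantile(r, tau):
    # slopes -tau (left) and 1 - tau (right)
    return (1 - tau) * r if r >= 0 else -tau * r

def n_c(tau):
    # normalization constant n_c(tau) = int exp(-rho(r; tau)) dr
    return quad(lambda r: np.exp(-rho_quantile(r, tau)), -np.inf, np.inf)[0]

taus = np.linspace(0.05, 0.95, 19)
vals = np.array([n_c(t) for t in taus])
# closed form: 1/tau + 1/(1 - tau)
assert np.allclose(vals, 1 / taus + 1 / (1 - taus), rtol=1e-6)

# log n_c = -log(tau) - log(1 - tau) is convex: second differences >= 0
log_nc = np.log(vals)
assert (np.diff(log_nc, 2) >= 0).all()
```

Note that $1/\tau + 1/(1-\tau) = 1/(\tau(1-\tau))$, so $\log n_c = -\log\tau - \log(1-\tau)$, a sum of two convex barrier terms for the endpoints of $[0,1]$.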
We characterize sufficient conditions for convexity of $\log[n_c(\theta)]$ for a general class of penalties $\rho$. \[th:convexity\] Consider the same definition of $n_c(\theta)$ as in Theorem \[th:smoothness\], and suppose Assumption \[asp:smoothness\] holds. We have the following results: 1. If $\rho(r;\theta)$ is jointly convex in $r$ and $\theta$, then $\log[n_c(\theta)]$ is a concave function of $\theta$. 2. If $\rho(r;\theta)$ is concave with respect to $\theta$ for every $r$, then $\log[n_c(\theta)]$ is a convex function. This result follows from [@boyd2004convex Chapter 3.5]. Theorems \[th:smoothness\] and \[th:convexity\] tell an interesting story. The log-normalization constant $\log[n_c(\theta)]$ is nearly always smooth, even when the loss $\rho$ is nonsmooth in $x$. The inference problem  is [*never jointly convex*]{} in $(x,\theta)$; in particular, looking for a jointly convex formulation $\rho(x;\theta)$ guarantees that $\log[n_c(\theta)]$ will be [*concave*]{}. This is intuitive, as we are attempting to learn both the model and the error structure at the same time. Objective  is, in general, nonsmooth and nonconvex, but it has a fairly simple structure that is exploited to design first- and second-order methods in the next section. [ To understand how non-convex the problem is, we apply partial minimization, and consider the function $$\varrho(\theta) = \min_{x} \sum_{i=1}^m \rho(y_i - {\left\langle a_i,x \right\rangle};\theta) + m\log[n_c(\theta)].$$ This is the [*value function*]{} of the shape parameters, after $x$ has been minimized. For simple examples where $\theta$ has dimension 1 or 2, we can plot either the graph or the level sets of this function.
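For a location-only simplification of the regression model (a hedged sketch: the scalar $x$ below stands in for the regression coefficients, and the quantile penalty replaces the quantile Huber), the value function $\varrho(\tau)$ can be tabulated on a grid and minimized directly:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
tau_true, m = 0.2, 1000
# Sample from p(r) ∝ exp(-rho_tau(r)): exponential tails with
# scale 1/(1-tau) on the right, 1/tau on the left; P(r > 0) = tau.
right = rng.random(m) < tau_true
r = np.where(right, rng.exponential(1 / (1 - tau_true), m),
             -rng.exponential(1 / tau_true, m))

def rho_sum(x, tau):
    res = r - x                        # residuals of a location-only model
    return np.sum(np.where(res >= 0, (1 - tau) * res, -tau * res))

def value_fn(tau):
    # varrho(tau): partial minimization over x, plus m * log n_c(tau)
    fx = minimize_scalar(lambda x: rho_sum(x, tau)).fun
    return fx + m * np.log(1 / tau + 1 / (1 - tau))

grid = np.linspace(0.05, 0.95, 91)
tau_hat = grid[np.argmin([value_fn(t) for t in grid])]
assert abs(tau_hat - tau_true) < 0.1
```

Each grid evaluation solves a convex inner problem in $x$, which is what makes the one-dimensional value function cheap to trace out.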
We generate the samples $\epsilon_i$ from the distribution defined by the quantile Huber function with $\kappa = 1$ and $\tau = 0.05$, and plot $\varrho(\theta)$ in Figure \[fig:levelset\].]{} ![\[fig:levelset\] Left panel: graph of the value function $\varrho(\tau)$. Right panel: level sets of the quantile Huber (QH) value function $\varrho(\tau, \kappa)$. Blue dots show optimal parameter estimates, while red dots show true parameters.](qlevelset-eps-converted-to.pdf "fig:"){width="7cm"} ![\[fig:levelset\] Left panel: graph of the value function $\varrho(\tau)$. Right panel: level sets of the quantile Huber (QH) value function $\varrho(\tau, \kappa)$. Blue dots show optimal parameter estimates, while red dots show true parameters.](qHlevelset-eps-converted-to.pdf "fig:"){width="7cm"} \(a) Graph of the quantile value function $\varrho(\tau)$ \(b) Level sets of the QH value function $\varrho(\tau, \kappa)$.
From Figure \[fig:levelset\] (a), we can see that for the quantile penalty the value function $\varrho$ appears to be quasi-convex for this example, so we can expect to find the unique global minimum in $x$ and $\tau$, since computing the partial minimum in $x$ requires solving a convex problem. When $\theta$ comprises both $\tau$ and $\kappa$ in (b), the joint objective is clearly nonconvex and may be more challenging for a local search (the level sets are stretched and bent). Nonetheless, there is a unique global minimum that is close to the true parameters, and this minimum was found by a local search. First- and Second-Order Algorithms. {#sec:algo} =================================== In this section, we consider first- and second-order methods for problems of type . When $\rho$ is smooth in $x$ and $\theta$, we show how to apply the Proximal Alternating Linearized Minimization (PALM) algorithm [@bolte2014proximal]. The development is straightforward, but the log-normalization constant $\log[n_c(\theta)]$ must be treated carefully, as its gradient does not have a global Lipschitz constant. Requiring smoothness in $\rho$ is restrictive, and in particular eliminates the quantile example . The quantile penalty is not smooth, but it is piecewise linear.
Interior point methods have been shown to be effective for convex problems of moderate scale (thousands of variables and data points) where $\rho$ and $g$ are nonsmooth piecewise linear-quadratic penalties [@JMLR:v14:aravkin13a]. Examples include symmetric penalties (1-norm, quadratic) as well as asymmetric penalties, such as quantile and quantile Huber [@aravkin2014orthogonal]. All of these penalties are shown in detail in the supplementary materials. Our main algorithmic contribution is to extend this approach to the joint nonconvex inference problem . PALM for inference and shape estimation {#sec:PALM} --------------------------------------- The PALM algorithm [@bolte2014proximal] can be used to minimize any problem of form $$\min_{x,\theta} H(x,\theta) + r_1(x) + r_2(\theta),$$ where $H$ is $C^1$, with globally Lipschitz partial gradients, while the functions $r_1$ and $r_2$ are required to be only proper lower semicontinuous (in particular not necessarily convex, finite valued, or smooth). Even though $\log[n_c(\theta)]$ is smooth (see Theorem \[th:smoothness\]), it must be relegated to $r_2(\theta)$, since otherwise it can easily violate the Lipschitz assumptions on $H$. Therefore, to apply PALM to , we take $$\label{eq:palm_detail} H(x,\theta) = \sum_{i=1}^m \rho(y_i - {\left\langle a_i,x \right\rangle};\theta), \quad r_1(x) = g(x), \quad r_2(\theta) = \delta_\mathcal{D}(\theta) + m\log[n_c(\theta)].$$ Here $\delta_{\mathcal{D}}$ is the indicator function for the set $\mathcal{D}$, ensuring $\theta \in \mathcal{D}$, and $g$ is any ‘prox-friendly’ regularizer for $x$. The PALM algorithm is detailed in Algorithm \[alg:PALM\]. The steps $c_k$ and $d_k$ are obtained from Lipschitz constants of the (partial) gradients of $H$. 
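The alternating prox-gradient scheme just described can be sketched on a deliberately simple toy instance (our own illustrative choices, not the paper's experimental setup): $H(x,\theta)=\tfrac12(x-\theta)^2$, $r_1(x)=\lambda|x|$, and $r_2(\theta)=\delta_{[0,\infty)}(\theta)$; both partial gradients of $H$ are 1-Lipschitz, so constant steps suffice:

```python
import math

def soft_threshold(v, t):
    # prox of t * |.|
    return math.copysign(max(abs(v) - t, 0.0), v)

def palm(x, theta, lam=0.1, iters=500):
    # Toy instance: H(x, theta) = 0.5 * (x - theta)^2, r1(x) = lam * |x|,
    # r2(theta) = indicator(theta >= 0).  Both partial gradients of H are
    # 1-Lipschitz, so constant steps c_k = d_k = 1.1 > 1 are valid.
    c = d = 1.1
    for _ in range(iters):
        # x-step: prox of (1/c) r1 at a partial gradient step on H(., theta)
        x = soft_threshold(x - (x - theta) / c, lam / c)
        # theta-step: prox of (1/d) r2 is projection onto [0, inf)
        theta = max(theta - (theta - x) / d, 0.0)
    return x, theta

x, theta = palm(2.0, 1.0)   # the minimizer of this toy problem is (0, 0)
```

In the actual shape-inference problem the $\theta$-step prox has no closed form and is computed by an inner Newton solve, as detailed next.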
**Initialize**: $x^0$, $\theta^0$ [$x^{k+1}$ $\gets$ ${\text{prox}}_{\frac{1}{c_k}r_1}\left(x^k - \frac{1}{c_k}\nabla_x H(x^k,\theta^k)\right)$]{} [$\theta^{k+1}$ $\gets$ ${\text{prox}}_{\frac{1}{d_k}r_2}\left(\theta^k - \frac{1}{d_k}\nabla_\theta H(x^{k+1},\theta^k)\right)$]{} [**Detail:**]{} The prox operator of $\log[n_c(\theta)]$ is not available in closed form for any of the examples of interest. However, it can be efficiently computed using the results of Theorem \[th:smoothness\]: $$\label{eq:prox_compute} {\text{prox}}_{\frac{1}{d_k}r_2}(\phi) = \arg\min_{\theta\in \mathcal{D}} \frac{1}{2d_k}\|\theta - \phi\|^2 + \log[n_c(\theta)].$$ In all examples of interest, $\theta$ is low-dimensional, so we compute  using Newton’s method or an interior point method (when $\mathcal{D}$ must be accounted for). This requires $\nabla \log[n_c(\theta)]$ and $\nabla^2 \log[n_c(\theta)]$, which are calculated numerically using formulas . The PALM algorithm is well suited to large-scale shape inference problems with smooth coupling of $x$ and $\theta$ in $\rho$. We use it for the self-tuning RPCA experiments in Section \[sec:real\]. Interior point method for self-tuning piecewise linear-quadratic penalties {#sec:IP} -------------------------------------------------------------------------- (Figure: quantile Huber penalty ($\tau$, $\kappa$).) The restriction that $\rho$ must be smooth in $(x,\theta)$ is unsatisfying, given that one of the simplest examples of self-tuning penalties comes from the nonsmooth quantile loss. Here, we develop an interior point method for the quantile problem , as well as for analogous shape-estimation problems with other PLQ penalties. The class includes many familiar losses (Huber, quantile, quantile Huber, $\ell_2$ and $\ell_1$). While many of these are nonsmooth, they all have smooth conjugate representations.
[@JMLR:v14:aravkin13a] used these representations to solve convex PLQ problems, including Lasso, support vector machine, and Huber regression. We extend the approach to solve nonconvex extended problems of the form $ \min_{x,\theta \in \mathcal{D}} \rho(x;\theta) + \log[n_c(\theta)]. $ The approach is limited to moderate problem dimensions[^1], but converges at a superlinear rate, and solves problems with nonsmooth coupling in $(x,\theta)$. We first review conjugate representations of PLQ penalties. The quantile Huber penalty (Figure \[fig:QuantileQHub\]) is the convex conjugate of the function $\frac{1}{2}\|u\|^2 + \delta_{[-\kappa \tau, \kappa(1-\tau)]}(u)$. [For more examples, see the Appendix]{}. PLQ functions are closed under sums and affine compositions, and the generic PLQ object can be expressed as the conjugate of $\frac{1}{2}u^TMu + \delta_{U}(u)$, evaluated at some input $Br-\bar b$ [@JMLR:v14:aravkin13a]: $$\label{eq:PLQdual} \rho(r, \theta;B,\bar{b},C,\bar{c},M) = \sup_{u} \left\{u{^{\mathsf{T}}}(Br-\bar{b}) - \frac{1}{2} u{^{\mathsf{T}}}Mu \mid C{^{\mathsf{T}}}u\le \bar{c} \right\}$$ where $M\succeq0$, and $U:=\{u \mid C{^{\mathsf{T}}}u\le \bar{c}\}$ is a polyhedral set with $0 \in U$.
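As a check on this representation, the scalar quantile Huber case ($B=1$, $\bar b=0$, $M=1$, $U=[-\tau\kappa,(1-\tau)\kappa]$) can be evaluated by clipping the unconstrained maximizer to $U$, and compared against the piecewise closed form:

```python
import numpy as np

def qhuber_closed(r, tau, kappa):
    # Piecewise closed form of the quantile Huber penalty.
    lo, hi = -tau * kappa, (1.0 - tau) * kappa
    if r < lo:
        return lo * r - 0.5 * lo ** 2
    if r > hi:
        return hi * r - 0.5 * hi ** 2
    return 0.5 * r ** 2

def qhuber_dual(r, tau, kappa):
    # sup over u of { u*r - u^2/2 : -tau*kappa <= u <= (1-tau)*kappa }:
    # the unconstrained maximizer is u = r; clip it to the interval.
    u = float(np.clip(r, -tau * kappa, (1.0 - tau) * kappa))
    return u * r - 0.5 * u ** 2

tau, kappa = 0.3, 1.5
gap = max(abs(qhuber_closed(r, tau, kappa) - qhuber_dual(r, tau, kappa))
          for r in np.linspace(-5.0, 5.0, 1001))
```

The two evaluations agree exactly, since the concave quadratic in the sup is maximized at the projection of $r$ onto $U$.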
To incorporate shape penalty estimation, we allow $\bar b$ and $\bar c$ to be affine functions of $\theta$, and assume $\mathcal{D}$ is also polyhedral: $$\bar{b} = G{^{\mathsf{T}}}\theta + b,\quad\bar{c} = H{^{\mathsf{T}}}\theta + c, \quad \mathcal{D} = \{\theta\mid S{^{\mathsf{T}}}\theta\le s\}.$$ Our goal now is to solve a [*saddle point*]{} system that includes primal variables $x$, conjugate variables $u$, and shape parameters $\theta$: $$\label{eq:PLQprimaldual} \min_{x,S{^{\mathsf{T}}}\theta\le s}\sup_{C{^{\mathsf{T}}}u \le H{^{\mathsf{T}}}\theta + c}\left\{u{^{\mathsf{T}}}[B(Ax-y)-G{^{\mathsf{T}}}\theta - b]-\frac{1}{2}u{^{\mathsf{T}}}M u\right\} + m\log[n_c(\theta)]$$ For example, the self-tuning quantile penalty  (with $\theta =\tau$) gives $$\min_{x, \scriptsize\begin{bmatrix} 1 \\ -1 \end{bmatrix}\tau \leq \begin{bmatrix}1 \\ 0 \end{bmatrix}} \quad \sup_{\scriptsize \begin{bmatrix}1 \\ -1 \end{bmatrix} u \leq -\begin{bmatrix}1 \\ -1\end{bmatrix} \tau +\begin{bmatrix} 1 \\ 0\end{bmatrix}} u{^{\mathsf{T}}}(Ax-y) + m\log\left(\frac{1}{\tau} + \frac{1}{1-\tau}\right).$$ Interior point (IP) methods apply damped Newton to a relaxation of the optimality conditions , see [@KMNY91; @NN94; @Wright:1997]. [ The relaxation can be derived by approximating indicator functions of the constraints using a log-barrier function with parameter $\mu$: $$\delta_{\{(u,\theta)\mid C{^{\mathsf{T}}}u \le H{^{\mathsf{T}}}\theta + c\}}(u,\theta) \approx -\mu{\mathbf{1}}{^{\mathsf{T}}}\log(c + H{^{\mathsf{T}}}\theta-C{^{\mathsf{T}}}u).$$ Note that as $\mu \downarrow 0$, the barriers approach true indicator functions for $U$. The barrier parameter $\mu$ is aggressively decreased as the optimization proceeds, until a specified optimality criterion is met.
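The effect of the barrier parameter can be seen on a one-variable toy problem (our own illustration): minimizing $\frac12(u-2)^2$ subject to $u\le1$ through the barrier $-\mu\log(1-u)$, the barrier minimizer approaches the constrained solution $u^*=1$ as $\mu\downarrow0$:

```python
def barrier_min(mu, u=0.0, iters=50):
    # Newton's method on f(u) = 0.5*(u - 2)^2 - mu*log(1 - u), for u < 1.
    for _ in range(iters):
        g = (u - 2.0) + mu / (1.0 - u)          # f'(u)
        h = 1.0 + mu / (1.0 - u) ** 2           # f''(u) > 0
        u -= g / h
        u = min(u, 1.0 - 1e-12)                 # stay strictly feasible
    return u

# As mu decreases, the barrier minimizer approaches the constrained
# solution u* = 1 of min 0.5*(u - 2)^2 subject to u <= 1 (u(mu) is about 1 - mu).
sols = [barrier_min(mu) for mu in (1.0, 1e-2, 1e-4)]
```

The solutions increase monotonically toward the constraint boundary, mirroring how the IP method tracks the central path as $\mu$ is decreased.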
For fixed $\mu$, there is an associated approximate objective for , given by $$\begin{aligned} \label{eq:PLQprimaldualapp} \min_{x,S{^{\mathsf{T}}}\theta\le s}\sup_{u}\left\{u{^{\mathsf{T}}}[B(Ax-y)-G{^{\mathsf{T}}}\theta - b]-\frac{1}{2}u{^{\mathsf{T}}}M u + \mu{\mathbf{1}}{^{\mathsf{T}}}\log(c + H{^{\mathsf{T}}}\theta-C{^{\mathsf{T}}}u)\right\}\\ + m\log[n_c(\theta)]-\mu{\mathbf{1}}{^{\mathsf{T}}}\log(s-S{^{\mathsf{T}}}\theta) \end{aligned}$$]{} We then apply the Lagrangian dual formulation for this objective, $$\begin{aligned} \mathcal{L}_\mu(d_1,q_1,x,u,\theta) = u{^{\mathsf{T}}}[B(Ax-y)-G{^{\mathsf{T}}}\theta - b]-\frac{1}{2}u{^{\mathsf{T}}}M u + \mu{\mathbf{1}}{^{\mathsf{T}}}\log(c + H{^{\mathsf{T}}}\theta-C{^{\mathsf{T}}}u)\\ + m\log[n_c(\theta)]-\mu{\mathbf{1}}{^{\mathsf{T}}}\log(s-S{^{\mathsf{T}}}\theta) + q_1{^{\mathsf{T}}}(d_1+S{^{\mathsf{T}}}\theta-s) \end{aligned}$$ where $q_1$ is the dual variable and $d_1$ is the slack variable. Introducing another pair of dual and slack variables $q_2$ and $d_2$ for the log-barrier function, $$d_2 = c - C{^{\mathsf{T}}}u + H{^{\mathsf{T}}}\theta,\quad q_2 = \mu D_2^{-1}{\mathbf{1}}$$ [ where each capital letter denotes the diagonal matrix whose diagonal is the corresponding lowercase vector, we can form the KKT system of , $$\label{eq:KKT} F_\mu(z) = {\begin{bmatrix} D_1q_1 - \mu{\mathbf{1}}\\ d_1+S{^{\mathsf{T}}}\theta-s\\ D_2q_2 - \mu{\mathbf{1}}\\ B(Ax-y) - G{^{\mathsf{T}}}\theta - b -Mu -Cq_2\\ A{^{\mathsf{T}}}B{^{\mathsf{T}}}u\\ -Gu + m\nabla\log[n_c(\theta)] + Sq_1 + Hq_2. 
\end{bmatrix}}$$ The Jacobian matrix $\nabla F_\mu$ of the system is given by $$\label{eq:KKTJacobian} \def\arraystretch{1.5} \nabla F_\mu(z) = {\left}[\begin{array}{c|c|c|c|c|c} Q_1& D_1 & & & & \\\hline I & & & & & S{^{\mathsf{T}}}\\\hline & & D_2 & -Q_2C{^{\mathsf{T}}}& & Q_2H{^{\mathsf{T}}}\\\hline & & -C & -M & BA & -G{^{\mathsf{T}}}\\\hline & & & A{^{\mathsf{T}}}B{^{\mathsf{T}}}& &\\\hline & S & H & -G & &\nabla^2\log[n_c(\theta)] \end{array}{\right}]$$ where $z = [d_1,q_1,q_2,u,x,\theta]{^{\mathsf{T}}}$. Notice that as $\mu\downarrow0$,  approaches the optimality conditions of . Algorithm \[alg:IPsolve\] details the IP method.]{} **Initialize**: $z^0$, $\mu = 1$ [$p$ $\gets$ $\nabla F_\mu(z^k)^{-1}F_\mu(z^k)$]{} [$\alpha$ $\gets$ $\text{LineSearch}(z^k,p)$, using merit function $\|F_\mu(\cdot)\|$]{} [$z^{k+1}$ $\gets$ $z^k - \alpha p$]{} [$\mu$ $\gets$ $0.1\cdot$ (Average complementarity conditions)]{} Implementability of Algorithm \[alg:IPsolve\] is analyzed in Theorem \[th:Implementability\]. \[th:Implementability\] Let $T_2 = Q_2^{-1}D_2$. Suppose the following conditions are satisfied: - $\text{null}(M)\cap\text{null}(C{^{\mathsf{T}}}) = \{0\}$ - $\text{null}(BA) = \{0\}$ - $\text{null}(\nabla^2\log[n_c(\theta)])\cap\text{null}(S{^{\mathsf{T}}})\cap\text{null}(H{^{\mathsf{T}}})\cap\text{null}(-G{^{\mathsf{T}}}+CT_2^{-1}H{^{\mathsf{T}}}) = \{0\}$ for every $\theta\in{\mathcal{D}}$. Then Algorithm \[alg:IPsolve\] is implementable; in particular, the direction $p$ in step 3 can always be found. Moreover, the third condition can be replaced by the stronger assumption that $\log[n_c(\theta)]$ is strongly concave. The proof appears in the appendix. Synthetic Data Experiments {#sec:synthetic} ========================== We illustrate the proposed approach using a [linear]{} regression example. We consider a data set contaminated by asymmetric errors and outliers, two features captured by the quantile Huber penalty (Figure \[fig:QuantileQHub\]).
The slope $\tau$ controls the asymmetry, while the threshold $\kappa$ determines the point beyond which a residual might be classified as an ‘outlier’. The goal of the experiment is to simultaneously learn the regression model parameters $x$ and obtain the correct $\tau$ and $\kappa$. Simple residual analysis is not possible [*a priori*]{}, since the model parameters $x$ are also unknown. [When $\kappa > 0$ in quantile Huber, $\rho(x;\theta)$ is smooth, and we can use the PALM algorithm from Section \[sec:PALM\]. The quantile Huber penalty is also PLQ, so we can also apply the proposed IP method from Section \[sec:IP\]. We use both and compare their performance.]{} The primal form of the quantile Huber [penalty is]{} $$\label{eq:quantileHuberPrimal} \small \rho(r;\theta) = \begin{cases} -\tau\kappa r - \frac{(\tau\kappa)^2}{2},& r < -\tau\kappa\\ \frac{1}{2}r^2,& r\in[-\tau\kappa,(1-\tau)\kappa]\\ (1-\tau)\kappa r - \frac{((1-\tau)\kappa)^2}{2},& r > (1-\tau)\kappa. \end{cases}$$ We must choose a parametrization in terms of $\theta$. One option would be to take $\theta = [\tau,\kappa]{^{\mathsf{T}}}$. But this parametrization would violate the assumptions of both the first- and second-order approaches in Section \[sec:algo\]. Indeed, $\nabla_\theta \rho(r;\theta)$ would not have a global Lipschitz constant, so we could not use PALM. Similarly, we could not write the conjugate representation  using affine functions of $\theta$. Looking carefully at either  or , we instead choose $\theta_1 = \tau\kappa$, $\theta_2 = (1-\tau)\kappa$. The only requirement on these parameters is that they are each non-negative. The primal objective can be written as $$\min_{x,\theta\geq 0} \rho(Ax-y;\theta) + m\log[n_c(\theta)],$$ where $A\in{\mathbb{R}}^{m\times n}$ [is the design matrix]{}, $x\in{\mathbb{R}}^n$ is [the model parameter vector]{}, and $y\in{\mathbb{R}}^m$ is the observed data vector contaminated by outliers.
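In the $(\theta_1,\theta_2)$ parametrization (the breakpoints $-\tau\kappa$ and $(1-\tau)\kappa$ of ), $n_c$ can be written in closed form by direct integration: two exponential tails plus a truncated-Gaussian middle section. This is our own derivation, easily checked by quadrature:

```python
import math

def qhuber_theta(r, t1, t2):
    # Quantile Huber with breakpoints -theta1 and theta2:
    # linear tails outside [-theta1, theta2], quadratic inside.
    if r < -t1:
        return -t1 * r - 0.5 * t1 ** 2
    if r > t2:
        return t2 * r - 0.5 * t2 ** 2
    return 0.5 * r ** 2

def nc_closed(t1, t2):
    # Two exponential tails plus a truncated-Gaussian middle section.
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    return (math.exp(-0.5 * t1 ** 2) / t1
            + math.exp(-0.5 * t2 ** 2) / t2
            + math.sqrt(2.0 * math.pi) * (Phi(t2) - Phi(-t1)))

def nc_quadrature(t1, t2, lo=-80.0, hi=80.0, n=400_000):
    # composite trapezoid rule for the integral of exp[-rho] over r
    h = (hi - lo) / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-qhuber_theta(lo + i * h, t1, t2))
    return s * h

t1, t2 = 0.3 * 1.5, 0.7 * 1.5   # tau = 0.3, kappa = 1.5
nc_q = nc_quadrature(t1, t2)
nc_c = nc_closed(t1, t2)
```

The closed form makes the inner Newton solves for the prox of $\log[n_c(\theta)]$ cheap in this example.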
From Theorem \[th:smoothness\], $n_c(\theta)$ is $\mathcal{C}^2$ smooth. From Theorem \[th:convexity\], [the objective in $\theta$ is the sum of a concave term $\rho(Ax-y;\theta)$ and a convex term $m\log[n_c(\theta)]$. The joint problem in $(x,\theta)$ is nonconvex.]{} Nonetheless, both first- and second-order methods from Section \[sec:algo\] can be applied to solve the problem. We generate synthetic data with $m = 1000$, $n = 50$, and [generate the elements of $A\in \mathbb{R}^{m\times n}$ from a standard Gaussian distribution.]{} The measurement errors are sampled from quantile Huber distributions, to [verify]{} that the approach is able to recover ‘ground truth’ values of the $(\tau, \kappa)$ parameters. We denote the ground truth parameters by $x_t$, $\tau_t$, $\kappa_t$, while $x^*$, $\tau^*$, $\kappa^*$ are the solutions obtained by solving . We provide two reference solutions: $x_{LS}$ is the least-squares solution, and $x_M$ is the solution obtained by minimizing $\|Ax-y\|_1$. For each $\kappa$ and $\tau$ setting, we run the simulation 10 times, and show the average of the results in Table \[tb:exp\]. Results shown are obtained by the IP method.

  $[\tau_t,\kappa_t]$   $[\tau^*,\kappa^*]$   $\|x^*-x_t\|/\|x_t\|$   $\|x_{LS}-x_t\|/\|x_t\|$   $\|x_M-x_t\|/\|x_t\|$
  --------------------- --------------------- ----------------------- -------------------------- -----------------------
                        \[0.090,1.165\]       0.142                   0.412                      0.255
                        \[0.196,1.067\]       0.101                   0.155                      0.125
                        \[0.501,0.948\]       0.077                   0.122                      0.085
                        \[0.807,1.041\]       0.092                   0.189                      0.113
                        \[0.912,1.173\]       0.119                   0.379                      0.359

  : \[tb:exp\] Joint inference of the shape and model parameters for the quantile Huber loss.

![Convergence history (iterations) for PALM (green) and interior point method (blue). Three experiments are shown, for $\tau = 0.1, 0.5$, and $0.9$. The proposed IP method converges in fewer than 20 iterations in all cases. 
[]{data-label="conhis"}](conHis_both01-eps-converted-to.pdf "fig:"){width="4.5cm"} ![Convergence history (iterations) for PALM (green) and interior point method (blue). Three experiments are shown, for $\tau = 0.1, 0.5$, and $0.9$. The proposed IP method converges in fewer than 20 iterations in all cases. []{data-label="conhis"}](conHis_both05-eps-converted-to.pdf "fig:"){width="4.5cm"} ![Convergence history (iterations) for PALM (green) and interior point method (blue). Three experiments are shown, for $\tau = 0.1, 0.5$, and $0.9$. The proposed IP method converges in fewer than 20 iterations in all cases. []{data-label="conhis"}](conHis_both09-eps-converted-to.pdf "fig:"){width="4.5cm"} The maximum likelihood formulation correctly recovers the shape parameters $(\tau, \kappa)$ in the context of solving a regression problem. Moreover, the solution $x^*$ obtained from the self-tuned regression is consistently better than the reference solutions, and the improvement increases as the measurement errors become more biased ($\tau$ close to $0$ or to $1$). We also compared the performance of PALM and IP in terms of iterations. The result (for three selections of $\tau$ and $\kappa$ values) is shown in Figure \[conhis\]. The IP method takes very few iterations to converge. However, the cost of each IP iteration grows cubically in the smaller of $m$ and $n$, while each PALM iteration costs only quadratically. When $\rho$ is nonsmooth, PALM cannot be applied. Here, we replicate the experiment for the quantile penalty alone, to show that the IP approach indeed handles fully nonsmooth problems. We choose $m=500$, $n=50$, and generate $A\in\mathbb{R}^{m\times n}$ and $x_t\in\mathbb{R}^n$ in the same way as above. We then generate independent samples from the distribution defined by the quantile penalty. The result is shown in Table \[tb:exp2\].
  $\tau_t$   $\tau^*$   $\|x^*-x_t\|/\|x_t\|$   $\|x_{LS}-x_t\|/\|x_t\|$   $\|x_{l_1}-x_t\|/\|x_t\|$
  ---------- ---------- ----------------------- -------------------------- ---------------------------
  0.1        0.096      0.253                   0.749                      0.439
  0.2        0.216      0.139                   0.191                      0.160
  0.5        0.491      0.134                   0.134                      0.134
  0.8        0.794      0.136                   0.341                      0.208
  0.9        0.903      0.242                   0.542                      0.475

  : \[tb:exp2\] Joint inference of the shape and model parameters for the quantile penalty.

Conclusions similar to those for the self-tuning quantile Huber can be drawn here. We recover $\tau$ accurately, and when $\tau$ is close to 0 or 1, our solution is much better than the least-squares and $\ell_1$-norm solutions. Self-Tuning RPCA {#sec:real} ================ Robust principal component analysis (RPCA) has applications to alignment of occluded images [@peng2012rasl], scene triangulation [@zhang2012tilt], model selection [@chandrasekaran2009sparse], face recognition [@turk1991face] and document indexing [@candes2011robust]. We develop a self-tuning background separation approach. Given a sequence of images[^2], our goal is to separate the moving objects from the background. We pick $202$ images from the data set, convert them to grey scale and reshape them as column vectors of the matrix $Y\in{\mathbb{R}}^{20480\times202}$. We model the data $Y$ as the sum of a low-rank component $L$ and sparse noise $S$; we expect moving objects to be captured by $S$. The stable version of RPCA is equivalent to regularized Huber regression: $$\label{eq:RPCAhuber} \min_{L,S} \frac{1}{2}\|L+S-Y\|_F^2 + \kappa\|S\|_1 + \lambda\|L\|_* = \min_{L} \rho_{\kappa}(Y-L) + \lambda \|L\|_*.$$ The equality is obtained by partially minimizing in $S$. We can simplify  further by modeling $L = U{^{\mathsf{T}}}V$, where $U$ and $V$ are matrices with $k \ll \min(m,n)$ rows.
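The partial minimization behind the Huber equivalence can be checked coordinate-wise: minimizing $\frac12(r-s)^2+\kappa|s|$ over $s$ by soft thresholding reproduces the Huber penalty $\rho_\kappa(r)$ (a standard identity; sketch below):

```python
def huber(r, kappa):
    # standard Huber penalty rho_kappa
    a = abs(r)
    return 0.5 * r * r if a <= kappa else kappa * a - 0.5 * kappa ** 2

def soft(v, t):
    # minimizer in s of 0.5*(v - s)^2 + t*|s| (soft thresholding)
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def partial_min(r, kappa):
    # min over s of 0.5*(r - s)^2 + kappa*|s|
    s = soft(r, kappa)
    return 0.5 * (r - s) ** 2 + kappa * abs(s)

kappa = 0.7
gap = max(abs(partial_min(0.1 * k, kappa) - huber(0.1 * k, kappa))
          for k in range(-50, 51))
```

Applied entrywise to $S$, this is exactly why the sparse component can be eliminated from the RPCA objective.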
The resulting objective is given by $$\min_{U,V}\sum_{i,j}\rho({\left\langle U_i,V_j \right\rangle} - Y_{i,j};\kappa)$$ where $U\in{\mathbb{R}}^{k\times m}$ and $V\in{\mathbb{R}}^{k\times n}$, and $U_i$, $V_j$ are the corresponding column vectors. In this experiment we choose $k=2$. The shape parameter $\kappa$ plays a key role in the performance of the formulation: a bad choice of $\kappa$ translates into poor separation; see Figure \[RPCA\] (a). Cross-validation is computationally expensive for RPCA, so we instead tune $\kappa$ automatically as we fit $U$ and $V$. In order to get the result in Figure \[RPCA\] (b), we introduce a variance parameter $\sigma$ [for the Huber penalty]{} to automatically estimate the right scale of the residual. The joint $(\kappa, \sigma)$ parametrization is given by $$\rho(r;[\kappa,\sigma]) = \begin{cases} \kappa|r|/\sigma - \kappa^2/2, & |r|>\kappa\sigma\\ r^2/(2\sigma^2), & |r|\le\kappa\sigma, \end{cases}$$ with the resulting self-tuning RPCA formulation (solved by Algorithm \[alg:PALM\]): $$\min_{U,V,\kappa>0,\sigma>0} \sum_{i,j}\rho({\left\langle U_i,V_j \right\rangle} - Y_{i,j};[\kappa,\sigma]) + mn\log[n_c([\kappa,\sigma])].$$ As the optimization proceeds, $\kappa, \sigma \rightarrow0^+$ with a fixed ratio $\alpha = \kappa/\sigma$. The self-tuning Huber thus becomes a scaled 1-norm, recovering the original RPCA formulation [@candes2011robust]. The result in Figure \[RPCA\](b) is an improvement over the result with $(\kappa, \sigma)$ fixed at the initial values in Figure \[RPCA\](a).
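Two properties of this parametrization are easy to verify numerically: the two branches agree at the breakpoint $|r|=\kappa\sigma$, and for a fixed ratio $\alpha=\kappa/\sigma$ the penalty tends to the scaled 1-norm $\alpha|r|$ as $\kappa,\sigma\to0^+$:

```python
def huber_ks(r, kappa, sigma):
    # Huber penalty in the joint (kappa, sigma) parametrization.
    a = abs(r)
    if a <= kappa * sigma:
        return r * r / (2.0 * sigma * sigma)
    return kappa * a / sigma - 0.5 * kappa * kappa

# 1) the two branches agree at the breakpoint |r| = kappa*sigma
kappa, sigma = 2e-3, 1.0
b = kappa * sigma
inner = huber_ks(b, kappa, sigma)             # quadratic branch
outer = kappa * b / sigma - 0.5 * kappa ** 2  # linear branch at the same point

# 2) with alpha = kappa/sigma fixed, the penalty approaches the scaled
#    1-norm alpha*|r| as kappa, sigma -> 0+
alpha, r0 = 2.0, 0.5
vals = [huber_ks(r0, alpha * s, s) for s in (1e-2, 1e-4, 1e-6)]
```

The second check makes concrete how the self-tuned Huber collapses onto the scaled 1-norm used by the original RPCA formulation.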
[ ![\[RPCA\] RPCA background separation: self-tuning automatically discovers shape parameters to produce desired result. Recovered backgrounds and foregrounds are in the top and bottom rows. ](bLS202-eps-converted-to.pdf "fig:"){width="5.5cm"} ]{} [ ![\[RPCA\] RPCA background separation: self-tuning automatically discovers shape parameters to produce desired result. Recovered backgrounds and foregrounds are in the top and bottom rows. ](kLS202-eps-converted-to.pdf "fig:"){width="5.5cm"} ]{} \(a) Huber with fixed $\kappa=2\times10^{-3},\sigma=1$. \(b) Self-tuned Huber, initial: $\kappa=2\times10^{-3}$, $\sigma=1$; final: $\kappa = 1.94\times10^{-2},\sigma=8.28\times10^{-4}$ [ ![\[RPCA\] RPCA background separation: self-tuning automatically discovers shape parameters to produce desired result. Recovered backgrounds and foregrounds are in the top and bottom rows. ](btLS202-eps-converted-to.pdf "fig:"){width="5.5cm"} ]{} [ ![\[RPCA\] RPCA background separation: self-tuning automatically discovers shape parameters to produce desired result. Recovered backgrounds and foregrounds are in the top and bottom rows. 
](tLS202-eps-converted-to.pdf "fig:"){width="5.5cm"} ]{} \(c) Huberized Student’s $t$ with fixed $\kappa = 8,\sigma=0.1$. \(d) Self-tuned Huberized Student’s $t$, initial: $\kappa = 8,\sigma=0.1$; final: $\kappa = 7.64,\sigma=2.24\times10^{-2}$ The weakness of the Huber penalty is that a single $\kappa$ has to work well for residuals near the origin as well as in the tail. The self-tuning approach makes it easy to [create]{} and adapt new penalties. To get additional flexibility, we introduce an inflection point, letting the ‘slope’ near the origin be different from the slopes in the tails. The Huberized Student’s $t$ penalty is shown in Figure \[fig:Tiber\], and detailed below: $$\label{eq:tiber} \hspace{-.4cm} \rho(r;[\kappa,\sigma]) = \begin{cases} \frac{2\kappa}{\sigma(\kappa^2+1)}(|r| - \kappa\sigma) + \log(1+\kappa^2), & |r| > \kappa\sigma\\ \log(1+r^2/\sigma^2),&|r|\le\kappa\sigma \end{cases}$$ Penalty  is flexible, in the sense that its behavior for large residuals is decoupled from its behavior for small ones. When we self-tune this new penalty, the additional flexibility indeed improves on the self-tuned Huber, recovering the results shown in Figure \[RPCA\](d). It is clear that the self-tuning approach succeeds, as the Huberized Student’s $t$ result at the initial $\kappa, \sigma$ values is useless (Figure \[RPCA\](c)). On this data set, the advantage of the self-tuned Huberized Student’s $t$ over the self-tuned Huber may not be obvious. We also apply our approach to the Escalator data set[^3].
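A quick check that the Huberized Student's $t$ penalty is glued continuously, with the tail slope $2\kappa/(\sigma(\kappa^2+1))$ matching the derivative of the logarithmic core at the breakpoint $|r|=\kappa\sigma$:

```python
import math

def huberized_t(r, kappa, sigma):
    # Student's t core log(1 + r^2/sigma^2) for |r| <= kappa*sigma,
    # linear tails with the matching slope 2*kappa/(sigma*(kappa^2 + 1)).
    a = abs(r)
    if a <= kappa * sigma:
        return math.log(1.0 + (r / sigma) ** 2)
    slope = 2.0 * kappa / (sigma * (kappa ** 2 + 1.0))
    return slope * (a - kappa * sigma) + math.log(1.0 + kappa ** 2)

kappa, sigma = 8.0, 0.1
b = kappa * sigma
inner = huberized_t(b, kappa, sigma)         # value from the log branch
outer = huberized_t(b + 1e-9, kappa, sigma)  # value just inside the tail
```

Because both the value and the slope match at the breakpoint, the penalty is $C^1$, which is what makes it usable inside the PALM scheme.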
This data set is less noisy than the first, but contains multiple moving objects. We select a time window with $200$ pictures, so that $Y\in {\mathbb{R}}^{20800\times200}$, and apply the Huber and Huberized Student’s $t$ penalties to this data set, with results shown in Figure \[fig:esc\_result\]. There are many artifacts in the self-tuned Huber result, e.g., the escalator stairs and the shadows of people. In contrast, the result for the self-tuned Huberized Student’s $t$ is much cleaner, and we successfully isolate the moving people. All of these results are achieved with fully automatic parameter tuning. [![\[fig:esc\_result\] Background separation with the Escalator data. First column is the background, second column is the foreground, third column is the binary plot of the foreground.](airport_kLS150_both-eps-converted-to.pdf "fig:"){width="14cm"}]{} \(a) self-tuned Huber, initial: $\kappa = 0.02$, $\sigma = 0.02$; final: $\kappa = 7.48\times10^{-3}$, $\sigma = 3.49\times10^{-4}$ [![\[fig:esc\_result\] Background separation with the Escalator data. 
First column is the background, second column is the foreground, third column is the binary plot of the foreground.](airport_tLS150_both-eps-converted-to.pdf "fig:"){width="14cm"}]{} \(b) self-tuned Huberized Student’s $t$, initial: $\kappa = 10$, $\sigma = 2\times10^{-2}$; final: $\kappa = 23.2$, $\sigma = 1.66\times10^{-2}$ ![image](dis-eps-converted-to.pdf){width="4.7cm"} Here, we use the self-tuning Huber and self-tuning Huberized Student’s $t$ penalties. Looking at Figure \[fig:esc\_result\], the self-tuned Huber has a stronger signal but includes more background; the picture looks better to the eye, but a closer examination reveals that parts of the escalator are present. The self-tuned Huberized Student’s $t$ has a weaker $S$, which is harder to see, but actually gives a much better result. To get more insight into the problem, we also look at the empirical distribution of the residuals. In Figure \[fig:dis\], the light blue line denotes the empirical CDF of the residual ($R = Y - U{^{\mathsf{T}}}V$), the red dashed line is the best Huberized Student’s $t$ fit, the blue dashed line is the best $\ell_2$ fit, and the green dashed line is the best $\ell_1$ fit. We can see that the Huberized Student’s $t$ gives a nearly perfect fit, and in particular is much better than $\ell_1$ and $\ell_2$. Conclusions =========== We introduced a method for self-tuning error penalties that incorporates the log of the normalization constant (a function of the shape parameters) into an extended joint inference problem. Results with synthetic data as well as real data show promise. Future research includes designing innovative ‘flexible’ penalties and automatically adapting them to different applications.
Proof of Theorem \[th:smoothness\] {#sec: sec1} ================================== From Assumption \[asp:smoothness\], we know that for any $\theta_0\in\mathcal{D}$, $\nabla_\theta\exp[-\rho(r;\theta_0)]$ and $\nabla_\theta^2\exp[-\rho(r;\theta_0)]$ exist for almost every $r\in\mathbb{R}$. Take any $h$ with $\|h\|$ small enough that $\theta_0 + h$ stays in the neighborhood of $\theta_0$. By applying the mean value theorem, we have $$\begin{aligned} n_c(\theta_0+h) - n_c(\theta_0) &= \int_\mathbb{R} \exp[-\rho(r;\theta_0 + h)] - \exp[-\rho(r;\theta_0)]\,dr\\ &= \int_\mathbb{R} {\left\langle \nabla_\theta\exp[-\rho(r;\bar{\theta})],h \right\rangle}\,dr\\ \Rightarrow~~\frac{n_c(\theta_0+h) - n_c(\theta_0)}{\|h\|} &= \int_\mathbb{R} {\left\langle \nabla_\theta\exp[-\rho(r;\bar{\theta})],\frac{h}{\|h\|} \right\rangle}\,dr\end{aligned}$$ where $\bar{\theta}$ lies in the segment with endpoints $\theta_0$ and $\theta_0 + h$. By the first and third assumptions of the theorem, we can apply the dominated convergence theorem to get $$\begin{aligned} \lim_{h\rightarrow0}\frac{n_c(\theta_0+h) - n_c(\theta_0)}{\|h\|} &= \lim_{h\rightarrow0}\int_\mathbb{R} {\left\langle \nabla_\theta\exp[-\rho(r;\bar{\theta})],\frac{h}{\|h\|} \right\rangle}\,dr\\ &= \int_\mathbb{R} \lim_{h\rightarrow0}{\left\langle \nabla_\theta\exp[-\rho(r;\bar{\theta})],\frac{h}{\|h\|} \right\rangle}\,dr\\ &= \int_\mathbb{R} {\left\langle \nabla_\theta\exp[-\rho(r;\theta_0)],v \right\rangle}\,dr\\ &= {\left\langle \int_\mathbb{R}\nabla_\theta\exp[-\rho(r;\theta_0)]\,dr,v \right\rangle}\end{aligned}$$ where we set $h = \alpha v$, let $\alpha \rightarrow 0^+$, and keep $v$ fixed as a unit vector. 
From the definition of the gradient we know that $$\nabla n_c(\theta_0) = \int_\mathbb{R}\nabla_\theta\exp[-\rho(r;\theta_0)]\,dr.$$ Following the same steps, we can also show that $\nabla^2 n_c(\theta_0)$ exists and satisfies $$\nabla^2 n_c(\theta_0) = \int_\mathbb{R}\nabla_\theta^2\exp[-\rho(r;\theta_0)]\,dr.$$ Proof of Theorem \[th:Implementability\] ======================================== By applying row operations to  we obtain a block upper triangular system, $$\def\arraystretch{1.5} \nabla F_\mu(z) \rightarrow {\left}[\begin{array}{c|c|c|c|c|c} Q_1& D_1 & & & & \\\hline &T_1 & & & & S{^{\mathsf{T}}}\\\hline & & T_2 & -C{^{\mathsf{T}}}& & H{^{\mathsf{T}}}\\\hline & & & T_3 & BA & -G{^{\mathsf{T}}}+CT_2^{-1}H{^{\mathsf{T}}}\\\hline & & & & T_4 & -A{^{\mathsf{T}}}B{^{\mathsf{T}}}T_3^{-1}(-G{^{\mathsf{T}}}+CT_2^{-1}H{^{\mathsf{T}}})\\\hline & & & & & T_5 \end{array}{\right}]$$ where $$\begin{aligned} T_1 =& Q_1^{-1}D_1\\ T_2 =& Q_2^{-1}D_2\\ T_3 =& -M - CT_2^{-1}C{^{\mathsf{T}}}\\ T_4 =& -A{^{\mathsf{T}}}B{^{\mathsf{T}}}T_3^{-1} BA\\ T_5 =& \nabla^2\log[n_c(\theta)] - ST_1^{-1}S{^{\mathsf{T}}}- HT_2^{-1}H{^{\mathsf{T}}}\\ &- (-G+HT_2^{-1}C{^{\mathsf{T}}})(T_3^{-1}+T_3^{-1}BAT_4^{-1}A{^{\mathsf{T}}}B{^{\mathsf{T}}}T_3^{-1})(-G{^{\mathsf{T}}}+CT_2^{-1}H{^{\mathsf{T}}})\end{aligned}$$ $\nabla F_\mu$ is invertible if and only if $Q_1$ and $T_i$, $i=1,2,3,4,5$, are invertible. Since we guarantee $q_1,d_1,q_2,d_2>0$ through the line search, $Q_1$, $T_1$ and $T_2$ are invertible. It is easy to see that - if $\text{null}(M)\cap\text{null}(C{^{\mathsf{T}}}) = \{0\}$, then $T_3$ is invertible; - if $\text{null}(BA) = \{0\}$, then $T_4$ is invertible; - if $\text{null}(\nabla^2\log[n_c(\theta)])\cap\text{null}(S{^{\mathsf{T}}})\cap\text{null}(H{^{\mathsf{T}}})\cap\text{null}(-G{^{\mathsf{T}}}+CT_2^{-1}H{^{\mathsf{T}}}) = \{0\}$ and the above two points hold, then $T_5$ is invertible. Moreover, if $\log[n_c(\theta)]$ is strongly concave, then $T_5 \prec 0$, which is in particular invertible. 
This proves the invertibility of $\nabla F_\mu$, which guarantees the implementability of Algorithm \[alg:IPsolve\]. Conjugate Representations of Various Penalties ============================================== (Figure \[fig:SDRex\]: plots of the penalties and their conjugate representations. (a) quadratic; (b) hybrid loss, $\epsilon = 1$; (c) logistic loss, $a=2$; (d) hinge, $\epsilon = 0.5$; (e) quantile Huber; (f) Vapnik, $\epsilon = 0.5$; (g) Huber insensitive loss; (h) elastic net, $\alpha = 0.5$.) We provide examples of common penalties used in statistical modeling, machine learning, and inverse problems with their conjugate representations: 1. The least squares penalty (Fig. \[fig:SDRex\](a)) $ \frac{1}{2}x^2 = \sup_{u} \left\{ux - \frac{1}{2}u^2\right\}. $ 2. The quantile penalty (Fig. \[fig:Qhub\](b)) $q_\tau(x) = \sup_{u \in [-\tau,(1-\tau)]} \left\{ux\right\}$. 3. The hinge loss (Fig. \[fig:SDRex\](d)) $h_\epsilon(x) = \sup_{u \in [0,1]} \left\{u(x-\epsilon)\right\}$. 4. The Huber function (Fig. \[fig:Qhub\](a)) $h_\kappa(x) = \sup_{u\in[-\kappa, \kappa]}\left\{ ux - \frac{1}{2}u^2\right\}$. 5. The quantile Huber (Fig. \[fig:SDRex\](e)) $h_{\tau, \kappa}(x) = \sup_{u \in [-\kappa\tau,\kappa(1-\tau)]} \left\{ux - \frac{1}{2}u^2\right\}$. 6. The Vapnik penalty (Fig. \[fig:SDRex\](f)) $ \rho_\epsilon(x) = \sup_{u \in [0,1]^2} \left\{ \left\langle \begin{bmatrix}1 \\ -1 \end{bmatrix}x - \begin{bmatrix}\epsilon \\ \epsilon \end{bmatrix} , u \right\rangle \right\}. $ 7. Smooth insensitive loss (Fig. \[fig:SDRex\](g)) $ \rho^h_\epsilon(x) = \sup_{u \in [0,1]^2} \left\{ \left\langle \begin{bmatrix}1 \\ -1 \end{bmatrix}x - \begin{bmatrix}\epsilon \\ \epsilon \end{bmatrix} , u \right\rangle - \frac{1}{2}u^Tu \right\} . $ 8. The elastic net penalty (Fig.
\[fig:SDRex\](h)) $ \rho(x) = \sup_{u \in [0,1] \times \mathbb{R}} \left\{ \left\langle \begin{bmatrix}1 \\ 1 \end{bmatrix}x , u \right\rangle - \frac{1}{2}u^T \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}u\right\}. $ 9. Hybrid loss (Fig. \[fig:SDRex\](b)) $ h_{\epsilon}(x) = \sup_{u \in \left[-\epsilon^{-1}, \epsilon^{-1} \right]} \left\{xu - \left(1-\sqrt{1-(u\epsilon)^2}\right)\right\}. $ 10. Logistic loss (Fig. \[fig:SDRex\](c)) $ h_a(x) = \sup_{u \in [0,a]} \left\{xu - \frac{u}{a}\log\left(\frac{u}{a}\right) - \left(1-\frac{u}{a}\right)\log\left(1-\frac{u}{a}\right)\right\}. $ [^1]: If $A$ has dimensions $m$ and $n$, interior point methods require $O(n(m^2+n^2))$ arithmetic operations, where $n$ is the smaller dimension. This limits practical applications for large-scale problems; to go beyond $2000 \times 2000$ with modest compute, some sort of special structure or technique (sparsity, preconditioning) is typically needed. [^2]: Publicly available at <http://vis-www.cs.umass.edu/~narayana/castanza/I2Rdataset/> [^3]: Publicly available at <http://vis-www.cs.umass.edu/~narayana/castanza/I2Rdataset/>
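As a quick numerical check of these sup-representations (an illustrative sketch, not part of the paper), consider the Huber function from item 4: the concave inner problem $\sup_{|u|\le\kappa}\{ux-\frac{1}{2}u^2\}$ is maximized at $u^*$ equal to the projection of $x$ onto $[-\kappa,\kappa]$, and the resulting value agrees with the usual piecewise closed form.

```python
import numpy as np

def huber_via_sup(x, kappa):
    # sup_{|u| <= kappa} (u*x - u^2/2): the unconstrained maximizer is u = x,
    # so the constrained maximizer is the projection of x onto [-kappa, kappa]
    u = np.clip(x, -kappa, kappa)
    return u * x - 0.5 * u**2

def huber_closed_form(x, kappa):
    # quadratic near the origin, linear in the tails
    return np.where(np.abs(x) <= kappa,
                    0.5 * x**2,
                    kappa * np.abs(x) - 0.5 * kappa**2)

x = np.linspace(-3.0, 3.0, 601)
assert np.allclose(huber_via_sup(x, 1.0), huber_closed_form(x, 1.0))
```

The same check works for the quantile Huber of item 5 after replacing the interval by $[-\kappa\tau,\kappa(1-\tau)]$.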
--- abstract: 'Separability of a multivariate function alleviates the difficulty in finding a minimum or maximum value of the function, in that an optimal solution can be searched for by solving several disjoint problems with lower dimensionalities. In most practical problems, however, the function to be optimized is black-box and we can hardly grasp its separability in advance. In this study, we first describe a general separability condition which a function defined over an arbitrary domain must satisfy if and only if that function is separable with respect to given disjoint subsets of variables. By introducing an alternative separability condition, we propose a Monte Carlo-based algorithm to estimate the separability of a function defined over the unit cube with respect to given disjoint subsets of variables. Moreover, we extend our algorithm to estimate the number of disjoint subsets and the disjoint subsets themselves such that a function is separable with respect to them. The computational complexity of our extended algorithm is function-dependent and varies from linear to exponential in the dimension.' author: - | Takashi Goda\ Graduate School of Information Science and Technology,\ The University of Tokyo,\ 2-11-16 Yayoi, Bunkyo-ku, Tokyo 113-0032\ [goda@iba.t.u-tokyo.ac.jp](goda@iba.t.u-tokyo.ac.jp) title: On the separability of multivariate functions --- Introduction {#intro} ============ Whether a given multivariate function is separable or not is one of the important measures of the difficulty in optimization. This can be easily understood through the following argument. Let $f({\boldsymbol{x}})$ be a function of $s$ variables, i.e., ${\boldsymbol{x}}=(x_1,\ldots, x_s)$, and let ${\boldsymbol{x}}_u=(x_j)_{j\in u}$ be a subset of variables for $u\subseteq [1:s] (:=\{1,\ldots, s\})$.
If $f({\boldsymbol{x}})$ is separable with respect to some ${\boldsymbol{x}}_u$ and its complement ${\boldsymbol{x}}_{-u}:={\boldsymbol{x}}_{[1:s]\setminus u}$ with $\emptyset \ne u\subset [1:s]$, that is, $f({\boldsymbol{x}})=f_1({\boldsymbol{x}}_u)+f_2({\boldsymbol{x}}_{-u})$, we can reduce one high-dimensional optimization problem to two disjoint optimization problems with lower dimensionalities. The values of ${\boldsymbol{x}}_{-u}$ can be fixed while searching for an optimal solution of $f_1({\boldsymbol{x}}_u)$, and vice versa. If $f_1({\boldsymbol{x}}_u)$ and $f_2({\boldsymbol{x}}_{-u})$ are further separable with respect to some subsets ${\boldsymbol{x}}_{v}$ and ${\boldsymbol{x}}_{w}$ with $\emptyset \ne v\subset u$ and $\emptyset \ne w\subset -u$, respectively, for instance, we can reduce the problem to four disjoint optimization problems with even lower dimensionalities. As an extreme case, $f({\boldsymbol{x}})$ might be expressed simply as a sum of $s$ one-dimensional functions, i.e., $f({\boldsymbol{x}})=\sum_{j=1}^{s}f_j(x_j)$. Then, the $s$-dimensional optimization problem can be decomposed into $s$ one-dimensional ones. If $f({\boldsymbol{x}})$ is not separable with respect to any subset of variables, on the other hand, we have to search the whole $s$-dimensional space all at once. The performances of optimization algorithms, especially of heuristics and meta-heuristics, often depend on the separability of the function. For instance, as discussed in [@Sal96], the performance of the genetic algorithm deteriorates if we rotate the coordinates of the separable benchmark functions, which makes the functions non-separable. Thus, in order to cover a wide class of functions, we generally compose a set of benchmark functions from many separable and non-separable functions for the performance comparison of different optimization algorithms, see, e.g., [@GMLH09; @LMH11].
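To make the dimension-reduction argument concrete, here is a small illustrative sketch (my own, not from the paper): for $f({\boldsymbol{x}})=f_1(x_1,x_2)+f_2(x_3)$, a brute-force search over a product grid gives the same minimum whether we solve the full three-dimensional problem or the two disjoint subproblems.

```python
import numpy as np

# f(x) = f1(x1, x2) + f2(x3), separable w.r.t. u = {1,2} and -u = {3}
f1 = lambda a, b: (a - 0.3)**2 + (b + 0.1)**2
f2 = lambda c: np.cos(3.0 * c) + c**2

grid = np.linspace(-1.0, 1.0, 101)

# joint brute-force search over the full 3-D product grid
A, B, C = np.meshgrid(grid, grid, grid, indexing="ij")
joint_min = (f1(A, B) + f2(C)).min()

# two disjoint lower-dimensional searches
A2, B2 = np.meshgrid(grid, grid, indexing="ij")
split_min = f1(A2, B2).min() + f2(grid).min()

assert np.isclose(joint_min, split_min)
```

On a product grid the identity $\min(f_1+f_2)=\min f_1+\min f_2$ holds exactly, which is the whole point of exploiting separability.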
In many practical problems, however, the function to be optimized is black-box, so that we can hardly grasp its separability a priori. If the function is separable, we cannot exploit the advantage of the algorithms which perform better for non-separable functions. If, instead, the function is non-separable, we should avoid using the algorithms which perform well only for separable functions. Therefore, we can claim that the separability of the function to be optimized is one of the central issues in choosing a suitable optimization algorithm. Motivated by the above concern, we investigate the separability of multivariate functions in this study. Our approach is based on the functional decompositions given in the literature, see for example [@ES84; @Hoe48; @RASS99; @Sob90]. These decompositions were recently generalized by Kuo *et al.* [@KSWW10]. After introducing the preliminaries on those decompositions in the next section, we first derive, in Section \[general\], a general separability condition which a function defined on an arbitrary domain must satisfy if and only if that function is separable with respect to given disjoint subsets of variables. As special cases, it includes the conditions for a function to be separable with respect to one subset of variables and its complement, or to be separable with respect to all the variables. In order to construct a computable algorithm to estimate the separability, we derive an alternative separability condition in Section \[separable\], which is valid for the functions in $L^2([0,1]^s)$. Using this alternative condition, we propose a Monte Carlo-based algorithm for the separability estimation. Moreover, we extend our proposed algorithm to estimate the number of disjoint subsets and the disjoint subsets themselves such that a function is separable with respect to them. We show that the computational complexity of our extended algorithm is function-dependent and varies from linear to exponential in the dimension.
Background and notation {#background} ======================= General decomposition formula {#back:1} ----------------------------- In the following, we always write $[1:s]=\{1,\ldots, s\}$. For a given subset $u$ of $[1:s]$, we denote by $-u$ the complement of $u$, that is, $-u=[1:s]\setminus u$, and denote by $|u|$ the cardinality of $u$. Now we consider a decomposition of a function of $s$ variables $f({\boldsymbol{x}})\in F$, where $F$ is a linear space of real functions defined on a domain $D\subseteq {\mathbb{R}}^s$, into the following form $$\begin{aligned} f({\boldsymbol{x}}) = \sum_{u\subseteq [1:s]}f_u({\boldsymbol{x}}) .\end{aligned}$$ We note that the right-hand side consists of $2^s$ terms with each term $f_u({\boldsymbol{x}})$ depending only on the subset of variables ${\boldsymbol{x}}_u$. According to [@KSWW10 Theorem 2.1], $f_u({\boldsymbol{x}})$ can be generally expressed as $$\begin{aligned} f_u({\boldsymbol{x}}) := \left( \prod_{j\in u}(I-P_j)\right) P_{-u}(f)({\boldsymbol{x}}) , \label{eq:decomp1}\end{aligned}$$ where $\{ P_j: j=1,\ldots, s\}$ is a set of commuting projections on $F$ defined on a domain $D\subseteq {\mathbb{R}}^s$ such that $P_j(f)({\boldsymbol{x}})=f({\boldsymbol{x}})$ if $f({\boldsymbol{x}})$ does not depend on $x_j$ and that $P_j(f)({\boldsymbol{x}})$ does not depend on $x_j$. Further, we define $P_u= \prod_{j\in u}P_j$ for $u\subseteq [1:s]$ and denote by $I$ the identity operator. We can rewrite (\[eq:decomp1\]) into the following recursive relation $$\begin{aligned} f_u({\boldsymbol{x}}) := P_{-u}(f)({\boldsymbol{x}})-\sum_{v\subset u}f_v({\boldsymbol{x}}) , \label{eq:decomp2}\end{aligned}$$ where, for $u=\emptyset$, we define $$\begin{aligned} f_{\emptyset}({\boldsymbol{x}}) := P_{[1:s]}(f)({\boldsymbol{x}}) .\end{aligned}$$ Since $f_{\emptyset}({\boldsymbol{x}})$ is a constant, we simply write $f_{\emptyset}$ in the following. We show two important examples of $P_j$. 
One is called *anchored decomposition*, see, e.g., [@RASS99; @Sob03], which fixes $x_j$ at $t_j$ $$\begin{aligned} P_j(f)({\boldsymbol{x}}) = f(x_1,\ldots, x_{j-1},t_j,x_{j+1},\ldots, x_s) ,\end{aligned}$$ where the anchor ${\boldsymbol{t}}=(t_1,\ldots, t_s)$ lies in $D$. The other with $D=[0,1]^s$ is called *analysis of variance (ANOVA) decomposition*, see, e.g., [@ES84; @Hoe48; @Sob90], which integrates out $x_j$ $$\begin{aligned} P_j(f)({\boldsymbol{x}}) = \int_{0}^{1}f(x_1,\ldots, x_{j-1},t_j,x_{j+1},\ldots, x_s)dt_j . \label{eq:proj_anova}\end{aligned}$$ The latter has often been used in the context of global sensitivity analysis, which measures the relative importance of each subset of variables on the variation of the function, see, e.g., [@CMO97; @Primer; @Sob90; @Sob01]. Since we also use this decomposition in this study, the next subsection is devoted to explaining it in more detail. ANOVA decomposition and Sobol’ indices {#back:2} -------------------------------------- For any square integrable function $f({\boldsymbol{x}})\in L^2([0,1]^s)$, each term $f_u({\boldsymbol{x}})$ can be obtained by using (\[eq:decomp2\]) and (\[eq:proj\_anova\]) as $$\begin{aligned} f_u({\boldsymbol{x}}) = \int_{[0,1]^{s-|u|}}f({\boldsymbol{x}})d{\boldsymbol{x}}_{-u} -\sum_{v\subset u}f_v({\boldsymbol{x}}) ,\end{aligned}$$ where, for $u=\emptyset$, we have $$\begin{aligned} f_{\emptyset} = \int_{[0,1]^s}f({\boldsymbol{x}}) d{\boldsymbol{x}},\end{aligned}$$ which is simply the expectation of $f({\boldsymbol{x}})$. This decomposition satisfies the following important properties $$\begin{aligned} \int_{0}^{1}f_u({\boldsymbol{x}})dx_j=0 ,\end{aligned}$$ for $j\in u$ with $|u|>0$, and $$\begin{aligned} \int_{[0,1]^s}f_u({\boldsymbol{x}})f_v({\boldsymbol{x}}) d{\boldsymbol{x}}=0 ,\end{aligned}$$ if $u\ne v$. The former can be proved by induction on $|u|$.
The latter immediately follows from the former by considering the integration with respect to $x_j$ for any $j\in (u\cup v)\setminus(u\cap v)$. Using this decomposition and its properties, the variance of $f({\boldsymbol{x}})$, which will be denoted by $\sigma^2$, can be expressed as $$\begin{aligned} \sigma^2 & = & \int_{[0,1]^s}f({\boldsymbol{x}})^2 d{\boldsymbol{x}}-\left(\int_{[0,1]^s}f({\boldsymbol{x}})d{\boldsymbol{x}}\right)^2 \\ & = & \int_{[0,1]^s}\left( \sum_{\emptyset \ne u\subseteq [1:s]}f_u({\boldsymbol{x}})^2\right) d{\boldsymbol{x}}\\ & = & \sum_{\emptyset \ne u\subseteq [1:s]}\sigma^2_u ,\end{aligned}$$ where we have defined $$\begin{aligned} \sigma^2_u=\int_{[0,1]^s}f^2_u({\boldsymbol{x}}) d{\boldsymbol{x}}.\end{aligned}$$ This equality implies that a subset of variables ${\boldsymbol{x}}_u$ with larger $\sigma^2_u$ has a larger effect on the variance of the function. In other words, the function $f({\boldsymbol{x}})$ is more sensitive to changes in the values of ${\boldsymbol{x}}_u$ with larger $\sigma^2_u$. That is why the ANOVA decomposition plays a central role in global sensitivity analysis. Sobol’ indices were first introduced by Sobol’ [@Sob90] and have recently been generalized by Owen [@Owe12] to measure the relative importance of a subset of variables. For $\emptyset \ne u\subseteq [1:s]$, let us define $$\begin{aligned} \underline{\tau}^2_u & = & \sum_{\emptyset \ne v\subseteq u}\sigma_v^2 ,\end{aligned}$$ and $$\begin{aligned} \overline{\tau}^2_u & = & \sum_{v\cap u\ne \emptyset}\sigma_v^2 .\end{aligned}$$ Here, $\underline{\tau}^2_u$ is a sum of $\sigma_v^2$ for $v$ contained in $u$, while $\overline{\tau}^2_u$ is a sum of $\sigma_v^2$ for $v$ which touches $u$. It is obvious that we have $0\le \underline{\tau}^2_u \le \overline{\tau}^2_u \le \sigma^2$. We often normalize these quantities as $\underline{\tau}^2_u/ \sigma^2$ and $\overline{\tau}^2_u/ \sigma^2$.
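For illustration (my own example, not from the paper), the decomposition $\sigma^2=\sum_{\emptyset\ne u}\sigma^2_u$ can be checked on $f(x_1,x_2)=x_1+x_2+x_1x_2$: its ANOVA terms are $f_\emptyset=5/4$, $f_{\{j\}}=\tfrac{3}{2}(x_j-\tfrac12)$ and $f_{\{1,2\}}=(x_1-\tfrac12)(x_2-\tfrac12)$, so $\sigma^2_{\{1\}}=\sigma^2_{\{2\}}=3/16$ and $\sigma^2_{\{1,2\}}=1/144$.

```python
import numpy as np

f = lambda x, y: x + y + x * y

# midpoint-rule quadrature on [0,1]^2 for E[f] and E[f^2]
n = 1000
t = (np.arange(n) + 0.5) / n
X, Y = np.meshgrid(t, t, indexing="ij")
vals = f(X, Y)
sigma2 = (vals**2).mean() - vals.mean()**2

# closed-form ANOVA variances: sigma_{1}^2 = sigma_{2}^2 = 3/16, sigma_{12}^2 = 1/144
assert np.isclose(sigma2, 3/16 + 3/16 + 1/144, atol=1e-6)
```

The midpoint rule is accurate to $O(h^2)$ here, which is far below the tolerance used in the assertion.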
From the definition, we have the following identity $$\begin{aligned} \underline{\tau}^2_{-u}+\overline{\tau}^2_u=\sigma^2 .\end{aligned}$$ General separability condition {#general} ============================== In this section, we introduce a general separability condition, which must be satisfied for any separable function $f({\boldsymbol{x}})\in F$ with respect to given $m$ disjoint subsets of variables ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$ where ${\boldsymbol{x}}_{u_j}=(x_i)_{i\in u_j}$. Here we mean by $m$ disjoint subsets that a set $\{u_1,\ldots, u_m\}$ satisfies the following properties: $u_j\ne \emptyset$ for $j=1,\ldots, m$, $$\begin{aligned} u_i\cap u_j=\emptyset ,\end{aligned}$$ if $i\ne j$, and $$\begin{aligned} \bigcup_{j=1}^{m}u_j=[1:s] .\end{aligned}$$ Then the following theorem gives a general separability condition. \[theorem1\] For $m, s\in {\mathbb{N}}$ such that $m\le s$, let $u_1,\ldots, u_m$ be $m$ disjoint subsets of $[1:s]$. A function $f({\boldsymbol{x}})\in F$ is separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$ if and only if the following equation holds for any ${\boldsymbol{x}}\in D$ $$\begin{aligned} \left(\prod_{j=1}^{m}\left( I-P_{-u_j}\right)\right)(f)({\boldsymbol{x}}) = 0 . \label{eq:sep1}\end{aligned}$$ In order to prove Theorem \[theorem1\], we need the following lemma. \[lemma1\] For $m, s\in {\mathbb{N}}$ such that $m\le s$, let $u_1,\ldots, u_m$ be $m$ disjoint subsets of $[1:s]$. Then, we have $$\begin{aligned} \prod_{j=1}^{m}\left( I-P_{-u_j}\right) = I+(m-1)P_{[1:s]}-\sum_{j=1}^{m}P_{-u_j}.\end{aligned}$$ We note that $P_{-u_i}\cdot P_{-u_j}=P_{[1:s]}$ for $i\ne j$ since $u_i$ and $u_j$ are disjoint with each other. 
By using this fact and the following identity $$\begin{aligned} \prod_{j=1}^{m}(a_j+b_j) = \sum_{v\subseteq [1:m]}\left( \prod_{j\in -v}a_j\right) \left( \prod_{j\in v}b_j\right) ,\end{aligned}$$ we have $$\begin{aligned} \prod_{j=1}^{m}(I-P_{-u_j}) & = & \sum_{v\subseteq [1:m]}\left( \prod_{j\in -v}I\right) \left( \prod_{j\in v}-P_{-u_j}\right) \\ & = & \sum_{v\subseteq [1:m]}(-1)^{|v|}\left( \prod_{j\in v}P_{-u_j}\right) \\ & = & I-\sum_{j=1}^{m}P_{-u_j}+\sum_{\substack{v\subseteq [1:m]\\ |v|\ge 2}}(-1)^{|v|}\left( \prod_{j\in v}P_{-u_j}\right) \\ & = & I-\sum_{j=1}^{m}P_{-u_j}+\left(\sum_{\substack{v\subseteq [1:m]\\ |v|\ge 2}}(-1)^{|v|}\right) P_{[1:s]} .\end{aligned}$$ In the last term, we have $$\begin{aligned} \sum_{\substack{v\subseteq [1:m]\\ |v|\ge 2}}(-1)^{|v|} & = & \sum_{v\subseteq [1:m]}(-1)^{|v|}-\sum_{\substack{v\subseteq [1:m]\\ |v|\le 1}}(-1)^{|v|} \\ & = & (1-1)^{m}-1+m .\end{aligned}$$ Thus, the result follows. Now we are ready to prove Theorem \[theorem1\]. (Theorem \[theorem1\]) As shown in the proof of [@KSWW10 Theorem 2.1], $P_{-u}(f)({\boldsymbol{x}})=\sum_{v\subseteq u}f_{v}({\boldsymbol{x}})$.
Applying this relation and Lemma \[lemma1\], we have for the left-hand side of (\[eq:sep1\]) $$\begin{aligned} \left(\prod_{j=1}^{m}\left( I-P_{-u_j}\right)\right)(f)({\boldsymbol{x}}) & = & \left(I+(m-1)P_{[1:s]}-\sum_{j=1}^{m}P_{-u_j}\right)(f)({\boldsymbol{x}}) \\ & = & f({\boldsymbol{x}})+(m-1)f_{\emptyset}-\sum_{j=1}^{m}\sum_{v_j\subseteq u_j}f_{v_j}({\boldsymbol{x}}) \\ & = & f({\boldsymbol{x}})-\left(f_{\emptyset}+\sum_{j=1}^{m}\sum_{\emptyset \ne v_j\subseteq u_j}f_{v_j}({\boldsymbol{x}})\right) .\end{aligned}$$ Given that this equals zero for any ${\boldsymbol{x}}\in D$, we can rewrite (\[eq:sep1\]) into $$\begin{aligned} f({\boldsymbol{x}}) = f_{\emptyset}+\sum_{j=1}^{m}\sum_{\emptyset \ne v_j\subseteq u_j}f_{v_j}({\boldsymbol{x}}) .\end{aligned}$$ Since $f_{\emptyset}$ is a constant and $u_1,\ldots, u_m$ are disjoint with each other, this equation implies that $f({\boldsymbol{x}})$ is separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$. The proof of the reverse direction is trivial. Hence, the result follows. Our general separability condition (\[eq:sep1\]) consists only of the function $f({\boldsymbol{x}})$ and the projections $(P_u)_{u\subseteq [1:s]}$ and does not include any representation $(f_u({\boldsymbol{x}}))_{u\subseteq [1:s]}$. We emphasize here that the condition (\[eq:sep1\]) is not equivalent to $(I-P_{-u_j})(f)({\boldsymbol{x}})=0$ for at least one $j$ with $1\le j\le m$, which only gives $$\begin{aligned} f({\boldsymbol{x}}) = \sum_{v_j\subseteq u_j}f_{v_j}({\boldsymbol{x}}) .\end{aligned}$$ Thus, $(I-P_{-u_j})(f)({\boldsymbol{x}})=0$ for some $j$ is just a sufficient condition for $f({\boldsymbol{x}})$ to be separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$. In the following, we describe the separability conditions for two special cases, both of which are important in practice.
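Before turning to these special cases, note that the sign count used at the end of the proof of Lemma \[lemma1\], $\sum_{v\subseteq[1:m],\,|v|\ge2}(-1)^{|v|}=m-1$, is easy to confirm by brute force (an illustrative check, not from the paper):

```python
from math import comb

def alternating_sum(m):
    # sum over subsets v of [1:m] with |v| >= 2 of (-1)^{|v|},
    # grouping the subsets by their cardinality k
    return sum((-1)**k * comb(m, k) for k in range(2, m + 1))

for m in range(2, 12):
    assert alternating_sum(m) == m - 1
```

This matches the closed-form evaluation $(1-1)^m - 1 + m = m-1$ given in the proof.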
\[cor1\] A function $f({\boldsymbol{x}})\in F$ is separable with respect to ${\boldsymbol{x}}_u$ and ${\boldsymbol{x}}_{-u}$ if and only if the following equation holds for any ${\boldsymbol{x}}\in D$ $$\begin{aligned} \left( I+P_{[1:s]}-P_{u}-P_{-u}\right) (f)({\boldsymbol{x}}) = 0.\end{aligned}$$ It immediately follows by inserting $m=2$, $u_1=u$ and $u_2=-u$ into (\[eq:sep1\]) and by applying Lemma \[lemma1\]. \[cor2\] A function $f({\boldsymbol{x}})\in F$ is separable with respect to all the variables if and only if the following equation holds for any ${\boldsymbol{x}}\in D$ $$\begin{aligned} \left( I+(s-1)P_{[1:s]}-\sum_{j=1}^{s}P_{-\{j\}}\right) (f)({\boldsymbol{x}}) = 0.\end{aligned}$$ It also immediately follows by inserting $m=s$ and $u_j=\{j\}$ for $j=1,\ldots, s$ into (\[eq:sep1\]) and by applying Lemma \[lemma1\]. Separability estimation of multivariate functions {#separable} ================================================= In the previous section, we have shown the general separability condition, which is a necessary and sufficient condition for $f({\boldsymbol{x}})$ to be separable with respect to given disjoint subsets of variables. It is quite difficult, however, to confirm whether a given black-box function $f({\boldsymbol{x}})$ satisfies this condition or not. Hence, in this section, we propose a computational algorithm based on the Monte Carlo method to estimate the separability of $f({\boldsymbol{x}})$. The key ingredient lies in the use of the ANOVA decomposition and Sobol’ indices. We need to restrict $f({\boldsymbol{x}})\in L^2([0,1]^s)$, while in many practical problems $D\subseteq {\mathbb{R}}^s$ can be replaced by $[0,1]^s$ using a suitable transformation of variables and $f({\boldsymbol{x}})$ satisfies this restriction. The following theorem shows an alternative separability condition for $f({\boldsymbol{x}})\in L^2([0,1]^s)$, which will be used later in proposing a computational algorithm to estimate the separability of $f({\boldsymbol{x}})$.
\[theorem2\] For $m, s\in {\mathbb{N}}$ such that $m\le s$, let $u_1,\ldots, u_m$ be $m$ disjoint subsets of $[1:s]$. A function $f({\boldsymbol{x}})\in L^2([0,1]^s)$ is separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$ if and only if the following equation holds $$\begin{aligned} \label{eq:seq_anova1} \sum_{j=1}^{m}\underline{\tau}^2_{u_j}=\sigma^2 .\end{aligned}$$ From the definition of $\underline{\tau}^2_{u}$, it is possible to rewrite (\[eq:seq\_anova1\]) into $$\begin{aligned} \sum_{j=1}^{m}\sum_{\emptyset \ne v_j\subseteq u_j}\sigma_{v_j}^2=\sum_{\emptyset \ne v\subseteq [1:s]}\sigma_v^2 .\end{aligned}$$ Since every $\sigma^2_v$ is nonnegative, this equation implies that for any subset $v$ which is not contained in any of the $u_j$, $j=1,\ldots, m$, we have $\sigma^2_v=0$ and thus $f_v({\boldsymbol{x}})=0$. Therefore, $f({\boldsymbol{x}})$ can be expressed as $$\begin{aligned} f({\boldsymbol{x}}) = f_{\emptyset}+\sum_{j=1}^{m}\sum_{\emptyset \ne v_j\subseteq u_j}f_{v_j}({\boldsymbol{x}}) .\end{aligned}$$ The proof of the reverse direction is trivial. Hence, the result follows. Now we introduce the following notation. \[def1\] For $m, s\in {\mathbb{N}}$ such that $m\le s$, let $u_1,\ldots, u_m$ be $m$ disjoint subsets of $[1:s]$. We define a separability index with respect to $u_1,\ldots, u_m$, which is denoted by $\gamma_{u_1,\ldots, u_m}^2$, as follows. $$\begin{aligned} \gamma_{u_1,\ldots, u_m}^2=\sigma^2-\sum_{j=1}^{m}\underline{\tau}^2_{u_j} .\end{aligned}$$ It is trivial from the definition that $\gamma_{u_1,\ldots, u_m}^2$ ranges from $0$ to $\sigma^2$. Further, we emphasize that the condition $\gamma_{u_1,\ldots, u_m}^2=0$ can be substituted for the condition $\sum_{j=1}^{m}\underline{\tau}^2_{u_j}=\sigma^2$ in Theorem \[theorem2\]. Our goal is to construct an algorithm which estimates $\gamma_{u_1,\ldots, u_m}^2$ of a black-box function $f({\boldsymbol{x}})$ computationally.
In order to obtain a computable form for estimation of $\gamma_{u_1,\ldots, u_m}^2$, we use the integral form of $\underline{\tau}^2_u$, see for example [@Owe12; @Sal02] $$\begin{aligned} \underline{\tau}^2_u=\int_{[0,1]^{2s}}f({\boldsymbol{x}})\left(f({\boldsymbol{x}}_u,{\boldsymbol{z}}_{-u})-f({\boldsymbol{z}})\right)d{\boldsymbol{x}}d{\boldsymbol{z}},\end{aligned}$$ and that of $\sigma^2$ $$\begin{aligned} \sigma^2=\int_{[0,1]^{2s}}f({\boldsymbol{x}})\left(f({\boldsymbol{x}})-f({\boldsymbol{z}})\right)d{\boldsymbol{x}}d{\boldsymbol{z}},\end{aligned}$$ where ${\boldsymbol{x}}$ and ${\boldsymbol{z}}$ are independently and identically (uniformly) distributed in $[0,1]^s$, and the $s$-dimensional vector $({\boldsymbol{x}}_u,{\boldsymbol{z}}_{-u})$ denotes ${\boldsymbol{y}}=(y_1,\ldots,y_s)$ in which $y_j=x_j$ for $j\in u$ and $y_j=z_j$ for $j\in -u$. Then, we have the following form of $\gamma_{u_1,\ldots, u_m}^2$. $$\begin{aligned} \gamma_{u_1,\ldots, u_m}^2 & = & \int_{[0,1]^{2s}}f({\boldsymbol{x}})\left(f({\boldsymbol{x}})-f({\boldsymbol{z}})-\sum_{j=1}^{m}\left(f({\boldsymbol{x}}_{u_j},{\boldsymbol{z}}_{-u_j})-f({\boldsymbol{z}}) \right)\right)d{\boldsymbol{x}}d{\boldsymbol{z}}\nonumber \\ & = & \int_{[0,1]^{2s}}f({\boldsymbol{x}})\left(f({\boldsymbol{x}})+(m-1)f({\boldsymbol{z}})-\sum_{j=1}^{m}f({\boldsymbol{x}}_{u_j},{\boldsymbol{z}}_{-u_j})\right)d{\boldsymbol{x}}d{\boldsymbol{z}}.\end{aligned}$$ Since the integral can be approximated by the Monte Carlo method, which averages $n$ evaluations at random points with equal weights, we propose the following algorithm to estimate $\gamma_{u_1,\ldots, u_m}^2$. \[algorithm1\](Estimation of $\gamma_{u_1,\ldots, u_m}^2$) For $m, s\in {\mathbb{N}}$ such that $m\le s$, let $u_1,\ldots, u_m$ be $m$ disjoint subsets of $[1:s]$ and let $\gamma_{u_1,\ldots, u_m}^2$ be the separability index as defined in Definition \[def1\]. For $n\in {\mathbb{N}}$, we proceed as follows. 1.
Generate ${\boldsymbol{x}}_i,{\boldsymbol{z}}_i\in [0,1]^s$ for $0\le i<n$ randomly. 2. Compute the approximation of $\gamma_{u_1,\ldots, u_m}^2$ $$\begin{aligned} \label{approx} \hat{\gamma}_{u_1,\ldots, u_m}^2=\frac{1}{n}\sum_{i=0}^{n-1}f({\boldsymbol{x}}_i)\left(f({\boldsymbol{x}}_i)+(m-1)f({\boldsymbol{z}}_i)-\sum_{j=1}^{m}f({\boldsymbol{x}}_{i,u_j},{\boldsymbol{z}}_{i,-u_j}) \right) .\end{aligned}$$ where ${\boldsymbol{x}}_{i,u_j}=(x_{i,l})_{l\in u_j}$ in which $x_{i,l}$ is the $l$-th component of ${\boldsymbol{x}}_i$, and the same notation applies to ${\boldsymbol{z}}_{i,-u_j}$. It is obvious that the computational complexity of our algorithm is linear in $m$ and $n$. Furthermore, when $f({\boldsymbol{x}})$ is separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$, our algorithm yields exactly zero for $\hat{\gamma}_{u_1,\ldots, u_m}^2$ because the expression in the parentheses of (\[approx\]) is zero for any ${\boldsymbol{x}}_i,{\boldsymbol{z}}_i\in [0,1]^s$. In order to find the disjoint subsets $u_1,\ldots, u_m$ such that $\hat{\gamma}_{u_1,\ldots, u_m}^2$ is zero, however, we would need to try a large number of possible candidates of $\{u_1,\ldots, u_m\}$ for $m=2,\ldots, s$. To make a systematic search for $m$ and $u_1,\ldots, u_m$ possible, we use the following lemma. \[lemma2\] That $f({\boldsymbol{x}})$ is separable with respect to ${\boldsymbol{x}}_{u_1},\ldots, {\boldsymbol{x}}_{u_m}$ is equivalent to $f({\boldsymbol{x}})$ being separable with respect to ${\boldsymbol{x}}_{u_j}$ and ${\boldsymbol{x}}_{-u_j}$ for every $j=1,\ldots, m$. Since this lemma is trivial, we omit the proof. This lemma implies that it is sufficient to search one-by-one for subsets $u$ whose value of $\gamma_{u,-u}$ is zero, without $u$ touching the already found ones. Moreover, due to the symmetry of $u$ and $-u$, the overall search space of $u$ can be reduced to $\emptyset \ne u\subseteq [1:s-1]$ and we can simply write $\gamma_u:=\gamma_{u,-u}$.
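As an illustrative sketch of Algorithm \[algorithm1\] (function names are my own, not from the paper; indices are 0-based), the estimator (\[approx\]) takes only a few lines. For a function that is exactly separable with respect to the given blocks the summand vanishes pointwise, so the estimate is zero up to floating-point roundoff, as noted above:

```python
import numpy as np

def gamma_hat(f, blocks, x, z):
    """Monte Carlo estimate of the separability index gamma^2_{u_1,...,u_m}.

    f      : vectorized function taking an (n, s) array to an (n,) array
    blocks : list of disjoint index lists u_1, ..., u_m covering all s variables
    x, z   : two independent (n, s) arrays of uniform samples on [0,1]^s
    """
    m = len(blocks)
    total = f(x) + (m - 1) * f(z)
    for u in blocks:
        y = z.copy()
        y[:, u] = x[:, u]          # hybrid point (x_{u_j}, z_{-u_j})
        total -= f(y)
    return np.mean(f(x) * total)

rng = np.random.default_rng(0)
x, z = rng.random((2, 20000, 5))

# f is separable w.r.t. {1}, {2,4}, {3,5} in 1-based notation
f_sep = lambda p: p[:, 0]**2 + p[:, 1] * p[:, 3] + p[:, 2] * p[:, 4]
assert abs(gamma_hat(f_sep, [[0], [1, 3], [2, 4]], x, z)) < 1e-12

# a fully coupled function gives a strictly positive estimate
f_prod = lambda p: np.prod(p, axis=1)
assert gamma_hat(f_prod, [[j] for j in range(5)], x, z) > 0.0
```

The cost is one evaluation batch per block plus two for $f({\boldsymbol{x}})$ and $f({\boldsymbol{z}})$, i.e. linear in $m$ and $n$ as stated.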
Based on these observations, we proceed with the search in the following order $$\begin{aligned} u & = & \{1\}, \\ u & = & \{2\}, \{1,2\}, \\ u & = & \{3\}, \{1,3\}, \{2,3\}, \{1,2,3\},\\ & \vdots & \\ u & = & \{s-1\}, \{1,s-1\}, \ldots, \{1,\ldots, s-1\} .\end{aligned}$$ If $\gamma_{u}$ turns out to be zero during this process, we can omit from the remaining candidates every subset that touches at least one element of $u$. For example, if $s=5$ and $f({\boldsymbol{x}})$ is separable with respect to $x_{1},{\boldsymbol{x}}_{\{2,4\}},{\boldsymbol{x}}_{\{3,5\}}$, we proceed with the search as follows. $$\begin{aligned} u & = & \{1\}^*, \\ u & = & \{2\}, \\ u & = & \{3\},\{2,3\} \\ u & = & \{4\},\{2,4\}^* ,\end{aligned}$$ where $*$ means that the corresponding subset of variables is found to be separable. Consequently, we obtain $u_1=\{1\},u_2=\{2,4\}$. From Lemma \[lemma2\], we have $m=3$ and $u_3=\{3,5\}$. Hence, our extended algorithm to estimate the number of disjoint subsets $m$ and the disjoint subsets themselves $u_1,\ldots, u_m$ is given as follows. \[algorithm2\](Estimation of $m$ and $u_1,\ldots, u_m$) For $s,n\in {\mathbb{N}}$, we proceed as follows. 1. Set $r=m=1$ and generate ${\boldsymbol{x}}_i,{\boldsymbol{z}}_i\in [0,1]^s$ for $0\le i<n$ randomly. 2. For each subset $v$ such that $v\subseteq [1:r-1]\setminus \bigcup_{j=1}^{m-1}u_j$, compute $\hat{\gamma}_{v\cup\{r\}}^2$ according to (\[approx\]). If one finds $v$ such that $\hat{\gamma}_{v\cup\{r\}}^2=0$, set $u_m=v\cup\{r\}$ and $m=m+1$. 3. Set $r=r+1$. If $r<s$, go to step 2. The computational complexity of our extended algorithm is function-dependent as follows. When $f({\boldsymbol{x}})$ is separable with respect to all the variables, our algorithm searches only $u=\{1\},\ldots,\{s-1\}$ in this order. Hence, the computational complexity is minimized and becomes linear in $s$ and $n$.
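The search just described can be sketched as follows (an illustrative implementation with hypothetical names, not from the paper; indices are 0-based and the estimator is the two-block case $m=2$ of (\[approx\])). On the five-dimensional example above it recovers the partition $\{1\},\{2,4\},\{3,5\}$:

```python
import numpy as np
from itertools import combinations

def gamma2_split(f, u, s, x, z):
    # hat-gamma^2 for the pair of blocks u and -u, i.e. (approx) with m = 2
    cu = [j for j in range(s) if j not in u]
    yu, ycu = z.copy(), z.copy()
    yu[:, u] = x[:, u]
    ycu[:, cu] = x[:, cu]
    return np.mean(f(x) * (f(x) + f(z) - f(yu) - f(ycu)))

def find_blocks(f, s, n=2000, tol=1e-10, seed=0):
    x, z = np.random.default_rng(seed).random((2, n, s))
    blocks, used = [], set()
    for r in range(s - 1):               # mirrors searching u inside [1:s-1]
        if r in used:
            continue
        free = [j for j in range(r) if j not in used]
        hit = None
        for k in range(len(free) + 1):   # candidates v + {r}, v a k-subset of free
            for v in combinations(free, k):
                u = sorted(v) + [r]
                if abs(gamma2_split(f, u, s, x, z)) < tol:
                    hit = u
                    break
            if hit:
                break
        if hit:
            blocks.append(hit)
            used.update(hit)
    rest = [j for j in range(s) if j not in used]
    if rest:
        blocks.append(rest)              # the leftover variables form the last block
    return blocks

# x1^2 + x2*x4 + x3*x5 in 1-based notation
f = lambda p: p[:, 0]**2 + p[:, 1] * p[:, 3] + p[:, 2] * p[:, 4]
assert find_blocks(f, 5) == [[0], [1, 3], [2, 4]]
```

Separable splits are detected via exact cancellation in the summand, so a tight tolerance suffices; non-separable splits yield estimates far above it for moderate $n$.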
When $f({\boldsymbol{x}})$ is not separable with respect to any subset of variables, on the other hand, our algorithm searches all the candidates $\emptyset \ne u\subset [1:s]$ so that the computational complexity is maximized. Since the cardinality of $u$ such that $\emptyset \ne u\subset [1:s]$ is $2^s-2$, the computational complexity remains linear in $n$ but becomes exponential in $s$. For this reason, Algorithm \[algorithm2\] should work for small $s$ but becomes infeasible as $s$ increases. How to overcome this drawback remains open for further research. At this moment, for large $s$, Algorithm \[algorithm1\] with $m=s$ and $u_j=\{j\}$ for $j=1,\ldots,s$ will be of use as an initial screening to estimate the separability with respect to all the variables at one time, which can be done with computational complexity linear in $s$. Acknowledgments {#acknowledgments .unnumbered} =============== The support of Grant-in-Aid for JSPS Fellows No.24-4020 is gratefully acknowledged. [99]{} Caflisch, R.E., Morokoff, W. and Owen, A.B., Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension, J. Comput. Financ., 1, 27–46 (1997). Efron, B. and Stein, C., The jackknife estimate of variance, Ann. Stat., 9, 586–596 (1981). García, S., Molina, D., Lozano, M. and Herrera, F., A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the CEC’ 2005 Special Session on Real Parameter Optimization, J. Heuristics, 15, 617–644 (2009). Hoeffding, W., A class of statistics with asymptotically normal distribution, Ann. Math. Stat., 19, 293–325 (1948). Kuo, F.Y., Sloan, I.H., Wasilkowski, G.W. and Woźniakowski H., On decompositions of multivariate functions, Math. Comput., 79, 953–966 (2010). Lozano, M., Molina, D. and Herrera, F., Editorial scalability of evolutionary algorithms and other metaheuristics for large-scale continuous optimization problems, Soft Comput., 15, 2085–2087 (2011).
Owen, A.B., Variance components and generalized Sobol’ indices, arXiv:1205.1774. Rabitz, H., Alis, O.F., Shorter, J. and Shim, K., Efficient input-output model representation, Comput. Phys. Commun., 117, 11–20 (1999). Salomon, R., Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms, BioSystems, 39, 263–278 (1996). Saltelli, A., Making best use of model evaluations to compute sensitivity indices, Comput. Phys. Comm., 145, 280–297 (2002). Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M. and Tarantola, S., Global Sensitivity Analysis. The Primer, John Wiley and Sons, New York (2008) Sobol’, I.M., Sensitivity estimates for nonlinear mathematical models, Matematicheskoe Modelirovanie, 2, 112–118 (1990) (in Russian), English translation in: Math. Model. Comput. Exp., 1, 407–414 (1993). Sobol’, I.M., Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Math. Comput. Simul., 55, 271–280 (2001). Sobol’, I.M., Theorems and examples on high dimensional model representation, Reliab. Eng. Syst. Saf., 79, 187–193 (2003).
--- bibliography: - 'thesis.bib' title: '[**Thermodynamics of QCD-inspired theories**]{}' ---
--- abstract: 'Traditional grid or neighbor-based static pooling has become a constraint for point cloud geometry analysis. In this paper, we propose DAR-Net, a novel network architecture that focuses on dynamic feature aggregation. The central idea of DAR-Net is generating a self-adaptive pooling skeleton that considers both scene complexity and local geometry features. Providing variable semi-local receptive fields and weights, the skeleton serves as a bridge that connects local convolutional feature extractors and a global recurrent feature integrator. Experimental results on indoor scene datasets show advantages of the proposed approach compared to state-of-the-art architectures that adopt static pooling methods.' author: - | Zongyue Zhao Min Liu Karthik Ramani\ Purdue University\ [zhao938@purdue.edu]{} bibliography: - 'wacv.bib' title: 'DAR-Net: Dynamic Aggregation Network for Semantic Scene Segmentation' --- Introduction {#sec1} ============ For the task of 3D geometry understanding, neural networks that directly take point clouds as input have shown advantages compared to voxel and multi-view based networks that simulate 2D scenarios [@maturana_voxnet:_2015; @charles_pointnet:_2017; @dai_scannet:_2017; @wang_voting_2015; @tchapmi_segcloud:_2017; @qi_volumetric_2016]. The trailblazer, PointNet, addressed the lack of a correspondence graph by using multi-layer perceptrons and a single global pooling layer, neither of which relied on local dependencies [@charles_pointnet:_2017]. The two-scale network performed well on object analysis. However, its deficiencies in a) local correspondence identification; b) intermedium feature aggregation and c) valid global information integration lead to poor performance on large-scale scene segmentation. ![Segmentation results on the S3DIS dataset. 
a) Input point cloud; b) Validation: color grey indicates successful prediction; c) Ground truth; d) Prediction from DAR-Net.[]{data-label="fig:visual"}](F1.png){width="47.00000%"} Analyzing the drawbacks of PointNet, several papers worked on the local deficiency by constructing mapping indices for convolutional neural networks (CNN) [@tatarchenko_tangent_2018; @boscaini_learning_2016; @li_pointcnn:_2018; @shoef_pointwise:_2019]. On the other hand, works that focused on the global integration problem gained inspiration from natural language processing and turned to deep recurrent neural networks (RNN) [@tchuinkou_r-covnet:_2018; @huang_recurrent_2018; @ferrari_3d_2018]. While various works contributed to both the micro and macro ends of the scale spectrum, what was left in between received less attention. Feature aggregation between local neighborhoods and the global representation, if present at all, remains static and independent of the geometry context [@boscaini_learning_2016; @huang_recurrent_2018; @liu_3dcnn-dqn-rnn:_2017; @qi_pointnet++:_2017; @tchuinkou_r-covnet:_2018; @tatarchenko_tangent_2018; @ferrari_3d_2018]. For example, Tangent Convolutions [@tatarchenko_tangent_2018] used rectangular grids with uniform resolution for local mean pooling. RSNet [@huang_recurrent_2018] evenly divided the entire scene into slices and did max-pooling within each slice. 3P-RNN [@ferrari_3d_2018], despite introducing variance of receptive field sizes at the local scale, went back to the voxelization track when feeding the global recurrent network. Those rigid pooling methods did not adapt to the information density distribution within the point cloud, leading to computational inefficiencies and poor segmentation results on less-dominating classes. Shapes with rich geometry features that occur less often are not detected effectively. We present an approach for intermedium feature aggregation to address the deficiencies of traditional static pooling layers. 
The key concept in the aggregation process is forming a pooling skeleton such that a) its size corresponds to the individual scene scale and complexity; b) each node links a variable set of points that represent a meaningful spatial discreteness; c) each node is weighted against the index set to further utilize the information distribution and provide robustness even when the node-discreteness correlation fails. Such a skeleton is learned in an unsupervised manner prior to the training process. We construct a network, DAR-Net, to incorporate the dynamic aggregation operation with convolutional feature extraction and global integration, while handling permutation invariance at multiple scales. The network is trained on two large-scale indoor scene datasets [@dai_scannet:_2017; @armeni_3d_2016] and shows advantages over recent architectures using static pooling methods and similar inputs. A sample of the semantic segmentation results on the S3DIS dataset [@armeni_3d_2016] (Sec. \[data\]) is shown in Figure \[fig:visual\]. Related Work ============ Recent contributions relevant to our work can be roughly divided into three categories: convolutional feature extraction, global integration and unsupervised pre-processing. For context completeness, traditional 3D analysis networks that do not operate on point clouds are first introduced. Prior to point clouds --------------------- Although convolutional neural networks (CNN) have achieved great success in analyzing 2D images, they cannot be directly applied to point clouds because of their unorganized nature. Without a pixel-based neighborhood defined, vanilla CNNs cannot extract local information and gradually expand receptive field sizes in a meaningful manner. Thus, segmentation tasks were first performed in a way that simulates 2D scenarios – by fusing partial views represented with RGB-D images together [@afzal2014rgb; @qi_volumetric_2016; @lun20173d; @hazirbas_fusenet:_2016]. 
Other works transform point clouds into cost-inefficient voxel representations on which CNNs can be directly applied [@maturana_voxnet:_2015; @huang_point_2016; @dai_scannet:_2017]. While these methods did benefit from mature 2D image processing network structures, inefficient 3D data representations constrained them from showing good performance for scene segmentation, where it is necessary to deal with large, dense 3D scenes as a whole. Therefore, recent research gradually turned to networks that directly operate on point clouds when dealing with semantic segmentation for complex indoor/outdoor scenes [@ferrari_3d_2018; @tatarchenko_tangent_2018; @landrieu_large-scale_2018]. Local feature extraction ------------------------ As introduced, PointNet used multi-layer perceptrons (which process each point independently) to fit the unordered nature of point clouds [@huang_point_2016]. Furthermore, similar approaches using $1\times1$ convolutional kernels [@li_so-net:_2018], radius querying [@qi_pointnet++:_2017] or nearest neighbor searching [@klokov_escape_2017] were also adopted. Because local dependencies were not effectively modeled, overfitting constantly occurred when these networks were used to perform large-scale scene segmentation. In addition, work like R-Conv [@tchuinkou_r-covnet:_2018] tried to avoid time-consuming neighbor searching with a global recurrent transformation prior to convolutional analysis. However, scalability problems still occurred as the global RNN cannot directly operate on the point cloud representing an entire dense scene, which often contains millions of points. Tangent Convolution [@tatarchenko_tangent_2018] proposed a way to efficiently model local dependencies and align convolutional filters on different scales. Their work is based on local covariance analysis and down-sampled neighborhood reconstruction with raw data points. 
Although tangent convolution itself functioned well at extracting local features, the network architecture was limited by static, uniform medium-level feature aggregation and a complete lack of global integration. Global Integration ------------------ Several works turned to the global scale for permutation robustness. Its simplest form, global maximum pooling, only fulfilled light-weight tasks like object classification or part segmentation [@charles_pointnet:_2017]. Moreover, RNNs constructed with advanced cells like Long Short-Term Memory [@hochreiter_long_1997] or the Gated Recurrent Unit [@chung_empirical_2014] offered promising results on scene segmentation [@landrieu_large-scale_2018], even for those architectures without significant consideration for local feature extraction [@huang_recurrent_2018; @liu_3dcnn-dqn-rnn:_2017]. However, in those cases the global RNNs were built deep, bidirectional or compact with hidden units, imposing a strict limitation on the direct input. As a result, the original point cloud was often down-sampled to an extreme extent, or the network was only capable of operating on sections of the original point cloud [@huang_recurrent_2018]. Unsupervised Learning --------------------- Various works in this area aimed to promote existing supervised-learning networks as auto-encoders. For example, FoldingNet [@yang_foldingnet:_2018] managed to learn global features of a 3D object through constructing a deformable 2D grid surface; PointWise [@shoef_pointwise:_2019] considered the theoretical smoothness of object surfaces; and MortonNet [@thabet_mortonnet:_2019] learned compact local features by generating fractal space-filling curves and predicting their endpoints. Although features provided by these auto-encoders are reported to be beneficial, we do not adopt them into our network for a fair evaluation of the aggregation method we propose. 
Different from the common usage of finding a rich, concise feature embedding, SO-Net [@li_so-net:_2018] learned a self-organizing map (SOM) for feature extraction and aggregation. Despite its novelty, few performance improvements were observed even when compared to PointNet or OctNet [@riegler_octnet:_2017]. Possible reasons include the lack of a deep local and global analysis. SO-Net used the SOM to expand the scale of data for local feature extraction, and conducted most of the operations on the expanded feature space. The architecture was capable of handling object analysis tasks. However, for this task each point cloud merely contained a few thousand points, making the benefit from carefully arranging tens of pooling nodes limited. We argue that SOMs or similar self-adapted maps perform better when used to contract the feature space for analyzing large-scale point clouds. Map nodes should be assigned appropriate weights to provide more detailed differentiation and robustness. Once features are aggregated to the skeleton nodes, a thorough, deep integration process should be conducted prior to decoding. Dynamic aggregation =================== This section aims to transform pointwise local features into rich semi-local representations, and to propagate information stored in the semi-local space back to the point cloud for semantic analysis. In this process, two things need to be properly designed. First, a pooling skeleton (an intermedium information carrier) that adapts to the global and local geometry structures of the input point cloud. Second, mappings between the skeleton feature space and the point cloud feature space. For clarity, in the following paragraphs the point cloud is referred to as $P_N=\{p_i\mid 0<i\leq N\}$ and its pooling skeleton is referred to as $S_M=\{s_j\mid 0< j\leq M\}$. Skeleton formation {#form} ------------------ For the task of indoor scene segmentation, the scale of each individual scene varies significantly. 
E.g., in the S3DIS dataset [@armeni_3d_2016] the most complicated scene contains $9.2\times 10^6$ points, more than 100 times larger than the least complicated one ($8.5\times 10^4$). Therefore, the size of the skeleton, indicated with the number of nodes $M$, should not be static as in works applied to object analysis [@li_so-net:_2018]. Further ablation studies (Sec. \[ABl\]) demonstrate that an empirical logarithm relationship (Figure \[fig:rec\]) between $N$ and $M$ adapts to scene complexity better than a stationary $M$ or a stationary average receptive field size. We use a Kohonen network [@kohonen_self-organized_1982; @li_so-net:_2018] to implement the dynamic skeleton. Contrary to initialization methods suggested by [@ciampi_clustering_2000; @li_so-net:_2018], we conduct random initialization prior to normalization to provide a reasonable guess with respect to substantial spacing along different axes. Such a method provides extra robustness for the task of scene understanding, where individual inputs often contain a dimension disproportionate to others (long hallways, large conference rooms). An example of the skeleton is shown in Figure \[fig:som\]. ![Dynamic Aggregation Skeleton (Red) and Point Cloud (Yellow). (Floor, ceiling and walls in the front are removed for clarity.) Note that each chair is assigned at least one skeleton node, while uniform structures like walls are assigned a smaller node density. Two nodes (circumscribed with blue ovals) fail to be attached to the point cloud. Such a problem is addressed in Sec. \[Agg\].[]{data-label="fig:som"}](F2.png){width="40.00000%"} Feature aggregation {#Agg} ------------------- Given an arbitrary point cloud $P_N$ and its corresponding skeleton $S_M$, dynamic aggregation maps the pointwise feature space $F^{agg-i}\subset \Re^{N\times C}$ into the node-wise feature space $F^{agg-o}\subset \Re^{M\times C}$. 
By introducing a correspondence indicator $T_j:0<T_j\leq N$ that regulates the node receptive field size and a possible global intervention factor $g$, the general expression of dynamic aggregation is shown in Eq. (\[general\]). $$\label{general} f_j^{agg-o}=f_j^{agg-o}(f_{i^j_1}^{agg-i},...,f_{i^j_{T_j}}^{agg-i},g),~0< i^j_t\leq N$$ This general expression contains two sections that await instantiation: the dependency searching function that constructs indices $\{i_t\}$, and the pooling function dealing with an arbitrary set of inputs $f=f^{agg-o}(f_1,...,f_T,g)$. **Indexing.** Each point $p_i$ is first linked to its K-nearest neighbor nodes, referred to as $\{s_{iK}\}$. As all points are iterated through this process, a global index matrix $I\subset \mathbb{N}^{N\times K}$ is formed, whose element $I(i,k)$ is the index of the $k$-th neighbor node of point $p_i$, i.e., $\forall ind \in I, 0< ind\leq M $. Dynamic aggregation indices, $\{i_t\}$, are then generated from traversing the node space: all points indexing $s_j$ are categorized into $\{i_t^j\}=\{i\mid I(i,k)=j \text{ for some } k\}$. ![image](Capture.png){width="95.00000%"} **Aggregating function.** A semi-average pooling method is used to further extract information density and address skeleton construction failures. Although skeleton nodes are already arranged more compactly where geometry features vary more (Sec. \[form\]), each node in those areas still indexes relatively more neighbor points, as rich geometry features usually pack more points into a unit space (at the scale of the skeleton receptive field). Therefore, assigning a larger weight to nodes correlating with more points becomes advantageous. Moreover, when a skeleton node fails to represent any geometry structure (see examples shown in Figure \[fig:som\]), traditional average or maximum pooling cannot identify the situation and passes irrelevant features forward. The semi-average pooling function, shown in Eq. 
(\[pooling\]), is implemented with a global descriptor $g$ that indicates the average receptive field size, i.e., the average number of neighbor points a node indexes. $$\label{pooling} \left\{ \begin{array}{lr} f^{agg-o}_j=\sum_j(f_{i^j_1}^{agg-i},...,f_{i^j_{T_j}}^{agg-i})/g & \\ \\ g=\sum_j|\{i_t^j\}|/M=\sum_jT_j/M & \end{array} \right.$$ Feature propagation ------------------- As the neighborhood searching process is conducted on the node space, the global index matrix $I$ can be directly used to unpool node-wise features back to the point cloud as if pooling from $\Re^{KN\times C}$ to $\Re^{N\times C}$. $$\label{unpool} f^{ppg-o}_i=\max_k\{f^{ppg-i}_{I(i,k)}\}$$ where $f^{ppg-o}_i$ is the propagated feature corresponding to point $p_i$ and $f^{ppg-i}_{I(i,k)}$ are features corresponding to nodes indexed by point $p_i$. Note that the redundant space (of size $KN$) is only implicitly used through indices throughout the pooling-unpooling process. Hence, the dynamic aggregation approach we propose is compatible with large-scale dense point clouds. Global integration ================== Global integration aims to model the long-range dependencies in a point cloud, which can be described as a mapping from one node-wise feature space to another: $R:\Re^{M\times C_1}\rightarrow \Re^{M\times C_2}$. We use a GRU-based RNN [@chung_empirical_2014] for permutation invariance upon unordered nodes. Features on the entire skeleton, $\{f^{agg-o}\}\subset \Re^{M\times C}$, are treated as a single-batch sequence $Q$ of length $M$: $Q[j]=f^{agg-o}_j$. In addition, as $M$ varies from scene to scene, all input sequences are padded to the same length $\max \{M\}$. The padded sequence is then fed into the recurrent network. 
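The indexing, semi-average pooling and max-unpooling maps defined by the two equations above can be sketched in a few lines of numpy. This is a minimal brute-force illustration of the stated formulas, not the authors' implementation: the function names are ours, and the exhaustive distance matrix stands in for whatever neighbor search a real pipeline would use.

```python
import numpy as np

def dynamic_aggregate(points, nodes, feats, k=3):
    """Semi-average pooling of point features onto skeleton nodes.

    points: (N, 3), nodes: (M, 3), feats: (N, C).  Each point is linked
    to its K nearest nodes; node features are sums over linked points
    divided by the global average receptive field g = K*N/M.
    Returns node features (M, C) and the global index matrix I (N, K).
    """
    n, m = len(points), len(nodes)
    d2 = ((points[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    I = np.argsort(d2, axis=1)[:, :k]                  # K nearest nodes per point
    g = k * n / m                                      # average receptive field
    node_feats = np.zeros((m, feats.shape[1]))
    for j in range(m):
        linked = np.nonzero((I == j).any(axis=1))[0]   # the set {i_t^j}
        if linked.size:
            node_feats[j] = feats[linked].sum(0) / g   # semi-average pooling
    return node_feats, I

def propagate(node_feats, I):
    """Max-unpool node features back to points, reusing the stored indices."""
    return node_feats[I].max(axis=1)                   # (N, K, C) -> (N, C)
```

Because $g$ is a single global constant, a node linked to more points than average receives a proportionally larger aggregated feature, while a node linked to none stays at zero, which is the failure handling motivated above; the resulting $(M, C)$ node features are what the padded sequence fed to the recurrent network is built from.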
As a result, the output features on each node are relevant to input information from all nodes, creating a maximized overlapping receptive field: $$f^{rnn-o}_j=R(f^{agg-o}_1,...,f^{agg-o}_M)$$ Architecture ============ We design a convolutional-recurrent network to coordinate short- and long-range spatial dependencies with the operation of dynamic aggregation, as is shown in Figure \[fig:arc\]. The skeleton-based aggregation in this network clusters intermediate-level information and compresses the feature space. Thus, the recurrent integration network is designed to be compact and efficient. For pre-processing, we first estimate the scene complexity to determine the appropriate skeleton size for each scene. The skeleton is then clustered in an unsupervised manner for a preliminary understanding of the semi-local structure distribution. We adopt tangent convolutions [@tatarchenko_tangent_2018] for local pointwise feature extraction. The encoded local features are then dynamically aggregated to the skeleton, as an intermediate scale of information abstraction. Furthermore, node-wise features that independently correspond to an intermediate receptive field are treated with a global RNN, which implicitly learns long-range knowledge. Globally integrated information is propagated back to the point cloud for concatenation with local features and hierarchical decoding. In the end, $1\times1$ pointwise convolutions are used to generate semantic prediction results. Experiments =========== In this section, we first report a few implementation details and evaluation criteria, then present the best segmentation results DAR-Net generates. More experiments are discussed in the ablation study section. Datasets and details {#data} -------------------- The performance of dynamic aggregation and DAR-Net is evaluated in the task of indoor scene segmentation. 
Two commonly used large-scale datasets, Stanford Large-Scale 3D Indoor Spaces (S3DIS) [@armeni_3d_2016] and ScanNet [@dai_scannet:_2017], are adopted to conduct the experiments. The **S3DIS** dataset includes more than 200 dense indoor scenes gathered from three different buildings; the largest scene contains more than nine million points, labeled in 13 classes. For this dataset, we use the widely-used A5 train-test split  [@ferrari_3d_2018; @huang_recurrent_2018; @tchapmi_segcloud:_2017; @charles_pointnet:_2017]. The **ScanNet** dataset contains over 1,500 indoor scans labeled in 20 classes. We use the standard train-test split provided by [@dai_scannet:_2017]. **Implementation details.** We introduce multiple levels of feature carriers other than the most compact $M$ space. For clarity, point clouds down-sampled to a certain resolution $r$ will be denoted as $P^r$. For computational purposes, we use coordinates and color information on $P^5$ as raw inputs. Coordinates are then used to generate the dynamic pooling skeleton and conduct covariance analysis, for estimating normals and reconstructing local neighborhoods. As a result, input channels of the feature extractor include depth to the tangential plane, z-coordinate, estimated normals and RGB information, all of which are batch-normalized to $[0,1]$ [@tatarchenko_tangent_2018]. Feature extractors encode $P^5$ to a rich representation on $P^{20}$ with 128 channels [@tatarchenko_tangent_2018]. Features on $P^{20}$ are then aggregated to the skeleton space $S^M$, which is concise enough for a global integration network to handle ($M\leq 256$). For best performance (Sec. \[ABl\]), the RNN is designed to be single-directional, single-layered with 256 hidden units. Its 128 output channels are then propagated to $P^{20}$ and fed back to convolutional decoders. All our reported results are based on original point clouds. 
As the network only outputs segmentation on the down-sampled $P^5$ space, a nearest-neighbor search between $P^5$ and $P$ is conducted to extrapolate predictions. The only data augmentation method we adopt is rotation about the z-axis, in order to reduce invalid information from normal vector directions. We use individual rooms as training batches. Following the suggestions of [@tatarchenko_tangent_2018], we pad each room to a uniform batch size throughout the network for computational purposes. Padded data stays out of indexing and has no effect. All supervised sections are trained as a whole using the cross-entropy loss function and an Adam optimizer with an initial learning rate of $5\times10^{-4}$ [@kingma2014adam]. The unsupervised clustering network is trained separately. The Kohonen map is initialized for each individual scene to accommodate its shape and complexity. Qualitative results suggest that the usage of principal components, either as additional inputs or for initial guesses, hurts the robustness of the algorithm. Therefore, the Kohonen maps are initialized randomly with respect to the node spacing, and trained to convergence with an initial learning rate of 0.4. **Measures.** For quantitative reasoning, we report mean (over classes) intersection over union (**mIoU**), class-wise **IoU** and mean accuracy over class (**mA**). We do not use the indicator of overall accuracy as it fails to measure the actual performance for scene segmentation, where several classes (floor, ceiling, etc.) are dominating in size yet easy to identify. In addition, all results are calculated over the entire dataset, i.e., if a certain class does not occur in a certain scene, we do NOT pad accuracy for misleadingly better results. Main results {#Main Results} ------------ Results in this section are obtained under the following settings: receptive field indicator $K=3$, aggregation skeleton is log-activated with $\max\{M\}=256$, and aggregation method is as Eq. 
(\[pooling\]). Segmentation results on the two indoor scene datasets are shown in Tables \[table1\] and \[table2\], respectively. We compare our results against both commonly-used benchmarks [@charles_pointnet:_2017; @tchapmi_segcloud:_2017; @qi_pointnet++:_2017] and state-of-the-art networks that focused on local feature extraction [@tatarchenko_tangent_2018] or global recurrent analysis [@huang_recurrent_2018]. The results show the advantage of using dynamic aggregation to coordinate local and global analysis as a whole. For evaluation completeness, we include some of the most recent networks [@dai_3dmv:_2018; @huang_texturenet:_2019] that provide high performance. However, the input channels fed to these networks are further processed. For the S3DIS dataset, class-wise IoU results demonstrate that our network achieves better predictions on classes that are spatially discrete (\[table\], \[door\]) or with rich, compact geometry features (\[bookcase\], \[clutter\]), which matches the theoretical benefits of forming a self-adaptive pooling skeleton. On the other hand, such performance improvements come with a minor cost on classes that have more uniform geometrical structures (less information density), like \[floor\] and \[wall\]. Failure on segmenting class \[beam\] is natural due to the train-test split, as beams in the train set (Areas 1-4, 6) show a different pattern from those of the test set (Area 5). We do not use a cross-six-fold validation [@ferrari_3d_2018] to address this matter, as the difference between training data and test data appropriately models real-world applications. For the ScanNet dataset, the class-wise IoU gives a similar picture. Discrete objects (\[sofa\], \[bathtub\], \[toilet\], \[door\]) are better detected whereas structures containing more points yet less information (\[wall\], \[floor\]) are partially omitted. 
We argue that such characteristics are desirable, especially for real-world applications like robotic vision or automatic foreground object detection. Sample segmentation results on the S3DIS dataset are shown in Figure \[pic:s3dis\]. As T-Conv [@tatarchenko_tangent_2018] did not report class-wise IoU, their results are visualized as benchmarks here for a thorough comparison. As suggested in the text, our network performs particularly well on detecting complicated geometry features and spatial discreteness. Ablation study {#ABl} -------------- In this section, we report results from adjusting the dynamic aggregation approach. Unless otherwise specified, all experiments are conducted on the S3DIS dataset. **Node receptive field.** The local receptive field size $T_j$, although varying among nodes, can be generally indicated with an average value $g=\sum_jT_j/M=KN/M$. Because $K$ serves as a linear coefficient, its effect is first evaluated, as is shown in Table \[table3\]. The global integration network demands a limited skeleton size $M_{max}$ for computational purposes. Fixing $M\equiv M_{max}$ is clearly not desirable. As $N$ varies over two orders of magnitude in indoor scene datasets [@armeni_3d_2016], keeping $M$ unchanged leads to a harmful variance in the average receptive field size $g=KN/M$. However, rigidly setting a uniform $g\equiv G$ for all scenes is also not desirable as it fails to take structure complexity into account. An office room and a long hallway may contain the same number of points, but the former naturally requires more detailed inspection (Figure \[pic:s3dis\]). Without introducing hand-crafted histogram descriptors, a reasonable solution is to adopt a greedy approach: always assigning more nodes (more detailed inspection) than the linear relationship $M=(K/G)\times N$ suggests. Experimental studies show that an approximate logarithm function, $M=-6+70\times\log(N)$, best serves for this purpose as shown in Table \[table4\]. 
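For concreteness, the resulting schedule can be written down directly. Everything not stated in the text is an assumption here: the base of the logarithm (taken as 10 below), that $N$ is the down-sampled point count seen by the aggregation stage, and that the cap $\max\{M\}=256$ from the implementation details applies; the function names are ours.

```python
import math

def skeleton_size(n_points, m_max=256):
    """Empirical skeleton size M = -6 + 70*log(N), capped at max{M}.

    Assumptions: logarithm base 10 (not stated in the text) and the
    cap m_max = 256 taken from the implementation details.
    """
    m = round(-6 + 70 * math.log10(n_points))
    return max(1, min(m, m_max))

def avg_receptive_field(n_points, k=3, m_max=256):
    """Average node receptive field g = K*N/M for the chosen skeleton."""
    return k * n_points / skeleton_size(n_points, m_max)
```

Under this schedule $M$ grows with scene complexity until the cap, after which $g$ grows linearly with $N$, i.e., denser scenes trade node count for larger per-node receptive fields.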
Still, learning-based approaches can be introduced in the future to explore the parameter space represented by Figure \[fig:rec\]. ![Schematic representation of receptive field size at $K=3$. This figure only represents average trends for demonstration clarity; the actual receptive field size varies.[]{data-label="fig:rec"}](untitled.png){width="40.00000%"} **Aggregation method.** We compare the proposed semi-average aggregation function (Eq. \[pooling\]) with traditional average/maximum pooling functions. The results shown in Table \[table5\] indicate an advantage from weighting each skeleton node against its receptive field size. No significant influence is observed when changing the unpooling method on the S3DIS dataset. However, experiments on the ScanNet dataset suggest otherwise, as is shown in Table \[table6\]. This phenomenon may be because scans in this dataset are half-open with less uniform geometrical structures, or because this dataset is larger and more diverse. Conclusion ========== We present an approach of dynamic aggregation to introduce variance in the extent of multi-level inspection. By introducing self-adapted receptive field sizes and node weights, dynamic aggregation provides a deep understanding of structures that contain richer geometrical information. We design a network architecture, DAR-Net, to coordinate such an intermedium aggregation method with local and global analysis. Experimental results on large-scale scene segmentation indicate DAR-Net outperforms previous network architectures that adopted static feature aggregation.
--- abstract: 'The SU($n$) Heisenberg model represented by exchange operators is studied by means of high-temperature series expansion in three dimensions, where $n$ is an arbitrary positive integer. The spin-spin correlation function and its Fourier transform $S(\mathbf{q})$ are derived up to $O[(\beta J)^{10}]$ with $\beta J$ being the nearest-neighbor antiferromagnetic exchange in units of temperature. The temperature dependence of $S(\mathbf{q})$ and the next-nearest-neighbor spin-spin correlation in the large-$n$ cases shows that the dominant correlation deviates from $\mathbf{q}=(\pi,\pi,\pi)$ at low temperature, which is qualitatively similar to that of this model in one dimension. The Néel temperature of the SU(2) case is precisely estimated by analyzing the divergence of $S(\pi,\pi,\pi)$. Then, we generalize $n$ of SU($n$) to a continuous variable and gradually increase it from $n=2$. We conclude that the Néel ordering disappears for $n>2$.' author: - Noboru Fukushima title: ' Vanishing Néel Ordering of SU($n$) Heisenberg Model in Three Dimensions ' --- Introduction ============ It is known that properties of quantum spin systems tend to approach those of their corresponding classical spin systems as the spin magnitude increases. However, this is not necessarily the case for a sequence of models in which the number of multipolar-interaction terms increases as the spin magnitude increases. Consequently, the higher-spin systems may have stronger quantum effects in this case. In other words, such additional terms can break a classical correspondence down even in high dimensions. These “large-spin-magnitude” systems mentioned above include systems in which one unit has more than two degrees of freedom, such as orbitally degenerate systems[@Kugel82; @FK; @KF]. In such systems, many coupling constants appear in general. However, experimental information about multipolar couplings is limited. 
As a starting point to explore such systems, understanding of one of the extreme limits must be useful. Therefore, in this paper, we investigate an SU($n$) symmetric case [@Uimin70; @Sutherland75; @Klumper99; @Affleck86; @Kawakami92; @Batchelor03; @Yamashita98; @Frischmuth99; @Fukushima02; @Fukushima03; @lima; @vdb; @RVBsu4; @fermiMF; @shen; @Penc03; @Ohkawa85; @Shiina97; @chen72] in three dimensions, where $n$ is the total number of internal degrees of freedom, by means of high-temperature series expansion (HTSE). We consider a simple cubic lattice. Let each site take one of the $n$ colors denoted by $|\alpha\rangle$ with $\alpha = 1, 2, \cdots, n$. Using the Hubbard operator $X^{\alpha\beta} := |\alpha\rangle\langle\beta|$, the exchange operator is expressed as $$P_{\mathbf{i},\mathbf{j}} := \sum_{\alpha=1}^n \sum_{\beta=1}^n X_\mathbf{i}^{\alpha\beta} X_\mathbf{j}^{\beta\alpha}.$$ Colors of sites $\mathbf{i}$ and $\mathbf{j}$ are exchanged when $P_{\mathbf{i},\mathbf{j}}$ is applied. Then, an SU($n$) symmetric Hamiltonian reads $${\cal H} := J \sum_{\langle \mathbf{i},\mathbf{j}\rangle} P_{\mathbf{i},\mathbf{j}}, \label{hamil}$$ where the summation is taken over all the nearest neighbor pairs. We consider the antiferromagnetic case, $J>0$. Let us show the relations with spin operators explicitly for some of the special cases below. \(i) When $n=2$, this model is reduced to the ordinary Heisenberg model with $s=1/2$ by the relation $2P_{\mathbf{i},\mathbf{j}}-1=4 \bm{s}_\mathbf{i}\cdot \bm{s}_\mathbf{j}$. \(ii) The SU(3) case corresponds to $s=1$ with competing quadratic and biquadratic exchange interactions,[@PaPanicolaou88] $$1+P_{\mathbf{i},\mathbf{j}} = \bm{s}_\mathbf{i}\cdot \bm{s}_\mathbf{j}+(\bm{s}_\mathbf{i}\cdot \bm{s}_\mathbf{j})^2. 
\label{eq:su3rep}$$ \(iii) The SU(4) Heisenberg model is related to spin-3/2 systems but is more often discussed in the context of orbital- and spin-degenerate systems[@Yamashita98; @Frischmuth99; @Fukushima02; @Fukushima03; @lima; @vdb; @RVBsu4; @fermiMF; @shen; @Penc03; @Ohkawa85; @Shiina97]. In particular, the model in three dimensions has been used as an effective model of CeB$_6$ to explain the magnetic-field dependence of the transition temperature of an antiferro-orbital ordering. [@Ohkawa85; @Shiina97] The four local states can be represented by $|+)|+]$, $|+)|-]$, $|-)|+]$, $|-)|-]$, where $|\pm)$ and $|\pm]$ represent an orbital state and a spin state, respectively. The pseudo-spin operators are defined by $t^z|\pm)=\pm\frac12|\pm)$, $t^\pm|\mp)=|\pm)$, $s^z|\pm]=\pm\frac12|\pm]$, $s^\pm|\mp]=|\pm]$, and the exchange operator for $n=4$ is rewritten as $$P_{\mathbf{i},\mathbf{j}} = \bm{t}_\mathbf{i} \cdot\bm{t}_\mathbf{j} + \bm{s}_\mathbf{i}\cdot \bm{s}_\mathbf{j} +4 (\bm{t}_\mathbf{i}\cdot\bm{t}_\mathbf{j}) (\bm{s}_\mathbf{i}\cdot \bm{s}_\mathbf{j}) +\frac{1}{4}.$$ The Pauli matrices $\bm{\tau}=2\bm{t}$ and $\bm{\sigma} =2\bm{s}$ may simplify this expression, [*i.e.*]{}, $$4 P_{\mathbf{i},\mathbf{j}} = \bm{\tau}_\mathbf{i}\cdot\bm{\tau}_\mathbf{j} + \bm{\sigma}_\mathbf{i}\cdot \bm{\sigma}_\mathbf{j} + (\bm{\tau}_\mathbf{i}\cdot\bm{\tau}_\mathbf{j}) (\bm{\sigma}_\mathbf{i}\cdot \bm{\sigma}_\mathbf{j}) +1. \label{paulirep}$$ Note that there is a different representation of the SU($n$) Heisenberg model studied in detail using the quantum Monte Carlo method (QMC)[@Harada02]. However, the QMC for the Hamiltonian (\[hamil\]) suffers from minus sign problems in more than one dimension [@Harada02; @Frischmuth99]. If the number of competing order parameters is large and frustration exists, the transition temperature can greatly be reduced from the mean-field value even in three dimensions. That will be the case with the Hamiltonian (\[hamil\]). 
First, it is isotropic with respect to $n^2-1$ independent interacting components. Furthermore, it contains frustration, as is most clearly seen in Eq. (\[paulirep\]) for the SU(4) case. Each of the 15 components, $\tau^\alpha$, $\sigma^\beta$, $\tau^\gamma \sigma^\delta$, favors an antiparallel alignment. However, this cannot be attained for all components simultaneously; [*e.g.*]{}, simultaneous Néel states of $\tau^z$ and $\sigma^z$ produce a ferromagnetic alignment of the product $\tau^z \sigma^z$. Such frustration is not as explicit in the SU(3) case of Eq. (\[eq:su3rep\]) as in the SU(4) case, yet the appearance of the minus-sign problem in the QMC indicates that similar frustration is present there as well.[@Harada02] This frustration should become stronger as $n$ increases. One can see in one dimension how the system finds a compromise under this frustration. The SU($n$) Heisenberg model in one dimension can be solved exactly.[@Uimin70; @Sutherland75; @Klumper99; @Affleck86; @Kawakami92; @Batchelor03] An important point is that the ground state has a quasi–$n$-site periodicity. The SU(4) case has been studied numerically in more detail: ground-state properties by the density matrix renormalization group[@Yamashita98], and thermodynamic properties by the QMC[@Frischmuth99] and the HTSE[@Fukushima02; @Fukushima03]. The real-space spin-spin correlation function as a function of distance takes a positive value every fourth site. Its Fourier transform has a pronounced cusp at $q=\pi/2$ and a small cusp at $q=\pi$. Regarding the temperature dependence, as the temperature decreases a correlation with two-site periodicity develops first, and then at lower temperature another correlation with $n$-site periodicity appears. [@Fukushima02] Such peculiar correlations in one dimension could suggest an exotic ordering in higher dimensions. However, not much is known about the antiferromagnetic SU($n$) Heisenberg model in three dimensions. In fact, its ferromagnetic variant is studied in Ref. 
by the HTSE for the uniform susceptibility, which requires less effort to calculate than the susceptibility at other wave numbers. To our knowledge, the calculation in this paper is the first HTSE addressing the antiferromagnetic exchange of this model. In this study, we investigate the model systematically by varying the parameter $n$ of SU($n$), and report some unique features of the spatial correlations at finite temperature. Namely, in Sec. \[sec:corfunc\] we show that the behavior of the correlation functions is similar to that in one dimension, and in Sec. \[sec:neeltem\] that the Néel temperature disappears as $n$ increases from $n=2$. Temperature dependence of correlation functions {#sec:corfunc} =============================================== What should be calculated here is $\langle P_{\mathbf{i},\mathbf{j}} \rangle$, which is related to a correlation function. For example, when $n=4$, the fifteen components in Eq. (\[paulirep\]) contribute equally, and thus $ \langle \tau_\mathbf{i}^\alpha \tau_\mathbf{j}^\alpha\rangle = \langle \sigma_\mathbf{i}^\beta \sigma_\mathbf{j}^\beta\rangle =\langle \tau_\mathbf{i}^\gamma \sigma_\mathbf{i}^\delta \tau_\mathbf{j}^\gamma \sigma_\mathbf{j}^\delta \rangle = \langle 4 P_{\mathbf{i},\mathbf{j}} -1 \rangle /15 $. For general $n$, we can define the correlation function between sites $\mathbf{i}$ and $\mathbf{j}$ as $$S_{\mathbf{i}-\mathbf{j}} := \langle X_\mathbf{i}^{\alpha\beta} X_\mathbf{j}^{\beta\alpha} \rangle= \frac{1}{n^2-1} \left( \langle P_{\mathbf{i},\mathbf{j}} \rangle -\frac{1}{n} \right),$$ for $\mathbf{i}\neq \mathbf{j}, \alpha \neq \beta$, which depends on neither $\alpha$ nor $\beta$ because of the SU($n$) symmetry. Its Fourier transform is denoted by $S(\mathbf{q})$. The high-temperature expansion is performed by expanding the Boltzmann factor $e^{-\beta {\cal H}}$ in $\beta$. 
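The relation between $S_{\mathbf{i}-\mathbf{j}}$ and $\langle P_{\mathbf{i},\mathbf{j}}\rangle$ can be verified on a minimal example: for a single SU($n$)-symmetric bond at inverse temperature $\beta$, exact diagonalization gives both sides directly. The sketch below (Python/NumPy; the parameter values are arbitrary illustrations, not taken from the paper's calculation) checks it for $n=4$:

```python
import numpy as np

def exchange(n):
    """Two-site exchange operator, P |a,b> = |b,a>."""
    P = np.zeros((n * n, n * n))
    for a in range(n):
        for b in range(n):
            P[b * n + a, a * n + b] = 1.0
    return P

def thermal_average(H, O, beta):
    """Tr[O e^{-beta H}] / Tr[e^{-beta H}] by exact diagonalization."""
    w, V = np.linalg.eigh(H)
    boltz = np.exp(-beta * (w - w.min()))   # shifted for numerical stability
    rho = (V * boltz) @ V.T
    return np.trace(O @ rho) / np.trace(rho)

n, J, beta = 4, 1.0, 0.7        # illustrative values
P = exchange(n)
H = J * P                        # single SU(n)-symmetric bond

# off-diagonal Hubbard-operator correlator, e.g. alpha=0, beta=1
X01 = np.zeros((n, n)); X01[0, 1] = 1.0
O = np.kron(X01, X01.T)          # X_i^{01} X_j^{10}
lhs = thermal_average(H, O, beta)
rhs = (thermal_average(H, P, beta) - 1.0 / n) / (n ** 2 - 1)
assert np.isclose(lhs, rhs)
```

For the antiferromagnetic sign $J>0$ the correlator comes out negative on this bond, as expected.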
In practice, the series coefficients in the thermodynamic limit are obtained exactly by a linked-cluster expansion.[@Domb3] To obtain the series expansion of $\langle P_{i,j} \rangle$ up to $O[(\beta J)^M]$, we need to calculate ${\rm Tr}[({\cal H}_{\rm L})^m]$ and ${\rm Tr}[P_{i,j}({\cal H}_{\rm L})^m]$ for $0\le m\le M$, where ${\cal H}_{\rm L}$ is the Hamiltonian in a linked cluster. In calculating the traces, we use a property of the permutation operator, and the calculation reduces to the combinatorial problem of counting the number of circular permutations in a product of permutations. [@Handscomb64; @chen72; @Fukushima03] An important point of our analysis is that the series coefficients are obtained as polynomials in $n$, for example, $$\begin{aligned} S(\pi,\pi,\pi)&=& \frac{1}{{n}} + \frac{6}{{ n^2}}(\beta J) + \frac{36}{{ n^3}}(\beta J)^2 \nonumber \\ &&+ \frac{\left( 216 - 22\,{ n^2} \right)}{{ n^4}}(\beta J)^3 +\ldots.\end{aligned}$$ Namely, the order of the series is the same for every $n$. We have obtained the series for $S_{\mathbf{i}-\mathbf{j}}$ up to $O[(\beta J)^{9}]$ for arbitrary $\mathbf{i}- \mathbf{j}$, and consequently $S(\mathbf{q})$ up to $O[(\beta J)^{9}]$ for arbitrary $\mathbf{q}$. Of special interest, $S(\pi,\pi,\pi)$ is obtained up to $O[(\beta J)^{10}]$. The series are extrapolated using the Padé approximation (PA). First of all, in order to see the temperature-dependent nature of the spatial correlation, we analyze the *next*-nearest-neighbor correlation function $S_{110}$ as we have done in one dimension in Ref. . Figure \[fig:nncor\] shows the results. Both axes are scaled so that the high-temperature limits for all $n$ coincide. Here, we have plotted extrapolations from several different choices of the PA together, and the difference between them roughly represents the extrapolation error. 
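For readers unfamiliar with the PA, an $[L/M]$ approximant is fixed by the first $L+M+1$ series coefficients through a small linear system. The sketch below (Python/NumPy, illustrative only; it is tested on the exponential series rather than on our actual series) shows the construction:

```python
import math
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator and denominator coefficients (ascending, q[0] = 1)."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..M} q_j c_{k-j} = -c_k  for k = L+1 .. L+M
    A = np.array([[c[L + r - j] for j in range(M)] for r in range(M)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator: p_k = sum_{j <= min(k,M)} q_j c_{k-j}
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

def ratio(p, q, x):
    """Evaluate the approximant p(x)/q(x) (np.polyval wants descending order)."""
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# Test on exp(x): the [3/3] approximant from seven coefficients
c = [1.0 / math.factorial(k) for k in range(7)]
p, q = pade(c, 3, 3)
assert abs(ratio(p, q, 1.0) - math.e) < 1e-3
```

The resummed ratio stays accurate well beyond the radius where the truncated polynomial itself degrades, which is what makes the PA useful for extrapolating a short high-temperature series.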
The lowest order of the series of $S_{\mathbf{i}-\mathbf{j}}$ has a Néel-order-type correlation for any $n$. That is, the series of $S_{i_xi_yi_z}$ starts at $O[(\beta J)^{|i_x|+|i_y|+|i_z|}]$ with sign $(-1)^{|i_x|+|i_y|+|i_z|}$. Therefore, $S_{110}>0$ at high temperature. However, as the antiferromagnetic correlation of each interacting component becomes larger, each short-range order disturbs the others because of the frustration. The change of the sign of $S_{110}$ suggests that the correlation acquires a longer period at low temperature. For $n\ge6$, it is clear that $S_{110}$ changes sign at a low temperature which, on this scale, increases with $n$. For smaller $n$, the relevant temperature range lies below the converged region, and it is difficult to draw a conclusion from these data. Such a change of correlation at low temperature also appears in the Fourier transform of the correlation function. A naive extrapolation of the series for $S(\mathbf{q})$ shows bad convergence for several $\mathbf{q}$. This is probably because information at position $\mathbf{x}$ is lacking when $\cos(\mathbf{x}\cdot \mathbf{q})\simeq 0$. To avoid this, we extrapolate the series of the complex function $ \sum_{\mathbf{x}\ge0} \langle X_{\mathbf{j}}^{\alpha\beta} X_{\mathbf{j}+\mathbf{x}}^{\beta\alpha} \rangle e^{-i\mathbf{x}\cdot\mathbf{q}} w(\mathbf{x})$, and afterwards take the real part of the extrapolated function. Here, the summation is taken over $x\ge0$, $y\ge0$, $z\ge0$, and $w(\mathbf{x})$ is two raised to the number of nonzero components among $x,y,z$. This method also makes use of the imaginary part, and the convergence is better than in the naive extrapolation. As we have seen for the next-nearest-neighbor correlation function, the analysis becomes easier as $n$ increases. Therefore, we show $S(\mathbf{q})$ of the SU(16) case in Fig. \[fig:sq\] along a diagonal direction in $\mathbf{q}$-space, $ S(q,q,q) $. Here, its high-temperature limit, which does not depend on $\mathbf{q}$, is subtracted. 
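The weighted octant sum can be illustrated on a toy correlation function with the Néel-type sign structure discussed above. For a separable $f(\mathbf{x})=(-r)^{|x|+|y|+|z|}$ (an assumption made purely for illustration, not our actual series), the weights $w(\mathbf{x})=2^{\#\{\text{nonzero components}\}}$ make the octant sum reproduce the full lattice sum exactly at the zone corner $\mathbf{q}=(\pi,\pi,\pi)$; away from high-symmetry points the agreement is only approximate, which is why the method is used as an extrapolation device:

```python
import numpy as np
from itertools import product

r, R = 0.3, 6                       # decay parameter and box cutoff (illustrative)

def f(x):
    """Toy correlation with the Neel sign structure (-1)^{|x|+|y|+|z|}."""
    return (-r) ** (abs(x[0]) + abs(x[1]) + abs(x[2]))

q = np.array([np.pi, np.pi, np.pi])

# Full lattice sum over [-R, R]^3
full = sum(f(x) * np.exp(-1j * np.dot(q, x))
           for x in product(range(-R, R + 1), repeat=3)).real

# Weighted octant sum: x, y, z >= 0 with w(x) = 2^(# nonzero components)
oct_sum = sum(f(x) * 2 ** sum(c != 0 for c in x)
              * np.exp(-1j * np.dot(q, x))
              for x in product(range(R + 1), repeat=3)).real

assert np.isclose(full, oct_sum)
# The Neel sign structure makes the next-nearest-neighbor value positive
assert f((1, 1, 0)) > 0
```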
As the temperature decreases, antiferromagnetic correlation develops and $S(\pi,\pi,\pi)$ increases, as the curve at $T=2J$ shows. However, with a further decrease of temperature, $S(\pi,\pi,\pi)$ starts decreasing and $S(\mathbf{q})$ at other $\mathbf{q}$ increases. It is not very clear from Fig. \[fig:sq\] whether the maximum of $S(\mathbf{q})$ starts moving away from $(\pi,\pi,\pi)$. Hence, let us show another quantity. Note that if the second derivative of $S(\mathbf{q})$ at $\mathbf{q}=(\pi,\pi,\pi)$ changes sign, the position of the maximum clearly moves. Its temperature dependence for the SU(16) model is plotted in Fig. \[fig:ddsqn16\]. It shows a change of sign around $T\sim J$. In fact, this sign-changing temperature seems to depend only weakly on $n$ of SU($n$). The result above suggests that the Néel order disappears at least in the SU($n$) model with large $n$. Namely, order with wave number $\mathbf{q}\neq(\pi,\pi,\pi)$, or disorder, should appear. The next question is then at which $n$ the Néel order disappears. Néel Temperature {#sec:neeltem} ================ It is known that the SU(2) Heisenberg model has Néel order at low temperature. Therefore, in this section we gradually increase $n$ of SU($n$) from $n=2$. As a preparation, we first establish a reliable way to estimate a transition temperature, using the SU(2) model. The Néel temperature $T_{\rm N}$ can be characterized by the divergence of $S(\pi,\pi,\pi)$, namely, by a singularity of $S(\pi,\pi,\pi)$ as a function of $\beta J$. In order to analyze such a singularity, we use the so-called D-log Padé approximation (DLPA), [*i.e.*]{}, the PA for the logarithmic derivative. In using the DLPA, a transformation of the expansion variable may improve the convergence of the extrapolation. We choose the transformation[@Domb13; @Pan99] $$\beta J=\frac{x}{1-{ a}^2 x^2}, \label{eq:trans}$$ where $a$ is an adjustable parameter that improves convergence. 
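The DLPA idea can be sketched on a test function with a known power-law singularity, $f(x)=(1-x/x_c)^{-\gamma}$: the logarithmic derivative $f'/f=\gamma/(x_c-x)$ has a simple pole at $x_c$ with residue $-\gamma$, so a PA of its series recovers both the singularity position and the exponent. The values of $x_c$ and $\gamma$ below are arbitrary (Python/NumPy, illustrative only):

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant (ascending coefficients, q[0] = 1)."""
    c = np.asarray(c, dtype=float)
    A = np.array([[c[L + r - j] for j in range(M)] for r in range(M)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

# Taylor series of f(x) = (1 - x/xc)^(-gam), pretending xc, gam are unknown
xc, gam, N = 0.5, 1.4, 10
c = [1.0]
for k in range(1, N):
    c.append(c[-1] * (gam + k - 1) / (k * xc))

# Series of the logarithmic derivative g = f'/f by long division
fp = [(k + 1) * c[k + 1] for k in range(N - 1)]
g = []
for k in range(len(fp)):
    g.append((fp[k] - sum(c[k - j] * g[j] for j in range(k))) / c[0])

# Pade of g: the nearest pole estimates xc, its residue estimates -gam
p, q = pade(g, 2, 1)
x_sing = np.roots(q[::-1]).real[0]
residue = np.polyval(p[::-1], x_sing) / np.polyval(np.polyder(q[::-1]), x_sing)
assert np.isclose(x_sing, xc)
assert np.isclose(-residue, gam)
```

Applied to the series for $S(\pi,\pi,\pi)$, the pole location gives $(\beta J)_{\rm N}$ and the residue gives the exponent $\gamma$.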
In fact, the singularity closest to the origin in the original series lies near the imaginary axis, while $T_{\rm N}$ corresponds to a singularity on the real axis. Under this transformation, singularities on the real axis approach the origin and those near the imaginary axis move away from it. Since the DLPA estimates the position of the nearest singularity most accurately, the errors of the DLPA can become smaller through this transformation. In order to find an optimal $a$, we calculate $T_{\rm N}$ as a function of $a$ for a couple of choices of the DLPA, and we adopt the $a$ at which the difference among the different DLPAs is smallest. Figure \[fig:valtran\] shows $T_{\rm N}$ as a function of $a$ for three different choices of the DLPA. Here, $[m/n]$ denotes the DLPA with a polynomial of order $m$ over a polynomial of order $n$. Since \[4/4\] comes from a series one order lower than the others, we choose the $a$ at which \[5/4\] and \[4/5\] are closest, namely, $a=1.04$. The resulting $T_{\rm N}/J$ is 1.898, which is close to the values in the literature, $1.892$ by the QMC[@Sandvik98] and $1.888$ by the HTSE[@Oitmaa]. In addition, we have obtained the critical exponent simultaneously. We assume that the critical exponent does not depend on the spin magnitude, and compare it with that of the [*classical*]{} antiferromagnetic Heisenberg model, in which $S(\pi,\pi,\pi)$ is identical to the staggered susceptibility, which is in turn equivalent to the uniform susceptibility of the ferromagnetic model. Hence we can compare the critical exponent $\gamma$. The estimate from our calculation above is $\gamma=1.399$, which is close to the values for the classical O(3) model: 1.396 by a Monte Carlo method[@Campostrini02], 1.406 by the HTSE[@Butera97], and 1.388 by a field theory[@Jasch01]. Therefore, we trust this way of analyzing $T_{\rm N}$, and use it also for the SU($n$) model with $n>2$. 
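The effect of the transformation (\[eq:trans\]) on singularity positions can be made quantitative: a singularity at $\beta J=t$ appears in the $x$-plane at the roots of $a^2 t\,x^2 + x - t = 0$. The sketch below (Python/NumPy) uses $a=1.04$ and the real singularity corresponding to the $T_{\rm N}$ found above, together with a purely hypothetical imaginary-axis singularity at $\beta J = 0.4\,i$ (an invented illustrative value, not taken from the series):

```python
import numpy as np

a = 1.04                       # adjustable parameter from the text

def x_images(t):
    """Images in the x-plane of a singularity at beta*J = t under
    t = x / (1 - a^2 x^2), i.e. roots of a^2 t x^2 + x - t = 0."""
    return np.roots([a ** 2 * t, 1.0, -t])

def nearest(t):
    """Distance from the origin to the nearest image."""
    return min(abs(x) for x in x_images(t))

t_real = 1 / 1.898             # physical singularity (T_N/J = 1.898)
t_imag = 0.4j                  # hypothetical unphysical singularity

# Before the transformation the unphysical singularity is nearer to the
# origin; after it, the physical one is, which is what helps the DLPA.
assert abs(t_imag) < abs(t_real)
assert nearest(t_real) < nearest(t_imag)
assert nearest(t_real) < abs(t_real)   # the real singularity moves inward
assert nearest(t_imag) > abs(t_imag)   # the imaginary one moves outward
```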
Also for $n>2$, the variable transformation (\[eq:trans\]) should work, because the singularity closest to the origin in the original series lies near the imaginary axis. Originally, $n$ is an integer because it is the number of internal degrees of freedom. However, after obtaining the series as an analytic function of $n$, we can regard $n$ as a continuous variable that has a physical meaning only when it happens to take integer values. In fact, as $n$ changes continuously, the properties of the series also change continuously. Therefore, we gradually increase $n$ from $n=2$ and examine the $n$-dependence of $T_{\rm N}$. Figure \[fig:ndep\] shows the results. Here, we fix $a=1.04$, and we plot \[5/4\] and \[4/5\] together. The optimal $a$ may depend on $n$. However, since the difference between the two curves is very small, we regard this value as close to optimal. As $n$ increases, $T_{\rm N}$ decreases almost linearly, and at $n\sim2.45$ the singularity corresponding to $T_{\rm N}$ leaves the real axis and moves to a complex value with a finite imaginary part. For larger $n$, we do not find any singularity on the antiferromagnetic side of the real axis. Therefore, we conclude that the value of $n$ at which the Néel order disappears lies in the range $2<n<3$. In addition, we have also analyzed $S(\mathbf{q})$ with $\mathbf{q}\neq(\pi,\pi,\pi)$. However, we have not found any sign of ordering with $\mathbf{q}\neq(\pi,\pi,\pi)$ in the temperature region that we can reach with the present order of the series. Summary ======= In summary, we have performed high-temperature series expansions for the SU($n$) Heisenberg model in three dimensions with arbitrary $n$. First of all, we have calculated the *next*-nearest-neighbor correlation function. At least at large $n$, it changes sign at low temperature, which suggests that the correlation should not be of Néel type, but should have a longer period. 
Analysis of the Fourier transform of the correlation function also supports the conclusion that the ground state of the large-$n$ SU($n$) Heisenberg model does not have the Néel-order correlation. Then, we have turned to an approach from $n=2$. Since the SU(2) Heisenberg model has the Néel order, it should disappear at a certain $n$. We have first found a reliable way to estimate a transition temperature by analyzing the divergence of the $(\pi,\pi,\pi)$ component of the correlation function. Next, we have generalized $n$ to a continuous variable and increased it gradually from $n=2$. We have concluded that the Néel ordering disappears for $n>2$. The author would like to thank Y. Kuramoto for directing him to this topic and for stimulating discussions. He also appreciates useful comments on the manuscript from A. Honecker. This work was supported in part by the Deutsche Forschungsgemeinschaft (DFG). Early stages of this work were supported by the Japan Society for the Promotion of Science and by the Max-Planck Institute for the Physics of Complex Systems. Different stages of this work were supported by the DFG and the Technical University Braunschweig. Parts of the numerical calculations were performed on the [cfgauss]{} at the computing centers of the TU Braunschweig. [100]{} K. I. Kugel and D. I. Khomskii, Sov. Phys. Usp. [**25**]{}, 231 (1982). N. Fukushima and Y. Kuramoto, J. Phys. Soc. Jpn. [**67**]{}, 2460 (1998). Y. Kuramoto and N. Fukushima, J. Phys. Soc. Jpn. [**67**]{}, 583 (1998). G. V. Uimin, JETP Lett. [**12**]{}, 225 (1970). B. Sutherland, Phys. Rev. B [**12**]{}, 3795 (1975). A. Fujii and A. Klümper, Nucl. Phys. [**B546**]{}, 751 (1999). I. Affleck, Nucl. Phys. [**B265**]{}, 409 (1986). N. Kawakami, Phys. Rev. B [**46**]{}, 3191 (1992). M. T. Batchelor, X.-W. Guan, N. Oelkers, K. Sakai, Z. Tsuboi, and A. Foerster, Phys. Rev. Lett. [**91**]{}, 217202 (2003). Y. Yamashita, N. Shibata and K. Ueda, Phys. Rev. B [**58**]{}, 9114 (1998). B. Frischmuth, F. Mila and M. Troyer, Phys. 
Rev. Lett. [**82**]{}, 835 (1999). N. Fukushima and Y. Kuramoto, J. Phys. Soc. Jpn. [**71**]{}, 1238 (2002). N. Fukushima, J. Stat. Phys. [**111**]{}, 1049 (2003). Y. Q. Li, M. Ma, D. N. Shi, and F. C. Zhang, Phys. Rev. Lett. [**81**]{}, 3527 (1998). M. van den Bossche, P. Azaria, P. Lecheminant, and F. Mila, Phys. Rev. Lett. [**86**]{}, 4124 (2001). M. van den Bossche, F.-C. Zhang, and F. Mila, Eur. Phys. J. B [**17**]{}, 367 (2000). A. Mishra, M. Ma, and F. C. Zhang, Phys. Rev. B [**65**]{}, 214411 (2002). S. Q. Shen, Phys. Rev. B [**66**]{}, 214516 (2002). K. Penc, M. Mambrini, P. Fazekas, and F. Mila, Phys. Rev. B [**68**]{}, 12408 (2003). F. J. Ohkawa, J. Phys. Soc. Jpn. [**54**]{}, 3909 (1985). R. Shiina, H. Shiba and P. Thalmeier, J. Phys. Soc. Jpn. [**66**]{}, 1741 (1997). H. H. Chen and R. K. Joseph, J. Math. Phys. [**13**]{}, 725 (1972). N. Papanicolaou, Nucl. Phys. B [**305**]{}, 367 (1988). K. Harada and N. Kawashima, Phys. Rev. B [**65**]{}, 52403 (2002). G. S. Rushbrooke, G. A. Baker, Jr., and P. J. Wood, in *Phase Transitions and Critical Phenomena*, edited by C. Domb and M. S. Green (Academic, London, 1974), Vol. 3, p. 245. D. C. Handscomb, Proc. Camb. Phil. Soc. [**60**]{}, 115 (1964). A. J. Guttmann, in [*Phase Transitions and Critical Phenomena*]{}, edited by C. Domb and J. L. Lebowitz (Academic, San Diego, 1989), Vol. 13, p. 1. K.-K. Pan, Phys. Rev. B [**59**]{}, 1168 (1999). A. W. Sandvik, Phys. Rev. Lett. [**80**]{}, 5196 (1998). J. Oitmaa and W. Zheng, preprint (cond-mat/0409041). M. Campostrini, M. Hasenbusch, A. Pelissetto, P. Rossi, and E. Vicari, Phys. Rev. B [**65**]{}, 144520 (2002). P. Butera and M. Comi, Phys. Rev. B [**56**]{}, 8212 (1997). F. Jasch and H. Kleinert, J. Math. Phys. [**42**]{}, 52 (2001).